The **Signal Processing Society** (SP-1) caters to hardware, firmware and software engineers involved in signal processing techniques, implementation and apparatus.

## Technical Meetings, Lectures and Events


This is a collection of the slides, viewgraphs, and materials presented at the technical meetings and lectures of the Signal Processing Society.

## Cooperative Approaches for Ensuring Secret Wireless Communications

### Dr. Athina Petropulu, Rutgers The State University of New Jersey (2017-05-05)

The prevalence of wireless technologies in our daily life is driven by our desire to communicate from anywhere at any time. However, due to the broadcast nature of the wireless channel, wireless communications are easily accessible to intruders, and ensuring the secrecy of confidential transactions conducted over wireless networks is a pressing need. Conventionally, wireless communications are secured using cryptographic protocols, which were mainly developed for wireline networks and as such have several flaws when applied to wireless networks. The talk discusses approaches to establishing a confidential channel between a source and the legitimate destination in the presence of one or more eavesdroppers. The confidential channel is created through the use of multiple antennas at the source, or via node cooperation, whereby nodes reinforce each other's communications and/or cooperatively jam the eavesdroppers. Thus, the legitimate destination will reliably receive the communicated information, but eavesdroppers will not be able to decode the communicated signal even if they know the encoding/decoding schemes and encryption/decryption keys used by the transmitter and receiver.
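
The underlying physical-layer security idea can be illustrated numerically. Below is a minimal sketch (not the speaker's actual formulation): the achievable secrecy rate is the destination's Shannon rate minus the eavesdropper's, and a cooperative jammer helps by degrading only the eavesdropper's SNR. All SNR and jamming values here are made-up illustrations.

```python
import math

def capacity(snr):
    # Shannon capacity in bits/s/Hz for a given linear SNR
    return math.log2(1.0 + snr)

def secrecy_rate(snr_dest, snr_eve):
    # Achievable secrecy rate: destination rate minus the rate leaked
    # to the eavesdropper, floored at zero
    return max(0.0, capacity(snr_dest) - capacity(snr_eve))

# Without jamming: the eavesdropper also enjoys a good channel
base = secrecy_rate(snr_dest=100.0, snr_eve=50.0)

# With cooperative jamming: helper noise cuts only the eavesdropper's SNR
jam_power = 9.0  # interference seen only by the eavesdropper (illustrative)
jammed = secrecy_rate(snr_dest=100.0, snr_eve=50.0 / (1.0 + jam_power))

print(base < jammed)  # True: jamming enlarges the secrecy rate
```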

## Application of Discrete-Time Statistical Signal Processing: Part 2

### Mr. Alan Lipsky, Consultant (2017-03-23)

The concepts of probability density and distribution functions are introduced and illustrated with the normal and uniform density functions. The normal density is shown to be a function of its mean and variance only. The notion of a random variable is explained and illustrated. The concept of computing the sample mean is illustrated with a few simple examples, such as the average expected from a large number of casts of a die. Sample means and mean square values are further illustrated by deriving the equations for linear regression that minimize the mean square error between the measured data and a straight line. Auto- and cross-correlation time functions are defined, along with convolution and the unit sample response. For a stationary random process, the equivalence between the ensemble mean, referred to as expectation, and the sample mean is demonstrated. Computation of expectation using the probability density is generalized and illustrated with computation of the mean, mean square value, variance, and correlation. Because most signal processing is done in discrete time, wherever possible the discussions are illustrated with a discrete-time rather than a continuous-time independent variable.
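
The linear-regression derivation from sample means mentioned above can be sketched in a few lines; the function and data below are illustrative, not from the lecture.

```python
def linear_regression(x, y):
    """Least-squares line y ≈ a + b*x computed from sample means:
    b = (E[xy] - E[x]E[y]) / (E[x^2] - E[x]^2),  a = E[y] - b*E[x]."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    mean_xy = sum(xi * yi for xi, yi in zip(x, y)) / n
    mean_xx = sum(xi * xi for xi in x) / n
    b = (mean_xy - mean_x * mean_y) / (mean_xx - mean_x ** 2)  # slope
    a = mean_y - b * mean_x                                    # intercept
    return a, b

# Noiseless example: points on y = 1 + 2x are recovered exactly
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(linear_regression(xs, ys))  # (1.0, 2.0)
```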

## Application of Discrete-Time Statistical Signal Processing: Part 1

### Mr. Alan Lipsky, Consultant (2017-03-09)

This is an introductory lecture, with no math. It mostly concerns applications of detecting, identifying, and interpreting a signal embedded in a noisy background in speech, image, sonar, and radar processing with Wiener and Kalman filters. Both filters are optimum in minimizing the least-squares error in their output signal. Developments in statistical signal processing can be traced back to the early 1800s, when both Gauss and Legendre used the method of least squares to extract a comet's orbit from noisy measurements. In the 1940s Norbert Wiener published "Extrapolation, Interpolation, and Smoothing of Stationary Time Series," which related a random signal's power density versus frequency characteristic to its autocorrelation. The optimum filter that minimizes the mean square error in extracting a signal from noise is named for him. The next big advance in filtering occurred when Rudolf Kalman published a description of his filter in 1960. This filter updates continuously with a recursive solution that carries a low computational burden and yields both the signal and the system's state. A Kalman filter was in the Apollo navigation computer used by Neil Armstrong to go to the moon, and it appears in many modern applications, particularly autonomous navigation.
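
As a rough illustration of the recursive, low-burden update attributed to the Kalman filter above, here is a minimal scalar sketch for estimating a constant signal in white noise (a textbook special case, not the lecture's example):

```python
def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1e6):
    """Scalar Kalman filter for a constant signal observed in white noise.
    State model: x[k] = x[k-1]; measurement: z[k] = x[k] + v[k], v ~ N(0, meas_var)."""
    est, var = init_est, init_var
    for z in measurements:
        gain = var / (var + meas_var)   # Kalman gain
        est = est + gain * (z - est)    # correct the estimate with the innovation
        var = (1.0 - gain) * var        # shrink the estimate variance
    return est, var

# Noisy measurements of a constant level of 5.0
zs = [5.2, 4.8, 5.1, 4.9, 5.0]
est, var = kalman_1d(zs, meas_var=0.04)
print(round(est, 2))  # 5.0
```

With a large initial variance this recursion reproduces the running sample mean, which is why the filter needs only one multiply-and-add per new measurement rather than reprocessing the whole record.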

## Advanced Techniques of Radar Detection in Non-Gaussian Background

### Dr. Maria Sabrina Greco, Associate Professor at University of Pisa (2016-10-18)

For several decades, the Gaussian assumption on disturbance modeling in radar systems has been widely used to deal with detection problems. In modern high-resolution radar systems, however, the disturbance can no longer be modeled as Gaussian distributed, and the classical detectors suffer high losses. In this talk, after a brief description of modern statistical and spectral models for high-resolution clutter, coherent optimum and sub-optimum detectors designed for such a background will be presented and their performance analyzed against non-Gaussian disturbance. Different interpretations of the various detectors are provided that highlight the relationships and the differences among them. After this first part, some discussion will be dedicated to how to make the detectors adaptive by incorporating a proper estimate of the disturbance covariance matrix. Recent works on maximum-likelihood and robust covariance matrix estimation have proposed different approaches, such as the Approximate ML (or Fixed-Point) estimator and the M-estimators. These techniques improve the detection performance in terms of false-alarm regulation and detection gain in SNR. Some results with simulated and real recorded data will be shown.
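
The Fixed-Point (Tyler-type) covariance estimator named in the abstract can be sketched as a simple iteration; the synthetic compound-Gaussian data below is an assumed scenario for illustration only.

```python
import numpy as np

def fixed_point_covariance(X, n_iter=50):
    """Fixed-Point (Tyler-type) covariance matrix estimate, robust to
    heavy-tailed (compound-Gaussian) clutter. X holds K samples of
    dimension N, shape (K, N). Iterates
        R <- (N/K) * sum_k x_k x_k^H / (x_k^H R^{-1} x_k),
    then normalizes the trace to fix the scale ambiguity."""
    K, N = X.shape
    R = np.eye(N)
    for _ in range(n_iter):
        # quadratic form x_k^H R^{-1} x_k for every sample
        q = np.einsum('ki,ij,kj->k', X.conj(), np.linalg.inv(R), X).real
        R = (N / K) * (X.conj().T * (1.0 / q)) @ X
        R = N * R / np.trace(R).real
    return R

# Illustrative run on synthetic compound-Gaussian samples: identity speckle
# covariance modulated by a random texture (local power)
rng = np.random.default_rng(0)
N, K = 4, 400
tau = rng.gamma(2.0, 1.0, size=K)                 # texture
X = np.sqrt(tau)[:, None] * rng.standard_normal((K, N))
R_hat = fixed_point_covariance(X)                  # close to the identity
```

Because each sample is divided by its own quadratic form, the random texture cancels out, which is what makes the estimate insensitive to the clutter's heavy tails.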

## Analyzing Feedback Systems with Signal-Flow Graphs

### Alan Lipsky, Consultant (2016-04-26)

Signal-flow graphs facilitate finding transfer functions for linear systems, both mechanical and electrical, and provide an intuitive understanding. Integro-differential equations are solved in the Laplace domain; using Mason's gain formula, the transfer function is found easily from the signal-flow graph. The resulting transfer function is the ratio of polynomials in powers of 's', the complex frequency variable. Signal-flow graphs are less general than state-variable formulations, since they are useful only for solving linear equations and do not consider initial conditions. In contrast with signal-flow diagrams, the state-variable formulation is ideal for computer solution of multiple-input, multiple-output systems. Flow graphs, however, yield a better intuitive grasp of the system. Unlike block diagrams, which ignore interactions between the output of one block and the input of the following one, flow graphs are an accurate representation. In the lecture, the rules for signal-flow graphs are introduced and Mason's gain formula is stated. A number of op-amp circuits and simple mechanical systems are solved. After stating Bode's stability criterion, the graphs are used to illustrate why op-amps oscillate with capacitive loads.

Presentation Slides: PDF (0.5 MB)
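
For the simplest case of a single negative-feedback loop, Mason's gain formula reduces to the familiar closed-loop expression T(s) = G(s)/(1 + G(s)H(s)); a numerical sketch (illustrative, not from the lecture):

```python
def closed_loop(G, H):
    """Closed-loop transfer function T(s) = G(s) / (1 + G(s)H(s)):
    the one-loop case of Mason's gain formula, with forward-path gain G
    and loop gain -G*H."""
    return lambda s: G(s) / (1.0 + G(s) * H(s))

# Integrator with unity feedback: G = 1/s, H = 1  ->  T(s) = 1/(s + 1)
G = lambda s: 1.0 / s
H = lambda s: 1.0
T = closed_loop(G, H)

# Evaluate on the j-omega axis at omega = 1 rad/s: 1/(1 + 1j)
print(T(1j))  # (0.5-0.5j)
```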

## Protection From Lightning

### Alan Lipsky, Consultant (2016-02-09)

Lightning strokes range from a few hundred amps to more than 500 kA; their energy spectrum ranges from DC to above 1 MHz. Damage is caused by the large current flow or the voltage it produces. The effects of lightning are mitigated with appropriate grounding techniques and surge protection at various points. There is a variety of surge protectors: gas discharge tubes, crowbars (thyristors), metal-oxide varistors (MOVs), and transient voltage suppressors (TVSs). The performance of each is discussed. The first two handle the largest power and are used on the power input to a facility. The latter two are more appropriate for equipment protection: MOVs at the power input and TVSs on boards with especially sensitive components. Because of their capacitance and leakage, neither is appropriate for signal or communication inputs. A diode circuit for protecting these inputs is shown and discussed. To ensure equipment survival, specification organizations have issued standard test wave shapes to simulate lightning-caused surges. Examples of these wave shapes are presented. Equipment should be both designed and tested to survive these tests.

Presentation Slides: PDF (0.4 MB)

## Signal Integrity & Routing Considerations for High-Speed Systems

### Graham Smith, TE Connectivity (2015-10-29)

The connector-to-board interface regions (footprints) reside on either side of any mated connector pair and are an integral part of system electrical performance. As data transfer rates have increased, footprint and routing design considerations have grown concurrently and now make a greater contribution to overall signal integrity performance. Engineers must deal with these increased speeds by refining board designs to accommodate printed circuit board (PCB) manufacturing capabilities while simultaneously finding ways to enhance performance to accommodate increased data rates.

Presentation Slides: PDF (1.4 MB)

## Enhanced Feedback Robustness Via Scaled Dither

### Dr. Lijian Xu, SUNY Farmingdale (2015-04-23)

A new method is introduced to enhance feedback robustness against communication gain uncertainties. The method employs a fundamental property of stochastic differential equations to add a scaled stochastic dither, under which the tolerable gain uncertainties can be greatly enlarged, beyond the traditional deterministic optimal gain margin. Algorithms, stability, convergence, and robustness are presented for first-order systems. Extension to higher-dimensional systems is further discussed.

Presentation Slides: PDF (3.6 MB)

## Evolution of Digital Verification

### Walter Gude, Mentor Graphics (2013-10-08)

Verification of digital systems used to be a fairly straightforward process: create a set of test vectors for each feature, apply these vectors, and track down any bugs. The relentless march of Moore's law has caused these traditional methods to break down, first in the ASIC world and now increasingly in the FPGA world, and the system test budget has risen dramatically as a result. Tool-based verification up front is considered the most viable approach to balancing the budget, as statistics show that most functional bugs can be caught by front-end verification before physical unit test and system test.

Presentation Slides: PDF (7.0 MB)

## Reversing Time: A Way to Unravel Distorted Communications?

### James V. Candy, Lawrence Livermore National Laboratory (2012-10-10)

Communicating in a complex environment is a daunting problem. Such an environment can be a hostile urban setting populated with a multitude of buildings and vehicles, the simple complexity of a large number of sound sources such as those common on a stock exchange floor, or military operations in terrain with topographic features: hills, valleys, mountains, etc. These inherent obstructions cause transmitted sounds or signals to bounce (reflect), bend (refract), and spread (disperse) in a multitude of directions, distorting both their shape and their arrival times at the targeted receiver locations. Time-reversal is a simple notion that we have all observed (in a sense) when viewing a movie of the demolition of a building, for example. Merely running the movie in reverse, or equivalently running it backwards in time, allows us to reconstruct the building at least visually, even though it cannot be reconstructed physically. Using this same idea, time-reversal can be applied to "reconstruct" communication signals by retracing all of the multiple paths that distorted the transmitted signals in the first place. To separate or decompose the individual components of the message, the receiver must use its knowledge of the medium not only to separate each path but also to add the paths together in some coherent manner, extracting the message with little or no distortion while increasing its signal level.

Presentation Slides: PDF (3.0 MB)
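
The path-retracing idea can be demonstrated with a toy simulation: probe a multipath channel, time-reverse what comes back, and retransmit it, so that all the echoes realign coherently. The channel below is hypothetical and the sketch is only a 1-D illustration of the principle.

```python
import numpy as np

def multipath(signal, h):
    # Propagation through a multipath channel is convolution with its impulse response
    return np.convolve(signal, h)

# Hypothetical three-path channel: a direct path plus two weaker echoes
h = np.zeros(20)
h[0], h[7], h[15] = 1.0, 0.6, 0.3

probe = np.array([1.0])                   # unit impulse probe
received = multipath(probe, h)            # the receiver hears the channel response itself
refocused = multipath(received[::-1], h)  # time-reverse it and send it back

# The echoes now add coherently: the output is the channel's autocorrelation,
# whose peak (1 + 0.6^2 + 0.3^2 = 1.45) sits at sample len(h) - 1 = 19
print(int(np.argmax(refocused)), round(refocused[19], 6))  # 19 1.45
```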

## Hardware Verification for Avionics & Safety Critical Design

### Modesto Casas, Aldec (2012-05-23)

The common problems encountered during hardware testing of complex FPGAs in safety-critical designs are explored, along with the time savings attainable by reusing the simulation test bench as test vectors to perform in-hardware verification at speed. A set of tasks and the time required for in-hardware FPGA testing under DO-254, the Design Assurance Guidance for Airborne Electronic Hardware, is presented, and two methods of verification are contrasted. Traditionally, hardware verification is performed at the board level, with the FPGA under test as the board's primary component. The FPGA is also interconnected with other components on the board, and with the lack of test headers, visibility and controllability at the FPGA pin level are limited. At times the board may contain multiple FPGAs, further complicating the verification problem. Verification at the board level without first stabilizing each FPGA individually can lead to many problems and longer project delays. The methodology discussed is based on a bit-accurate in-hardware verification platform that is able to verify and trace the same FPGA-level requirements from RTL to the target device at full speed, while saving time and resources.

Presentation Slides: PDF (4.0 MB)

## Target Detection Using Optical Joint Transform Correlation

### Dr. M. Nazrul Islam, SUNY Farmingdale (2011-11-30)

Automatic identification of a specific object or pattern in an arbitrary input scene is an important part of any authorization, monitoring, and security system. Pattern recognition is always a challenging problem because the targets are often non-cooperative, and the scene may contain noise and distortions due to variable environmental conditions while the image is recorded. Additional requirements for an efficient pattern recognition system are that the architecture be simple, so that it can easily be implemented and used, and that it perform fast enough to make an instantaneous decision on the presence of a target in the input scene. The optical joint transform correlation (JTC) technique has been found to be a versatile tool for real-time pattern recognition applications; it employs optical devices, such as lenses and spatial light modulators, for parallel processing of the given images. The JTC scheme provides a number of advantages over other correlation techniques, such as the VanderLugt filter, in that it allows real-time updating of the reference image, permits parallel Fourier transformation of the reference image and input scene, operates at video frame rates, and eliminates the precise positioning requirement of a complex matched filter in the Fourier plane. Several modifications have been proposed to improve the correlation performance of the JTC technique, namely binary JTC, phase-only JTC, fringe-adjusted JTC, and shifted phase-encoded fringe-adjusted JTC. This presentation reviews the features, problems, and prospects of optical pattern recognition techniques.

Presentation Slides: PDF (3.8 MB)
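
A 1-D numerical sketch of the classical JTC pipeline (joint input, Fourier transform, squared magnitude, inverse transform) may help; the reference and scene below are made-up toy signals.

```python
import numpy as np

def jtc_1d(reference, scene):
    """1-D sketch of a classical joint transform correlator: form the joint
    input, Fourier transform it, record the joint power spectrum (what a
    square-law detector captures), and inverse-transform to obtain the
    correlation plane."""
    joint = np.concatenate([reference, np.zeros_like(reference), scene])
    spectrum = np.fft.fft(joint, n=4 * len(joint))  # zero-pad to avoid wrap-around
    jps = np.abs(spectrum) ** 2                     # joint power spectrum
    return np.real(np.fft.ifft(jps))                # correlation plane

ref = np.array([1.0, 2.0, 1.0])
scene = np.array([0.0, 1.0, 2.0, 1.0, 0.0])  # contains a shifted copy of the target
plane = jtc_1d(ref, scene)

# plane[0] is the strong zero-order (total-energy) term; the cross-correlation
# peak appears at the lag equal to the reference-to-target separation (7 here)
print(round(plane[0], 6), round(plane[7], 6))  # 12.0 6.0
```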

## Tapping the TeraFLOP Potential of GP-GPU

### Brooks Moses, Gil Ettinger, Eran Strod, Mentor Graphics, Sensor Exploitation, Curtiss-Wright (2011-06-15)

High performance image and signal processing applications are significantly benefiting from GP-GPU technology to extract meaningful information from large volumes of rich data sources. This seminar provides an overview of GP-GPU technology and how it can be expected to perform in image and signal processing applications such as target tracking. In addition, we discuss how this technology, which was developed for desktop computing, can be adapted to rugged environmental conditions that are typical of military and aerospace applications.

Presentation Slides: PDF (1.9 MB)

## Digital Signal Processing For Radar Applications

### Michael Parker, Benjamin Esposito, Altera (2011-03-15)

This seminar features a space-time adaptive processing (STAP) pulsed-Doppler radar simulation with a back-end FPGA implementation, including a model of the radar system environment, an optimized implementation of the STAP back-end processing, and the FPGA implementation itself. Solutions are presented to address challenges often faced by radar system and implementation engineers. The methodology and tools presented model and simulate systems and algorithms at a high level of abstraction, allowing rapid exploration of design options ("what-if" scenarios) while efficiently and optimally implementing designs in FPGAs and ASICs.

Presentation Slides: PDF (4.0 MB)
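
The core adaptive step in STAP-style processing is commonly written as w = R⁻¹s (suitably normalized), which suppresses interference while preserving gain in the target direction. The sketch below illustrates that standard formula on a made-up single-jammer scenario; it is not the seminar's FPGA implementation.

```python
import numpy as np

def stap_weights(R, steering):
    """MVDR-type optimum weights w = R^{-1} s / (s^H R^{-1} s):
    minimize interference-plus-noise power at the output subject to
    unit gain in the target direction."""
    r_inv_s = np.linalg.solve(R, steering)
    return r_inv_s / (steering.conj() @ r_inv_s)

# Made-up scenario: 4 channels, one strong jammer plus white noise
n = 4
s = np.exp(2j * np.pi * 0.0 * np.arange(n))    # target steering vector
j = np.exp(2j * np.pi * 0.4 * np.arange(n))    # jammer steering vector
R = 100.0 * np.outer(j, j.conj()) + np.eye(n)  # interference + noise covariance
w = stap_weights(R, s)

print(round(abs(w.conj() @ s), 6))  # unit gain toward the target: 1.0
print(abs(w.conj() @ j) < 0.05)     # jammer strongly attenuated: True
```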

## Mapping DSP Algorithms Into FPGAs

### Sean Gallagher, Xilinx (2010-11-02)

FPGAs have been used to craft massively parallel custom computing machines since the early 1990s, and since 2002 they have included embedded multipliers and adders. The next generation of the largest FPGAs from Xilinx will have an equivalent gate count in the millions and close to 4000 embedded multipliers and adders. The sheer quantity of multipliers and adders allows the designer to build many high-throughput DSP functions such as digital down-conversion circuits, FFTs, and channelizers. However, for low-throughput requirements it is also possible to use a smaller FPGA device and over-clock (time-share) the FPGA resources so as to require fewer of them. This presentation explores implementation options for efficiently building DSP algorithms such as parallel FFTs, channelizers, and filters.

Presentation Slides: PDF (1.0 MB)
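
As an illustration of one of the DSP functions named above, here is a minimal digital down-converter sketch in floating point. The FPGA versions discussed in the talk would use fixed-point multipliers and proper decimating filters; all frequencies and tap counts below are arbitrary.

```python
import numpy as np

def ddc(x, fs, f_center, decim):
    """Minimal digital down-converter sketch: mix the band of interest
    down to baseband, low-pass filter, then decimate."""
    n = np.arange(len(x))
    baseband = x * np.exp(-2j * np.pi * f_center / fs * n)  # complex mix to DC
    taps = np.hamming(31)
    taps /= taps.sum()                # crude low-pass FIR with unity DC gain
    filtered = np.convolve(baseband, taps, mode='same')
    return filtered[::decim]          # reduce the sample rate

# A 200 kHz tone sampled at 1 MHz lands at DC after down-conversion
fs = 1_000_000.0
n = np.arange(2048)
x = np.cos(2 * np.pi * 200_000.0 / fs * n)
y = ddc(x, fs, f_center=200_000.0, decim=8)
# y is now approximately the constant 0.5: the tone's positive-frequency half
# sits at 0 Hz, while its image at -400 kHz is attenuated by the filter
```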

## Extending Laplace & Fourier Transforms: A Personal Perspective

### Dr. Shervin Erfani, University of Windsor (2007-05-15)

The classical theory of variable systems is based on the solutions of linear ordinary differential equations with varying coefficients. The varying coefficients are usually functions of an independent variable, the so-called time variable. The "time variable" is assumed to be a real variable for physical systems. This assumption facilitates analysis and synthesis of fixed (so-called time-invariant) systems by allowing Laplace transform techniques to be used. However, the assumption of "real time" is shown to be inadequate for realization of time-varying systems in the transformed domain. The discussion in this presentation is based on a different point of view. Specifically, the approach consists essentially of investigating the possibility of system realization through an examination of the behavior of systems that are functions of a complex time variable. This approach allows, in effect, a two-dimensional Laplace transform technique to be used for time-varying systems in the same manner that the conventional frequency-domain technique is used with fixed systems. The challenge is the physical interpretation of a "complex time variable" versus "real time," and its implications for the transformed variable, the so-called "frequency variable."

Presentation Slides: PDF (0.3 MB)
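
For reference, the conventional one-sided Laplace transform and its two-dimensional extension, on which such a treatment can build, are (standard definitions, not the speaker's own notation):

```latex
F(s) = \int_0^\infty f(t)\, e^{-st}\, dt,
\qquad
F(s_1, s_2) = \int_0^\infty \!\! \int_0^\infty f(t_1, t_2)\,
              e^{-(s_1 t_1 + s_2 t_2)}\, dt_1\, dt_2 .
```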

## A Self-Coherence Based Anti-Jamming GPS Receiver

### Moeness Amin, Villanova University (2005-03-10)

Despite its ever-increasing civilian applications, the main drawback of GPS remains its high sensitivity to multipath and interference. The effect of interference on the GPS receiver is to reduce the signal-to-noise ratio (SNR) of the GPS signal such that the receiver is unable to obtain measurements from the GPS satellite. The spread-spectrum (SS) scheme, which underlies the GPS signal structure, provides a certain degree of protection against interference. However, when the interferer power becomes much stronger than the signal power, the spreading gain alone is insufficient to yield any meaningful information. This talk discusses a new anti-jamming technique for the Global Positioning System (GPS). A novel GPS anti-jam receiver using multiple antennas is introduced, which relies on the replication of the coarse/acquisition (C/A) code within a GPS symbol. The proposed receiver utilizes the inherent GPS self-coherence property to excise narrowband and broadband interferers whose temporal structures differ from that of the GPS signals.

Presentation Slides: PDF (1.0 MB)
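
The self-coherence property the receiver exploits, that the C/A code repeats every code period while noise and many interferers do not, can be sketched numerically; the code below uses a stand-in random spreading sequence, not the real C/A code.

```python
import numpy as np

def self_coherence(x, period):
    """Normalized correlation between a signal and itself one code period later.
    A C/A-type code repeats every period, so the GPS component is strongly
    self-coherent at that lag, while noise is not."""
    a, b = x[:-period], x[period:]
    return abs(a.conj() @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(1)
period = 1023                                 # C/A code length in chips
code = rng.choice([-1.0, 1.0], size=period)   # stand-in pseudorandom code
gps = np.tile(code, 10)                       # repetitions within one data bit
noise = rng.standard_normal(gps.size)

print(self_coherence(gps + 0.5 * noise, period) > 0.5)  # GPS-like: self-coherent
print(self_coherence(noise, period) < 0.2)              # noise alone: not
```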

## Evolution of 3G Wireless Systems

### Ariela Zeira, InterDigital (2003-05-14)

Third-generation wireless communication systems were introduced to extend the data capabilities of second-generation systems by providing quality-of-service (QoS) management and enabling the high data rates required for high-speed web access and the transmission and reception of high-quality images and video. To satisfy the predicted increasing demand for even higher-rate data services, additional enhancements are being incorporated into the different 3G air interface standards. The first step in evolving the 3G standards is enabling high-speed packet access in the downlink, or forward link, i.e., when the terminal is receiving information from the network. The higher data rates are achieved via new features such as adaptive modulation and coding, hybrid ARQ, and fast scheduling. Other enhancements being considered are extending high-speed packet access to the uplink, or reverse link (when the terminal is transmitting information to the network), and smart antenna techniques. In this talk we review the new features recently introduced or being considered for 3G air interfaces and discuss their impact on the performance of the evolving standards.

Presentation Slides: PDF (1.0 MB)