Advanced DAQ Features
The DAQ is compatible with both single-ended (LVCMOS) and differential (LVDS) Image_sync trigger inputs. The single-ended input is provided for ease of setup during initial integration or in a laboratory environment (e.g. a trigger provided directly by a benchtop waveform generator sync output). The DAQ is also capable of generating an Image_sync trigger internally for simulation purposes.
Select the desired Image_sync trigger input from the associated menu on the DAQ Timing Settings control of the Miscellaneous tab:
NOTE: If the internal Image_sync source is selected for simulation purposes, its frequency is also configurable from this control.
A Channel Mixer is provided to calculate the vector sum (i.e. square-root of the sum of the squares) of the 1/H and 2/V channels for optical interferometers employing polarization diverse detection. For a non-polarization diverse (i.e. single-channel) setup, it is advantageous to disable the Channel Mixer and transmit captured data from the connected channel only. By default, systems configured with a single channel will use the 1/H channel exclusively. The DAQ provides the ability to transmit data from either 1/H, or 2/V, or the vector sum of the channels, or both channels concurrently ("interleaved").
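For reference, the vector sum described above is simply the quadrature combination of the two channel values. The sketch below is illustrative only (the DAQ's Channel Mixer performs this computation in the FPGA):

```cpp
#include <cmath>

// Illustrative only: the FPGA Channel Mixer performs this in hardware.
// Vector sum of the 1/H and 2/V channel values for a single sample.
inline double channel_mixer(double ch1_h, double ch2_v)
{
    return std::sqrt(ch1_h * ch1_h + ch2_v * ch2_v);
}
```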
Refer to the Hardware Control Tool documentation for instructions on selecting a channel transmission mode via the GUI.
NOTE: The selected channel will persist in the FPGA until a new channel is selected, or the DAQ board is power-cycled. A selected channel can be configured to persist as the DAQ’s power-on default using the FPGA Configuration Script functionality.
Dispersion compensation (spectrally-dependent phase correction) is achieved using a complex-valued window function. The Hardware Control Tool generates a Windowing LUT for apodization and dispersion compensation based on a Taylor series polynomial expansion, for which the user can adjust the coefficients in the linear, quadratic, and cubic terms as necessary.
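For illustration, a complex-valued window with a Taylor-polynomial phase term might be constructed as sketched below. The normalized spectral coordinate, coefficient scaling, and sign convention used here are assumptions; the Hardware Control Tool generates the actual Windowing LUT from the user-entered coefficients.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a real apodization profile multiplied by a unit-magnitude
// phasor whose phase is a Taylor polynomial in a normalized spectral coordinate x.
std::vector<std::complex<double>> dispersion_window(
    const std::vector<double>& apodization,   // real-valued window, length N
    double c_linear, double c_quadratic, double c_cubic)
{
    const std::size_t n = apodization.size();
    std::vector<std::complex<double>> w(n);
    for (std::size_t k = 0; k < n; ++k) {
        double x = 2.0 * double(k) / double(n - 1) - 1.0;   // spans -1..+1
        double phase = c_linear * x + c_quadratic * x * x + c_cubic * x * x * x;
        w[k] = std::polar(apodization[k], -phase);          // magnitude, angle
    }
    return w;
}
```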
Refer to the Hardware Control Tool documentation for instructions on setting a window function with dispersion compensation via the GUI.
NOTE: The loaded Windowing LUT will persist in the FPGA until overwritten with a new Windowing LUT, or the DAQ board is power-cycled. A Windowing LUT can be configured to persist as the DAQ’s power-on default.
The DAQ performs dynamic range reduction by converting 16-bit data to 8-bit data, truncating the least significant byte. User-configurable GAIN and OFFSET values allow adjustment of the desired intensity range prior to truncation, somewhat analogous to adjusting black-level/white-level or brightness/contrast values.
GAIN and OFFSET values have both integer and fractional components, but the FPGA uses 16-bit fixed-point representations instead of typical floating-point representations.
OFFSET is a 16-bit signed fixed-point integer with 8 integer bits and 8 fractional bits (Q8.8):
0x0000 is zero offset.
0x8000 to 0xFFFF represent negative offsets, and
0x0001 to 0x7FFF represent positive offsets.
e.g. 01.00 (signed '8.8' representation) represents an offset of +1.0, written to FPGA Register 23 as 0x0100 (hexadecimal) or 256 (decimal).
GAIN is a 16-bit unsigned fixed-point integer with 4 integer bits and 12 fractional bits (UQ4.12):
0x1000 is unity gain.
Can be set from 0x0000 to 0xFFFF (0.0 to 15.999).
e.g. 3.02A (unsigned '4.12' representation) represents a gain factor of 3.0103, written to FPGA Register 24 as 0x302A (hexadecimal) or 12330 (decimal).
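The conversions from floating-point values to these register encodings can be sketched as follows. The helper names are illustrative only; the register numbers and scale factors are those given above.

```cpp
#include <cmath>
#include <cstdint>

// OFFSET: signed Q8.8 written to FPGA Register 23.
uint16_t offset_to_reg23(double offset)      // e.g. +1.0 -> 0x0100 (256)
{
    long fixed = std::lround(offset * 256.0);         // scale by 2^8
    return static_cast<uint16_t>(fixed & 0xFFFF);     // two's complement wrap
}

// GAIN: unsigned UQ4.12 written to FPGA Register 24.
uint16_t gain_to_reg24(double gain)          // e.g. 3.0103 -> 0x302A (12330)
{
    long fixed = std::lround(gain * 4096.0);          // scale by 2^12
    if (fixed < 0x0000) fixed = 0x0000;               // clamp to 0.0
    if (fixed > 0xFFFF) fixed = 0xFFFF;               // clamp to ~15.999
    return static_cast<uint16_t>(fixed);
}
```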
Enter the desired GAIN and OFFSET values on the Pipeline Modes & Subsampling tab:
Corresponding Reg 23 (hex) and Reg 24 (hex) values are shown for informational purposes. These can be set directly on the FPGA Registers tab or used as the power-on default values in the FPGA Config Script.
NOTE: The selected GAIN & OFFSET will persist in the FPGA until new values are selected, or the DAQ board is power-cycled. GAIN & OFFSET can be configured to persist as the DAQ’s power-on default using the FPGA Configuration Script functionality.
Power-on default register values can be set via a script stored in non-volatile memory on the DAQ board. The FPGA Configuration Script is a list of simple “register-number / register-value” pairs written automatically and consecutively during initialization of the board. Single-register controls such as the default Channel Selection (Reg 20), GAIN (Reg 24), or OFFSET (Reg 23) each occupy one line in the FPGA Configuration Script. Arrays such as a default Windowing LUT (Reg 25), which are loaded as multiple consecutive writes to the same register, occupy multiple consecutive lines in the FPGA Configuration Script.
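Conceptually, the script can be pictured as an ordered list of register/value pairs applied at power-on. The sketch below uses placeholder values and is not the literal on-disk script format.

```cpp
#include <cstdint>
#include <vector>

// Conceptual model only: one register-number / register-value pair per line.
struct RegWrite { uint8_t reg; uint16_t value; };

const std::vector<RegWrite> example_power_on_defaults = {
    {20, 0x0000},  // default Channel Selection (placeholder value)
    {23, 0x0100},  // default OFFSET, signed Q8.8 (+1.0 shown as an example)
    {24, 0x1000},  // default GAIN, unsigned UQ4.12 (unity gain)
    // {25, ...}   // a default Windowing LUT adds many consecutive Reg 25 lines
};
```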
WARNING: Do NOT change the default values of any FPGA register for which you are not instructed in this documentation or by Axsun Technical Support.
Refer to this section for more information
NOTE: Writing the modified configuration script to non-volatile memory will not immediately update the value of any FPGA registers. This script is applied only at power-on initialization and thus requires a DAQ board restart to change actual FPGA register values to those in the current configuration script.
HINT: Read back the configuration script from the DAQ to ensure the desired modifications were written successfully. To further validate the modified power-on defaults, restart the DAQ hardware from the power-off state and confirm the register values match those configured in the modified default configuration script.
The Hardware Control Tool provides simple updating of the Windowing LUT in the FPGA Configuration Script. This avoids the need to manually copy & paste rows of register values via a spreadsheet editor in order to configure the power-on default Windowing LUT. Refer to this section on setting the Windowing LUT and this section on saving FPGA register defaults.
Transmission of raw (unprocessed) ADC data or processed data from intermediate locations in the image processing pipeline is achieved by configuring the FPGA to “bypass” all downstream pipeline blocks between the desired location of data access and the final transmission interface (Ethernet or PCIe). Available locations for raw or intermediate data access are shown in the below figure as numbered green points and are referred to as Pipeline Modes.
To maintain the transmitted data bandwidth within the practical limits of the Gigabit Ethernet interface (approximately 800 Mbps or 100 MB/sec), the effective A-line rate of the system may need to be reduced below the native rate determined by the laser sweep frequency. This is achieved by discarding A-lines on the DAQ according to a programmable Subsampling Factor (M), a positive integer indicating the desired bandwidth reduction factor. M = 1 is the native system A-line rate, M = 2 reduces the bandwidth by 2x, M = 10 reduces it by 10x, and so on.
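As a worked example of the bandwidth arithmetic: the native A-line rate and bytes-per-A-line below are hypothetical numbers, not properties of any particular system; ~100 MB/s is the practical Gigabit Ethernet limit quoted above.

```cpp
#include <cmath>

// Choose the smallest Subsampling Factor M that keeps the transmitted data
// rate within the interface's practical limit.
int minimum_subsampling_factor(double native_aline_rate_hz,
                               double bytes_per_aline,
                               double max_bytes_per_sec = 100.0e6)
{
    double native_bytes_per_sec = native_aline_rate_hz * bytes_per_aline;
    int m = static_cast<int>(std::ceil(native_bytes_per_sec / max_bytes_per_sec));
    return (m < 1) ? 1 : m;        // M = 1 means native (unreduced) A-line rate
}
// e.g. 100 kHz A-lines x 2048 bytes each = 204.8 MB/s native  ->  M = 3
```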
When connected via the Ethernet interface, the Networking tab of the Windows Task Manager or Resource Monitor (or a Linux equivalent) is a useful tool for viewing the effects of the Subsampling Factor, as shown in the figure below. Because of the much higher data bandwidth supported over PCIe, subsampling is typically not needed when using the PCIe interface except in rare circumstances (e.g. transmitting both parallel channels of pre-FFT, complex-valued data with a high A-line rate laser).
Refer to the Hardware Control Tool documentation for instructions on setting the Subsampling Factor and Pipeline Mode.
Once a new Pipeline Mode is selected, the AxsunOCTCapture library (and by extension the Image Capture Tool) will automatically adapt to the transmitted data type (U8, I16, U16, U32, or Complex).
WARNING: OCT images which span the reconfiguration of a Pipeline Mode will be discarded by the library; that is, each image must have a consistent data type for all member A-scans. Although not required, it is strongly recommended that the DAQ's imaging state be cycled off/on when making changes to the Pipeline Mode.
NOTE: The selected Pipeline Mode and Subsampling Factor will not persist if the DAQ board is power-cycled. A Pipeline Mode and Subsampling Factor can be configured to persist as the DAQ’s power-on default using the FPGA Configuration Script functionality.
Subtraction of a pre-defined background signature on an A-scan basis is supported for both raw ADC data (Pre-FFT interference fringes) and Post-FFT (OCT intensity) data. The ability of the digital Background Subtraction feature to remove or suppress the appearance of fixed image artifacts depends on several factors, including the depth of the artifact within the OCT scan depth, the stability of the object responsible for generating the artifact, and the frequency with which the pre-defined background vector is updated.
NOTE: Digital background subtraction of this nature rarely results in ideal artifact suppression and may itself generate new artifacts in some situations. As such, efforts should first be made to avoid the optical source of any undesirable artifacts through careful design of the interferometer and/or probe.
Configuring the real-time background subtraction functionality on the DAQ is accomplished in two primary parts:
Capturing a background signature, and then
Uploading a captured background signature onto the FPGA for subsequent subtraction from new A-scan data.
The steps for capturing and uploading a background subtraction signature are largely the same for enabling Pre-FFT and Post-FFT background subtraction, with exceptions noted in the detailed steps below. The process can be manually executed by pressing buttons in the Image Capture Tool (ICT) and Hardware Control Tool (HWCT) GUI applications, or automated by integrating the AxsunOCTControl/AxsunOCTControl_LW and AxsunOCTCapture API methods into your own client application.
HINT: Prior to executing the instructions below, launch the Image Capture Tool and the Hardware Control Tool applications and ensure that they are communicating with the DAQ & laser hardware and able to capture streamed OCT images.
NOTE: The instructions in this section require that the background subtraction has not yet been configured, or has been disabled/cleared. See instructions below for disabling background subtraction before capturing a new background signature.
Turn Live Imaging ON and configure the Pipeline Mode to transmit processed OCT images appropriate for visual interpretation (log compressed or downstream).
Block the sample arm optical path or remove objects from the sample beam which are not part of the background you intend to subtract. The live images displayed should consist only of the background and there should be no fluctuation as a result of scanner motion or fiber optic cable disturbance.
On the Pipeline Modes and Subsampling tab on the Hardware Control Tool, set the Subsampling Factor as appropriate for your capture interface bandwidth, and then select either:
the Square Root & Bkg Subtract Pipeline Mode for Post-FFT background subtraction, or
the Raw Data Pipeline Mode for Pre-FFT background subtraction on the desired ADC channel.
A single image containing the background will be paused on the OCT B-scan image display window in the Image Capture Tool. Save it to a .csv file by pressing the SAVE BKGND (.csv) button next to the image display and using the Windows dialog to name the file and save it in the directory of your choice.
The resulting file will contain a background signature calculated from the mean across all A-scans in the displayed image, in order to reduce white noise.
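For reference, the averaging computation is straightforward. A minimal sketch follows, assuming the image is held as rows of A-scan samples; the Image Capture Tool performs this automatically when the .csv file is saved.

```cpp
#include <cstddef>
#include <vector>

// Per-sample mean across all A-scans in an image (reduces white noise).
std::vector<double> mean_background(const std::vector<std::vector<double>>& image)
{
    const std::size_t n_samples = image.front().size();
    std::vector<double> background(n_samples, 0.0);
    for (const auto& ascan : image)
        for (std::size_t i = 0; i < n_samples; ++i)
            background[i] += ascan[i];
    for (double& b : background)
        b /= static_cast<double>(image.size());
    return background;
}
```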
Reconfigure the DAQ to the desired Pipeline Mode used for normal image visualization.
For Pre-FFT background subtraction, first select the desired ADC Channel. (This step is unnecessary for Post-FFT background subtraction, because the Post-FFT background signature is common to both ADC channels and is applied after the Channel Mixer.)
Press the appropriate LOAD FROM FILE… button on the Background Subtraction tab in the Hardware Control Tool. (There is one button for loading Post-FFT and one for loading Pre-FFT background subtraction.)
Navigate to the directory with a desired background signature file (saved in the previous section) and open it. This will load the background signature from the file and send it to the FPGA for application to all subsequently captured A-scans.
HINT: If you restart live imaging and view the live image display in the Image Capture Tool while loading the file, you will see the background subtraction take effect.
For Pre-FFT background subtraction, first select the desired ADC Channel. (This step is unnecessary for Post-FFT background subtraction, because the Post-FFT background signature is common to both ADC channels and is applied after the Channel Mixer.)
Press the appropriate DISABLE button on the Background Subtraction tab in the Hardware Control Tool. (There is one button for disabling Post-FFT and one for disabling Pre-FFT background subtraction.)
NOTE: The loaded background signature will persist in the FPGA until overwritten with a new loaded background signature (or cleared with zeros), or the DAQ board is power-cycled. A background signature can be configured to persist as the DAQ’s power-on default using the FPGA Configuration Script functionality; however, this is not recommended, since the limited stability of a typical OCT system likely requires the background signature to be updated regularly.
System integrators and software developers have access to the Axsun libraries (AxsunOCTCapture.dll, AxsunOCTControl_LW.dll, and AxsunOCTControl.dll) for integration into custom client applications. Library binaries, documentation, and example source code projects are available for download.
The apodization window function is downloaded to the FPGA in the form of a 2048-point lookup-table (LUT) which is subsequently multiplied by the sampled spectral data (length = N, where the exact value of N depends on laser and K-clock parameters) to shape the spectrum and zero-pad the acquired samples out to 2048 points for the subsequent FFT operation. The FPGA can apply separate window functions for the 1/H and 2/V channels if desired for a dual-channel system. The Hardware Control Tool provides a limited set of common window function types, but the user is free to load an arbitrary custom window function via the API if desired.
Generate two 1D arrays of length 2048, consisting of the REAL and IMAG parts of the desired window function of length N concatenated with (2048 – N) zeros. Use a signed (two’s complement) 16-bit integer data type with the values scaled so the maximum representable 16-bit value (32767) corresponds to unity (i.e. application of the LUT is a multiplication by 1.0).
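A minimal sketch of this array construction follows, using a purely real Hann window purely as an example apodization (any window function could be substituted; a complex-valued window, e.g. with dispersion compensation, would also populate the IMAG array).

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Build the 2048-point REAL and IMAG LUT arrays described above. N is the
// number of acquired samples (laser/K-clock dependent); 32767 represents 1.0.
void build_window_lut(std::size_t N,
                      std::vector<int16_t>& lut_real,
                      std::vector<int16_t>& lut_imag)
{
    const std::size_t LUT_LENGTH = 2048;
    const double PI = 3.14159265358979323846;
    lut_real.assign(LUT_LENGTH, 0);      // zero-padded beyond sample N-1
    lut_imag.assign(LUT_LENGTH, 0);      // all zeros for a purely real window
    for (std::size_t k = 0; k < N && k < LUT_LENGTH; ++k) {
        double w = 0.5 - 0.5 * std::cos(2.0 * PI * k / (N - 1));       // Hann
        lut_real[k] = static_cast<int16_t>(std::lround(w * 32767.0));  // 32767 = 1.0
    }
}
```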
The Subsampling Factor is configured by writing a value of M – 1 to FPGA Register 60, e.g. using SetFPGARegister(60, M – 1).