05. – 07.10.2021 World's Leading Trade Fair for Machine Vision

Shortlist of the Best Submissions 2018

PhoXi® 3D Camera

Authors: Tomas Kovacovsky & Jan Zizka

Photoneo’s brand-new PhoXi® 3D Camera is the highest-resolution, highest-accuracy area-based 3D camera in the world. It is based on Photoneo’s patented Parallel Structured Light technology, implemented in a custom CMOS image sensor. This novel approach brings a unique set of traits that make it the most efficient technology for high-resolution scanning in motion, making the PhoXi® 3D Camera the best close- and mid-range 3D camera on the market.

Key benefits of Parallel Structured Light:

  • Scanning in rapid motion - single-frame acquisition, with motion up to 40 m/s possible
  • 10× higher resolution & accuracy - more efficient depth-coding technique with true, per-pixel measurement
  • No motion blur - 10 µs per-pixel exposure time
  • Rapid acquisition of 1068 x 800 point clouds + texture at up to 60 fps
  • Outdoor operation under direct sunlight
  • Interreflection suppression, active ambient-light rejection
  • Multiple devices operating simultaneously in the same space
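The motion-blur claim in the list above can be checked with a quick back-of-envelope calculation using only the figures quoted there (40 m/s, 10 µs):

```python
# How far does an object travel during one 10 µs per-pixel exposure
# at the maximum claimed speed of 40 m/s?
speed_m_per_s = 40.0      # maximum object speed from the list above
exposure_s = 10e-6        # 10 µs per-pixel exposure time
blur_m = speed_m_per_s * exposure_s
print(round(blur_m * 1e3, 3))   # → 0.4 (mm of travel per exposure)
```

At 0.4 mm of travel per exposure, blur stays below the effective point spacing for typical close- and mid-range working distances, which is what makes the "no motion blur" claim plausible.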

In recent years, 3D sensors have opened up new opportunities and enabled new applications, and we humbly believe that we can contribute to this continuous advancement of the machine vision industry in the following areas:

  • Bin picking, palletizing, depalletizing, machine tending
  • Online quality control and metrology
  • Autonomous delivery systems
  • People counting, behaviour monitoring, reliable face recognition
  • Object sorting
  • Safety systems
  • Harvesting


Photoneo s.r.o.

Read the full abstract: PhoXi® 3D Camera (PDF, 952 KB)

Software-Programmable Vision System-on-Chip

Authors: Jens Döge, Christoph Hoppe, Peter Reichel, Nico Peter, Ludger Irsig, Christian Skubich, Patrick Russell

The IAP VSoC2M is a novel member of the Fraunhofer Vision System-on-Chip (VSoC) family for high-speed image acquisition and processing applications. It combines a multitude of innovative approaches such as

  • analog convolution of image data during the readout process,
  • fast column-parallel image analysis and feature extraction,
  • column-specific storage of intermediate processing results in an analog cache with 32 entries per column,
  • column-parallel software-defined A/D converters with 1–10 bit resolution,
  • an asynchronous readout path for compression of sparse data and
  • an ASIP processor concept (application-specific instruction set) for software-defined control of all processes on the VSoC.

The combination of all these features opens up a variety of new possibilities in embedded image processing that could not be implemented until now. Possible applications range from content-based automatic multi-region-of-interest (ROI) image acquisition, through optical measuring methods such as sheet of light (SoL) or optical coherence tomography (OCT), to process control based on extracted image characteristics.
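The sheet-of-light application relies on exactly the kind of column-parallel feature extraction the VSoC performs on-chip: finding the laser-line position in every column at once. A NumPy sketch of the software equivalent (illustrative only; function name and the simple argmax-plus-parabola scheme are assumptions, not the chip's actual algorithm):

```python
import numpy as np

def laser_line_peaks(frame: np.ndarray) -> np.ndarray:
    """Per-column laser-line position with sub-pixel refinement.

    Software illustration of column-parallel peak extraction for a
    sheet-of-light setup; the VSoC does this in hardware.
    """
    rows = np.argmax(frame, axis=0)                # coarse peak row per column
    cols = np.arange(frame.shape[1])
    r = np.clip(rows, 1, frame.shape[0] - 2)       # keep neighbors in range
    y0, y1, y2 = frame[r - 1, cols], frame[r, cols], frame[r + 1, cols]
    denom = y0 - 2.0 * y1 + y2                     # parabola curvature
    offset = np.where(denom != 0.0,
                      0.5 * (y0 - y2) / np.where(denom != 0.0, denom, 1.0),
                      0.0)                         # sub-pixel correction
    return r + offset

# toy frame: a bright laser line at row 5 across all 4 columns
frame = np.zeros((10, 4))
frame[5, :] = 100.0
print(laser_line_peaks(frame))   # → [5. 5. 5. 5.]
```

Reducing each column to a single peak position is also what makes the chip's asynchronous sparse-data readout path pay off: only one value per column leaves the array instead of a full image.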

Based on the VSoC, an OEM sensor module for integration into customers' cameras and a complete industrial camera have been developed, allowing specific image processing tasks to be realized.


Fraunhofer-Institut f. Integrierte Schaltungen IIS

Read the full abstract: A Software-Programmable Vision System-on-Chip for High-Speed Image Acquisition and Processing (PDF, 3 MB)

EMVA1288 compliant Interpolation Algorithm

Author: Jörg Kunze

CCD sensors are still integrated into a large number of machine vision camera models and used in a wide range of applications, but their availability is limited. One approach to replacing CCD cameras is to use new CMOS cameras, but CMOS sensors are generally not one-to-one replacements in terms of pixel size. This can pose a severe problem for the application, because optics and software are usually well adapted to the pixel size.

One solution is image interpolation. Common interpolation methods, such as (bi-)linear or (bi-)cubic, create pixels of inhomogeneous size and gain. This leads to implausible EMVA 1288 results (for example, QE values that are too high). In addition, these inhomogeneous interpolated pixels produce visible artefacts.

Basler has solved this issue by developing a novel interpolation algorithm that creates a homogeneous pixel array of a given size. The algorithm is resource-efficient and suitable for real-time use. It has been tested, and all EMVA 1288 test reports contain plausible results, exactly as expected for an image sensor with the respective pixel size. The first camera model using this interpolation is intended to replace ICX618 cameras; it was compared to a camera containing an original ICX618 and to a second camera. The image from the replacement camera matches the original image well, whereas the image from the other camera differs in field of view, detail, and histogram results.
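Basler's algorithm itself is not public, but the homogeneity problem it addresses is easy to illustrate: an area-averaging resampler gives every output pixel the same effective input area and gain, which (bi-)linear interpolation does not. A 1-D sketch, purely illustrative and not Basler's method:

```python
import numpy as np

def area_resample_1d(signal: np.ndarray, out_len: int) -> np.ndarray:
    """Resample by averaging over equal-area output windows.

    Every output sample integrates exactly the same input area, so pixel
    "size" and gain stay homogeneous across the array. This is a generic
    textbook scheme, not Basler's proprietary algorithm.
    """
    n = len(signal)
    edges = np.linspace(0.0, n, out_len + 1)       # output pixel boundaries
    out = np.empty(out_len)
    for i in range(out_len):
        lo, hi = edges[i], edges[i + 1]
        acc = 0.0
        for j in range(int(np.floor(lo)), min(int(np.ceil(hi)), n)):
            overlap = min(hi, j + 1) - max(lo, j)  # input-pixel overlap length
            acc += signal[j] * overlap
        out[i] = acc / (hi - lo)                   # normalize by window area
    return out

# merging 4 input pixels into 2 equal-area output pixels
print(area_resample_1d(np.array([1.0, 1.0, 3.0, 3.0]), 2))  # → [1. 3.]
```

Because each output value is a uniformly weighted average, the effective quantum efficiency per output pixel stays constant, which is the property the EMVA 1288 measurements in the abstract check for.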



Read the full abstract: EMVA1288 compliant Interpolation Algorithm (PDF, 8 MB)

Event-Based Vision enables the next revolution in visual perception for machines

Authors: C. Posch and S. Lavizzari

Event-based vision is a new paradigm in imaging technology inspired by human biology. It promises to enable a smarter and safer industrial world by improving the ability of machines to sense their environment and make intelligent decisions about what they see.

Fully autonomous pixels that react to changing light intensity in their field of view, mimicking the behavior of the human retina, enable ultra-fast acquisition (>10k fps) of dynamic scenes, low power consumption, and a high dynamic range (120 dB), thanks to pixel-individual asynchronous adaptive sampling. Communication of data out of the pixel array is also handled asynchronously, preserving the high temporal precision of the pixel data. Typically complex, computation-intensive vision algorithms become simple and highly efficient and run on embedded platforms at result update rates in the kHz range.
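The per-pixel behavior described above can be sketched as a simplified software model: a pixel emits an event whenever its log intensity drifts past a contrast threshold since its last event. This is a frame-driven approximation for illustration only; real event sensors sample each pixel asynchronously and continuously:

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Simplified event-camera model.

    Each pixel fires an event (t, y, x, polarity) whenever its log
    intensity has changed by more than `threshold` since its last event,
    then resets its reference level, crudely mimicking the independent
    adaptive sampling of real event-based pixels.
    """
    ref = np.log(frames[0] + 1e-6)              # per-pixel reference level
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        logi = np.log(frame + 1e-6)
        diff = logi - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            events.append((t, int(y), int(x), 1 if diff[y, x] > 0 else -1))
            ref[y, x] = logi[y, x]              # reset after each event
    return events

# a single pixel doubling in brightness triggers exactly one ON event
a = np.full((2, 2), 10.0)
b = a.copy(); b[0, 0] = 20.0
print(events_from_frames([a, b]))   # → [(1, 0, 0, 1)]
```

The example shows why static scenes generate no data at all: only the one changing pixel produces an event, which is the source of the sparsity and kHz-rate processing mentioned above.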

Industrial automation will benefit from these new vision capabilities: high-speed counting, predictive maintenance, kinetic monitoring, and automated guided vehicles (AGVs), to name just a few, will become faster, better, and safer.


74 rue du faubourg saint-antoine
75012 Paris

Read the full abstract: Event-Based Vision enables the next revolution in visual perception for machines (PDF, 2 MB)

Inline Computational Imaging

Authors: Svorad Stolc, Petra Thanner, Markus Clabian

AIT Inline Computational Imaging is a novel high-performance image acquisition and processing technology for simultaneous 2D and 3D inspection that can be used in a wide variety of industrial and infrastructure applications. It is a single-camera system that combines the advantages of the light field and photometric stereo approaches. This allows it to work largely independently of the surface properties of the inspected objects. Optimized colour images and detailed 3D depth models are calculated in real time and enable quality control at the highest precision.

Key features:

  • multiple viewing and illumination angles simultaneously,
  • works with diverse material types (glossy, matt) at the same time,
  • enhanced 2D colour texture images (gloss suppression, high-dynamic-range, all-in-focus),
  • simultaneous 2D inspection and 3D measurement,
  • compact, flexible and customizable setup with one camera and standard line scan illumination,
  • patented technology.
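One of the two building blocks combined above, photometric stereo, has a compact textbook form: with a Lambertian surface and at least three known illumination directions, the per-pixel normal falls out of a least-squares solve. A NumPy sketch of that classic formulation (the standard building block, not AIT's proprietary pipeline):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo.

    Solves I = L @ (albedo * n) per pixel in least squares over k >= 3
    images taken under known light directions, returning unit normals
    and albedo. Textbook method only; AIT's actual pipeline also fuses
    light-field information and is not reproduced here.
    """
    h, w = images[0].shape
    I = np.stack([im.ravel() for im in images])     # (k, h*w) intensities
    L = np.asarray(light_dirs, dtype=float)         # (k, 3) light directions
    g, *_ = np.linalg.lstsq(L, I, rcond=None)       # (3, h*w) = albedo * n
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.where(albedo == 0, 1.0, albedo)
    return normals.reshape(3, h, w), albedo.reshape(h, w)

# flat patch facing the camera, three known lights
L = np.array([[0.0, 0.0, 1.0], [0.8, 0.0, 0.6], [0.0, 0.8, 0.6]])
imgs = [np.full((2, 2), v) for v in (1.0, 0.6, 0.6)]  # I = L @ (0, 0, 1)
n, a = photometric_stereo(imgs, L)
print(np.round(n[:, 0, 0], 3))   # → [0. 0. 1.]
```

A line-scan setup as described above acquires these differently lit observations in a single pass, since each object point is seen under several illumination angles as it moves past the camera.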


AIT Austrian Institute of Technology GmbH
Giefingasse 4
1210 Wien

Read the full abstract: ICI - Inline Computational Imaging (PDF, 635 KB)

New Camera Lens Design for Shock and Vibration Resistance

Author: Nina Kürten

Industrial imaging systems are frequently subject to strong accelerations, shocks, and vibrations. This is especially true for mobile systems, like robot-guided 3D scanners, but also for fixed installations. Such mechanical stress often causes a significant reduction of the resolution and a shift of the optical axis – a severe problem for machine vision and optical metrology applications.

Fujifilm has investigated the impact of shocks and vibrations on industrial fixed focal length lenses using a dedicated test procedure. Depending on focal length and model, the optical axis shifts by up to 26 µm, which corresponds to approx. 7 pixels on 2nd-generation Sony Pregius sensors. This kind of specification is missing from most datasheets, yet it is crucial when selecting lenses for optical metrology.

Further investigations led to the development of the FUJINON Anti-Shock & Vibration technology. The new lens design is based on an elastic, patent-pending fixation of the internal lens arrangement. With this technology incorporated into several Fujinon machine vision lenses, the shift of the optical axis is reduced to just 4 µm (i.e. at most one pixel) and resolution degradation is minimized. The new mechanical design allows the lenses to maintain their high optical performance despite the shocks and vibrations that unavoidably occur in industrial imaging systems.
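The pixel figures above follow directly from the sensor's pixel pitch; assuming the 3.45 µm pitch of 2nd-generation Sony Pregius sensors, the conversion is a one-liner:

```python
# Convert the quoted optical-axis shifts into pixels, assuming the
# 3.45 µm pixel pitch of 2nd-generation Sony Pregius sensors.
pixel_pitch_um = 3.45
for shift_um in (26.0, 4.0):   # before / after Anti-Shock & Vibration
    print(round(shift_um / pixel_pitch_um, 1))
# → 7.5 (roughly the "approx. 7 pixels" quoted) and 1.2
```

The same conversion is worth running for any candidate sensor, since a 26 µm shift costs proportionally more pixels on finer-pitch sensors.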


FUJIFILM Optical Devices Europe GmbH

Read the full abstract: New Camera Lens Design for Shock and Vibration Resistance (PDF, 1 MB)

Novel Machine Vision Cameras Featuring CQD Sensors for High Resolution

Authors: Mr. George Wildeman and Dr. Ethan Klem

SWIR Vision Systems Inc. is introducing a new class of cameras featuring 400–1700 nm image sensor technology based on colloidal quantum dot photodiodes fabricated on silicon (Si) readout wafers. SWIR cameras based on InGaAs sensor technology are not new to machine vision, but they suffer from very high processing costs and practical constraints on scaling image resolution beyond VGA (640 x 512). SWIR Vision Systems Inc. is commercializing its patented CQD™ sensor technology and high-definition Acuros™ cameras, achieving measurably lower costs and resolutions up to 1920 x 1080. The production Acuros cameras feature InGaAs-equivalent noise, pixel operability >99%, 15 µm pixel pitch, three pixel array sensor formats (640 x 512, 1280 x 1024, and 1920 x 1080), and imaging speeds up to 380 fps via GenICam-compliant GigE Vision and USB3 Vision interfaces.

Targeted applications include defect inspection of hot glass bottles, inspection of liquid fill levels through consumer and industrial plastic containers, defect inspection of semiconductor wafers and solar cell arrays, detection of oil and water on metal surfaces such as bearings, detection of liquid levels in pharmaceutical vials, hyperspectral imaging, sorting of food products, and detection of moisture on surfaces, to name a few. We expect the lower cost points, higher resolutions, and non-ITAR EAR99 US export classification of these pioneering cameras to drive higher global adoption of SWIR camera technology.


SWIR Vision Systems

Read the full abstract: Novel Machine Vision Cameras Featuring CQD Sensors for High Resolution, Lower Cost SWIR Imaging (PDF, 369 KB)