Shortlist of the best submissions 2022


These companies made the shortlist for the VISION Award:

Statements from the VISION Award jury on the submissions

Brighter AI

Edge Impulse

Kitov

Saccade

SWIR

Deep Natural Anonymization (DNAT)

For the anonymization of image and video material, "instead of removing pixel information, Deep Natural Anonymization changes the pixel information and keeps it natural". Personal information such as faces and license plates is replaced with a synthetic substitute that reflects the original attributes, such as line of sight, facial expression, age, gender, and ethnicity, keeping the data usable for analytics.

Our jury rates this as a topic of high relevance, politically, for communities, and for society at large. Anonymization with synthetic replacement is a potential enabler for future applications wherever personal information is present.

Ground-breaking algorithm that brings real-time object detection, tracking and counting to microcontrollers

The aim of this innovation is to "find the right tradeoff between accuracy, speed, and memory, to shrink your deep learning models to very small sizes while keeping them useful". The submission demonstrates the potential, compares it with the current state of technology, and discusses the limits.

Our jury sees ground-breaking work at the intersection of algorithm and architecture design, with the potential to enable more embedded vision on devices with very small computation and memory capacity. Hardware cost savings of up to 95% show its potential as a door opener.

CAD2SCAN - Automating Visual Inspection Planning

Kitov have developed a product highly relevant to manufacturing markets, specifically in light of the market shift towards local manufacturing and mass customization.

The main USP of the company's offering is the software platform for the automation of inspection planning, including robot path planning, collision avoidance and optimized image acquisition, based on the primary input of a CAD drawing of the part to be inspected or measured. A separate module is the actual quality inspection of the part. For this, the company has also developed a novel semantic AI classification approach, based on a known taxonomy of typical component types, to aid the inspection task. Both modules make use of a combination of rule-based and deep learning-based vision tools.

With this development, Kitov targets a major obstacle to the wider proliferation of vision tech in manufacturing: the need for experts to set up and fine-tune the vision system for complex inspection and measurement tasks. Kitov intends to fully automate this step.

MEMS Based 3D Camera For Industrial Inspection and Robotic Guidance

Saccade have developed a product for 3D measurement and inspection that addresses some major challenges of today's line-scan based 3D technology very well: the feature-based static laser line approach with a special MEMS scanner provides high scanning flexibility, enabling the orientation and density of the projected lines to be individually adapted to the part and surface properties as well as to the measurement or inspection task.

Only the areas of interest are targeted, and the data resolution is adapted to each specific area based on the digital CAD model analysis of the sample. The Saccade approach avoids the requirement for relative movement between part and sensor, leads to very time-efficient data acquisition and increases the 3D data quality, e.g. through the avoidance of glare or occlusions.

Extended short-wavelength infrared (eSWIR) cameras with Colloidal Quantum Dot sensors

SWIR Vision Systems, Inc. has developed a novel camera line, Acuros®, covering the much-needed UVA-Vis-SWIR range.

In addition to a wide spectral range, the cameras offer significantly higher resolution than comparable InGaAs-based cameras. Such cameras have the potential to be a game changer in many important fields of imaging, such as spectral imaging, scattering-reduction imaging, recycling, food and agricultural quality assessment, semiconductor inspection and medical imaging.

Abstracts

By Patrick Kern + everyone at brighter AI Technologies GmbH

Description of the innovation:

Our Deep Natural Anonymization Technology (DNAT) is an advanced solution to protect personally identifiable information (PII) in image and video data. This technology automatically detects and anonymizes personal information such as faces and license plates, and generates a synthetic replacement that reflects the original attributes. Therefore, the solution protects identities while keeping necessary information for analytics or machine learning.

General video redaction techniques include blurring the PII, but this leads to a loss of information and context in the image. DNAT, on the other hand, replaces the original PII with an artificial one that has a natural appearance and preserves the content information of the image. It also preserves the semantic segmentation of the image, and this segmentation consistency can be measured.

Technical details and advantages of the innovation:

Traditional anonymization methods such as masking, blurring, pixelization, etc. usually operate by destroying pixel information. Therefore, there is always a tradeoff between compliance and data quality: they are two endpoints of a line where one cannot be improved without harming the other.

Instead of removing pixel information, our Deep Natural Anonymization changes the pixel information and keeps it natural. DNAT detects faces and other identifiable characteristics, such as license plates, and generates an artificial replacement for each one of them. Each generated replacement is constrained to match the attributes of the source object as precisely as possible. Nevertheless, this constraint is selectively applied, so that we can control which attributes to maintain and which not. For faces, for example, it could be important to keep attributes like gender and age intact for further analytics. Identifiable information aside, the rest of the information that does not contain sensitive personal data is kept without modifications. Thus, DNAT effectively breaks the tradeoff between anonymization and data quality.
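To make the detect-and-replace idea concrete, here is a deliberately tiny Python sketch. The fixed detection box, the random stand-in "generator", and the single preserved attribute (mean brightness) are all hypothetical simplifications for illustration; brighter AI's actual models and API are not described here.

    import numpy as np

    # Toy illustration of the DNAT idea: overwrite a detected PII region with
    # synthetic pixels while preserving a selected attribute (here just mean
    # brightness, standing in for gaze/age/gender). Purely illustrative.
    def anonymize_region(image, box, rng=np.random.default_rng(0)):
        x, y, w, h = box                                   # region from a PII detector
        patch = image[y:y + h, x:x + w].astype(float)
        synthetic = rng.uniform(0, 255, patch.shape)       # stand-in "generator"
        synthetic *= patch.mean() / max(synthetic.mean(), 1.0)  # keep the attribute
        image[y:y + h, x:x + w] = np.clip(synthetic, 0, 255).astype(image.dtype)
        return image                                       # non-PII pixels untouched

    frame = np.full((120, 160, 3), 128, dtype=np.uint8)    # dummy image
    frame = anonymize_region(frame, box=(40, 30, 32, 32))  # e.g. a detected face

In the real system, the generator is a learned model producing a natural-looking face or plate, and the preserved attributes are semantic ones such as age, gender, and line of sight.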

Relevance and application possibilities of the described innovation for the machine vision industry:

DNAT automatically detects personal identifiers such as faces or license plates and randomly generates natural replacements: faces that do not exist in the real world. At the same time, it preserves facial attributes such as line of sight, facial expression, age, gender, and ethnicity. It therefore maintains the high quality and accuracy of the raw data and gives image processing algorithms the best conditions for using such images.

Unique Selling Point (USP):

  • Easy-to-use: Simple integration via cloud API or online user interface
  • Precise: > 99% anonymization accuracy
  • Fast: Fully automatic, AI-powered anonymization software
  • Secure: Hosted on MS Azure with TLS encrypted API
  • AI-compatible: DNAT enables analytics and machine learning compatibility
  • Compliance: Anonymized data is by nature not personal and therefore not subject to data privacy regulations.

When will the innovation be available?

Already available

By Mihajlo Raljic, Edge Impulse

Description of the innovation:

A new machine learning technique developed by researchers at Edge Impulse, a platform for creating ML models for the edge, makes it possible to run real-time object detection on devices with very small computation and memory capacity. Called Faster Objects, More Objects (FOMO), the new deep learning architecture can unlock new computer vision applications.

Most object-detection deep learning models have memory and computation requirements that are beyond the capacity of small processors. FOMO, on the other hand, only requires several hundred kilobytes of memory, which makes it a great technique for TinyML, a subfield of machine learning focused on running ML models on microcontrollers and other memory-constrained devices that have limited or no internet connectivity. TinyML has made great progress in image classification, where the machine learning model must only predict the presence of a certain type of object in an image. By contrast, object detection requires the model to identify more than one object, as well as the bounding box of each instance. Image classification is very useful for many applications. For example, a security camera can use TinyML image classification to determine whether there's a person in the frame or not. However, much more can be done.

Technical details and advantages of the innovation:

Earlier object detection ML models had to process the input image several times to locate the objects, which made them slow and computationally expensive. More recent models such as YOLO (You Only Look Once) use single-shot detection to provide near real-time object detection. But their memory requirements are still large. Even models designed for edge applications are hard to run on small devices.

YOLOv5 or MobileNet SSD are very large networks that will never fit on an MCU and barely fit on Raspberry Pi-class devices. Moreover, these models are bad at detecting small objects, and they need a lot of data. For example, YOLOv5 recommends more than 10,000 training instances per object class.

The idea behind FOMO is that not all object-detection applications require the high-precision output that state-of-the-art deep learning models provide. By finding the right tradeoff between accuracy, speed, and memory, you can shrink your deep learning models to very small sizes while keeping them useful.

Instead of detecting bounding boxes, FOMO predicts the object's center. This is because many object detection applications are just interested in the location of objects in the frame and not their sizes. Detecting centroids is much more compute-efficient than bounding box prediction and requires less data.

Single-shot object detectors are composed of a set of convolutional layers that extract features and several fully-connected layers that predict the bounding box. The convolution layers extract visual features in a hierarchical way. The first layer detects simple things such as lines and edges in different directions. Each convolutional layer is usually coupled with a pooling layer, which reduces the size of the layer's output and keeps the most prominent features in each area.
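As a minimal illustration of the pooling step (plain numpy, not tied to any particular framework), a 2x2 max-pool halves each spatial dimension while keeping the strongest response in every neighbourhood:

    import numpy as np

    # 2x2 max pooling: keep the strongest activation in each 2x2 neighbourhood,
    # halving the spatial resolution of the feature map.
    def max_pool2x2(fmap):
        h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2  # trim odd edges
        return fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

    fmap = np.arange(16.0).reshape(4, 4)   # toy 4x4 feature map
    print(max_pool2x2(fmap))               # [[ 5.  7.] [13. 15.]]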

The pooling layer's output is then fed to the next convolutional layer, which extracts higher-level features, such as corners, arcs, and circles. As more convolutional and pooling layers are added, the feature maps zoom out and can detect complicated things such as faces and objects. Each layer of the neural network encodes specific features from the input image. Finally, the fully connected layers flatten the output of the final convolution layer and try to predict the class and bounding box of objects.

FOMO removes the fully connected layers and the last few convolution layers. This turns the output of the neural network into a sized-down version of the image, with each output value representing a small patch of the input image. The network is then trained on a special loss function so that each output unit predicts the class probabilities for the corresponding patch in the input image. The output effectively becomes a heatmap for object types.

There are several key benefits to this approach. First, FOMO is compatible with existing architectures. For example, FOMO can be applied to MobileNetV2, a popular deep learning model for image classification on edge devices. Also, by considerably reducing the size of the neural network, FOMO lowers the memory and compute requirements of object detection models. According to Edge Impulse, it is 30 times faster than MobileNet SSD while it can run on devices that have less than 200 kilobytes of RAM.
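To make this concrete, here is a hedged Keras sketch of a FOMO-style model: MobileNetV2 truncated at an intermediate 1/8-resolution layer, with 1x1 convolutions as the per-cell classification head. The specific cut layer ('block_6_expand_relu'), width multiplier, and head width are illustrative assumptions, not Edge Impulse's published configuration.

    import tensorflow as tf

    def build_fomo_like(input_shape=(96, 96, 3), num_classes=1):
        # Backbone: MobileNetV2 truncated at a 1/8-resolution layer (assumed cut).
        base = tf.keras.applications.MobileNetV2(
            input_shape=input_shape, include_top=False, weights=None, alpha=0.35)
        features = base.get_layer("block_6_expand_relu").output  # (12, 12, C)
        # Head: 1x1 convolutions replace the dense bounding-box layers.
        x = tf.keras.layers.Conv2D(32, 1, activation="relu")(features)
        logits = tf.keras.layers.Conv2D(num_classes + 1, 1)(x)   # +1 = background
        heatmap = tf.keras.layers.Softmax()(logits)  # per-cell class probabilities
        return tf.keras.Model(base.input, heatmap)

    model = build_fomo_like()
    # Per-cell cross-entropy over the 12x12 grid stands in for FOMO's loss.
    model.compile(optimizer="adam", loss="categorical_crossentropy")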

For example, the following video shows a FOMO neural network detecting objects at 30 frames per second on an Arduino Nicla Vision with a little over 200 kilobytes of memory. On a Raspberry Pi 4, FOMO can detect objects at 60fps as opposed to the 2fps performance of MobileNet SSD. The granularity of FOMO's output can be configured based on the application and can detect many instances of objects in a single image.
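A small post-processing sketch shows how such a heatmap can be turned into per-class object centers and counts. The thresholding, connected-component grouping, and grid-space centroids are assumptions about the output stage, not Edge Impulse's published code:

    import numpy as np
    from scipy import ndimage

    def heatmap_to_centroids(probs, threshold=0.5):
        """probs: (H, W, C+1) per-cell class probabilities, channel 0 = background."""
        centroids = []
        for c in range(1, probs.shape[-1]):
            mask = probs[..., c] > threshold
            labels, n = ndimage.label(mask)  # merge adjacent active cells per object
            for com in ndimage.center_of_mass(probs[..., c], labels, range(1, n + 1)):
                centroids.append((c, com))   # (class index, (row, col)) in grid units
        return centroids

    # Toy 4x4 grid with one foreground class: a single hot cell at (1, 2).
    grid = np.zeros((4, 4, 2))
    grid[..., 0] = 1.0
    grid[1, 2] = [0.1, 0.9]
    print(heatmap_to_centroids(grid))  # [(1, (1.0, 2.0))]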

The benefits of FOMO do not come without tradeoffs. It works best when objects are of the same size. It's like a grid of equally sized squares, each of which detects one object. Therefore, if there is one very large object in the foreground and many small objects in the background, it will not work so well.

Also, when objects are too close to each other or overlapping, they will occupy the same grid square, which reduces the accuracy of the object detector (see video below). You can overcome this limit to an extent by reducing FOMO's cell size or increasing the image resolution. FOMO is especially useful when the camera is in a fixed location, for example scanning objects on a conveyor belt or counting cars in a parking lot. The Edge Impulse team plans to expand on their work in the future, including making the model even smaller (under 100 kilobytes) and improving its transfer-learning capability.

Relevance and application possibilities of the described innovation for the machine vision industry:

FOMO's relevance follows directly from the technical details above: it brings real-time object detection, tracking, and counting to microcontroller-class devices that could previously only run image classification. This opens up embedded vision applications, such as fixed-camera counting on conveyor belts or in parking lots, on hardware with less than 200 kilobytes of RAM and little or no internet connectivity.

Unique Selling Point (USP):

Run object detection on microcontrollers or microprocessors at a fraction of the previous hardware cost (up to 95% savings on hardware). Where you may have needed an Nvidia Jetson Nano before, you can now work with an MPU at significant cost savings, and since the algorithm is implemented in C++, it is hardware-agnostic and a natural hedge against hardware volatility.

When will the innovation be available?

Already available

By Dr. Yossi Rubner - CTO and Founder, Kitov.ai

Description of the innovation:

CAD2SCAN represents a leap forward in the planning of visual inspection, especially for parts with complex 3D geometry and intricate inspection requirements. Allowing quality engineers to define their inspection requirements directly on a complex CAD model, simply and intuitively, saves weeks or even months compared to defining such inspections manually.

Kitov's CAD2SCAN software automatically takes all available information from the CAD, including geometric and component specifications and specific inspection requirements, and uses it to plan the robot inspection, including the best 3D camera and illumination directions for each inspection, and the optimal robot path.

Technical details and advantages of the innovation:

The Problem

A major problem in advanced manufacturing is visual quality inspection. Even with the introduction of automation and robotics, the vast majority of final product visual inspection is still performed manually, which is limited by the human inspector's judgment, fatigue, and speed. While people are very good at inspecting 3D products and finding anomalies, they are slow, expensive, and inconsistent. Inspection results might differ between people, and many defects might be missed due to fatigue, especially at the end of long shifts. The lack of a digital thread within the inspection process and the potential for human error and inconsistency lead to ongoing and slow-to-adapt quality management programs.

The cases where final product inspection is performed automatically using machine vision are mostly customized solutions for specific products. They are expensive projects, as they are tailor-made for a specific product, require expert system integrators (SIs) to build and maintain, and are limited in the inspection capabilities they can perform. Typically, such customized solutions will be deployed only on Low-Mix-High-Volume (LMHV) lines, where it is worthwhile to invest the time and money in a customized inspection solution, and will usually consist of one or more fixed cameras and lights, which might have limited coverage for products with complex 3D geometry.

The CAD2SCAN Solution

Kitov's CAD2SCAN software enables the product engineer to set the inspection requirements directly in the CAD software. Once all requirements are marked, CAD2SCAN automatically extracts the specific geometric and semantic information for each inspection requirement. This information is passed on to the relevant semantic detectors performing the inspection tasks. Kitov's semantic detectors include a surface detector, label detector, screw detector, existence detector, and so forth. In addition, Kitov's open software platform allows easy integration of third-party detectors, which enjoy the planning and reporting services that the platform provides.

Once the actual visual inspection is performed on manufactured parts, the semantic detector receives, together with the images taken, the geometrical and semantic information related to the specific inspection, which can be used to enhance the inspection performance. For example, in surface inspection, semantic information includes material properties, such as surface reflectance. Together with the geometry of the surface, it is used to determine the best possible camera and illumination settings and angles. Information on screw type and dimensions can enhance screw inspection. Other Kitov semantic detectors, such as those for labels, barcodes, etc., also benefit from the relevant semantic information extracted from the CAD model.
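To illustrate what such a pluggable detector contract could look like, here is a hypothetical Python interface; the method names and payloads are invented for illustration, as Kitov's actual plugin API is not documented in this text.

    from typing import Protocol

    class SemanticDetector(Protocol):
        """Hypothetical shape of a pluggable Kitov-style semantic detector."""

        def plan(self, requirement: dict) -> dict:
            """Return imaging parameters and requested camera/light 3D poses
            for one inspection requirement taken from the CAD model."""
            ...

        def inspect(self, images: list, geometry: dict, semantics: dict) -> dict:
            """Given the acquired images plus the geometric and semantic
            information for this requirement, return a pass/fail report."""
            ...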

Each inspection requirement is processed by the appropriate semantic detector, which generates the imaging parameters and the requested 3D poses for the camera and light. The inspection plan is then optimized so that it fulfills the requirements, guarantees full coverage, reduces the number of images needed, minimizes the total inspection time, and calculates the best camera positions and illumination angles for each specific feature of interest (see the sketch below). In addition, the planner captures multiple inspection points in a single image when possible, reducing the overall inspection cycle time.

Automatic CAD-based inspection planning is a game-changer for industries that manufacture complex parts and products. For example, CAD2SCAN technology improves the inspection of single-material parts with complex 3D geometric shapes, such as turbine engines, blades, wheels, and metal molding; CNC parts, where it is very hard and time-consuming to carry out full inspection manually; and custom-made or other low-volume parts (such as medical implants or 3D-printed parts), where it is extremely hard and not economical to automate inspection in any other way.

CAD2SCAN technology is implemented as a plugin to common CAD software systems (currently available for SolidWorks and Creo). It also supports the evolving QIF (Quality Information Framework) ISO standard and can parse visual inspection requirements embedded into it.
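The coverage part of such a planner can be pictured as a set-cover problem: choose a small set of camera poses whose combined views cover every inspection point. The greedy sketch below is a textbook approximation under that framing, not Kitov's actual optimizer, which also weighs robot path length, illumination angles, and cycle time.

    def plan_views(inspection_points, candidate_views):
        """Greedy set cover: candidate_views maps pose id -> set of points
        that pose can image acceptably. Returns a list of chosen poses."""
        uncovered = set(inspection_points)
        plan = []
        while uncovered:
            # Pick the pose that covers the most still-uncovered points.
            best = max(candidate_views, key=lambda v: len(candidate_views[v] & uncovered))
            gained = candidate_views[best] & uncovered
            if not gained:
                raise ValueError("some inspection points cannot be covered")
            plan.append(best)
            uncovered -= gained
        return plan

    views = {"pose_a": {"p1", "p2"}, "pose_b": {"p2", "p3"}, "pose_c": {"p3"}}
    print(plan_views({"p1", "p2", "p3"}, views))  # e.g. ['pose_a', 'pose_b']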

Surface inspection of non-planar surfaces

Manual customization of the surface inspection of complex 3D parts can be extremely hard, as it might require hundreds of separate imaging 3D poses for full coverage of the part. Also, due to the mirror-like interaction between light sources and specular materials, specularities appear, adding drastic changes in image intensity that make the inspection process practically impossible to fine-tune manually. Moreover, full coverage cannot be mathematically guaranteed this way.

Automated planning of visual inspection of non-planar surfaces that are marked on the CAD is challenging as we need to:

  • Ensure proper coverage of all points on the surface
  • Ensure that any defect on the surface will

Relevance and application possibilities of the described innovation for the machine vision industry:

This is described in detail under "The Problem" in the "Technical details and advantages of the innovation" Section.

In short, most visual inspection deployments are machine vision projects that are manually customized. This limits the applicability of such projects to high-volume lines and to lines where the product complexity is not too high. In addition, such solutions take a long time to deploy, are expensive, and often have limited inspection performance. CAD2SCAN addresses and solves these issues.


Unique Selling Point (USP):

As far as we know, CAD2SCAN is the only technology that can automate general-purpose Visual Inspection test requirements directly from the CAD model.

When will the innovation be available?

Currently available

By Alex Shulman, Saccade Vision Ltd.

Description of the innovation:

Many inspection challenges are quite difficult (if not impossible) to solve using 1D or 2D imaging techniques. For example, height variation and inconsistency during the assembly process, surface planarity and uniformity, some defects in plastic injection, extrusion and metal forming, welding and soldering quality, uniformity of material dispensing and many others. These are applications where 3D vision becomes extremely useful.

According to Inder Kohli from Teledyne Dalsa: 'Historically, it required specialist knowledge to build and maintain 3D systems using discrete parts. In expert hands, this might yield the desired performance, but over time field support might erode profits. When selecting a 3D profiler, users must consider not only the cost of the unit, but also software tools, deployment time and, equally important, in-field service.'

Indeed, according to system integrators, the cost to the end-customer of a 3D inspection solution is typically 4-5x, and sometimes up to 10x, the cost of the machine vision components, mainly due to the complex machine vision integration done by expensive experts. Typically, 3D vision is more complex than conventional 2D vision. It involves optics and an illumination system that depend on several parameters (such as the distance and reflectivity range within the field of view, light uniformity, etc.) and must fit the application requirements perfectly. Moreover, when using 3D profilometers based on laser triangulation, the movement of parts must be considered, which also increases the complexity of the solution.

Part of the complexity of implementing 3D inspection stems from the fact that 3D inspection sensors treat the field of view as uniform and cannot apply different scanning parameters to different regions within it. For example, imagine you need high resolution in a small region (because it contains small details) while the rest of the part could be scanned at low resolution; you nevertheless have to scan the whole field of view at high resolution. Or imagine a part with high reflectivity variation: you can obviously use HDR and pick the right exposure for each region, but then you significantly decrease the overall throughput of the inspection system. Sometimes, certain regions of the part must be scanned in a different orientation to avoid under-sampling or to improve visibility.

Saccade-MD is a feature-based static laser line scanner built around a special MEMS scanner for absolute scanning flexibility. The technology is similar to that used in solid-state lidar modules for autonomous vehicles, except that the MEMS mirror is fully controlled on both axes.

Employing proprietary, patent-pending technology, Saccade simulates the human vision mechanism called saccade (sa-käd). With this mechanism, the human brain captures an image using low-resolution receptors first, identifies objects of interest, and quickly moves the eyes to capture interesting parts with high-resolution receptors while ignoring unimportant parts of the scene. These quick eye movements are called saccadic movements. By imitating the saccade mechanism of human perception, we offer machine vision with unmatched robustness and flexibility. Not only can the direction of the scan be optimised so that the part doesn't have to be rotated to get a different viewing angle, but the system can also vary the resolution, scanning the entire part at low resolution and then homing in on certain areas at very high resolution.

This scanning flexibility allows image acquisition to be optimized separately for different elements inside the field of view and achieves selective sub-pixel resolution in 3D. The data acquisition is locally optimized based on the digital CAD model analysis of the scanned sample. The machine operator selects the inspected region (see Fig. 1), and the software automatically creates an optimal scanning plan for the specific feature that was selected. This results in a dramatic improvement in time-to-solution, lower development costs, and the capacity of system integrators to deliver more projects with the same resources.
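Conceptually, such a feature-driven plan can be represented as a list of per-region acquisition settings. The sketch below, with invented field names and thresholds, only illustrates the data structure, not Saccade's planner.

    from dataclasses import dataclass

    @dataclass
    class ScanRegion:
        roi: tuple            # (x, y, w, h) of the region, in part coordinates
        line_pitch_mm: float  # local line density: smaller pitch = higher resolution
        line_angle_deg: float # orientation of the projected laser lines
        exposure_ms: float    # adapted to the local surface reflectivity

    def build_plan(cad_features):
        """Toy planner: dense lines for small features, lines oriented across
        each feature's dominant edge, longer exposure on dark surfaces."""
        plan = []
        for f in cad_features:
            pitch = 0.05 if f["size_mm"] < 1.0 else 0.5
            angle = (f["edge_angle_deg"] + 90) % 180
            exposure = 2.0 / max(f["reflectivity"], 0.1)
            plan.append(ScanRegion(f["roi"], pitch, angle, exposure))
        return plan

    plan = build_plan([{"roi": (10, 5, 4, 4), "size_mm": 0.8,
                        "edge_angle_deg": 0, "reflectivity": 0.3}])
    print(plan[0])  # ScanRegion(roi=(10, 5, 4, 4), line_pitch_mm=0.05, ...)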

Technical details and advantages of the innovation:

Saccade's novel imaging solution overcomes some key limitations present in traditional 2D and 3D imaging technologies:

  • Most existing scanning profilometers and some structured-light scanners have a single scanning direction, which results in non-optimal angles of illumination - particularly when an edge feature is parallel to the illumination - and has the potential for 3D 'drop out' when the angle of the illumination source is obscured from the camera(s). Saccade-MD extracts basic shapes and optical characteristics of the features of interest from the digital model and creates an optimal 3D data acquisition plan.
  • Another constraint of 3D laser profilometers is the need for the sensor or the sample to be in motion to acquire the image. Motion can introduce localized errors in the 3D point cloud. The Saccade-MD can acquire precision images over varying-sized fields of view without moving the sensor or the part. If needed though, the system can also be used in motion.
  • Virtually all existing 3D solutions produce an image with a fixed scan density. This can result in over-sampling of parts of the image to achieve the resolution necessary for local features, or even under-sampling of features that require higher density. Saccade-MD generates an optimal data acquisition plan that may use any combination of different 3D data acquisition methods, so that high-quality data for every feature of interest is acquired in the optimal and fastest way. The system also ignores all locations on the part that are not among the features of interest. More specifically, the engine matches the best light pattern(s) and scanning methods to the requested task.

Relevance and application possibilities of the described innovation for the machine vision industry:

Saccade's innovative technology may fit into many applications in industrial automation and manufacturing. However, we believe that the real revolution may be achieved in quality inspection in traditional manufacturing (so-called "discrete" manufacturing). The company is currently focusing on critical dimension measurements and 3D inspection in a few segments of precision manufacturing:

1) Integrated inspection and metrology for metal working machines

Advanced metrology is becoming a competitive differentiator in modern manufacturing. Despite the manufacturing industry taking a leading role in digital transformation, the tools of the trade for building, welding, machining, and the like (such as a fixed coordinate measurement machine (CMM), calipers, gauges, and tape measures) are still relatively rudimentary. This is especially true for traditional tools such as CNC machines, plastic injection molding, metal forming (stamping, bending, extrusion) and others. By offering a flexible, compact, fast and precise system that can be integrated on-machine and provide measurements of 100% of manufactured parts with CMM accuracy without slowing down the process, we expect to revolutionize process control and quality in discrete manufacturing. Such integrated metrology will play an important role in precision manufacturing in several tasks:

  • Initial alignment or precise positioning of the piece inside the manufacturing tool.
  • Replace the conventional post-manufacturing off-line inspection.
  • Use precise and fast integrated metrology to compensate the machining or provide predictive process and machine analytics.

    Integrated metrology for additive manufacturing is another hot topic tackled by the manufacturing community today. Integrated 3D metrology allows problems to be detected early in fabrication, offering just-in-time process correction. Saccade Vision has partnered with Euclid Labs, one of the leading providers of offline robotic programming, and together we have started delivering a solution to multiple customers. The flexible and generic inspection system is integrated with Euclid Labs' robotic material handling system and installed next to metal working machines.

2) 3D inspection in automated assembly lines

Recently, an S&P 500 electronics manufacturer installed a Saccade-MD system for 3D quality inspection in its fully automated assembly line (see Fig. 5). To verify that each product that leaves its manufacturing plants conforms to its high reliability and quality standards, components, sub-assemblies, and final products are tested multiple times during production. The Saccade-MD system has been running non-stop for the last six months and has already inspected more than one million units in that time, with each assembly arriving for inspection every 10 seconds. The system was able to identify defects that the customer wasn't aware of. This is possible because of the system's ability to optimize the scan: if a scan reveals something that might be a defect, it can focus on that area at higher resolution in the next scan, all without increasing the scanning time or computation time.

Unique Selling Point (USP):

1)

When will the innovation be available?

The product is available

By SWIR Vision Systems

Description of the innovation:

SWIR Vision Systems has developed the Acuros® CQD® eSWIR camera. These cameras with their Colloidal Quantum Dot (CQD®) Sensors provide sensitivity from 400nm to 2000nm wavelength and are offered in 3 resolutions. This innovation increases the usable SWIR spectrum by >40% with wider spectral bandwidths still possible. To accomplish this, the company leveraged a property of its unique CQD sensor technology, whereby larger diameter quantum dot semiconductor particles are designed to broaden the sensor's optical response to longer wavelengths. Broader bandwidth sensors open more capability for a wide range of SWIR Imaging applications.

Our current 2.1-megapixel camera is the highest-resolution IR camera globally. These cameras offer the lowest cost per megapixel on the market, and their interfaces were designed around well-known international standards. SWIR Vision Systems has now extended these same high-density, full-HD FPAs with responsivity out to 2000 nm.

The CQD® eSWIR camera sensors are fabricated with low cost materials and CMOS-compatible fabrication techniques representing an advancement towards broadly accessible high definition SWIR imaging. It is expected that the camera's lower cost points and its non-ITAR, EAR99 export classification will drive higher adoption rates globally, broadening the market for SWIR camera technology.

Technical details and advantages of the innovation:

SWIR Vision Systems Inc. has pioneered the development and commercialization of Colloidal Quantum Dots (CQD®) for high-performance infrared camera sensors. Short-wavelength infrared imaging is used in industrial automation, automotive, defense and surveillance systems, but incumbent IR camera solutions are expensive and resolution-limited, imposing constraints on many vision applications. The CQD® technology brings valuable new imaging solutions to difficult industry problems, specifically addressing the need for lower-cost and higher-resolution SWIR-band machine vision cameras.

CQD® particles are typically designed to provide an optimized spectral response tuned for the 400 to 1,700 nm visible-SWIR wavelength band. But some unique properties of QDs also enable focal plane array sensors to be pushed into the extended SWIR (or eSWIR) wavelength band, namely to 2000 nm and beyond.

To accomplish these goals, the group has synthesized lead-sulfide (PbS) based QD nanoparticles and processes these into very thin-layered photodiodes. The photodiodes and their underlying silicon CMOS circuitry form a novel photosensor, sensitive to light in the shortwave IR band. The PbS-based photodiodes directly convert photons of incident light into electrons, which are subsequently read out by the circuitry within individual pixels.
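This size dependence follows standard quantum-confinement physics. As an illustrative aside (the Brus approximation, a textbook relation rather than a formula given by SWIR Vision Systems), the effective bandgap of a dot of radius R, and hence the cutoff wavelength, scales roughly as:

    E_g(R) \approx E_g^{\mathrm{bulk}} + \frac{\hbar^2 \pi^2}{2 R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right) - \frac{1.8\, e^2}{4 \pi \varepsilon \varepsilon_0 R},
    \qquad
    \lambda_{\mathrm{cutoff}} = \frac{h c}{E_g(R)}

Larger dots weaken the confinement term, lowering E_g(R) and pushing the cutoff toward longer wavelengths; for PbS, whose bulk bandgap of roughly 0.41 eV corresponds to a cutoff near 3 µm, this leaves ample room to tune the response out to 2000 nm and beyond.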

SWIR Vision's Acuros® CQD® sensors are created by the monolithic integration of quantum dot-based photodetectors directly on CMOS readout integrated circuits (ROICs). The sensor fabrication process uses well-established, low-cost deposition techniques commonly found within the semiconductor industry. The process requires no hybridization, no epitaxial growth or exotic substrate materials, no pixel-level sensor patterning, and can ultimately be scaled to wafer-level fabrication. Photodiode arrays with pixel pitch as low as 3 µm have already been demonstrated using this technology. The relative crystalline disorder of colloidal quantum dots currently results in lower quantum efficiency when compared to InGaAs cameras in the 900-1700 nm wavelength range, which may make these CQD-based cameras less suitable for photon-starved applications. However, in the majority of machine vision applications, a CQD sensor-based camera can be paired with relatively inexpensive active illumination, resulting in near InGaAs equivalent performance with a significant reduction in overall system cost. Figure A shows a conceptual representation of the basic technology as directly deposited on a CMOS ROIC, and shows an image of the Acuros CQD eSWIR camera.

Relevance and application possibilities of the described innovation for the machine vision industry:

SWIR band cameras are already deployed to inspect Silicon wafers and semiconductor die for void and edge defects. The technology is also used to detect moisture levels in packaged products, thickness and void detection on clear coat films, glass bottle imaging, bruise detection in fruits and vegetables, inspection of lumber products, detection of water/oil on metal parts, imaging through smoke and mist environments, surveillance and security monitoring, crop monitoring, glucose monitoring and many more applications. High resolution eSWIR imagers are also expected to enhance existing applications and open up many more applications including:

  • Chemical inspection
  • Hydrocarbon and gas detection
  • Plastic sorting
  • High speed thermal imaging
  • Medical imaging
  • Long range imaging through summer haze, maritime haze, and fine dust for automotive, surveillance and agricultural applications
  • Wafer defect inspection
  • Laser beam profiling

Unique Selling Point (USP):

SWIR's infrared CQD® technology delivers up to 6X the resolution, at 1/3 the cost, providing impactful solutions for a range of applications. Global supply chain managers face many issues in today's difficult business environment including dealing with export compliance challenges. The procurement of SWIR cameras can be challenging in many cases due to certain government export restrictions.

When will the innovation be available?

SWIR Vision Systems Acuros® CQD® eSWIR Cameras are commercially available and in use throughout the industry. We have sold our Acuros cameras to over 100 industrial machine vision, scientific and defense customers.