
Search Results


  • What is Synthetic Computer Vision and How Does it Work?

    Retailers have been looking for ways to automate their workforce for quite some time. From the rapid expansion of self-service checkouts in local supermarkets to more high-end, image-recognition-powered solutions like Amazon Fresh, automation is very much part of the retail end goal. However, while the retail industry gradually advances towards a fully automated future, significant hurdles remain along the way. Speed of deployment and the accuracy of current image recognition technology are two of the biggest. In the retail space, Image Recognition (IR) has many applications and advantages, such as on-shelf stock monitoring and auditing. At present, businesses that don't employ some form of image recognition are more likely to have inconsistencies in their stock. Meanwhile, businesses that do use image recognition are often left disappointed: the technology proves unreliable and has difficulty differentiating between products on busy shelves. Image recognition solutions will often struggle to identify products in bad lighting conditions or when a product has been deformed. The general unreliability of existing IR solutions has caused much frustration for Field Marketing Agencies (FMAs), Sales Force Automation (SFA) companies, and Consumer Packaged Goods (CPG) companies, who rely on IR to conduct on-shelf auditing, verify planogram compliance and run data analysis on their products.

How does Image Recognition work?

As the name suggests, image recognition refers to the "recognition" of a product, person, or any other physical object in an image or video by computer software. In the retail industry, recognition software is used for several purposes, such as checking stock levels and conducting audits. It can also be applied in self-checkout machines (with IR aiding loss prevention) or in building autonomous stores (with IR tracking the shopper-shelf interaction).
Facial recognition software can even be used to check shopper IDs for age-restricted goods in some cases, helping give customers a smoother shopping experience. For the purposes of this article, however, we will focus on image recognition for retail execution rather than facial recognition. Now that you're familiar with what image recognition is, it's worth breaking down how it all works.

It Starts With Computer Vision

All image recognition solutions are powered by a type of artificial intelligence (AI) called computer vision. In a similar fashion to human vision, computer vision enables a computer to "see" and contextualise images and videos, much in the way our eyes and brains operate. Unlike humans, however, machines can be trained (via machine learning) to process and identify imagery far faster than a human being ever could. For computer vision to detect objects efficiently and accurately, it requires large amounts of data to be analysed before it can begin to decipher any real-world images. In a retail scenario, this "data" would typically be images of a particular product taken from different angles, in different environments and under multiple lighting conditions. Once this collection process is complete, a human being then needs to go through all the images and annotate the position and class of each product. Naturally, CPGs face a number of challenges collecting and annotating SKU data to train IR programmes. A brand may produce a large number of products; the SKU catalogue may change rapidly; and one product may have several packaging designs. All of this data needs to be accounted for, yet gathering it manually is exceptionally time-consuming and prone to human error during the annotation process. Thankfully, there is another way to train a computer vision model, one which doesn't require weeks or months of manual SKU annotation or thousands of real photos.
This approach is known as synthetic computer vision.

What is Synthetic Computer Vision?

Synthetic computer vision is an alternative approach to computer vision that replaces real data with synthetic data in the training stage of building models of retail locations. Unlike traditional computer vision, which requires the painstaking process of collecting real photos of a product from different angles and under different lighting conditions, synthetic computer vision can generate all the information a model requires entirely from synthetic data.

How does Synthetic Data power Synthetic Computer Vision?

As the name suggests, synthetic data is information created synthetically, usually in the form of artificial images or videos rendered in virtual scenes, rather than collected from the real world. Synthetic data enables a synthetic computer vision model to "learn" with greater accuracy and diversity, and at a scale that simply isn't possible with real data. One of its many advantages over real data is that synthetic data can align with your expansion plans rather than merely being deployed as a reactive measure. That is to say, the scalability of synthetic data makes it possible for CPGs to conduct image recognition across their entire product line, rapidly and across multiple retail locations. In addition, because synthetic computer vision uses synthetic data, a CPG can train image recognition on its latest products before they have hit store shelves. Compared to traditional image recognition's reactive methodology, these proactive capabilities mark a revolution: previously, retailers, FMAs, SFA companies and CPGs could only train their image recognition algorithms after a product had become physically available. As highlighted above, the improved accuracy of synthetic computer vision models is another significant benefit of this method compared to traditional computer vision.
Synthetic data can generate images with specific criteria and properties, simulating in advance many of the scenarios in which a product will be encountered in the real world (e.g. SKUs on the shelf). This crucial differentiator is arguably the clearest example of why the future of retail shelf auditing is synthetic. Neurolabs' ZIA combines synthetic data and computer vision, allowing companies to go from a reactive IR technology solution to a proactive one. Additionally, the in-store camera or mobile device used by a field agent may have difficulty recognising products that are in direct light, or in lighting conditions that differ drastically from the real-world images used to train the image recognition software. These conditions are virtually impossible to recreate when taking photos in a controlled environment: it would take a considerably long time, and the dataset would be constrained to whatever level of diversity it had acquired up to that point. However, with a synthetic computer vision-powered image recognition solution such as our ZIA (Zero Image Annotations) technology, it is possible to replicate such scenarios in a short time frame using virtual scenes that mimic the characteristics of real-life store shelves. Moreover, it can simulate scenarios such as products becoming damaged or displaced on the shelf. The virtual scenes created by our ZIA solution serve as a dedicated "training ground" for the AI, so that when faced with similar scenarios in the real world, the technology can detect the products on shelves with a consistently high level of accuracy, offering CPGs, FMAs and SFAs access to reliable image recognition technology for the first time. Potentially, you could use synthetic data to generate unlimited amounts of images at your desired level of diversity.
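The scene-variation idea described above is often called domain randomisation in the computer vision literature. The sketch below is a minimal illustration of that idea only; the parameter names and ranges are assumptions for the example, not Neurolabs' actual pipeline, and a real system would feed these parameters to a 3D renderer.

```python
import random

# Hypothetical parameter ranges for one synthetic shelf scene.
# A real pipeline would drive a 3D renderer with values like these;
# here we only generate the scene descriptions to show the idea.
LIGHTING = ["bright", "dim", "glare", "backlit"]
CAMERA_ANGLES = list(range(-45, 50, 5))    # degrees from straight-on
DEFORMATIONS = ["none", "dented", "torn_label", "rotated"]

def random_scene(sku: str, rng: random.Random) -> dict:
    """Describe one randomised virtual scene for a given SKU."""
    return {
        "sku": sku,
        "lighting": rng.choice(LIGHTING),
        "camera_angle_deg": rng.choice(CAMERA_ANGLES),
        "deformation": rng.choice(DEFORMATIONS),
    }

def generate_dataset(sku: str, n: int, seed: int = 0) -> list:
    """Generate n scene descriptions. Every scene is labelled with its
    SKU by construction, so no manual annotation step is needed."""
    rng = random.Random(seed)
    return [random_scene(sku, rng) for _ in range(n)]

scenes = generate_dataset("cola-330ml", 1000)
print(len(scenes))   # 1000 automatically labelled variations
```

Because the label is attached when the scene is generated, the "unlimited images at your desired level of diversity" claim above comes down to how many parameter combinations you choose to render.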
The future of Retail Shelf Auditing is here

Synthetic computer vision is both the future and the natural evolution of retail execution and image recognition. As AI and machine learning continue to advance, CPGs, SFAs and FMAs that have not integrated a synthetic-based platform into their image recognition solutions will be at a significant disadvantage. Click below to download our ebook today and get started on your journey to enhanced retail execution. At Neurolabs, we are revolutionising in-store retail performance with our advanced image recognition technology, ZIA. Our cutting-edge technology enables retailers, field marketing agencies and CPG brands to optimise store execution, enhance the customer experience, and boost revenue as we build the most comprehensive 3D asset library for product recognition in the CPG industry.

  • What Does the Future of Retail Shelf Auditing Look Like with Synthetic Computer Vision?

    Competition in the retail sphere is fierce, so to ensure that products stand out on the shelves, retailers and brands pour a lot of time and resources into developing effective product placement strategies. Retailers, and the experts entrusted with overseeing product placement, i.e. Field Marketing Agencies (FMAs), can waste their promotional efforts if they don't have adequate visibility across all Consumer Packaged Goods (CPG) locations. Poor retail planning, brought on by poor shop-floor visibility, causes a rise in out-of-stocks and low on-shelf availability, and as a result consumer trust in brick-and-mortar retail can start to decline. In fact, a study by Retail Insight found that 53% of UK consumers believe that low shelf availability of products has gotten worse since the start of the COVID-19 pandemic, and four in ten people surveyed noted that the problem is more of an issue with brick-and-mortar stores than online retailers. Planograms are a crucial tool in maintaining optimal stock levels for CPGs. They are diagrams of a retail location that FMAs and retailers use to assess the compliance of in-store displays, product placement on shelves, and marketing materials. Planograms are predominantly checked through manual processes (i.e. human beings physically inspecting items, displays and marketing materials). Yet, according to Cognizant research, compliance with shelf-level planograms is often lower than 50%. Traditional image recognition (IR) tools can be deployed to rectify inconsistencies. However, IR tools can be costly, laborious to use, incapable of scaling efficiently, and frequently inaccurate. At Neurolabs, we understand the challenges of planogram compliance, retail execution and shelf auditing. That's why we've created cutting-edge technology that's reliable, accurate, fast, scalable and cost-effective.
In this article, we will outline how the future of retail shelf auditing hinges on updating image recognition solutions to incorporate Synthetic Computer Vision and Synthetic Data. We'll also explore how Neurolabs' Zero Image Annotations (ZIA) technology can be integrated with your existing system without laborious onboarding and time-consuming data labelling. This means field reps can continue using their current solution, while our API can be connected to business intelligence and data visualisation tools to help FMAs and CPGs track KPIs and other product-related insights.

What is Synthetic Computer Vision?

Synthetic Computer Vision (SCV) is a type of machine-learning computer vision model that is able to process, visualise and identify a real-world object from just a small sample of digital photos, making its learning algorithms far more straightforward and faster to train. SCV is powered by Synthetic Data, which comprises computer-generated images and 3D models. Traditionally, image recognition tools use "real data": a bank of information consisting of thousands of product photos taken in different lighting conditions, angles, positions and so on. Naturally, gathering enough real data to optimise IR tools takes hundreds of hours to execute and organise. In contrast, synthetic data provides a larger and more diverse pool of data in hours, and delivers a higher degree of product detection accuracy for FMAs. It also removes the need for FMAs to physically take pictures of SKUs to upload and train the IR algorithms.

Why Synthetic Computer Vision is the Future of Retail Shelf Auditing

FMAs are often tasked with visiting over a dozen store locations in a single working day. In many instances, they will have only a few minutes at each location to check that the products they're auditing are readily available and displayed optimally.
Due to the time pressures of an FMA's job, the speed, efficiency and accuracy of their image recognition software are vital, as CPG brands' reputations rely on FMAs being their eyes and ears on the shop floor. More than anything, FMAs need to be able to go into a store, take a picture, and immediately receive feedback from their FMA software with instructions on the best course of action. They don't have time for errors or for making manual adjustments within their existing IR software. This is where companies like Neurolabs can help. We deploy reliable image recognition technology that enables speedier auditing execution and a higher degree of planogram compliance accuracy, thanks to the technology's consistently high object detection accuracy rates.

How Neurolabs' ZIA (Zero Image Annotations) technology works

Neurolabs' ZIA (Zero Image Annotations) is an SCV solution trained to detect SKUs in numerous virtual settings (sub-optimal shop lighting conditions included) to ensure that FMAs can maximise every store visit. ZIA does exactly what it says on the tin: it does away with the need for real data and manual annotations (as required by conventional IR solutions), drastically reducing the amount of human labour needed to maintain planogram compliance. Our ZIA technology is trained in a virtual environment using synthetic data, which allows our solution to scale in complexity as your SKU inventory expands. For instance, it can recognise older iterations of packaging design as well as updated packaging. So, if you are an FMA checking on a range of promotional materials, our technology can keep up with your compliance needs.

How Neurolabs' ZIA drives the future of retail shelf auditing

Neurolabs also offers a high degree of flexibility with existing IR solutions, as we provide customers with JSON files, i.e. the raw SKU data.
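The article notes that raw SKU detections are delivered as JSON. The payload below is purely illustrative (its field names are assumptions, not Neurolabs' actual schema), but it shows how a downstream BI tool could consume a detection file of this general shape to compute a simple per-SKU facing count:

```python
import json

# Hypothetical detection payload; field names are illustrative only,
# not the real API schema.
payload = json.loads("""
{
  "image_id": "store42_shelf3.jpg",
  "detections": [
    {"sku": "shampoo-250ml", "confidence": 0.97, "bbox": [10, 20, 80, 200]},
    {"sku": "shampoo-250ml", "confidence": 0.95, "bbox": [95, 21, 80, 199]},
    {"sku": "conditioner-250ml", "confidence": 0.41, "bbox": [180, 25, 78, 196]}
  ]
}
""")

def facings_per_sku(data: dict, min_confidence: float = 0.5) -> dict:
    """Count confident detections ("facings") per SKU, discarding
    low-confidence results that a field rep should re-check."""
    counts = {}
    for det in data["detections"]:
        if det["confidence"] >= min_confidence:
            counts[det["sku"]] = counts.get(det["sku"], 0) + 1
    return counts

print(facings_per_sku(payload))   # {'shampoo-250ml': 2}
```

A count like this is the kind of product-level figure that could then feed whatever KPIs the FMA has defined in its own dashboards.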
The plug-and-play nature of Neurolabs' ZIA solution allows companies to retain the flexibility of defining their own KPIs to suit their brand objectives. In addition, as Neurolabs' technology is easily integrated into existing Sales Force Automation (SFA) platforms, it allows FMAs and retailers to eliminate the lengthy onboarding or training processes normally required to get all agents up to speed with a new tool. Uploading and processing images via the Neurolabs platform takes only four to six seconds. This is highly useful, as FMAs generally have only around three minutes to complete their compliance checks before having to move on to their next location. With Neurolabs' ZIA, the entire process of opening the app at your current location and receiving instructions can be completed within 20 seconds, giving field reps ample time to ensure that all their inventory is optimised to maximise customer appeal. Our ZIA solution also integrates with other inventory management tools to automatically identify the root cause of out-of-stocks. Provided you have a solution that can send SKU alerts, you can combine your existing alert app with our technology to ensure that out-of-stocks are never an issue for your field reps. Neurolabs' ZIA can automatically annotate and identify products, resulting in little to no onboarding or wait time for busy FMAs. It can also help brands grow quickly and streamline the process of shelf auditing, representing the future of retail auditing technologies through the power of SCV. So, if you would like to learn more about SCV and image recognition, download our latest eBook by clicking below or get in touch for a demo.

  • How Synthetic Image Recognition is Revolutionising Consumer Packaged Goods

    Retail is a highly competitive industry in which speed, accuracy, scalability and efficiency are often the key differentiators between the success and failure of any software or solution serving the sector. As we have explained previously, synthetic data, and synthetic computer vision in particular, is both the future and the natural evolution of retail shelf auditing and image recognition. External Field Marketing Agencies (FMAs) and Sales Force Automation (SFA) companies that aren't making strides towards a synthetic pipeline are going to be left behind by competitors who can provide Consumer Packaged Goods (CPG) companies with better analysis and performance thanks to the improvements that synthetic image recognition delivers. Over the course of this article, we will explain what synthetic image recognition is and how it is revolutionising CPGs.

What is Synthetic Image Recognition?

Synthetic image recognition is a technologically advanced form of image recognition that harnesses the power of synthetic data. All image recognition solutions are built on a form of artificial intelligence (AI) called computer vision, which essentially acts as the eyes of a computer, allowing it to "see" and contextualise real data and imagery. As the name suggests, however, synthetic image recognition uses synthetic data and synthetic computer vision to break away from the limitations of real data and traditional computer vision. The end result is a synthetic image recognition solution that is faster to deploy and vastly more accurate than traditional image recognition. Synthetic image recognition is also far more scalable, allowing it to streamline product catalogues across multiple retail locations and cater to FMAs and SFAs of all sizes.

How does Synthetic Image Recognition work?

Synthetic computer vision and synthetic data are both crucial to how synthetic image recognition works.
Using our synthetic-data-driven ZIA (Zero Image Annotations) solution as an example: instead of requiring numerous images and various other types of real data, our synthetic computer vision solution allows us to create realistic 3D digital twins of SKUs from a single PDF of the manufacturing artwork or packaging. Once the 3D digital twin has been generated, it is placed in a number of virtual scenes under numerous lighting conditions and angles. These virtual scenes generate synthetic data, which is then used to train our synthetic computer vision model. The virtual scenes created by ZIA replicate countless real-world lighting scenarios, product positions and product deformations, allowing our synthetic image recognition solution to produce accurate and reliable results. In other words, for a more cost-effective, faster-to-deploy and more accurate way to generate training data for image recognition, synthetic data is the answer.

What can Synthetic Image Recognition do for CPGs?

By leveraging synthetic image recognition technology, CPG companies can gain valuable insights into their products and how they are marketed and sold. Below we have outlined three ways that synthetic image recognition is benefiting CPGs.

Accurate identification: Synthetic image recognition technology offers unparalleled accuracy when it comes to product detection, far surpassing traditional image recognition. In a matter of seconds, CPGs can acquire high-grade data, and since synthetic image recognition accuracy does not decrease over time (as is the case with traditional image recognition technology) it is dependable and consistent, allowing CPGs to enjoy improved inventory management, product tracking and planogram compliance.

Faster onboarding: Synthetic image recognition solutions make it possible for a CPG's SKU catalogue to be onboarded far faster into the respective image recognition technology. With a synthetic solution, CPGs face almost no downtime when onboarding.
As such, the speed of onboarding for both a CPG's initial catalogue and new SKUs is considerably quicker with synthetic image recognition than with traditional image recognition solutions.

Robust image recognition: Powered by synthetic computer vision, synthetic image recognition solutions such as our ZIA tool can recognise product deformations. Unlike traditional image recognition, which can struggle to identify a damaged product, synthetic image recognition allows products to be detected even when there are defects, inconsistencies or other quality issues affecting the product.

Synthetic image recognition is revolutionising the CPG industry by providing companies with valuable insights into their products, packaging and displays. By leveraging it, CPGs can optimise their product placement, improve shopper insights, ensure quality control, and gain a competitive edge. As the technology evolves, it will become an essential tool for CPG companies, FMAs and SFAs who want to stay ahead of the curve and improve their bottom line.

Why will Synthetic Image Recognition revolutionise Retail Shelf Auditing?

Synthetic image recognition harnesses the power of synthetic data and synthetic computer vision to deliver a truly next-generation image recognition solution. As referenced at the beginning of this article, speed, accuracy, scalability and efficiency are crucial components of any viable software or technology in the retail sphere. Below we have highlighted how synthetic image recognition technology, such as our ZIA solution, offers a revolutionary evolution of retail execution.

Speed

ZIA ensures a streamlined, effortless experience from start to finish. Our onboarding process is significantly faster than traditional image recognition solutions, and we can create 3D digital twins of SKUs with unprecedented speed.
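The artwork-to-model flow described in this article (packaging PDF in, trained detector out) can be sketched as three stages. Everything below is a stand-in: each function body only records what its stage would do, since the real steps involve a 3D renderer and a training job that are outside the scope of a short example.

```python
# Schematic of the artwork-to-model pipeline described above.
# Each stage is a stub; a real pipeline would call a renderer and a
# training framework at these points.

def create_digital_twin(artwork_pdf: str) -> str:
    # Stage 1: build a 3D digital twin from the packaging artwork PDF.
    return f"twin({artwork_pdf})"

def render_virtual_scenes(twin: str, n_scenes: int) -> list:
    # Stage 2: place the twin in n virtual scenes with varied
    # lighting, angles and deformations.
    return [f"scene{i}[{twin}]" for i in range(n_scenes)]

def train_model(scenes: list) -> dict:
    # Stage 3: train a detector on the synthetic images. Every scene
    # is self-labelled, so no manual annotation stage exists.
    return {"trained_on": len(scenes), "annotations_required": 0}

def onboard_sku(artwork_pdf: str, n_scenes: int = 500) -> dict:
    twin = create_digital_twin(artwork_pdf)
    scenes = render_virtual_scenes(twin, n_scenes)
    return train_model(scenes)

model = onboard_sku("cola_330ml_artwork.pdf")
print(model)   # {'trained_on': 500, 'annotations_required': 0}
```

The point of the sketch is the shape of the pipeline: the only human input is the artwork file, which is why onboarding can be measured in days rather than weeks.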
Rather than building datasets from real product images (a process that is often time-consuming, prone to human error, and limited in the number of achievable variations), ZIA uses SKU digital twins created from manufacturing artwork to generate thousands of synthetic image variations that would be impossible to achieve otherwise. With ZIA, onboarding, catalogue creation and model training are lightning-fast. In fact, an FMA can onboard a new CPG customer in just one day and provide a time to market of one week for up to 1,000 SKUs.

Accuracy

Our synthetic image recognition technology is trained using 3D digital models of SKUs in a wide range of virtual scenes with varying product placements and lighting conditions. As such, we are able to deliver more robust image recognition, giving CPGs more accurate and reliable results than traditional image recognition technology. ZIA's product detection accuracy is consistently high, and it stays that way. This is because our image recognition technology is trained using synthetic data, allowing it to learn faster from a larger and more diverse data pool than real data can provide. In addition, if and when accuracy is observed to start declining, new synthetic data can be generated automatically and the model retrained so that its performance is brought back to production levels, eliminating the accuracy drop-off that traditional image recognition is prone to. With synthetic image recognition, you can achieve 95%+ product detection accuracy from the outset, rising above 98% for specific categories.

Scalability

Synthetic image recognition makes scaling product catalogues across multiple locations incredibly easy and efficient. With our cloud-based catalogues, you can quickly upload new SKUs and respond to changing market needs without compromising time-to-market or accuracy. Our streamlined catalogues also make it easy to scale cost-effectively, allowing you to stay ahead of the competition.
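The monitor-and-retrain behaviour described under Accuracy (regenerate synthetic data when observed accuracy declines) can be sketched as a simple rule. The 95% figure is the one quoted in the article; the windowed-mean rule itself is an illustration, not Neurolabs' actual logic.

```python
# Sketch of the monitor-and-retrain rule: if observed accuracy drifts
# below the production target, trigger regeneration of synthetic data
# and a retrain. The 0.95 target mirrors the figure in the text; the
# rolling-window rule is illustrative only.

PRODUCTION_TARGET = 0.95

def needs_retraining(observed_accuracies: list,
                     target: float = PRODUCTION_TARGET,
                     window: int = 3) -> bool:
    """Retrain when the mean of the last `window` accuracy
    measurements drops below the production target."""
    recent = observed_accuracies[-window:]
    return sum(recent) / len(recent) < target

history = [0.97, 0.96, 0.96, 0.94, 0.93, 0.92]
print(needs_retraining(history[:3]))   # False: still at production level
print(needs_retraining(history))       # True: drift detected, regenerate data
```

Because the retraining data is synthetic, the "fix" is cheap: generate a fresh batch of scenes and retrain, rather than dispatching anyone to photograph shelves again.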
Efficiency

The problem with traditional image recognition is that it is an entirely reactive process. An FMA can only provide CPGs with data analysis once an SKU has already hit store shelves, because until then the product cannot be photographed and uploaded to the FMA's image recognition solution. With synthetic image recognition, however, you no longer need real imagery. Instead, you can enjoy day-one support for currently unreleased SKUs by uploading in-production packaging labels to ZIA. At Neurolabs, we understand that many FMAs and SFAs already employ end-to-end solutions for image recognition, making switching to a new solution time-consuming and financially unviable. As such, we have developed our state-of-the-art technology to serve as an intuitive plug-and-play upgrade that can slot alongside any existing solutions and analytics software currently in use. Our solution is designed with efficiency and practicality at the forefront, allowing you to improve your image recognition without overhauling your existing software or entire infrastructure. We are dedicated to making the integration of ZIA as straightforward and efficient as possible. We work with FMAs and SFAs on behalf of CPG companies to make our synthetic IR part of their existing product, allowing CPGs to reap the benefits of a modern, automated solution without having to worry about complex technical details. With ZIA, FMAs and SFAs can quickly and easily integrate synthetic IR into their existing system without laborious onboarding and time-consuming data labelling. This means field reps can continue using their current solution while our API connects to business intelligence and data visualisation tools to help FMAs and CPGs track KPIs and other product-related insights.
Prepare for the future of Retail Shelf Auditing with Synthetic Image Recognition by Neurolabs

Whether you're looking for accurate and reliable image recognition, optimised shelf execution, a streamlined onboarding procedure, or faster shelf auditing delivery times, we are here to help. Synthetic image recognition is the future of retail shelf execution. Check out our new ebook to learn more about how this groundbreaking technology can enhance your workflow and, most importantly, keep your business ahead of the competition.

  • How Synthetic Computer Vision Delivers Faster & More Accurate Image Recognition For Sagra Technology

    Generating virtual products to solve traditional image recognition (IR) problems.

Sagra Technology's Struggle to Improve the Shelf-Recognition Process for Pharmaceutical Goods in Pharmacies

At Neurolabs, we believe that a problem is just a challenge that has yet to find its ideal solution. That is why, when the Polish tech firm Sagra Technology came to us needing a fast-to-implement solution that would let them upgrade the process of auditing pharmaceutical goods as part of their offering, we went from onboarding to project delivery in just seven days. Sagra Technology has been creating mobile solutions to support sales, marketing and analytics since 1998. With more than two decades in the market, they cite their continued success as a matter of speed: they work differently from the trend and stay a technological step ahead of the market. As an innovative solution provider, Sagra recognised the need for a more advanced technology partner, one that could work with them in the challenging pharmaceutical space to significantly reduce their clients' onboarding time while delivering just-in-time image recognition results. With 25 practically identical products to audit, the only difference being the number of doses in each package, Sagra faced a challenge: to deliver results directly to sales representatives during their visits to the pharmacy while achieving the highest level of product detection accuracy. This is when they turned to Neurolabs, whose synthetic computer vision method overcame the limitations of traditional IR to provide the optimal solution Sagra was looking for, with the project completed in a mere seven days, a fraction of the market standard of a few weeks. "We have been embedding traditional IR technology in Emigo, our SFA solution, for some time, but quickly realised we needed to provide results faster and run the IR service for customers in days, not months.
The time-consuming process of preparing the AI recognition model was not a viable solution for us. We needed a solution that could quickly onboard customers and create models significantly faster while maintaining the highest possible accuracy, and of course one able to integrate seamlessly with our Emigo SFA system. Neurolabs' ZIA has been a true game-changer for us. Its synthetic data approach to IR, along with its way of creating models and onboarding new SKUs, has revolutionised our operations!" - Sagra Technology

The Synthetic Data Difference

Sagra was interested in our technology but wanted to test whether it could deliver results in seconds instead of hours. With the classic method of IR, you need to collect a painstakingly large amount of real data. Once collected, this real data must be annotated manually by drawing bounding boxes over each product to identify it as a unique SKU. Only after the arduous process of collecting a large set of annotated images can you feed an algorithm that will then be able to detect the products in new images. At Neurolabs, however, we do things differently. Our product, ZIA (Zero Image Annotations), is built in such a way that we don't annotate real data at all. Instead, we generate synthetic data and 3D models, allowing for a faster solution that has proven more robust and reliable than real data, delivering Sagra Technology 98.3% product detection accuracy. And these results are not atypical for our clients: our technology achieves an average visual detection accuracy rate of 95% from the outset, which then increases to above 98% for specific categories. To alleviate any potential anxieties about the effectiveness of our image recognition technology, we also offer a quality audit service facilitated by an outside, independent provider to prove that the accuracy of the Neurolabs service meets the requirements laid out in the agreement.
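The manual step described above (drawing a bounding box over each product and labelling it as a unique SKU) produces records along the lines of the following. The format is a common bounding-box convention used for illustration; the SKU names are invented, and this is not Sagra's or Neurolabs' actual schema.

```python
# One manually annotated shelf photo in a typical bounding-box format:
# every product instance gets a box [x, y, width, height] in pixels and
# a SKU label. SKU names here are invented for the example.
annotated_image = {
    "file": "pharmacy_shelf_017.jpg",
    "objects": [
        {"sku": "drug-A-10-doses", "bbox": [40, 30, 70, 160]},
        {"sku": "drug-A-20-doses", "bbox": [115, 32, 70, 158]},
        {"sku": "drug-A-10-doses", "bbox": [190, 30, 70, 161]},
    ],
}

def label_counts(image: dict) -> dict:
    """Tally annotated instances per SKU. Under the traditional
    approach, a human must produce one such record per photo."""
    counts = {}
    for obj in image["objects"]:
        counts[obj["sku"]] = counts.get(obj["sku"], 0) + 1
    return counts

print(label_counts(annotated_image))
# {'drug-A-10-doses': 2, 'drug-A-20-doses': 1}
```

Multiply one record like this by the thousands of photos a traditional pipeline needs and the cost of manual annotation becomes clear; it is exactly this step that ZIA's synthetic approach removes.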
Taking real images and annotating them can be costly and time-consuming, resulting in multiple delays depending on the quality and quantity of the captured images. Not only is real data time-consuming to collect; implementing these traditional IR technologies can be painfully slow. Thanks to ZIA, Sagra's project was delivered in only a week. ZIA is built to onboard quickly: it can add a CPG customer in as little as one day and achieve a time to market of one week for up to 1,000 SKUs. Integration was another key factor in delivering the project so fast. Sagra was able to seamlessly integrate ZIA with its existing solutions and tech stack via cloud APIs. For Sagra, Neurolabs' synthetic approach to IR helped them shift from a reactive to a proactive approach: with ZIA, they could upload new products and packaging changes before they even hit store shelves, resulting in time savings and improved operational efficiency.

A Change-proof AI Recognition Model and Accurate Product Data

One of the struggles Sagra previously encountered was the process of adding new SKUs and the time-to-learn period the AI model needs in order to detect new SKUs while maintaining the required accuracy. When a product is out on store shelves, it is impossible to guarantee that it will always appear as it does in real data imagery. Depending on the product, there is a likelihood that the item might be picked up, rotated, deformed, opened, or even moved to another shelf with unfavourable lighting. As a product is rarely found exactly as it was displayed, the traditional approach to IR can struggle to recognise it. Minor differences in product packaging also make it difficult, or sometimes impossible, for traditional IR to distinguish between products. For Sagra, whose 25 SKUs are almost identical, this presented a major challenge.
Neurolabs' technology, however, was quickly able to distinguish between the products, which differed only in where the dosage was displayed. Using synthetic data allows our algorithm to learn from millions of varied positions and angles, guaranteeing a high level of product detection accuracy when field reps perform shelf audits in-store with their cameras, iPads, or smartphones, regardless of the SKUs' placement on the shelf.

Synthetic Computer Vision in Action

To begin, Sagra supplied our team with PDF images of the packaging labels for all 25 SKUs. Once we had received the packaging images, we used ZIA to create synthetic 3D assets of the 25 SKUs, which were then trained and tested within a virtual environment. Our testing phase lasted approximately five days, allowing us to extensively train our machine learning algorithm to detect the subtle differences between each label. Once the differences could be distinguished, we were able to construct auditing scenes using synthetic data. Tests conducted by Sagra found that our algorithm could detect the products with 98.3% accuracy while delivering end results within 5 seconds on average, far exceeding Sagra's expectations.

"Neurolabs not only managed to deliver a viable solution to our challenge in an unprecedentedly fast and efficient manner, but they also did so with near-perfect accuracy. We faced some challenges with some pictures (blurs, pictures from above, glares, products covered by shelving elements), but we are well aware of them and are working together to address them (e.g. providing alternative predictions and improving the data collection process)." - Sagra Technology

The Perfect Partner

Following our successful initial project together, we are proud to announce that we will continue working alongside Sagra on future endeavours as we collectively aim to improve execution and speed up the adoption of IR in Poland.
With a specific focus on customers in Pharma and Fast-Moving Consumer Goods (FMCG), our joint goal is to build a viable solution that caters to customers of all sizes throughout the respective sectors. If you're looking for a way to reduce product detection errors in your shelf auditing, get in touch with our team to learn how ZIA can easily integrate into your current solution. We'll take you through our technology and show you how it can help. At Neurolabs, we are revolutionising in-store retail performance with our advanced image recognition technology, ZIA. Our cutting-edge technology enables retailers, field marketing agencies and CPG brands to optimise store execution, enhance the customer experience, and boost revenue as we build the most comprehensive 3D asset library for product recognition in the CPG industry.

  • How Synthetic Data is Transforming the Way We Detect Product Deformations

    Neurolabs CTO, Patric Fulop, explains. At Neurolabs, we're creating the technology that enables us to train Computer Vision algorithms by programmatically simulating realistic deformations in a wide range of packaging and materials. In this article, I'll discuss how this cutting-edge technology works when applied to identifying Consumer Packaged Goods (CPGs).

  • What Are Deformations?
  • What Problems Can Deformations Cause?
  • The Benefits of Synthetic Data
  • Pioneering Automated Deformation
  • Faster and More Efficient Shelf Audits

What Are Deformations?

Packaging for CPGs can be broadly classified into two categories: soft and rigid. Deformations typically happen in the former and are essentially changes in the shape of soft body objects. Soft body objects include packaging such as stand-up pouches with a ziplock, vacuum pouches (such as crisp packets), and laminated tubes (such as a tube of hand moisturiser). These are popular packaging materials because they can be modified or customised with ease and are typically manufactured at low cost. The downside, however, is that they offer minimal protection from deformations, which has become a major issue when it comes to product detection in the CPG space.

Rigid packaging is not immune to deformations either. Packaging such as aerosol spray cans, beverage cans, and bottles can also be impacted by deformations such as denting. In this article, I will focus on deformations that can occur while a soft body SKU is on the shelf, so we won't be discussing rigid object deformations.

What Problems Can Deformations Cause?

For retailers, product packaging deformation causes problems with stock management technology and can prevent you from knowing what's happening on the shelf. With rigid objects, such as a box, the variations in which you can find the SKU are limited. It can rotate, making it a somewhat simpler task for the computer vision technology to recognise the correct SKU in its varying positions.
With soft body objects, computer vision technology finds this much more difficult. If I picked up a packet of crisps from the shelf, I could deform it in far more ways than a rigid SKU. This could be anything from wrinkling the packaging to setting it back in a position where the store light now reflects off the label, making it visually unrecognisable to the image recognition (IR) software. When I place the packet back on the shelf, it may have a deformation the computer vision technology has not encountered before, and it would then no longer be recognised as a unit of stock.

The traditional approach to solving this problem has relied on collecting large amounts of training data. Many images are taken of an SKU in various deformed states, annotated, and fed into the computer vision technology. With this real data approach, collecting and annotating the images is time-consuming, costly, and prone to human error. Deformations are not a simple challenge to solve, and while real data presents one possible solution, it is far from perfect, taking up a lot of time and often still delivering inaccurate results. The number of variations that can occur in soft body packaging is simply so high that it would be almost impossible to photograph and annotate a product in every possible variation.

When deformations occur in a variation for which no real-world data has been collected, computer vision technology struggles to recognise it. The unrecognised CPG leads to inaccurate data being reported, resulting in stock management issues such as overstocking or understocking. More importantly, unreliable data erodes users' trust in the very technology designed to help them improve their efficiency. In fact, it can become a burden that slows them down. These data blunders are far from insignificant. According to BusinessWire, retailers lose a mind-blowing amount of revenue each year due to poor inventory management.
The total loss from out-of-stocks reaches $634.1 billion each year, and from overstocks, $471.9 billion. Combined, that makes annual losses of over $1 trillion. With such high stakes, it is only logical for Field Force Managers and technology vendors such as ourselves to strive for a better alternative by pushing the limits of what is achievable.

The Benefits of Synthetic Data

There is little doubt that real data is proving highly limited in the solutions it can provide. This is why, at Neurolabs, we decided to pioneer the use of synthetic data for image recognition of CPGs. Synthetic data can be used as a (better) alternative to real-world data: it mimics real-world patterns and offers a faster, more diverse, and more accurate way to create a database of imagery of a deformed product. Neurolabs' process for deforming virtual products can simulate the realistic deformation of an item and create thousands of variations of the deformity using synthetic computer vision.

Neurolabs' IR system, known as ZIA (Zero Image Annotations), is powered by a combination of synthetic data and modern computer vision, with the ability to circumvent the need for real data altogether. It turns SKUs into digital 3D models for IR algorithms to learn from and reference, eliminating the need for time-consuming data collection. For example, below are programmatically generated synthetic images of a crisp packet. The underlying 3D model of the crisp bag has been deformed automatically to increase variation during the synthetic image generation process. When automating the deformed product variants programmatically to train our algorithm, one of our main challenges lies in ensuring that the generated images are as realistic as possible, so that real-world products are identified accurately.
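To make the idea concrete, here is a minimal sketch of how one soft-body "crumple" could be generated programmatically. It is illustrative only, assuming a per-vertex triangle-mesh representation, and is not Neurolabs' actual deformation engine: each vertex is displaced along its normal by a smooth random field, so every random seed yields a new, plausible variant of the same packet.

```python
import numpy as np

def deform_mesh(vertices, normals, amplitude=0.02, frequency=3.0, seed=0):
    """Displace each vertex along its normal using a smooth sinusoidal field.

    vertices: (N, 3) array of mesh vertex positions.
    normals:  (N, 3) array of per-vertex unit normals.
    Each seed produces a different plausible deformation of the same mesh.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, size=3)  # random phase per axis
    # A smooth scalar field over the surface: sum of sinusoids of the coords.
    field = np.sin(frequency * vertices + phase).sum(axis=1)
    return vertices + amplitude * field[:, None] * normals

# Toy flat "packet" mesh: a 10x10 grid of vertices in the z = 0 plane.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10)), -1)
verts = np.concatenate([grid.reshape(-1, 2), np.zeros((100, 1))], axis=1)
norms = np.tile(np.array([0.0, 0.0, 1.0]), (100, 1))  # flat sheet: +z normals

# Generate many deformed variants of the same packet, one per seed.
variants = [deform_mesh(verts, norms, seed=s) for s in range(100)]
```

In a full pipeline, each variant would then be re-rendered under varied lighting and camera angles, multiplying the training variation at zero annotation cost.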
Because our AI is trained on synthetic data and a multitude of programmatically generated images of deformations, that data needs to be as realistic as possible. This way, when a field rep is in the store and takes a photo, our technology is advanced and accurate enough to detect deformed products on the shelf. We are adding the recognition of deformations to our repository and to our synthetic data approach using 3D assets. Although it's still early days, and the field of geometric deformations is vast and challenging, preliminary results show this is a promising approach for increasing variation and improving the performance of computer vision models. We are striving to solve the issues seen in the CPG space, and deformations are another step in our journey.

Pioneering Automated Deformation

Our product, Neurolabs' ZIA (Zero Image Annotations), is leading the way in automated geometric deformation and revolutionising accurate product detection in retail. ZIA tackles the challenge of ensuring that the products and deformations rendered from synthetic data obey the laws of physics and that their materials react as they would in the real world. Geometric deformation has been explored in different industries with incredible results, and the level of detail it is possible to emulate is staggering. For example, some of the papers presented on the YouTube channel Two Minute Papers demonstrate simulation methods that render the tearing of meat in an astonishingly life-like way. As a technology, geometric deformation is being explored and put to numerous uses. However, Neurolabs is the only company looking at simulating deformities using synthetic computer vision and applying them within the CPG space. Deployed in the real world, ZIA recognises products in scope with over 95% accuracy, notably higher than traditional IR, and this increases to above 98% for specific categories.
The solution ZIA delivers lets users upload imagery of SKUs and receive a complete computer vision model that detects products in the real world, along with a viewable 3D asset reflecting each real-world product. The software is easily integrated into the existing solutions and apps that Field Marketing Agencies use, simply enhancing your existing toolkit.

Faster and More Efficient Shelf Audits

Ready to start implementing a faster and more efficient shelf auditing process? Get an exclusive preview of ZIA today! Get in touch and one of our team will guide you through ZIA and how it can seamlessly integrate into your current solution and reduce product detection errors in your shelf auditing.

  • Neurolabs CEO Wraps Up a Year to Remember and Reveals Exciting Plans for 2023

    A letter from Neurolabs CEO and Co-Founder Paul Pop. This past year has been one to remember for us at Neurolabs. Despite challenging market conditions, we successfully transitioned from a high-end research lab to a functioning business with deliverable products and a strong recurring customer base. We have achieved incredible success in such a short span of time, and it would not have been possible without the hard work and dedication of our team. In this post, I'd like to reflect on the achievements of the past year and look at what's ahead for 2023.

From Proof to Product

As we entered 2022, we had a clear set of challenges to overcome: mainly, we needed to better prove the value and potential behind synthetic data, and we needed to show that it could be made accessible as a product. At the beginning of the year, we had a sort of "ivory tower" proof, based on delivering an inventory-management solution for supermarkets using fixed cameras. The issue with this use case was that it was very constrained in terms of complexity and hardly showcased the necessity for synthetic data, let alone its potential when put up against real data and manual annotations. Our priority thus became to tackle more challenging scenarios that pushed the limits of what was thought possible in retail automation: we adapted our tech to work with more diverse products and environments and made it compatible with the simple mobile phone cameras that any field marketing agency has access to. As our product and platform developed, we began to enter a wider range of partnerships. We went from dealing with one-off requests to forming long-term relationships and gaining recognition from influential players in the industry.

Finding the Ideal Customer

Garnering more market attention allowed us to reconsider who our ideal clients might be and how we hoped to interact with them.
To give an example, my favourite success story of the year was our work with Sagra, a Polish company that needed a way to audit pharmaceutical products on in-store shelves. Their challenge was that these products, packaged in cardboard boxes, were essentially identical, the only variance being the number of pills per box. The team at Sagra had access to PDFs of their products' packaging labels, which is all we needed to quickly create accurate 3D models to train our algorithms. This was, to put it mildly, extremely exciting for us. Not only were we able to train and deploy image detection algorithms for these boxes in no time, but Sagra had just the right challenge and materials to integrate perfectly into our pipeline. We realised that probably nobody else out there was going from PDFs to production-level computer vision algorithms this quickly, and that maybe nobody else could. This made me extremely proud and also gave us a clear direction to move forward in terms of our services and tech.

Partnerships like this also helped us better understand our position within the retail space. Previously, we had found it extremely hard to interact with retailers and CPGs themselves, often due to our strong tech focus and background. By taking a step back and instead working with solution providers, we were able to lower this transactional friction and more easily communicate with people who have their own developers, are building their own apps, smart carts, or autonomous stores, and who understand the complexity of AI. In short, people who speak our language.

Our Next Goals

This puts us in a great spot to fulfil our three main objectives for 2023. Firstly, we want to find more clients like Sagra who fit perfectly into our pipeline and for whom our platform is an ideal fit, and with whom we can work in the long run, on a recurring basis.
We will focus the first six months on this effort, looking mainly at businesses in Europe and some in the US. Secondly, for the first six months of the year, we will be exploring ways to expand our tech's reach and make it more accessible to a broader customer base. We come across many potential clients that have the same end problem as Sagra but don't have packaging labels at hand, or that are working with products that are more complex to model as 3D assets. Thirdly, we want to expand our marketing and online presence to make it easier for a broader range of businesses to learn about our tech. We've been in touch with a lot of companies facing similar problems to our current clients, but who are working on slightly different forms of retail automation: self-checkouts, autonomous stores, or warehouse management, for instance. Our platform is pretty much plug-and-play for these use cases, and we are looking to organically increase our presence in these areas.

Other Predictions for the Year Ahead

I would be remiss not to mention the current market conditions. On the one hand, smaller consumer budgets heighten the need for perfect in-store execution. On the other hand, we're seeing a clear hit in terms of fundraising, valuations, investments, and so on. Because of this, I believe that near-future advancements in retail will likely be pushed for by big, established players. Ideally, we want to integrate these companies' resilience into our own DNA while still maintaining the agility and tech focus that makes us stand out. The bleeding edge of automation and digitisation is always evolving, so we're looking at a lot of organic improvements that have already started to become part of our business. We are able to render increasingly realistic 3D models, taking into account factors such as opacity, refraction, and deformation.
Our algorithms are constantly improved upon, and we will continue to push for the highest levels of detection accuracy while also expanding the range of products our algorithms can handle at a time. That being said, we're hoping for a true leap in capability when it comes to digitising products and objects (that is to say, easily turning real-world objects into virtual 3D models). For years now we've told ourselves that a breakthrough is just around the corner, that a new iPhone, LiDAR sensor, or even the latest NeRF algorithm will provide a seamless end-to-end process for creating digital twins of anything you want. So far, though, we've been left waiting for the adoption barrier to drop low enough for this technology to go mainstream.

A Message for the Readers

Looking back at what we've achieved, I would like readers to understand the following: synthetic data works at a production level. It's not just some cool experiment that got funding and will live and die in a lab. I'm convinced that computer vision is the next big tech to be democratised and made available at the fingertips of anybody interested, and we consider this a crucial part of our work at Neurolabs. We are computer vision experts first and foremost and, much like Wix or Squarespace have lowered the barrier to building a website over the last 15 years, we aim to enable citizen developers to easily create and adopt computer vision algorithms for their own benefit, be it in the retail space or otherwise.

As we close out 2022, I'd like to extend a heartfelt thank you to our incredible team and loyal customers for all their hard work and support this year. Wishing you all a happy, healthy, and prosperous new year!

Paul Pop
CEO, Neurolabs

About Paul: Paul Pop is a Co-Founder and CEO of Neurolabs, overseeing the company's finances, fundraising and product management. To learn more about Paul, visit his LinkedIn profile here or book a demo to chat with a member of the team today.

  • Why Retail Is Set To Adopt the New Generation of Image Recognition

    In this article, we will discuss the drawbacks of legacy Image Recognition (IR), where they stem from, and the potential of synthetic data to equip retail for a new era of shelf auditing and automation.

  • The Limitations of Legacy IR
  • The Root of the Problem: Manual Processes
  • The New Generation of Image Recognition
  • Market Advantage for the Taking

The Limitations of Legacy IR

When Image Recognition was first introduced into retail, it came with great promises of value across the board: more data, clearer analytics, faster insights, and automation instead of manual effort. Years later, the industry has adopted a more sober perspective, the complexities and dynamic state of retail having taken their toll on the dream of easy and accurate in-shelf analytics. Before going into how we can move away from legacy IR, let's look at how it fell short of the retail industry's demands and expectations. Retail Field Execution Companies (RFECs for short) typically cite the following three pain points when dealing with legacy IR:

  • Slow deployment speed
  • Low accuracy
  • Lack of scalability

It can take months to introduce an Image Recognition model to a new catalogue of SKUs, with the resulting analytics rarely operating at more than 95% accuracy: a wide enough error margin to prevent mass adoption of Image Recognition in retail. To make matters worse, as product catalogues exist in a constant state of flux (prone to frequent label changes, re-brands, and seasonal promotions), the long time frame to adopt even a single new SKU creates a constant uphill battle for RFECs; just when timely and up-to-date analytics are most critical, it can take days to adopt a changed SKU into a legacy IR system. Lastly, unlike many other technologies, legacy IR does not profit from economies of scale, meaning that RFECs are left with high fixed costs for each new retail location and product category they expand to.
This makes legacy IR's adoption daunting at best and commercially unviable at worst. Below are some direct quotes we hear time and time again from our clients regarding legacy IR and its most common pain points.

The Root of the Problem: Manual Processes

Visual detection algorithms require contextual information to learn and improve over time; with legacy IR, this additional data is provided through the manual annotation of real-world images. However, both the accumulation of real-world images and their annotation are time-consuming and expensive. When adopting a set of new SKUs takes days or weeks, this is where the delay is caused, and there are no available shortcuts. These workflows aren't only slow but also entirely reactive: a new SKU can only be photographed and annotated once it has already hit in-store shelves, so there are no results or analytical data from day one. Lastly, the prowess of legacy visual detection models relies on workers' ability to correctly identify and interpret every piece of visual information forwarded to the AI. When dealing with pictures of cluttered shelves, half-hidden products, and bad lighting conditions, this becomes a difficult and time-consuming exercise, even for the most qualified of annotators. Although legacy IR providers such as Trax and Parallel Dots have attempted to meet client expectations and market demands, they have not been able to do so consistently due to their reliance on real data.

The New Generation of Image Recognition

We've recently seen the advent of a new wave of Image Recognition: systems powered by a combination of synthetic data and modern computer vision, with the ability to circumvent the need for real data altogether. This leap in automation technology has led to large shifts in focus across a variety of industries and applications, including retail.
Rather than relying on a collection of manually annotated real-world images, these systems turn SKUs into digital 3D models for Image Recognition algorithms to learn from and reference. The needed contextual data, specifically which products are which and where they are positioned within the 3D space, is provided automatically by the variety of virtual environments these digital SKUs are placed in. In short, synthetic data replaces the need for real data, which is largely inaccurate, quick to expire, and expensive and time-consuming to collect. Beyond outperforming legacy IR solutions in every respect, staying native to a digital environment also unlocks entirely new possibilities for RFECs and their clients alike. For instance, for the first time ever, it is possible to train models for changes in SKU labelling and branding before those changes reach real-world shelves, allowing for proactive initiatives and zero-delay analytics. Below is a short overview of how things happen in the new paradigm. You can also explore our case study with IPP to take a closer look at a real-world deployment of this technology.

Market Advantage for the Taking

When legacy IR became the norm in retail, it provided a valuable stepping stone for modern shelf auditing; however, while it proved automation tech's potential, it never fully delivered on its promises. Now, with the adoption of synthetic data, we are able to usher in an entirely new era of Image Recognition, making good on old promises and once again expanding our understanding of what is possible for retail. As with any large tech shift, some providers may be slow to embrace the new generation of Image Recognition, but CPGs and retailers will readily move on from real data systems to synthetic alternatives once the full weight of upcoming recessions hits their customers and in-shelf performance.
Sped up by these economic developments, and following trends across various other industries, it won't take long for synthetic data and computer vision to establish themselves as go-to solutions within the retail space. At Neurolabs, we are proud to be at the forefront of this tech revolution, busily providing RFECs with the new generation of Image Recognition. Our web-based platform is the first comprehensive destination for RFECs to take advantage of new-wave IR within the shelf-auditing space, and our clients are ambitious market leaders who have gained significant and sustainable advantages over their competition. As experts in computer vision and synthetic data at heart, we look forward to continuing to showcase the technology's potential and to being the #1 partner for the pioneers of the retail space. To explore the benefits of new-wave Image Recognition first hand, book a demo here. You can also explore our blog to learn more about our technology and its possible applications. Retailers worldwide lose a mind-blowing $634 billion annually due to poor inventory management, with 5% of all sales lost due to out-of-stocks alone. Neurolabs helps optimise in-store retail execution for supermarkets and CPG brands using a powerful combination of Computer Vision and Synthetic Data, improving customer experience and increasing revenue. Our goal is to build the largest 3D asset repository for the CPG space.

  • How We Transformed IPP’s Shelf Auditing With Synthetic Image Recognition

    In this case study, you'll learn how synthetic computer vision helped Instore Power Provider (IPP) overcome the common pitfalls of traditional image recognition to cement its leadership position in the retail automation space. Founded in 2006, Instore Power Provider (IPP) is a European field marketing agency and a leader in its industry. Focused on cutting-edge solutions, they offer a suite of technology-driven retail execution services, from Point of Sale Materials (POSM) management to merchandising and automated shelf auditing, supporting both Consumer Packaged Goods (CPG) brands and retailers. IPP has been providing CPGs with services for more than 15 years and prides itself on consistently staying up to date with the global, social, and technological changes affecting the modern retail space. Because of this, they are a trusted partner of leading brands such as Heineken, P&G, Nestlé, PepsiCo and more. Catalin Bratu is IPP's CTO, in charge of the company's innovation and technology adoption, with extensive experience across various retail automation technologies, including image recognition. Catalin's ambition is to elevate IPP to the status of #1 trusted store execution partner in the CPG space and expand its operations to a regional level.

The Challenge: Outdated Tech and Competitive Markets

Recently, IPP have found themselves at the centre of a two-sided challenge. On the one hand, CPG brands are scrambling to maintain in-store performance in the wake of global economic downturns, with increased levels of inflation and long-term recessions still looming on the horizon. On the other hand, retail solution providers are struggling to keep up with the advancements in digital automation that could, in theory, empower CPG brands to improve their in-store execution and weather these difficult market conditions.
This issue particularly applies to the field of traditional image recognition. While the technology has become a cornerstone of shelf auditing and automation, it relies on manually annotated data, causing several drawbacks for retail-based applications:

  • It is costly and prone to human error
  • It is not designed to scale across locations or large and dynamic product catalogues
  • Adoption and real-world deployment are a long process

IPP had been eyeing traditional image recognition for a while but became increasingly aware of its limitations. After performing trials with some popular providers, they concluded that its adoption would be neither scalable nor commercially feasible.

"We found out the hard way that many retail-based image recognition solutions rely on outdated tech and implementation, despite being positioned as go-to's for Field Agencies and CPGs. This simply wouldn't cut it for us." - IPP

In short, they were on the hunt for a new approach to retail-based image recognition. They wanted a solution that would overcome the pitfalls of manually annotated data and significantly improve their shelf stocking and auditing services. When they reached out to us, they were interested in how our synthetic data-based technology could help them achieve this goal.

"It became clear that moving away from the constraints of traditional image recognition would put us far ahead of the curve in terms of the value we could provide for CPGs and retailers alike. We saw Neurolabs ZIA as a huge opportunity for us." - IPP

The Solution: A New Generation of Image Recognition

At Neurolabs, we're computer vision experts first and foremost. That means our technology and solutions stay native to the digital space, where they profit from higher fidelity and flexibility. Rather than relying on a collection of manually annotated real-world images, our platform turns SKUs into digital 3D models for our image recognition algorithms to automatically learn from and reference.
This approach enables faster, more accurate, and more robust adoption of new SKUs into a product catalogue, and also greatly improves scalability and cost efficiency. Practically, this means that with our platform you can:

  • Complete a real-world deployment in less than 3 weeks (the market standard is 2-3 months)
  • Onboard a new SKU in 1 day or less (4x faster than the market average)
  • Achieve 95% accuracy for SKU-level detection from the outset, increasing to above 98% for specific categories

Moreover, our technology ZIA is built to scale from day one:

  • Out of the gate, we provide access to over 100k pre-saved SKUs on our platform
  • Our system is designed to create full product catalogues of thousands of SKUs in less than 12 weeks

Despite our technology's complexity, we keep adoption simple and streamlined. Below is a 4-step overview of how ZIA turns real-world SKUs into digital models and how we use these to power image recognition:

Step 1 - Data Collection. Using our platform, our clients can either upload an SKU's printing label or a simple 6-picture sample of the product in question. This content can be sourced directly from CPG brands or via so-called brand banks.

Step 2 - 3D Assets. Using this digital content, we generate a digital twin of each SKU. These high-fidelity 3D models reflect the real features of each product, including sizing, material, reflectivity, and more.

Step 3 - Synthetic Data. Next, we automatically populate a variety of digital scenes with our newly adopted SKUs. Rather than using volatile real-world settings, we use these adaptable environments to train our image recognition algorithms.

Step 4 - Trained Algorithm. After training, our final algorithms can accurately detect the digital SKUs' real-world equivalents in a variety of store settings, lighting conditions, and more. From here on out, our clients can choose which metrics and KPIs to track and analyse.
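What removes the human annotator from this workflow is that the scene generator places every digital SKU itself, so it already knows each product's exact position. As a rough illustration only (a toy pinhole-camera sketch with made-up coordinates, not the platform's actual renderer or API), ground-truth bounding boxes can be derived for free by projecting each SKU's known 3D corner points into the image:

```python
import numpy as np

def project_bbox(corners_3d, focal=800.0, cx=640.0, cy=360.0):
    """Project an object's 3D corner points (camera coordinates, z > 0)
    through a pinhole camera; return its 2D box (xmin, ymin, xmax, ymax)."""
    x, y, z = corners_3d[:, 0], corners_3d[:, 1], corners_3d[:, 2]
    u = focal * x / z + cx  # perspective projection, image x
    v = focal * y / z + cy  # perspective projection, image y
    return (u.min(), v.min(), u.max(), v.max())

# Synthetic scene: the generator knows each SKU's placement exactly, so
# every rendered image comes with perfect labels and zero manual annotation.
scene = {
    "cereal_box": np.array([[-0.1, -0.2, 2.0], [0.1, -0.2, 2.0],
                            [-0.1,  0.2, 2.0], [0.1,  0.2, 2.0]]),
    "crisp_bag":  np.array([[ 0.3, -0.1, 1.5], [0.5, -0.1, 1.5],
                            [ 0.3,  0.1, 1.5], [0.5,  0.1, 1.5]]),
}
labels = {sku: project_bbox(pts) for sku, pts in scene.items()}
```

Because the labels are computed rather than drawn by hand, they stay pixel-perfect even for occluded or oddly lit renders, which is precisely where human annotators struggle.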
Real-World Deployment

We proposed a joint trial to showcase ZIA's prowess and ease of use in a real-world setting. Together with IPP, we decided to focus on a single product category (detergent) containing a catalogue of 455 SKUs in total; the in-store proofs would be preceded by three weeks of digital onboarding. We aimed to highlight four key aspects of synthetic data-powered image recognition:

- Production-level detection accuracy for the SKUs in scope
- Shelf KPIs covering facing count, out-of-shelf rate (OOS), share-of-shelf (SOS), and shelf-planogram compliance
- A standardised KPI collection process across multiple stores
- A faster, more efficient shelf auditing turnaround for field representatives

"We were genuinely surprised at how quickly we got things going. Having the Neurolabs team available for support every step of the way was one of the most notable differences to previous providers we had trialled." - IPP

After only three weeks of joint tech validation, we tested IPP's new image recognition across 77 in-store locations; of those three weeks, only a few days were needed to add all 455 SKUs to our system. During this time, the IPP field force captured ~1,300 images per day. Our weekly assessments showed the image recognition performing at a 97.6% accuracy rate, and we stayed fully available for technical support, addressing any issues within 24 hours.

Using synthetic data for image recognition provides IPP with a drastic and, most importantly, sustainable market advantage. Our solution created immediate value across the entire product life-cycle, not only benefitting IPP but drastically improving the services they can offer their core clientele: retailers and CPGs, who today are more reliant on perfect store execution than ever.
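For illustration, the shelf KPIs named in the trial scope can be derived from per-SKU facing counts. This is a minimal sketch with assumed data shapes and simplified formulas, not IPP's or Neurolabs' actual computation:

```python
# Toy computation of the shelf KPIs mentioned above (facing count,
# out-of-shelf rate, share-of-shelf) from detection counts.
# Data shapes and formulas are simplified assumptions for illustration.

def shelf_kpis(facings, expected_skus):
    """facings: {sku: detected facing count}; expected_skus: planned SKU list."""
    total_facings = sum(facings.values())
    missing = [sku for sku in expected_skus if facings.get(sku, 0) == 0]
    oos_rate = len(missing) / len(expected_skus)              # out-of-shelf rate
    share_of_shelf = {sku: facings.get(sku, 0) / total_facings
                      for sku in expected_skus}               # SOS per SKU
    return {"facing_count": total_facings,
            "oos_rate": oos_rate,
            "share_of_shelf": share_of_shelf,
            "missing_skus": missing}

kpis = shelf_kpis({"detergent_A": 6, "detergent_B": 2},
                  ["detergent_A", "detergent_B", "detergent_C"])
# detergent_C is missing, so one of three expected SKUs is out of shelf
```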
IPP will be rolling out a large-scale deployment of our image recognition technology in 2023; their initial focus will be a collaboration with a renowned global CPG brand, encompassing a product catalogue of 1700 SKUs. We will remain at IPP's side to ensure a frictionless deployment for both IPP and their partner. Our flexible platform will enable both parties to easily adapt to new situations and environments, tackle complex challenges, and expand product catalogues in a matter of days. They will also benefit from the constant improvements made to our algorithmic models, ensuring a lasting top standard in the accuracy of their analytics.

Next Steps: The Future of Retail Automation

We are excited to partner with companies like IPP to pioneer a new generation of shelf auditing and automation. While traditional image recognition cannot keep up with the advancements of modern retail automation, synthetic computer vision not only keeps pace with the possibilities in this field but expands them. We are confident that early adopters will cement their foothold in the retail automation space for years to come.

"With Neurolabs ZIA, we've been able to unlock an entirely new set of opportunities for IPP and our clients. We would recommend their platform to any business that’s serious about their image recognition and shelf auditing." - IPP

Want to experience the advantages of synthetic data firsthand? Book a demo here. You can also take a closer look at how our technology works and why it beats traditional image recognition solutions.

At Neurolabs, we are revolutionising in-store retail performance with our advanced image recognition technology, ZIA. Our cutting-edge technology enables retailers, field marketing agencies and CPG brands to optimise store execution, enhance the customer experience, and boost revenue as we build the most comprehensive 3D asset library for product recognition in the CPG industry.

  • How are Digital Twins used in retail?

Unlocking real-world automation potential using virtual products

In retail, Digital Twins replicate real-world Consumer Packaged Goods and Fast Moving Consumer Goods in a virtual environment. This provides computer software like Synthetic Computer Vision with the necessary visual data (Synthetic Data) to learn how to recognise those products in images and videos.

In traditional Computer Vision, the most widely used version of the technology, an algorithm is trained to detect real-world objects using hundreds or thousands of real images of those objects, such as images of a supermarket product. Sourcing and preparing this high-quality training data is extremely costly and time-consuming, so it is not feasible to adapt and scale traditional Computer Vision to the demands of most retailers, and unrealistic for most companies to consider its use for domain-specific applications such as retail.

Synthetic Computer Vision is not burdened by this barrier to adoption, i.e. access to the necessary training data, because it does not rely on real data to train its algorithms. Instead, it is powered by Synthetic Data: a virtual recreation of real-world data that is used to train Computer Vision models to detect real-world objects.

For real-world object detection in retail, Synthetic Data encompasses rendered images and videos of a 3D digital twin of a real-world Stock-Keeping Unit, including the virtual supermarket scenes in which it is placed. This data represents the attributes of the product as well as the retail environments in which it may be found in real life. It is used to train Synthetic Computer Vision models to detect that real-world product for the purposes of automating in-store processes such as Shelf Auditing and Shelf Monitoring.
Retailers worldwide lose a mind-blowing $634 Billion annually due to the cost of poor inventory management with 5% of all sales lost due to Out-Of-Stocks alone. Neurolabs helps optimise in-store retail execution for supermarkets and CPG brands using a powerful combination of Computer Vision and Synthetic Data, called Synthetic Computer Vision, improving customer experience and increasing revenue.

  • How Akcelita used Neurolabs to Improve Image Recognition for their CPG Clients

Optimising In-Store Retail Execution with Synthetic Computer Vision

Akcelita is a U.S.-based technology consultancy that specialises in using next-generation technology to solve real-world problems. Specifically, they focus on solutions that help Fast Moving Consumer Goods (FMCG) clients increase revenue and improve customer experience.

The Problem

Akcelita needed image recognition software that could monitor very large numbers of retail products in order to build an Out-of-Shelf and Planogram Compliance solution for their clients. They had experimented with training computer vision models themselves using a traditional approach: collecting hundreds of real images of each product and processing them for image recognition training, i.e. training an algorithm to detect Consumer Packaged Goods (CPG) products in images based on the images they had collected and labelled. They found this process both extremely time-consuming and lacking in quality. Collecting and classifying all the real images they would need meant time became their biggest pain point, and it was also considerably difficult to ensure the collected images were of high enough quality to make for an effective product detector. Garbage in means garbage out when it comes to image recognition.

What they needed were robust computer vision models that could detect any CPG products they required, along with the ability to create and update these models easily and at speed, so as to maintain flexibility with their clients and save time. Spending days collecting and classifying real images for each use case was, therefore, out of the question.

A Solution in Sight

On their search for relief from their computer vision challenges, they discovered a novel approach to the problem with Neurolabs. Neurolabs uses Synthetic Data to train computer vision models to detect CPG products in the real world.
This saves teams the hassle of collecting and classifying countless real images, as well as the laborious process of training a computer vision model with that data to detect a product on a supermarket shelf. From the get-go, they were impressed with the Neurolabs team, their speed and responsiveness, and how easy they were to work with. The fact that Neurolabs already had a seamless pipeline in place to solve the exact problem Akcelita was trying to solve gave them great faith that they were on to a winner.

They established a Proof-of-Concept project to test how effectively Neurolabs could help them with their workflow. The scope included 35 supermarket products from the stores they were monitoring, spread across many different images. Akcelita's pipeline included:

- Collecting high-quality images from the store using 3D depth cameras
- A pre-processing step that compared each image with the results from Neurolabs
- A post-processing step that confirmed compliance and checked for outliers in the detection results

The finished solution would automatically detect Out-Of-Shelf products as well as any Planogram Compliance issues on shelves.

Problem Solved

Getting instant access to the images they needed from Neurolabs, along with the detection results, was a smooth and seamless process from start to finish. Creating image recognition models quickly was paramount for Akcelita, so that they could test the solution and iterate on it quickly if necessary. The time saving that Neurolabs provided was by far the biggest benefit: the synthetic approach makes the process much quicker and removes the manual object classification step entirely. All synthetic data and model training was easily managed via the Neurolabs ZIA platform, and the detection data was made available via API.
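A post-processing step of the kind described, comparing detection results against a planogram to flag out-of-shelf products and compliance gaps, might look roughly like this. The record shapes and function name are hypothetical, not Akcelita's actual code:

```python
# Illustrative compliance check: compare detected SKU facings against a
# planogram. All names and data shapes here are assumptions for illustration.

def check_compliance(planogram, detections):
    """planogram: {sku: expected facings}; detections: list of detected SKU names."""
    counts = {}
    for sku in detections:
        counts[sku] = counts.get(sku, 0) + 1
    # SKUs expected on the shelf but never detected
    out_of_shelf = [sku for sku in planogram if counts.get(sku, 0) == 0]
    # SKUs whose detected facing count differs from the planogram
    non_compliant = {sku: (counts.get(sku, 0), expected)
                     for sku, expected in planogram.items()
                     if counts.get(sku, 0) != expected}
    return out_of_shelf, non_compliant

planogram = {"soda_330ml": 4, "water_1L": 3, "juice_1L": 2}
detections = ["soda_330ml"] * 4 + ["water_1L"] * 2
oos, issues = check_compliance(planogram, detections)
# juice_1L is out of shelf; water_1L has 2 facings instead of the planned 3
```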
Overall, Akcelita had an excellent experience implementing Neurolabs ZIA and, moreover, improved the image recognition capabilities they can now offer their clients, meaning more business and happier customers as a result.

Synthetic Future for Retail Execution

Using Synthetic Data, Neurolabs ZIA enables you to build a solution that excels at streamlining in-store retail execution where conventional solutions fall short:

- Adaptability: The virtual nature of Synthetic Data makes it easy to transfer datasets and models between domains and CV use cases.
- Speed: A real-world deployment can be implemented in less than one week, saving a great deal of time and radically cutting costs.
- Scale: Easy access to image recognition datasets for over 100,000 SKUs through Neurolabs' ZIA product.
- Quality: Achieve 96% accuracy for SKU-level product recognition from day 1.

For Consumer Packaged Goods (CPG) brands, Synthetic Data enables the automation of visual-based processes such as in-store retail execution in real-world retail environments using virtual versions of Fast Moving Consumer Goods (FMCG). The most innovative retail solution providers are already experiencing the benefits of Synthetic Data by deploying Synthetic Computer Vision software like Neurolabs ZIA to automate retail operations.

  • What are Retail Digital Twins?

Virtual counterparts that unlock automation potential for retailers and CPG brands

A Digital Twin is a virtual replica of a real-world thing, including people, objects, and even entire places. In retail, Digital Twins primarily comprise virtual recreations of Stock-Keeping Units and the environments in which they exist in the real world, i.e. grocery products on supermarket shelves. Retail Digital Twins replicate real-world Consumer Packaged Goods and Fast Moving Consumer Goods in a virtual environment. This provides computer software like Synthetic Computer Vision with the necessary visual data (Synthetic Data) to learn how to recognise those products in images and videos.

  • Using Neurolabs' Retail-Specific Synthetic Dataset in Production

CPGDet-129: Neurolabs’ Retail-First Detection Dataset

We're excited to open-source one of Neurolabs' synthetic datasets, CPGDet-129, which is used in our retail-focused flagship technology. Neurolabs is leading the way for Synthetic Computer Vision applied across the value chain of consumer packaged goods (CPG). CPGDet-129 has been used to train a synthetic computer vision model for the task of object localisation for one of our retail partners, Auchan Romania.

Motivation

Within the wider field of computer vision, some of the main reasons for using synthetic data are speed of generation and quality of annotations. Adaptability of synthetic data to changes in domain comes naturally, as one of the hardest things in production computer vision systems is maintaining a certain level of accuracy and ensuring the models are robust against data drift. In retail, the well-known real dataset SKU-110K is used as a standard benchmark for object detection. However, it doesn't contain instance segmentation annotations, most likely because these would be too costly to acquire. Standard Cognition recently released StandardSim, a synthetic dataset for retail aimed at the task of change detection for autonomous stores. CPGDet-129 is the first public synthetic retail dataset constructed specifically for the object detection task and the challenges that arise from training such computer vision models.

Engine

Neurolabs' internal data generation engine allows programmatic control over the parameters of a 3D scene. In this white paper, we discuss some of the most important features of generating synthetic data for the task of object localisation, as well as the importance of consistency between real and synthetic data annotations. On this occasion, we've released all products as one object class. If you're interested in the class labels for further research purposes in object recognition for retail, please get in touch.
Dataset Specifications

Using our synthetic data generator and one scene developed in Unity, we've created 1,200 images of products on shelves, together with their 2D bounding boxes and segmentation masks, with variation both in structure (how the products are placed) and in visual appearance. In total, CPGDet-129 contains 129 unique Stock-Keeping Units (SKUs) and ~17,000 product annotations. Through this release, we hope to engage with the wider community of synthetic data enthusiasts and practitioners, encourage discussion on improving synthetic data for model training, and further advance Synthetic Computer Vision.

Structural Variation

One of the ways to achieve a high degree of variation is by manipulating the number of objects in a scene, as well as their rotation and relative positioning. In our case, the three main structural components are:

- instancing vs. no-instancing, and stacking
- dropout
- rotations & translations

In a usual supermarket, most products are grouped by category and brand and can be found in multiple positions. Randomising the rotation and scale of products, as well as translating products to different parts of the shelves, sits at the core of our generation engine. In addition, we define instancing, whereby we group products together based on their class or category association. This is the first step towards structural realism. The next step is achieved by stacking products depth-wise horizontally (XY stacking) or vertically (XZ stacking), as well as combining the two, as seen in the images above. This allows us to create dense scenes and natural occlusions, further increasing variation. In the case of no-instancing, products are randomly placed on a shelf [Fig. 2]. Although unlikely to happen in a real setting, this introduces a different kind of variation and can be seen as a form of domain randomisation.
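The structural components above (instancing vs. no-instancing, random rotations and translations) can be illustrated with a toy placement routine. The scene representation and parameters are our own simplification, not the actual generation engine:

```python
import random

# Toy illustration of structural variation: with instancing, copies of a SKU
# are grouped side by side as on a real shelf; without it, products are
# scattered at random (a form of domain randomisation). All names and
# parameters are illustrative assumptions.

def place_products(skus, shelf_width=10.0, instancing=True, seed=0):
    """Return a list of (sku, x_position, rotation_degrees) placements."""
    rng = random.Random(seed)
    placements = []
    if instancing:
        x = 0.0
        for sku in skus:
            for _ in range(rng.randint(2, 4)):      # a few facings per SKU
                rot = rng.uniform(-5, 5)            # slight random rotation
                placements.append((sku, round(x, 2), round(rot, 1)))
                x += rng.uniform(0.8, 1.2)          # jittered spacing
    else:
        for sku in skus:
            for _ in range(rng.randint(2, 4)):
                placements.append((sku,
                                   round(rng.uniform(0, shelf_width), 2),
                                   round(rng.uniform(0, 360), 1)))
    return placements

shelf = place_products(["soda", "soap", "cereal"])
```

With instancing enabled, positions increase monotonically along the shelf, so same-SKU facings stay grouped; disabling it scatters products anywhere on the shelf at arbitrary rotations.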
As with the traditional dropout component in neural networks, where nodes are deactivated with some probability, we've created the equivalent for synthetic data, whereby objects can be removed from the scene with some predefined probability. This leads to more variation in dense scenes and furthers structural realism, as most real supermarket shelves are not full and are oftentimes quite messy.

Finally, physical simulation of objects interacting on a shelf is another interesting approach which yields good results, but we have not included it in CPGDet-129. There is an inherent difficulty in making sure that objects interact according to the laws of physics, especially for soft bodies, as well as a difficulty in modelling these 3D assets accordingly. Kubric, an open-source generation engine from Google Research presented at CVPR 2022, is making this possible for rigid bodies by integrating with the simulation engine PyBullet.

Visual Appearance and Post-Processing Effects

In terms of visual appearance, we've used Unity's High Definition Render Pipeline (HDRP), with light probes baked into the scene. Firstly, we vary the light intensity and camera location. In one scenario, we let the camera roam free, whereas in another, only the camera depth is varied. This is one of the most important components, as mimicking the real scene's camera location and perturbing it only slightly yields better results. Some of the most important post-processing effects were bloom, colour adjustment, and lens distortion.

Data-Centric Validation

Ultimately, the models trained on synthetic data and the domain adaptation techniques used to bridge the syn2real gap are the two components that prove or disprove the success of the synthetic data.
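The synthetic-data dropout described above reduces to removing each placed object with a fixed probability; a minimal sketch, with an assumed placement format:

```python
import random

# Minimal sketch of synthetic-data "dropout": each placed object is removed
# from the scene with a fixed probability, producing the partially empty,
# messy shelves seen in real supermarkets. Names are illustrative.

def apply_dropout(placements, p_drop=0.3, seed=42):
    """Keep each placement with probability 1 - p_drop."""
    rng = random.Random(seed)
    return [item for item in placements if rng.random() >= p_drop]

full_shelf = [("soda", i) for i in range(10)]
sparse_shelf = apply_dropout(full_shelf, p_drop=0.3)
# sparse_shelf is a random subset of full_shelf; on average ~7 of 10 remain
```

Setting `p_drop=0.0` keeps every object and `p_drop=1.0` empties the shelf, so the same scene definition can yield anything from a fully stocked to a bare shelf.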
However, as part of the data-centric AI movement, we've noticed that validating the synthetic data not only empirically, using the model as a proxy, but also independently of the task, is equally important. Firstly, there needs to be consistency between human-labelled real data annotations and synthetic data annotations. Because this depends on who labels the data, we've released, together with the dataset, a script that filters out annotations based on their visibility or pixel area. In CPGDet-129, the number of occlusions and annotations per image is very high before filtering on visibility and area.

One simple tool for validating the consistency of annotations across synthetic and real data is to compare the ground-truth annotation distribution with the synthetic one, post-filtering. One can see from the plot that the synthetic distribution encompasses the real data distribution. Unfortunately, at this time, we are not able to publicly release the real data. To provide further support for the annotation consistency hypothesis, we have observed in our experiments that:

- AP75 is very sensitive to the distribution shift between synthetic annotations and ground-truth annotations. This shift often appears because of inconsistency between human and rendered annotations.
- Synthetic dataset size is important for predicted bounding box accuracy.

Conclusion

As mentioned in a previous post on real vs. synthetic data, CPGDet-129 was used to achieve 60% mAP on the real test set and increase the robustness of the model, all without any real data used for training. With the public release of CPGDet-129, we hope to get feedback and learn from the community about what challenges arise from using synthetic data and how we can mitigate them.
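A visibility- and area-based filter of the kind the released script performs could look roughly like this. The annotation record format and thresholds here are assumptions for illustration, not those of the actual script:

```python
# Illustrative filter that drops annotations which are heavily occluded
# (low visibility) or too small (tiny pixel area). The record format and
# threshold values are assumptions, not the released CPGDet-129 script.

def filter_annotations(annotations, min_visibility=0.4, min_area=200):
    """Keep annotations that are sufficiently visible and large enough.

    Each annotation: {"bbox": [x, y, w, h], "visibility": float in [0, 1]}.
    """
    kept = []
    for ann in annotations:
        _, _, w, h = ann["bbox"]
        if ann["visibility"] >= min_visibility and w * h >= min_area:
            kept.append(ann)
    return kept

anns = [
    {"bbox": [0, 0, 30, 40], "visibility": 0.9},   # kept: visible, 1200 px
    {"bbox": [5, 5, 10, 10], "visibility": 0.8},   # dropped: only 100 px
    {"bbox": [8, 2, 50, 60], "visibility": 0.1},   # dropped: heavily occluded
]
kept = filter_annotations(anns)
# only the first annotation survives the filter
```

Applying the same filter to both synthetic and real annotations before comparing their distributions helps keep the two annotation sets consistent.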

bottom of page