Consider a manufacturing facility with cameras covering the production floor. The team has already deployed Nx AI Manager with a person detection model. It works. The system identifies when someone enters a restricted zone and generates an alert.
But the safety team has a different need. They don't just need to know someone is there. They need to know whether that person is wearing the required PPE: hard hat, high-visibility vest, safety glasses. A general person detection model can't answer that question. It sees a person. It doesn't see what they're wearing.
One option is a single monolithic model that handles both detection and PPE classification, but that approach is brittle. It has to be retrained every time a requirement changes. It's harder to maintain across a fleet of devices with different hardware. And it doesn't scale well when the next site needs the same detection but different classification criteria.
What the safety team actually needs is a pipeline that can detect a person first, then run a second, specialized model on just that person to determine compliance. And ideally, the system should be able to get smarter over time as it encounters more edge cases on the floor.
Nx AI Manager is built to support exactly that kind of workflow. Here's how.
The model library covers common detection tasks, but a use case like PPE classification requires a model trained on the facility's specific requirements. Nx AI Manager is designed to accommodate custom models alongside its off-the-shelf options.
The pipeline natively supports image classification and object detection. Beyond those, it can also run:

- Segmentation models
- Keypoint detection models
- Vision-language models (VLMs)
- Fully custom models built in frameworks like PyTorch or TensorFlow

These advanced model types require external pre- or post-processing to handle their output formats.
For the PPE scenario, the team would train a classification model on their own labeled dataset, export it in ONNX format, and upload it to the Nx AI Cloud. The upload process also supports models from platforms like Edge Impulse, Ultralytics, Teachable Machine, and TensorFlow/TFLite. Any platform that exports to ONNX is compatible.
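As a rough sketch of that export step, here is what it might look like for a PyTorch-trained classifier. The architecture, class count, checkpoint file, and input resolution are all illustrative assumptions; any framework with ONNX export would work just as well.

```python
import torch
import torchvision.models as models

# Hypothetical PPE classifier: a ResNet-18 fine-tuned on three classes
# (e.g. compliant, missing hard hat, missing vest). Stands in for the
# team's own trained model.
model = models.resnet18(num_classes=3)
model.load_state_dict(torch.load("ppe_classifier.pt"))  # assumed checkpoint
model.eval()

# Dummy input matching the assumed training resolution (224x224 RGB).
dummy = torch.randn(1, 3, 224, 224)

# Export to ONNX for upload to the Nx AI Cloud.
torch.onnx.export(
    model,
    dummy,
    "ppe_classifier.onnx",
    input_names=["input"],
    output_names=["scores"],
    opset_version=13,
)
```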
When a custom model is uploaded to Nx Cloud, the platform runs conversion processes that generate optimized model artifacts for the hardware accelerator targets you select during upload.
For fully supported accelerators — CPU, Intel (OpenVINO), and NVIDIA (CUDA and Jetson) — this conversion is handled through the platform. For accelerator targets still in experimental or early support stages, such as Hailo and DEEPX, dedicated artifacts can also be generated, though the maturity of those runtimes may vary. The supported accelerators page has the current list and status levels.
The practical result is that the same model can run across a mixed-hardware fleet without manual recompilation. Upload once, select your targets, and the platform handles the conversion for each. Any runtime or toolchain adhering to the Open AI Accelerator eXchange (OAAX) standard is also compatible. OAAX is an open standard that defines how AI models interface with hardware accelerators, which means the pipeline stays extensible beyond its built-in runtimes as new accelerators adopt the standard.
This is where the PPE scenario comes together.
Model chaining allows multiple models to run in sequence within a single pipeline on the same device. A parent model processes the video stream first, and its output passes to one or more chained models for further analysis.
Chaining supports three modes: direct, conditional, and feature extraction.
For the manufacturing floor, the setup looks like this: the parent model runs general person detection on the camera stream. A chained model, set to feature extraction mode, takes the bounding box regions labeled "person" and runs the custom PPE classification model on just those regions. The output tells the system whether each detected person is compliant or not.
Both models run in the same pipeline, on the same device, without duplicating the video decode. The person detection model stays general and reusable. The PPE model stays specialized and swappable. If a different site has different PPE requirements, only the chained model needs to change.
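The chaining itself is configured through the platform rather than written by hand, but the data flow is easy to picture in code. The sketch below mirrors the two-stage logic (detect, crop, classify) using ONNX Runtime. The model files, label indices, output layout, and preprocessing are illustrative assumptions, not the plugin's internals:

```python
import cv2
import numpy as np
import onnxruntime as ort

PERSON_CLASS = 0     # assumption: the detector's "person" label index
COMPLIANT_CLASS = 0  # assumption: the classifier's "compliant" label index

detector = ort.InferenceSession("person_detector.onnx")   # parent model
classifier = ort.InferenceSession("ppe_classifier.onnx")  # chained model

def to_tensor(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Resize and convert HWC uint8 to NCHW float32 in [0, 1]."""
    resized = cv2.resize(img, (size, size)).astype(np.float32) / 255.0
    return resized.transpose(2, 0, 1)[None, ...]

def check_frame(frame: np.ndarray) -> list[bool]:
    """Return one compliance flag per person detected in the frame."""
    # Stage 1: general person detection on the whole frame.
    # Assumed output layout: rows of [x1, y1, x2, y2, confidence, class_id].
    dets = detector.run(None, {"input": to_tensor(frame)})[0]
    flags = []
    for x1, y1, x2, y2, conf, cls in dets:
        if int(cls) != PERSON_CLASS or conf < 0.5:
            continue
        # Stage 2: run the specialized PPE model on just this region,
        # the same idea as the chained model's feature extraction mode.
        crop = frame[int(y1):int(y2), int(x1):int(x2)]
        scores = classifier.run(None, {"input": to_tensor(crop)})[0]
        flags.append(int(np.argmax(scores)) == COMPLIANT_CLASS)
    return flags
```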
Pipelines can be configured per device, fine-tuned individually, and then cloned across multiple devices in a fleet. This makes it practical to roll out a standardized AI configuration across an entire site, update it remotely through Nx Cloud, and adjust it per camera where needed.
Before reaching for external processing, it's worth knowing what Nx AI Manager already includes out of the box. The plugin ships with several built-in postprocessors that handle common analytics scenarios without any custom code:

- Loitering detection
- Left behind object detection
- Line crossing detection
- Object counting
Each of these works with any model that generates bounding boxes. They're configured directly in the plugin settings within the Nx Desktop Client, with no external application required.
For the PPE scenario, the team could combine model chaining with object counting to track how many non-compliant detections occur per shift, or pair the detection pipeline with line crossing to flag when someone enters a specific zone without required safety equipment.
For use cases that go beyond the built-in options, Nx AI Manager also exposes hooks for external pre-processing and post-processing applications. These run as independent processes that communicate with the plugin over Unix sockets.
External pre-processing receives the original full-resolution frame before any other processing, giving the application a chance to act on the image before it reaches the model.
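As one simplified example, a preprocessor might black out a region the model should ignore, such as a privacy zone or a neighboring work cell. The socket exchange is omitted here, and the coordinates are placeholders:

```python
import numpy as np

# Placeholder exclusion zone in frame coordinates (x1, y1, x2, y2).
IGNORE_ZONE = (0, 0, 300, 200)

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Zero out the exclusion zone before the frame reaches the model."""
    x1, y1, x2, y2 = IGNORE_ZONE
    out = frame.copy()
    out[y1:y2, x1:x2] = 0
    return out
```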
External post-processing receives inference results as a MessagePack-encoded buffer after the model runs, giving the application a chance to inspect, filter, or enrich those results before the pipeline acts on them.
In the PPE example, a postprocessor could take the compliance output and cross-reference it with a shift schedule to identify which team and supervisor to notify. Or it could filter out low-confidence results before they trigger an alert, reducing false positives on the floor.
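A minimal confidence filter along those lines might look like the following. The socket path, message schema, and framing are placeholders; the real handshake and field names come from the integration SDK:

```python
import os
import socket

import msgpack

SOCKET_PATH = "/tmp/ppe_postprocessor.sock"  # placeholder path
CONF_THRESHOLD = 0.6

if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)
conn, _ = server.accept()

unpacker = msgpack.Unpacker()
while True:
    data = conn.recv(65536)
    if not data:
        break
    unpacker.feed(data)
    for message in unpacker:
        # Assumed schema: a dict with a "bboxes" list, each entry carrying
        # a "confidence" field. Drop anything below the threshold so it
        # never reaches the alerting stage.
        message["bboxes"] = [
            b for b in message.get("bboxes", [])
            if b.get("confidence", 0.0) >= CONF_THRESHOLD
        ]
        conn.sendall(msgpack.packb(message))
```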
A tensor postprocessor mode also gives the application access to the input tensor via shared memory, enabling use cases like inspecting image data within bounding boxes or collecting samples from specific regions of interest.
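In that mode, an application might attach to the shared segment and view it as an array. The segment name and tensor shape below are hypothetical; in practice both would be communicated through the SDK:

```python
import numpy as np
from multiprocessing import shared_memory

# Attach to an existing segment. Name and layout are hypothetical here;
# the plugin would supply the real values.
shm = shared_memory.SharedMemory(name="nx_input_tensor")
tensor = np.ndarray((1, 3, 640, 640), dtype=np.float32, buffer=shm.buf)

# Example: sample the pixels inside one detected bounding box.
x1, y1, x2, y2 = 100, 120, 220, 360  # placeholder coordinates
sample = tensor[0, :, y1:y2, x1:x2].copy()

shm.close()  # detach without destroying the segment
```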
External applications can be built in any language that supports Unix sockets. Examples in C and Python are provided, and the integration SDK is available on GitHub.
One of the more valuable possibilities the pipeline enables is a workflow for iterative model improvement — sometimes called closing the AI loop.
The concept works like this: through custom post-processing, developers can set up logic to selectively capture images during inference based on criteria like output certainty thresholds. Low-confidence detections or edge cases get saved automatically, producing a curated dataset of the exact samples that would improve model accuracy if fed back into a retraining cycle.
Back on the manufacturing floor, this could mean the system saves frames where the PPE model wasn't confident in its classification. Maybe a vest was partially obscured, or a hard hat was an unusual color. Those are the samples that matter most for retraining.
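Logic like that could live inside the same kind of external postprocessor sketched earlier. In this sketch, the thresholds, storage path, and field names are illustrative; the point is simply to keep the samples the model was least sure about:

```python
import json
import time
from pathlib import Path

import cv2
import numpy as np

CAPTURE_DIR = Path("/var/lib/ppe_samples")  # illustrative storage location
LOW, HIGH = 0.3, 0.7  # uncertainty band worth keeping for retraining

def maybe_capture(frame: np.ndarray, bbox: dict) -> None:
    """Save crops whose PPE score falls inside the uncertainty band."""
    score = bbox["confidence"]  # assumed field name
    if not (LOW <= score <= HIGH):
        return  # confident results add little to a retraining set
    x1, y1, x2, y2 = (int(v) for v in bbox["rect"])  # assumed field name
    stamp = int(time.time() * 1000)
    CAPTURE_DIR.mkdir(parents=True, exist_ok=True)
    cv2.imwrite(str(CAPTURE_DIR / f"{stamp}.jpg"), frame[y1:y2, x1:x2])
    # A sidecar file keeps the metadata needed to label and trace the sample.
    (CAPTURE_DIR / f"{stamp}.json").write_text(
        json.dumps({"score": float(score), "rect": [x1, y1, x2, y2]})
    )
```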
It's worth being clear about what this is today. Nx AI Manager provides the infrastructure to enable this kind of workflow through its external post-processing hooks and the integration SDK. It is not a built-in, automated retraining feature. Setting it up requires development work, and teams building this workflow will typically pair it with an external training platform like Edge Impulse. But the pipeline provides the data collection layer — running on the same device, within the same inference process — without needing a standalone system to capture training data. It's a capability the platform enables, and one that's on the roadmap for deeper integration in future releases.
What model formats does Nx AI Manager support? The primary format is ONNX. Models can also be imported from Edge Impulse, Ultralytics, Teachable Machine, and TensorFlow/TFLite. Any platform that supports ONNX export is compatible.
Does Nx AI Manager convert models for different hardware? When a model is uploaded to Nx Cloud, the platform generates optimized artifacts for each selected accelerator target. Fully supported targets include CPU, Intel (OpenVINO), and NVIDIA (CUDA/Jetson). Experimental support is available for Hailo and DEEPX. See the supported accelerators page for current status.
What types of custom models can it run? Image classification and object detection are natively supported. Segmentation, keypoint detection, vision-language models, and fully custom models built in PyTorch or TensorFlow are also supported with external pre- or post-processing.
What is model chaining? Multiple models running in sequence on the same device. A parent model processes the stream, and chained models analyze its output using direct, conditional, or feature extraction modes.
What built-in postprocessors does Nx AI Manager include? Loitering detection, left behind object detection, line crossing detection, and object counting. All work with any model that generates bounding boxes and are configured directly in the plugin settings.
Can I add custom logic to the inference pipeline? Yes. External pre- and post-processing applications communicate over Unix sockets. They can be built in any language and run alongside the plugin as independent processes.
Can Nx AI Manager help with model retraining? The pipeline enables developers to build data collection workflows through external post-processing, selectively capturing images during inference for use in retraining. This requires development work and typically pairs with an external training platform. It is not a built-in automated retraining feature today, though deeper integration is on the product roadmap.
Is Nx AI Manager available for Nx Witness Pro? No. Nx AI Manager is available exclusively on Nx Witness Enterprise. However, existing Nx Witness Pro licenses can be converted to Enterprise with a credit toward subscription duration. Contact your channel partner or talk to our team for details on the conversion process.
To learn more about Nx AI Manager, visit the product page or explore the technical documentation.
To get started, connect with a channel partner. You can find a partner through MyNx or talk to our team directly.