Puxxle, a Montreal-based AI research platform for product teams, set an unusual standard before any code was written: the system would not be permitted to invent. INTO built the engine behind it.
May 8, 2026

It is one of the more disorienting things in modern software: a product that looks like it works.
Puxxle, a Montreal-based research platform, is the kind of tool a product designer or a product manager might use to make sense of a stack of customer interviews. By the time the work began, the interface had been thought through in detail. The flows were clean. The brand was confident. A demo video moved through the screens with the sort of polish that gets investors to stop scrolling.
What it did not have was the part of the product anyone would actually pay for. The intelligence layer that reads the research, finds what matters, and answers questions about it had not been built. Puxxle's CTO, Alexei Savchenko, was the first to acknowledge what was missing. The product had a face, a body, and a pulse. It did not yet have a brain.
This is more common than the industry usually admits. The current generation of generative AI has made the shell of an intelligent product much easier to construct than the substance. A team can ship something that performs intelligence in a demo and is hollow in the hands of a real user.
Qualitative research is the slowest and most contested part of the product process. A team runs twenty interviews, accumulates hundreds of pages of transcripts, and spends two weeks reading them by hand. By the time a finding reaches a meeting, half the room is suspicious that the researcher has found only what she was looking for. The fragile thing about qualitative research is not what it finds. It is what it cannot prove. Puxxle was being built to remove that vulnerability: upload the research, and the system would tag it, organize it, and let the team query it in plain language, with every claim anchored to a real quote in a real interview.
The most consequential decision was made by Puxxle's founder and CEO, Morgane Neto, before a single line of production code existed.
The system would not be permitted to invent.
If a user asked a question that the underlying research could not answer, the system was required to say so. Not to guess. Not to glide. Not to compose a plausible reply by stitching together adjacent material. If a user asked something out of scope, the system would decline. Every claim in an answer had to be tied to a specific excerpt from a specific source. Not paraphrased nearby. Cited.
This is not a technology choice. It is a standard, and standards of this kind are unusual in a young AI product. Morgane's view was that her users wanted something quite different from a confident answer: they wanted to know whether the data supported the claim. If it did, they wanted the claim, with sources. If it did not, they wanted the system to say so plainly.
A standard like that reframes the engineering work entirely. It is no longer a problem of getting a model to speak. It is a problem of teaching it when to stay silent.
The first move was to push the generative model away from the centre of the system. The prototype Puxxle had built before INTO arrived sent the entire document to OpenAI with a single instruction. A single-prompt system has no idea where its answers come from. It can only generate, fluently and confidently, whether or not it has any business doing so.
The system that replaced it works differently. Most of what it does happens before any model is consulted. Personal information is stripped at ingest using Microsoft's open-source Presidio library. The text is segmented and indexed in Milvus, a vector database that lets the system find the relevant fragments of a corpus regardless of the words used to ask. A consistent vocabulary of themes is applied across the material, and every tag points at the line of text that earned it.
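The ingest-time PII pass is Presidio's job in the real system; as a rough, self-contained illustration of what "stripped at ingest" means, here is a regex stand-in (the patterns, placeholder names, and `redact` function are all illustrative, not Puxxle's code):

```python
import re

# Illustrative stand-in for the ingest-time PII pass. The production system
# uses Microsoft's Presidio library; this sketch only catches two obvious
# pattern types, to show the shape of the step.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognized PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

print(redact("Reach Dana at dana@example.com or 514-555-0199."))
# → Reach Dana at <EMAIL> or <PHONE>.
```

The point of running this before indexing is that nothing downstream, neither the vector database nor the model, ever sees the raw identifiers.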
Only at the end, when an answer needs to be written for a human to read, does OpenAI enter the picture. By then, the question has been narrowed, the evidence retrieved, and the model's job is the one it is genuinely good at: writing a clear paragraph about material it has been handed. If the material is not there, the model is not asked to perform.
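The retrieve-then-answer flow described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: in production the embeddings come from a model and live in Milvus, and the threshold would be tuned per corpus; here a bag-of-words vector stands in, and every name (`embed`, `answer`, `MIN_EVIDENCE`, the corpus entries) is hypothetical:

```python
from collections import Counter
from math import sqrt

STOP = {"the", "a", "i", "what", "about", "how", "do"}

def embed(text: str) -> Counter:
    # Toy stand-in for a learned embedding: a stopword-filtered bag of words.
    return Counter(t for t in text.lower().split() if t not in STOP)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

MIN_EVIDENCE = 0.35  # refusal threshold; a real system would tune this

def answer(question: str, corpus: list[dict]) -> dict:
    """Return cited evidence for the question, or an explicit refusal."""
    q = embed(question)
    best = max(corpus, key=lambda c: cosine(q, embed(c["text"])))
    if cosine(q, embed(best["text"])) < MIN_EVIDENCE:
        return {"status": "refused",
                "reason": "The uploaded research does not address this question."}
    # Only at this point would a generative model be asked to write the
    # paragraph, with the retrieved excerpts as its only source material.
    return {"status": "answered", "evidence": [best]}

corpus = [
    {"source": "interview-03.txt", "line": 41,
     "text": "the onboarding emails confused me, I never found the setup guide"},
    {"source": "interview-07.txt", "line": 12,
     "text": "pricing felt fair once I understood the tiers"},
]
print(answer("what confused users about onboarding", corpus)["status"])  # → answered
print(answer("does the research cover mobile usage", corpus)["status"])  # → refused
```

The design choice the sketch makes visible: the model sits behind a gate. If retrieval finds nothing above the threshold, generation never starts.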
For the user, the experience is simple. A researcher uploads her interviews, support tickets, or surveys. The system reads them, removes personal information, indexes the content, and tags passages against a consistent vocabulary the team has agreed on. It is a careful first pass that a human can audit and refine.
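The tagging pass can be sketched as follows. This is a simplification made for illustration: a controlled vocabulary mapped to trigger words, where the real system applies the team's vocabulary with a model. What the sketch preserves is the property the article describes: every tag records the source, line, and exact excerpt that earned it, so a human reviewer can audit or reject it.

```python
# Hypothetical two-theme vocabulary; in practice the team defines its own.
VOCAB = {
    "onboarding": {"onboarding", "setup", "signup"},
    "pricing": {"pricing", "price", "tiers"},
}

def tag_transcript(source: str, lines: list[str]) -> list[dict]:
    """Tag each line against the vocabulary, anchoring every tag to its text."""
    tags = []
    for n, line in enumerate(lines, start=1):
        words = set(line.lower().split())
        for theme, triggers in VOCAB.items():
            if words & triggers:
                tags.append({"theme": theme, "source": source,
                             "line": n, "excerpt": line})
    return tags

transcript = ["The setup flow lost me twice.",
              "Honestly the tiers made no sense at first."]
for t in tag_transcript("interview-03.txt", transcript):
    print(t["theme"], "->", f'{t["source"]}:{t["line"]}')
# → onboarding -> interview-03.txt:1
# → pricing -> interview-03.txt:2
```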
The more interesting feature is the chat. A designer can ask Puxxle, in plain English, what frustrates a particular kind of customer about onboarding, and receive back a paragraph that synthesizes the answer with three direct quotes attached. She can show that paragraph to a skeptical product manager, who can click through to the full transcripts to see the quotes in context. The argument she is making is no longer a matter of her authority. It is a matter of evidence.
When the data does not support the question, the system says so. It explains why, and suggests a next step. It is the feature that earns trust the moment a customer understands what is happening.
The system was deployed inside Puxxle's own Google Cloud Platform environment, with the code in their GitHub repository. Morgane signed off in September 2025. The system met the standard she had set at the beginning.
Before the work, Puxxle had a product that performed intelligence. After it, Puxxle has a product that demonstrates rigor. A research tool whose answers cannot be verified is a curiosity; a research tool whose every claim is anchored to a real quote in a real interview is something a researcher will defend in a meeting, and something a buyer will pay for. The standard Morgane set at the beginning is now the feature she can sell.

With expertise in strategy and product management, Sebastien helps organizations integrate AI into their business operations and services.
Puxxle, a Montreal-based AI research platform for product designers and product managers, had a polished interface and real customer pull, but no internal AI engineering capability. Before any code was written, the standard was set: the system would not be permitted to invent. INTO built the engine behind the product, designed around evidence rather than eloquence, with every answer anchored to a real quote and refusal as a feature. Puxxle now owns and operates the system in its own cloud.
Thirty minutes. We'll tell you what to build, what not to build, and what it would take.