By Marc Taccolini, CEO & Founder, Tatsoft
I currently work many hours a day with three AI models concurrently: Claude Opus 4.1, GPT-5 Thinking, and Gemini 2.5 Pro, with research modes active or not depending on the task. After months of this intensive parallel workflow, I’ve developed a systematic approach that has dramatically improved my efficiency – something I call Entity Engineering.
Note: This article focuses on orchestrating AI for technical documentation and analysis. Programming-specific AI workflows and code generation deserve separate treatment.
The Problem Technical Workers Face
Technical workers dealing with large amounts of written information where accuracy is a must inevitably end up working concurrently with multiple top LLM models. Each model has its strengths, weaknesses, and peculiar behaviors. The challenge isn’t just knowing how to prompt them – it’s understanding their fundamental nature and how to orchestrate them effectively.
I recently did a deep dive on the reasoning tests I’ve been regularly conducting with GPT models. On the latest one, GPT-5 scored 91% – better than any human I’ve tested on the same benchmark in 30 years – but the 9% it got wrong reveals deep structural flaws in its behavior. That is an important pattern to know, and it applies in general to all LLM models.
A fun fact, directly related to this article: in one instance I was asked to provide the link to that reasoning article. I didn’t have it at hand, so I recommended the Google search keywords: GPT 5 reasoning tests Taccolini. The AI found the LinkedIn version of the article but credited it to Giorgio Taccolini – not the author, and not a relative.
I didn’t save the Giorgio screenshot right away, so I repeated the search. To my complete surprise, each time the AI produced the same article summary, crediting it to other non-existent relatives: Luca, Matteo, Francesco, and so on. At least it was consistent in keeping the Italian heritage.
Disclaimer: these technologies and tools are evolving at such a pace that I can’t guarantee that, by the time you read this article, the Google search AI summary will still be getting it wrong. If you want to try, the keywords are: GPT 5 reasoning tests Taccolini. At the end of the article are some captured screenshots of my experience.
The Evolution: From Prompt to Entity Engineering
To understand what Entity Engineering is about, we need a brief recap of the genesis of language interactions with AI (I will loosely use “AI” to refer to AI models with an LLM at their core), and how the concepts of Prompt and Context Engineering evolved.
In the early days, interaction with AI consisted of atomic transactions: you send a prompt, you get a reply. The effort to optimize that interaction naturally led to the development of Prompt Engineering techniques.
As the models’ capacity grew – with enough tokens to retain longer context over many atomic interactions, and to maintain a set of directives and artifacts connected to the entire session – this naturally led to Context Engineering techniques.
When the concepts of Prompt Engineering and Context Engineering were crafted, I studied all I could; it helped, and those concepts are still applicable. But in my current workflow I am intuitively using another concept which, in the absence of an established name, I call Entity Engineering.
What is Entity Engineering?
Picture a process engineer creating a model to define optimization settings in a complex control system. A simpler example: modeling the setpoint, current value, and control output of a process loop to optimize its proportional, integral, and derivative constants. You can do that optimization by focusing on the INPUT and OUTPUT patterns and experimenting on them (like a 20% setpoint increase), without having any idea of the exact physical process happening inside that loop.
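To make the analogy concrete, here is a minimal sketch of that black-box tuning idea: a discrete PID controller driving a plant whose internals the tuner never inspects. The plant model, gains, and function names are illustrative assumptions for this sketch, not anything from a real control system.

```python
def make_pid(kp, ki, kd, dt):
    """Return a stateful PID step function mapping error -> control output."""
    state = {"integral": 0.0, "prev_error": None}

    def step(error):
        state["integral"] += error * dt
        if state["prev_error"] is None:
            derivative = 0.0
        else:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step


def simulate(setpoint, steps=500, dt=0.1):
    """Close the loop around a simple first-order lag plant.
    The tuner only observes setpoint vs. process value - the plant
    internals stay a black box, exactly as in the analogy."""
    pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=dt)
    pv = 0.0  # process value, starts at rest
    for _ in range(steps):
        control = pid(setpoint - pv)
        pv += (control - pv) * dt  # the hidden "physical process"
    return pv


final = simulate(setpoint=10.0)  # converges near the setpoint
```

The gains here were picked by the same experiment-and-observe loop the analogy describes: perturb the input, watch the output, adjust.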
A similar concept applies here: you don’t need to get into technical details such as the token limit of a specific AI model at your subscription level, or which specific areas of use or training shaped that model (hard to keep track of anyway if it’s not your daily job). Instead, all AI models should be treated as Entities: you need a high-level understanding of their characteristics – I also like the word “style” for LLMs – and to interact with them efficiently, you must understand those characteristics.
The term Entity came from the fact that there’s a big trend to humanize AI, or at minimum to benchmark and compare it with human behavior and capabilities. I did that myself, applying the reasoning tests I use when hiring programmers to evaluate GPT-5. But these are another type of entity, quite distinct from any human pattern.
Chat interactions create a false impression of human proximity, but as long as LLMs and statistics play a role in their core foundation, their patterns are completely different from any human parameters: they can appear amazingly intelligent one second and surprisingly dumb the next. As you dig deeper into the technology (or apply a reasoning-test methodology as I did), you find out that they are neither.
AI models are – and as long as they keep an LLM as their core foundation, will remain – completely different from any human parameter. The best approach is to treat each one as a black-box process whose basic behavior you must model, just as you would to apply PID control. Given that the interaction happens in natural language, instead of the process analogy I prefer to describe each one as an “Entity” whose typical patterns you need to learn.
Instead of humanizing, take the typical practical engineer’s mindset: it’s an entity that requires discovering input and output patterns, parameters that can affect it, typical workflows, behavior styles, capabilities, and limitations.
The optimization is achieved by learning the Entity’s patterns – which, despite many interactions bearing remarkable resemblance to human-style exchanges, are fundamentally different.
Practical Application: Three-Entity Workflow
Here is the overall characterization of the three Entities I am working with in this methodology – Claude Opus 4.1, Gemini 2.5 Pro, and GPT-5 Thinking, plus their research-level variations – and how to leverage the patterns of each Entity for a better workflow.
Claude Opus 4.1
Technically sophisticated; research output is typically short and very accurate, with a high ability to keep track of abstract concepts when consolidating large sets of information. It will typically ask a few clarifying questions before starting a complex task. Its ability to keep many artifacts in context within the same session, even when not in research mode, is remarkable. It should be the go-to main tool when managing technical information. If we do what this article says we shouldn’t and humanize: it is the smart colleague worth keeping around and letting lead important workflows.
Gemini 2.5 Pro
The perfect companion to Claude for parallel-pair working. Always more verbose – typically three times the output size for the same request presented to Claude, often with unnecessary information – but its meticulous, detail-oriented approach yields a few items that justify its presence in the ecosystem. Sitting on top of Google’s Internet search technologies, it deserves the “Deep Research” label in its UI. If a human description is needed to summarize this entity: not brilliant, but a meticulous collaborator that is important to keep in the ecosystem.
GPT-5 Thinking
I had various anecdotal issues when trying to use GPT-5 Auto or Pro, so I just keep it locked on Thinking. It is good at emotional intelligence, short summarization, and making language clear with proper delivery. It lacks Claude Opus’s artifact capabilities and ability to keep good context over longer interactions, as well as Gemini’s performance on Internet-research tasks, and it produces reports typically in between the other two models in size. It still plays a good role in the ecosystem: while Claude manages the main workflow and produced artifacts, and Gemini acts as researcher and auditor, GPT-5 reviews or organizes all necessary blocks of information, without my having to open a session in another tool for a task GPT handles better. Think of this entity as the go-to tool for typical, more direct daily tasks with a higher volume of interactions, while the other two focus on more complex workflows involving artifact creation, deep research, and final reviews.
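The division of labor among the three profiles above can be sketched as a simple routing table. The profiles, strength labels, and routing rule below are my own illustrative distillation of the descriptions in this article – a hypothetical sketch, not an actual API for any of these products.

```python
# Hypothetical entity profiles distilled from the three descriptions above.
ENTITY_PROFILES = {
    "claude-opus": {"strengths": {"consolidation", "artifacts", "technical-accuracy"}},
    "gemini-pro": {"strengths": {"deep-research", "audit", "detail"}},
    "gpt-5-thinking": {"strengths": {"summarization", "language", "daily-tasks"}},
}


def route_task(required_strengths):
    """Pick the entity whose strengths best cover what the task needs."""
    def coverage(item):
        _name, profile = item
        return len(profile["strengths"] & required_strengths)

    name, _ = max(ENTITY_PROFILES.items(), key=coverage)
    return name


# Example: a source-validation task goes to the research-oriented entity.
assignee = route_task({"deep-research", "audit"})
```

The point of the sketch is the mindset, not the code: the orchestration decision depends only on observed strengths, never on the internals of any model.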
The Entity Engineering Process
This higher-level description of how to optimize a workflow mixing the three models is an example of Entity Engineering. It’s not about specific prompting techniques, nor about context organization; it is about understanding the entities available in your toolset and, like a control engineer, using each one where its characteristics fit best.
Think of it like managing a kitchen with three specialized chefs. You don’t need to know every recipe each chef trained on, but you do need to know that Chef Claude excels at precise technical execution, Chef Gemini creates comprehensive mise en place, and Chef GPT-5 presents dishes beautifully. Entity Engineering is about orchestrating their strengths, not teaching them new recipes.
Instead of trying to publish the rule book – all of this is still too new and dynamic (it feels like last year, but GPT-5 was released just this August!) – I prefer to give a clear explanation of the concepts along with practical examples.
Real-World Application: How This Article Was Written
The first draft of this article, its entire contents, was 100% written by me, without any AI tool interaction: no AI text generation, no content from the Internet. The only Internet interaction was to test the Google search results with the supplied keywords, to learn about Giorgio and other potential relatives.
With research mode off, I asked all three models to analyze whether there were paragraphs with similar content or paragraphs out of logical order. I also asked them to review additional notes I had, to see if any were useful to this article.
I applied my own editing to consolidate the three inputs. As usual, Claude Opus was very focused, kept as the main reference, and used for the consolidation; Gemini was verbose but offered a few valuable insights; GPT contributed practical language suited to a broader audience.
For the last set of revisions, in research mode, I specifically requested to keep changes to a minimum – just what was necessary for grammar and language corrections.
After consolidation, the final verification and minor adjustments didn’t require specialized skills from the other models, so I did them with Claude to benefit from its artifact management.
Validation: Entity Engineering in Practice
During the creation of this article and two companion pieces, the Entity Engineering approach demonstrated measurable efficiency gains. The workflow metrics reveal what’s possible when orchestrating multiple AI models effectively:
Traditional technical writing timeline (3 articles, ~3,500 words):
- Research and validation: 8-10 hours
- Initial drafting: 6-8 hours
- Revision cycles: 4-6 hours
- Citation formatting: 2-3 hours
- Total: 20-27 hours
Entity Engineering approach:
- Total time: 3.5 hours
- Acceleration: 6-8x faster
- Output: Publication-ready content with validated sources
The key wasn’t just using AI—it was understanding each entity’s strengths:
- Claude for technical architecture and document structure
- Research capabilities for source validation
- Human expertise for strategic decisions and authenticity
As one analysis noted: “You treated [the AI] as a specialized tool rather than either a replacement for thinking or a mere grammar checker.” This is Entity Engineering—knowing when to challenge suggestions, when to provide context, and when to leverage each entity’s capabilities.
The research validation alone (finding and verifying industrial automation case studies) would have taken 3-4 hours manually but was completed in minutes while maintaining academic rigor.
This real-world validation shows Entity Engineering isn’t theoretical—it’s a practical framework delivering measurable results today.
Key Takeaways for Technical Professionals
Just as any engineer needs to know the tools being used in a task, Entity Engineering is about mastering the characteristics and styles of each AI model – treating them not as humans or mere tools, but as distinct entities with their own patterns, capabilities, and optimal use cases.
The Entity Engineering Principles:
- Treat AI models as distinct entities – Not human, not mere tools, but complex systems with unique patterns
- Map their characteristics empirically – Like tuning PID controllers, discover what works through experimentation and error analysis
- Orchestrate based on strengths – Use Claude for technical accuracy, Gemini for research depth, GPT-5 for communication clarity
- Maintain human oversight – You’re the control engineer; they’re sophisticated instruments in your toolkit
- Adapt as they evolve – These entities change rapidly; your understanding must evolve too
The future of technical work isn’t about choosing one AI model – it’s about conducting an orchestra of specialized entities, each contributing their unique capabilities to achieve results none could deliver alone.
Related Content
Articles:
- “GPT-5’s Logic: Faster Than the Best Humans, Still Blind to Its Own Mistakes” – Blog Post
- “GPT-5 Reasoning Test Results” – LinkedIn Article
Video:
- Live Presentation on GPT-5 Evaluation – YouTube
Marc Taccolini is CEO & Founder of Tatsoft, bringing 30+ years of industrial software expertise from his work at Tatsoft and previously InduSoft (acquired by AVEVA). He works daily with multiple AI models to advance industrial automation platforms and has conducted extensive testing on AI reasoning capabilities.