Yearly Archives: 2026
Themes and Trends from Taxonomy Boot Camp London

“And history was the reason why she would never go to London. She saw it as dominated by the Bloody Tower, Fleet Street full of demon barbers, as well as dangerous escalators everywhere.” – Anthony Burgess, Inside Mr. Enderby
After a six-year hiatus initiated by the Great Plague of 2020 and prolonged by the loss of a conference venue, Taxonomy Boot Camp London returned in 2026 at the America Square Conference Center. The event was co-located with the KMWorld Europe conference, allowing attendees to come together for keynote sessions while attending one of two tracks for each conference. Just blocks away from the Tower of London, and literally encompassing part of the ancient walls of the City of Londinium, this steadfast and ancient venue hosted an audience for the rapidly changing world of knowledge organization systems and artificial intelligence.
As with past conferences, I’m going to sum up some of the key themes and trends of Taxonomy Boot Camp as I heard them.
Working with AI
A prevalent theme across all sessions was working with AI rather than against it. While there is a common concern that AI will replace jobs and remove the human from the work equation, most sessions focused on using AI as a tool to accomplish tasks that are repetitive, time-consuming, or inconsistent. Sessions highlighted using AI to identify and extract entities for taxonomy building or inclusion, to summarize large quantities of text, and to automatically classify content using taxonomy values.
Another key focus was on the probabilistic notion of machine learning models:
At a high level, probabilistic AI models uncertainty and provides outcomes based on likelihoods. This means that it doesn’t always offer one definitive answer but instead provides a range of possibilities with associated probabilities. Deterministic AI, on the other hand, is rule-based, designed to yield specific, predictable outcomes without room for variability once given a particular input. (Decision Point Advisors)
Machine learning models may generate different answers to the same question, and may produce plausible but incorrect content, often termed a “hallucination”. Grounding machine learning models in knowledge bases, including deterministic models like graphs built from taxonomies and ontologies, can create a neuro-symbolic AI approach that provides more consistent answers.
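As a minimal Python sketch of the grounding idea above, with an entirely illustrative vocabulary and hypothetical names: tags proposed by a probabilistic model are only accepted when they resolve deterministically to a preferred label in a controlled vocabulary.

```python
# Hypothetical controlled vocabulary: preferred label -> accepted synonyms.
# All data here is illustrative, not from any real taxonomy.
TAXONOMY = {
    "Basketball shoes": {"hoops shoes", "basketball sneakers"},
    "Roofing nails": {"roof nails"},
}

def ground_tags(proposed_tags):
    """Resolve model-proposed tags to preferred labels; drop anything
    that does not exist in the controlled vocabulary."""
    lookup = {}
    for preferred, synonyms in TAXONOMY.items():
        lookup[preferred.lower()] = preferred
        for syn in synonyms:
            lookup[syn.lower()] = preferred
    grounded = []
    for tag in proposed_tags:
        preferred = lookup.get(tag.lower())
        if preferred and preferred not in grounded:
            grounded.append(preferred)
    return grounded

print(ground_tags(["hoops shoes", "Roofing nails", "flying cars"]))
# ['Basketball shoes', 'Roofing nails']
```

The hallucinated “flying cars” simply fails to resolve, which is the deterministic backstop the neuro-symbolic framing describes.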
Content and Training
Working with AI also means curating the training data made available to machine learning models. Publicly available large language models (LLMs) are trained on easily accessible large data sets: specifically, content available on the Internet. As we well know, content quality on the Internet is as varied as the people who create it and make it available. While LLMs get the benefit of a variety of input, they also suffer from the biases inherent in that input. Using synthetic training data to supplement pre-training, or retrieval-augmented generation (RAG) to supply curated material at query time, can improve results. In particular, drawing on organization-specific knowledge bases can help provide more specific responses applicable to your domain with fewer erroneous answers.
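A toy sketch of the RAG pattern, assuming a hypothetical in-memory knowledge base and a simple word-overlap ranking as a stand-in for the embedding similarity a production pipeline would use: relevant organization-specific passages are retrieved and placed into the prompt rather than retraining the model.

```python
# Illustrative organization-specific passages (not a real knowledge base).
KNOWLEDGE_BASE = [
    "Roofing nails are tagged under Fasteners, not under Roofing.",
    "Basketball shoes belong to the Footwear scheme.",
    "The marketing funnel stages are modeled as related concepts.",
]

def retrieve(question, k=2):
    """Rank passages by word overlap with the question; a production
    system would use vector similarity instead."""
    q_words = set(question.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question):
    """Prepend the retrieved context so the model answers from it."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where are roofing nails tagged?"))
```

The point is architectural: the LLM stays generic, while the curated, domain-specific knowledge lives in a store you govern.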
Deciding what training data to use and how taxonomy and ontology structures become part of that training data is partially in the purview of taxonomists, so becoming familiar with which LLMs are being used and for which use cases will be important parts of a taxonomist’s changing role.
Language Consistency
While this is nothing new, many sessions focused on keeping the “controlled” in controlled vocabularies. Since nearly every session linked back to AI in one way or another, even the context within which we consider the basic tenet of control in a controlled vocabulary was emphasized as continually pertinent. With differences in language across one or more semantic models, machine learning outcomes are put at risk. As more areas of application for machine learning are found, we are also venturing into areas involving more risk, like mental health or medical advice, legal enforcement, and financial decisions. Consistency in the language used in semantic models and applied as metadata to training content is now more important than ever.
Among the use cases pertinent to this consistency is tagging documents at more granular levels, including inline tagging and tagging content chunks. Again, this is nothing new and has been a practice of DITA for 20 years. However, being able to consistently and accurately create training data to balance probabilistic large language models with deterministic knowledge can counter machine learning hallucinations and create more trustworthy AI agents.
“Bond”ing
And, although not a taxonomy or knowledge management theme, I did notice another commonality across at least two presentations: James Bond as an example. Perhaps it was the London venue that caused presenters to use our favorite secret agent as an example, but there he was, connected semantically to movies, sports cars, and identification numbers. I myself created a simple Bondtology for illustration purposes in past workshops and webinars. It is fitting that spycraft, a profession built on deception, met at the intersection of establishing deterministic truth through semantic models and avoiding deception by artificial intelligence.
Taxonomies and the Fall of the House of Escher

“I know not how it was–but, with the first glimpse of the building, a sense of insufferable gloom pervaded my spirit.” – Edgar Allan Poe, The Fall of the House of Usher
I consider a well-constructed semantic model akin to a foundationally sound, well-architected, and visually appealing building. Not to be hyperbolic and melodramatic, because [swoon] that just isn’t me, but has any taxonomist looked upon the works of others and despaired? Has anyone, upon getting the “first glimpse of the building” that is an organizational semantic structure, suddenly felt “a sense of insufferable gloom” pervading the spirit? Boy howdy, have I.
To be fair, there are a number of factors at work leading to semantic debt: compromises in semantic integrity that violate best practices in taxonomy construction and are left for some future taxonomist to unwind. These can include strong organizational or cultural pushback against accepting taxonomies in structure or content, internal politics, or designs bent so that consuming systems can ingest the data. In any case, violations of taxonomy best practices can compound over time, leaving semantic models that, in their current state, are not particularly semantic at all.
While it is mission-critical to gather input from business stakeholders to build, implement, and maintain taxonomies, it is also critical to allow the respective subject matter experts in taxonomy and business to do their work according to the best practices of their domains. Like a building drafted by M.C. Escher and constructed by Edgar Allan Poe & Associates, taxonomies can become circuitous, recurring, and not very meaningful if they stray too far from best practices.
Escherian Design
I have frequently been involved in taxonomy design projects in which the stakeholder input into the semantic structures follows a line of thinking mirroring the work the business users do. In fairness, taxonomies should support whatever use cases assist end users in performing their jobs. However, business stakeholders are not necessarily taxonomists and so their recommendations may not follow taxonomy design principles. Here are some taxonomy design suggestions I have seen.
Taxonomies as virtual end caps. In this scenario, product owners try to mirror their product placement in the physical world as taxonomy structures in the virtual world. So you may get suggestions to build taxonomies like this:
Lumber > Deck building materials > Nails
Roof building materials > Shingles > Nails > Roofing nails
In essence, the concepts representing objects in the physical world are placed in the same locations in taxonomies as they would be in the layout of the store. The terms become conceptual end caps, quick items to throw in your cart because they are related to the products you are purchasing. In this case, I need nails for specific reasons, like building a deck or putting on a roof. For convenience, I put an end cap display of nails in the lumber department or by the stacks of shingles so buyers don’t need to hit every department to complete a project.
Taxonomies as navigational structures. While taxonomies can absolutely be used as navigational structures on the front end, the proposal here is that taxonomies exist this way in the back end taxonomy management system. Taxonomies may then be built like this:
Apparel > Men’s > Basketball shoes
Apparel > Women’s > Basketball shoes
From an access perspective, these are easy-to-understand navigational pathways leading directly to a set of products that I can then filter by size, color, or brand, making it easy to see what’s available and complete a purchase.
Taxonomies as processes, stages, or funnels. Here, taxonomies are built to follow process steps or stages, or to capture marketing user journeys through the funnel, so that structures look like this:
Awareness > Consideration > Conversion > Loyalty
Planning > Design > Prototype > Design for manufacturing > Manufacturing > Post-manufacturing
In this case, the sequential steps or stages are nested as a hierarchy as if to illustrate the progression through the process as a ladder or directional move through the concepts.
These are just a few of the examples I’ve experienced when working with stakeholders in the taxonomy design process. What’s wrong with giving the end users what they want by designing enterprise taxonomies to adhere to some of these patterns?
Maurits “Context” Escher
“If everything means everything, then nothing means anything.” Like Rick from Rick and Morty, I’m trying to build a following around a catchphrase. I made this same point in my blog Polyhierarchy and the Dissolution of Meaning. Repeating the same concept in multiple locations, whether trying to mirror real-world endcaps or to capture new contextual meanings from hierarchical placement, is a big taxonomic no-no and for good reason. If concepts become contextually dependent, then the individual subjects and objects within a semantic model lose their crisp focus. The point of taxonomies is to disambiguate concepts and ensure that each item is clearly defined in meaning and scope. Of course there are concepts that really can exist in more than one location in a polyhierarchical structure, but these occurrences should be minimal and not be forcing different contextual meanings.
Using the above examples, “nails” violates the “is a…” principle in that nails are not semantic children of their parents. They are necessary items to complete a deck or a roof, but they are not decks or roofs themselves. We can easily build separate, mutually exclusive taxonomy schemes or branches and connect them with semantic relationships to include all of the items necessary to build a deck or put a roof in place. Nesting them in contextual proximity does not follow taxonomy best practices and, ultimately, causes ingestion confusion when the concepts are stripped from context. More practically, repeated concepts will likely break a consuming application when the system finds the same label (and, if built properly in a taxonomy management system, the same URI) showing up in two different locations. These are often ignored on ingestion because the system cannot resolve the entities.
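The ingestion problem described above is easy to demonstrate. A small sketch, using the illustrative hardware-store hierarchy from earlier as flattened child/parent pairs, that flags any label appearing under more than one parent:

```python
# Hypothetical (child, parent) pairs flattened from the two branches
# discussed above; the data is illustrative.
EDGES = [
    ("Deck building materials", "Lumber"),
    ("Nails", "Deck building materials"),
    ("Shingles", "Roof building materials"),
    ("Nails", "Shingles"),
]

def find_duplicate_labels(edges):
    """Return labels that appear as children in more than one location,
    i.e., the entities a consuming system could not resolve."""
    parents = {}
    for child, parent in edges:
        parents.setdefault(child, []).append(parent)
    return {label: locs for label, locs in parents.items() if len(locs) > 1}

print(find_duplicate_labels(EDGES))
# {'Nails': ['Deck building materials', 'Shingles']}
```

A check like this is the kind of validation a taxonomy governance process could run before publishing to consuming systems.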
Building Codes and Accessibility
If you’ve seen any architectural drawings by Escher, you probably know that not only would they be very difficult to build in the real world, even by Edgar Allan Poe & Associates, but they would never pass local and state building codes for accessibility. Look at all those stairs! Not a ramp or elevator in sight!
Providing accessibility to products (or content) using navigational taxonomies is an excellent way to assist users in getting to what they are looking for. While there are more searchers than navigators in the world, simple drilldowns to products or content in hierarchies, in conjunction with filters, are still useful as an additional means of locating products or information. Navigational taxonomies rely on their contextual construction to provide signposts so users know exactly where they are in the product structure and in the potentially very large “store” they are trying to navigate. Pretty self-explanatory name for these types of taxonomies.
Navigational taxonomies can be built directly in front-end applications to serve retail and information finding use cases. If possible, the values can come from back end taxonomy management systems to ensure consistent concepts and messaging across the organization. In these cases, the front end system may consume values from across the taxonomy schemes and hierarchies and display them in a different contextual hierarchy or as filtered values in left-hand navigations. It may also be possible that the taxonomy management system allows for the construction of semantic master schemes which can be reassembled in the tool or through the API into navigational hierarchies. Using our example above, the taxonomies behind the scenes may look like this:
Products > Apparel > Footwear > Basketball shoes
People > Demographics > Men’s
In this case, only the values needed to construct a navigational taxonomy are pulled from their respective schemes and reassembled. The advantage to this methodology is that one best, preferred concept label and its unique ID are used in all locations. Any tagging to product images, copy, web pages, or concepts used in navigational structures or filters can be used for a variety of analytics including clicks on navigational nodes or filters, clicks on product images, analysis of products added to carts, etcetera, without having to reconcile the same or similar values for analysis.
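A sketch of that reassembly, under assumed data shapes (the URIs and scheme names are hypothetical): concepts from separate master schemes are pulled by their unique IDs and composed into a display path, with the IDs kept attached so every click can be reconciled to one concept for analytics.

```python
# Hypothetical master schemes keyed by URI; data is illustrative.
CONCEPTS = {
    "urn:ex:123": {"prefLabel": "Apparel", "scheme": "Products"},
    "urn:ex:456": {"prefLabel": "Men's", "scheme": "Demographics"},
    "urn:ex:789": {"prefLabel": "Basketball shoes", "scheme": "Products"},
}

def build_nav_path(uris):
    """Assemble a navigational path from concepts across schemes.
    The single preferred label and unique ID travel together, so the
    front end never has to reconcile duplicate values."""
    labels = [CONCEPTS[uri]["prefLabel"] for uri in uris]
    return {"display": " > ".join(labels), "uris": list(uris)}

nav = build_nav_path(["urn:ex:123", "urn:ex:456", "urn:ex:789"])
print(nav["display"])
# Apparel > Men's > Basketball shoes
```

The same function can produce any number of contextual navigation paths without duplicating the underlying concepts.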
Temporal Ladders
Taxonomy structures typically follow a parent-child “is a” structure in which the children are instances of their parent concepts. It is also possible to construct whole-part relationships (called meronymy in linguistics) in which the children are a part of the parent concept.
While it is possible to model temporal or sequential events in taxonomies and ontologies, it typically requires advanced skills in ontology modeling, can be challenging to implement, and can be subject to change when trying to mirror processes. Processes are not only sequential, but can change frequently as well. Changing a foundational semantic structure to keep pace with changes in marketing funnels or manufacturing processes may not be worth the effort if the steps can be captured as taxonomy concepts independent of a hierarchical structure.
That all said, using relationships to define sequence rather than hierarchical structure can be one simple way to create a semantic sense of order. For example, using a relationship like has predecessor could link books, films, or process steps in order to model sequence.
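A minimal sketch of that approach, with illustrative step names: each concept points at its predecessor through a relationship, and the order is recovered by walking the links rather than being baked into a hierarchy.

```python
# Hypothetical "has predecessor" relationships between process steps
# (None marks the first step); the data is illustrative.
HAS_PREDECESSOR = {
    "Design": "Planning",
    "Prototype": "Design",
    "Planning": None,
}

def sequence(steps):
    """Order steps by repeatedly placing any step whose predecessor
    has already been placed (or has none). Assumes well-formed links."""
    ordered, placed = [], set()
    while len(ordered) < len(steps):
        for step in steps:
            pred = HAS_PREDECESSOR.get(step)
            if step not in placed and (pred is None or pred in placed):
                ordered.append(step)
                placed.add(step)
    return ordered

print(sequence(["Prototype", "Planning", "Design"]))
# ['Planning', 'Design', 'Prototype']
```

If the process changes, only the relationship entries change; the concepts themselves, and any hierarchy they live in, stay put.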
It’s all a Question of Time
As context graphs are gaining momentum in at least understanding if not yet implementation, we will likely see more ways to bridge the taxonomy modeling-temporal process gap. In the meantime, adhering to foundational taxonomy best practices is a best bet to ensure that your semantic models are ready for the next evolution to capture temporal events to provide additional context to the graph.
In short, maintaining “is a” or whole-part taxonomy structures as base semantic models while developing more complex ontological designs and connected data as part of a context graph will potentially provide a good combination to avoid Escherian design practices and Gothic horror in your semantic structures.
Smoke and Mirrors
“But it’s always been a smoke and mirrors game / Anyone would do the same.” – Gotye, Smoke and Mirrors
It’s pretty common for organizations to have many software systems and platforms. Some are home grown, some are commercial; some have small scopes while others are enterprise grade; some are legacy while others are newly rolled out; some are seamlessly integrated while some (um, many) are standalone and siloed. What many of these systems have in common is that they are dedicated to one or more functions. They may be for digital asset management (DAM), content management (CMS), product information management (PIM), customer relationship management (CRM), and so on. Something that few enterprise systems do well is taxonomy management. And, to be fair, why should a standalone system dedicated to a particular purpose also be excellent at taxonomy (and ontology) management? There are dedicated systems for this function as well.
Typically, taxonomy management systems (TMS) are positioned as centralized repositories for controlled values which can be integrated with multiple systems in a hub and spoke model. A centralized taxonomy architecture ensures single concept values with a unique identifier can be used across multiple systems for many use cases. A centralized architecture makes sense, but there are many challenges arising from consuming downstream systems’ inabilities to handle the rich semantic models published from a TMS. Consuming systems may not be able to ingest properties, relationships, or even hierarchies.
What are some ways we can address these integration issues while maintaining an architecture in which a TMS is a centralized source of truth for metadata values?
Smoke and Mirrors
When the first iPhone was released, it shifted paradigms. The original iPhone wasn’t the first device to include a touch screen, but its form and user experience were unique in many ways. Apple is touted for its designs, and it was the combination of form and function that garnered such immediate success. We didn’t learn the iPhone; the iPhone taught us. We learned to scroll, swipe, and tap, and these basic functions became ubiquitous across many devices and manufacturers. A good user experience will do that: teach us how to navigate through an application, how the functionality works, where we should expect to find a “yes/no” or “next” button, and the overall design principles. I see this as domain teaching and reinforcement.
The same is true in the world of semantics. As semantic practitioners, we must educate users and meet them at the level of their knowledge need. Some business users will only need to know the basics of taxonomy principles when a taxonomy redesign is in progress or they are onboarded as taxonomy consumers. Other business partners will want to become more involved in the process and go beyond concepts into modeling the semantics of their domain.
Modeling conceptual domains is what semantics is built for, reflecting the ways of thinking in the organization, including business processes and relationships between concepts and things. Semantic models are mirrors of organizational thinking. Consuming systems may be mirrors of these mirrors, even when they are not designed to be semantic modeling platforms themselves.
The most basic smoke and mirrors game is aligning concept label values across systems. While the values may be mirrored, it is more smoke than anything else, concealing the incredible amount of overhead in agreeing on the label values, maintaining the governance process to ensure these labels stay aligned, and the workaround maintenance required by the nuanced differences between consuming systems. In these scenarios, any necessary change in any of the systems (including the taxonomy as an aligned source of truth) can be precarious, as downstream values may be driving workflow processes or be hard-coded, making them extremely difficult to change.
Consuming systems supporting hierarchy, definitions, and synonyms can mirror hierarchical structures and at least some of the properties adding semantic context to concepts. Consuming systems only supporting flat lists may have the ability to create dependent fields. Although not a true hierarchy, selecting a value from one field that constrains the values in another field mirrors hierarchical structures. Again, this is mostly smoke and mirrors concealing the amount of semantic design it takes to understand which parent values may drive dependent fields and how the child concepts are displayed. A more advanced step toward alignment is ensuring that all of the values share a common ID, even if this ID is manually entered for each mirrored concept so they are the same across systems. Not ideal, but having a unique identifier for each concept can bring these concepts closer together even if they are not pulled directly from a source-of-truth taxonomy management system.
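The dependent-field pattern above can be sketched in a few lines, with an illustrative two-level taxonomy and hypothetical IDs: the parent field's selection constrains the child field's options, and each value carries its shared identifier.

```python
# Illustrative parent concept -> child (id, label) pairs; the URIs
# are hypothetical stand-ins for a shared common ID.
HIERARCHY = {
    "Footwear": [("urn:ex:789", "Basketball shoes"),
                 ("urn:ex:790", "Running shoes")],
    "Outerwear": [("urn:ex:801", "Jackets"),
                  ("urn:ex:802", "Vests")],
}

def dependent_field_values(parent_choice):
    """Return the (id, label) options allowed in the child field once
    the parent field is set; empty if the parent is unrecognized."""
    return HIERARCHY.get(parent_choice, [])

print(dependent_field_values("Footwear"))
# [('urn:ex:789', 'Basketball shoes'), ('urn:ex:790', 'Running shoes')]
```

Even in a flat-list system, keeping the ID alongside the label is what makes later reconciliation across systems possible.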
More Mirrors, Less Smoke
The best way to take advantage of semantic models is to pull taxonomy concepts and their associated properties—including fields like description, scope note, and the URI or GUID—into consuming systems so there are taxonomy terms for metadata tagging and their associated attributes for context and additional information. Tagging interfaces might be dropdown lists of 10-15 values, hierarchical browsing, or typeahead fields displaying taxonomy concepts as the user enters characters. All of these methods of applying taxonomy concepts reinforce the terminology used in the organization. Dependent fields and hierarchies reinforce hierarchical relationships between concepts and reveal how ideas are organized.
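As a small sketch of the typeahead interface mentioned above, with an illustrative vocabulary: matches on either preferred labels or synonyms resolve to the preferred label, which is how tagging interfaces quietly reinforce the organization's terminology.

```python
# Illustrative (preferred label, synonyms) pairs; not a real vocabulary.
VOCAB = [
    ("Basketball shoes", ["hoops shoes", "basketball sneakers"]),
    ("Roofing nails", ["roof nails"]),
    ("Basketweaving", []),
]

def typeahead(prefix):
    """Return preferred labels whose label or any synonym starts with
    the typed characters (case-insensitive)."""
    p = prefix.lower()
    hits = []
    for preferred, synonyms in VOCAB:
        if any(label.lower().startswith(p) for label in [preferred] + synonyms):
            hits.append(preferred)
    return hits

print(typeahead("basket"))
# ['Basketball shoes', 'Basketweaving']
```

Typing a synonym like “roof nails” would surface “Roofing nails”, gently steering the user toward the controlled label.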
Users less frequently see full ontologies including concepts and the relationships between them. However, revealing these structures in browsable visualizations can be useful to help users understand concepts and the relationships between them. Again, these semantic structures mirror the activities of the business. If users can see interrelated concepts, and preferably in the context of content tagged with those concepts, it reinforces the mission and activities of the business. Additionally, showing graph visualizations can help shift thinking from simple hierarchical structures to more semantically rich graph structures. When business users understand the current domain thinking, they can also begin to understand how these semantic structures can be applied to business use cases to solve common organizational problems. In addition to active education by taxonomists, user interfaces and mirrored domain modeling in systems can help users understand semantic modeling and what it can do for the business. Revealing semantic models can be reinforcing, but it’s probably less common and useful than more practical mirrorings in systems that don’t have particularly strong semantic foundations to begin with.
Conceptual domain mirroring takes creativity, using the functionality of the TMS, APIs, and consuming systems to create an ontological and graphical domain representation even in systems that are foundationally relational and hierarchical. Taxonomists, working hand in hand with information architects, can find opportunities to express the complexities of semantic models in existing UIs and by creating new approaches that meet the use cases of the business. Mirroring domains creates organizational alignment and reinforces domain thinking without conscious effort by the users. Just as well-designed hardware and software teaches us new domains, so can we as semantic practitioners find ways to passively educate our users about domain thinking and semantics in the organization.
The AI-Taxonomy Disconnect

“Me, I disconnect from you.” – Gary Numan, Me! I Disconnect from You
I have seen a significant decline in the number of taxonomist positions available. Wondering if I was in a myopic bubble based on my location or choice of job boards, I’ve been asking colleagues if they see the same thing. The general consensus is, yes, there are fewer taxonomist jobs available even as foundational data structures become more important with AI tools. I have seen an increase in, or at least a steady availability of, ontologist jobs, many requiring more technical expertise (the ability to create Python data ingestion pipelines and retrieval-augmented generation (RAG) systems) than I have seen in the past. Again, this may be a matter of where I live or where I am seeking jobs, but it seems jobs in the taxonomy and ontology field are becoming more technical and less focused on the business operations side of working with stakeholders to provide analysis and guidance on semantic frameworks.
I suspect this shift is driven by a wider adoption of AI tools and a contraction in the job market. Employers seem to be seeking a single resource to do both the business and technical sides of implementing semantic structures and the foundational components for building out applications. I wonder if another factor is also the belief that large language models (LLMs) are a replacement for semantic models. If true, there are several reasons I can see that may be driving these beliefs.
Speed to Business
As I have touched upon before in my blog (Friction and Complexity and The Taxonomy Tortoise and the ML Hare), the manual, slower, but deliberate curation of controlled vocabularies can be seen as a roadblock to business speed and agility. There are several valid points here.
One area of pushback I see frequently in organizations is the response time from initially requesting a new concept, taxonomy branch, or vocabulary until it is available in production. The ownership of the concepts and data is, to some degree, taken out of the hands of the business users and put under the control of taxonomists who incorporate these concepts into centralized semantic models (taxonomies, thesauri, ontologies). Even with governance models including service level agreements stating turnaround times and taxonomy availability, not every group in the organization is going to see this centralized service as a benefit. Rather than wait for enterprise-level support, stakeholders may develop workarounds in services and tools to support their own use cases. As we know, this decentralizing of schemas and tools creates a fragmented landscape of differing terminology, varying functional support in the tools available to manage taxonomies, and inconsistent processes in the way data and content are handled and tagged with metadata.
Similarly, enterprise taxonomists serving many areas of the organization may seem to be too domain agnostic to serve the variety of use cases served by controlled vocabularies. While taxonomists do not need to be domain experts in the areas covered by semantic models, the perception may be that their time spent ramping up in a domain would be better served by having the subject matter experts in those domains build their own models. There is some validity here if the domains are truly standalone and operate to serve only those domain use cases. However, tying together various domain areas to form enterprise-wide knowledge graphs seems to be the direction most organizations want to go. If that’s the case, then a centralized taxonomy team as a service to the entire enterprise makes a lot of sense.
Given these counterpoints to slowly developing semantic models, why not, one may ask, simply ask machine learning models to provide the schemes we need to organize and optimize information?
Words Are Words Are Worlds
Are LLMs seen as a replacement for, or at least a viable alternative to, taxonomies (using the term as a broad umbrella for all controlled vocabularies)?
A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) and provide the core capabilities of modern chatbots. LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on (Wikipedia).
Certainly building your own language model reflecting all of the ways that natural language queries could be asked is a non-starter when there are clearly high-performing, public LLMs available at our fingertips. The chatbots ChatGPT, Google Gemini, and Microsoft Copilot are some of the more familiar tools based on foundational LLMs. One of the primary, and arguably most successful, use cases for these tools is language generation. When prompted, a chatbot can generate text, format this text based on instructions or examples, and produce a slick product which, short of a quick review to ensure the content checks out, is ready to go. These LLMs are based on a “vast amount of text”; in short, a lot more text than you will ever be able to provide to train a whole model.
What LLMs are missing, however, is your context. There are many, fairly easy, methods for providing them context. You can allow them access to your documents, at work or at home, so the chatbot can “see” the content of your documents, the structures, and even the writing style. That is very much your context, allowing the chatbot to compose in a way that reflects your home or work artifacts and generate new text combining the existing LLM with your specific context. Using additional context moves an LLM from generic to specific, from a wider world of words to your world of words.
At enterprise scale, the same method applies. If your organization is large, has a lot of public exposure and presence, and has a history of public interaction on the Internet and in social media, LLMs are going to know a lot about you even without additional context. I think, however, that the same reason the market for reusable, one-size-fits-all taxonomies has never taken off applies here: every organization believes itself, whether true or not, to be unique and special. Outside of certain industries like healthcare, life sciences, pharmaceuticals, and finance, which have well-defined, and often extremely complex, ontologies they can adopt, other industries or functions within a company do not. In my experience, marketing is a great example. Despite the common needs, I have never seen a marketing department adopt any public standard. They build from scratch even when a majority of the taxonomy values are commonly available terms.
In these cases, building taxonomies and ontologies to add context specific to your organization provides LLMs with both words and structures modeling the world of your domain. In fact, it is becoming more common that the development of these taxonomies is being done with human-chatbot interaction. A taxonomist can provide glossaries, metadata schemas output in spreadsheets, and documents to provide chatbots with the raw materials to extract entities, cluster topics, compare values across documents, and other processes that once required text analytics tools and, sometimes, human intervention in the form of rule writing. The speed to taxonomy and ontology development is increasing. Like other iterative feedback processes, the taxonomist and LLM work together to create domain schemas in the form of taxonomies and ontologies that provide additional guidance to future “manual” and automated processes with LLMs in the mix.
Pure speculation on my part, but is the niche and sometimes still esoteric and obscure field of taxonomy and ontology design being replaced, for better or for worse, with LLM use? Or, more specifically, is it being viewed as a replacement for taxonomy and ontology building and the experts who do it? As I stated above, I have seen a shift from the business of taxonomies to the technology of ontologies.
I Disconnect from You
In my opinion, the roles of a taxonomist and of a technically skilled ontologist are still separate. While many people in the industry have the skills to do the work of both, the paths to the two roles have been different. Many taxonomists have library science degrees. They likely have technical skills, but are more focused on the information science aspect of taxonomy and ontology development, interfacing directly with the business and providing other services, such as research, business analysis, and support for use cases relying on semantic models. Ontologists are typically computer scientists who can code and develop the technical infrastructure for ontologies. Finding resources who can do both, or who like to, has not been common. This may be changing. Certainly the roles are asking for both, with, in my view, a leaning toward the technical.
Again, speculatively, is there a shift toward more technical resources in support of rising AI use in organizations? Is there a move to cut out the intermediary roles of taxonomists to let the business owners and technical implementers of taxonomies and ontologies interface more directly? If so, what do organizations lose in the process?
The Connect
If I were reading this blog, I would think the author was trying to sell you on the value of taxonomists with more of the “soft” skills of research, business stakeholder interaction, and translation of business requirements into taxonomies, working alongside the more technical resources who support their implementation and move them to actionable production. After catching up on a few seasons of Landman and hearing in my head at some point in every episode, “Brought to you by [insert name of large oil company]”, maybe I’m a little sensitive to reading between the lines. You don’t have to read between the lines here, though. I am selling you on the value of taxonomists for all of the reasons I’ve listed above. If there is indeed a shift from the business skills of taxonomists to the technical skills of ontologists, instead of having the excellent skills of both, then your organization is missing out on a valuable resource who can work alongside AI technologies to bring business requirements and practical domain-building skills to bear.
In summary, I believe there are two necessary components to bridge the seeming disconnect between AI and the foundational data quality governance needed to make AI operational:
- A taxonomist who can
  - build taxonomies and ontologies to create domain-specific semantic models representing the business,
  - provide business analysis and requirements for technical implementation, and
  - be the human-in-the-loop working with AI tools to continue building out, expanding, and governing semantic models; and
- Technical engineers who can
  - operationalize ontologies by building data pipelines to AI tools, and
  - focus on the engineering aspects of sharing out and productionizing ontologies for use across the enterprise.
An enterprise with both of these roles isn’t creating unnecessary resource overhead or additional layers effectively slowing the path from automation to implementation; rather, the two roles work in harmony to clarify requirements and optimize the use of AI in a variety of applications meeting business use cases. Taxonomists and ontologists are bridges between the business and the technical implementation enabling business needs.
