Why FHIR Alone is not Enough

Part 2 - Examples of Common Pitfalls Found in "Valid" FHIR

Fast Healthcare Interoperability Resources (FHIR) validation checks structure—not meaning. The real risk lies in the terminology and code systems used within the data itself. Below are some common but often overlooked pitfalls we've seen derail interoperability projects. In each of these, the resource passes FHIR’s schema tests!

To prevent these issues, you need more than just a conformant payload. You need a Terminology Server to validate and manage code systems (such as TermHub), and robust Data Normalization tools to align legacy or incorrect codes with current standards (such as AutoMap). These aren’t just enhancements; they’re essential infrastructure for making FHIR data meaningful, computable, and trustworthy across systems.

In the previous post, we provided an overview of why FHIR alone is not enough and highlighted the components needed to enable effective, scalable interoperability in real-world healthcare settings. In the next post, we will look more deeply at how a terminology server and a data normalization solution help solve these issues.

Why is using text-only or display-only codes in FHIR a problem?

"code": {

   "text": "Myocardial infarction"

}

What goes wrong – While a CodeableConcept that includes only a text field (e.g., "text": "Myocardial infarction") or a coding array with elements that contain only a description may appear readable to humans, it lacks the structured, machine-processable elements required for reliable interpretation.

Without a coded identifier and a defined system (like SNOMED CT or ICD-10-CM), algorithms cannot reliably perform critical clinical processes such as:

  • Group patients with equivalent clinical concepts that are worded differently (e.g., "Heart attack" vs. "Myocardial infarction").

  • De-duplicate records across systems or within longitudinal data if text varies slightly.

  • Trend or analyze cohorts over time or across datasets, since string matching is error-prone and non-standardized.

The result? Incomplete or misleading analytics, missed cohorts in population health queries, inconsistent reporting, and unreliable clinical decision support.

How to prevent it – While there may be rare cases where no appropriate code exists and free text is necessary, these should be the exception rather than the rule. To ensure consistency and interoperability, CodeableConcept elements should use a valid, active code from a standard code system that is appropriate and authorized for the context in which the data will be used.

Implementing terminology validation, whether applied at the point of entry or during data ingestion into the system’s pipeline, is critical. It helps detect missing or invalid codes, enforces consistent coding practices, and plays a key role in supporting interoperability. Using tools such as TermHub and AutoMap to catch and correct text-only entries as they occur prevents issues before they propagate downstream.
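
For reference, here is a minimal sketch of a corrected CodeableConcept for the example above, pairing the original text with standard codings (SNOMED CT 22298006 and ICD-10-CM I21.9 for myocardial infarction):

"code": {
   "coding": [{
      "system": "http://snomed.info/sct",
      "code": "22298006",
      "display": "Myocardial infarction"
   }, {
      "system": "http://hl7.org/fhir/sid/icd-10-cm",
      "code": "I21.9",
      "display": "Acute myocardial infarction, unspecified"
   }],
   "text": "Myocardial infarction"
}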


Why is using code-only entries a problem in FHIR?

"code": {

   "coding": [{

      "system": "http://hl7.org/fhir/sid/icd-10-cm",

      “code": "I21.9"

   }]

}

What goes wrong – FHIR CodeableConcept elements are meant to support both machine processing and human readability. While a code-only entry (e.g., with no display or text field) may be valid from a technical standpoint, it lacks the human-readable components necessary for clarity and usability.

Without a display label, clinicians, analysts, and developers must manually look up the meaning of the code, slowing productivity and introducing risk. Worse, the same code can mean different things across versions, jurisdictions, or code systems. Taken together, a lack of human-interpretable information in a CodeableConcept element risks:

  • Confusion in interpreting clinical concepts

  • Delays in data review, validation, and integration

  • Miscommunication team-to-team and system-to-system

The result? Reduced trust in the data, inconsistent user experiences, and an increased chance of misinterpretation in both clinical and operational settings.

How to prevent it – Even when a valid code is present, best practice is to include a display field that captures exactly what the user saw when selecting the code. This provides essential context for humans reviewing or interacting with the data, especially across teams or systems.

To ensure clarity and consistency, implement terminology validation, either during data entry or as part of your ingestion pipeline, to verify that both code and display are populated and accurate. Tools like TermHub and AutoMap help ensure codes are paired with the correct human-readable labels and sourced from the appropriate terminology version, reducing ambiguity and improving trust in your data.
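
As a sketch, the corrected entry simply carries the human-readable label alongside the code (I21.9 is "Acute myocardial infarction, unspecified" in ICD-10-CM):

"code": {
   "coding": [{
      "system": "http://hl7.org/fhir/sid/icd-10-cm",
      "code": "I21.9",
      "display": "Acute myocardial infarction, unspecified"
   }],
   "text": "Acute myocardial infarction, unspecified"
}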


Can inactive concepts break FHIR data?

"code": {

   "coding": [{

      "system": "http://snomed.info/sct",

      "code": "155306008",

      "display": "Old myocardial infarction"

   }]

}

What goes wrong – FHIR allows any code, active or inactive, to be included in a CodeableConcept. While this maintains flexibility, it places the burden on implementers and downstream systems to detect and handle deprecated content.

In the example above, SNOMED CT concept 155306008 ("Old myocardial infarction") is inactive. If this inactive concept goes undetected, issues may arise such as:

  • Analytics may silently ignore the record, leading to incomplete cohorts

  • Decision support may misinterpret or misfire due to outdated logic

  • Exchange partners may reject the record based on policy or validation rules

FHIR does support a mechanism for checking code status ($lookup on CodeSystem), but this requires proactive implementation. Without validation, retired concepts can corrupt results, introduce silent errors, and reduce interoperability.
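
As an illustration, a $lookup request such as GET [base]/CodeSystem/$lookup?system=http://snomed.info/sct&code=155306008 returns a Parameters resource; on servers that expose the standard "inactive" concept property, the response might look roughly like this (exact parameters vary by server and SNOMED CT edition):

{
   "resourceType": "Parameters",
   "parameter": [
      { "name": "name", "valueString": "SNOMED CT" },
      { "name": "display", "valueString": "Old myocardial infarction" },
      { "name": "property", "part": [
         { "name": "code", "valueCode": "inactive" },
         { "name": "value", "valueBoolean": true }
      ]}
   ]
}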

How to prevent it – Implement terminology validation to detect inactive codes during data entry or ingestion. Alert users when a code has been retired and provide recommended active replacements when available. Tools like TermHub and AutoMap can automate this process, flag deprecated concepts in real time, and suggest semantically equivalent active codes, keeping your data clean, current, and compliant.

How do you handle inactive codes in FHIR?

"code": {

   "coding": [{

      "system": http://snomed.info/sct,

      "code":   “1755008”,

      "display": “Old myocardial infarction”

   }]

}

What goes wrong – Even when you correctly detect that a code is inactive, that’s only part of the problem. You still need to determine what to do with it. For example, the inactive SNOMED CT concept 155306008 ("Old myocardial infarction") shown above was retired and replaced by the active concept 1755008. But mapping from an inactive code to the correct active equivalent can be error-prone and slow, especially if the concept is ambiguous, outdated, or lacks a clear successor. Manual curation risks inconsistency and delays in processing clinical data.

If left unaddressed, inactive codes can:

  • Fragment cohorts and lead to undercounting in analytics

  • Trigger decision support failures if no active concept is recognized

  • Confuse or break downstream systems that expect active, valid codes

How to prevent it – Apply terminology validation with built-in support for handling inactivation and historical relationships. Where available, use automated mapping (e.g., through SNOMED CT "SAME AS" historical associations) to detect and translate inactive codes to appropriate active equivalents. Tools like TermHub and AutoMap can identify inactivated content in real time, trace forward mappings, and suggest replacements, minimizing disruption and preserving semantic accuracy across systems.
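
One way to automate that forward mapping is the FHIR ConceptMap $translate operation; SNOMED CT terminology servers commonly expose historical associations as implicit concept maps, so a translate request for the inactive code might return something along these lines (the exact shape and equivalence values vary by server and FHIR version):

{
   "resourceType": "Parameters",
   "parameter": [
      { "name": "result", "valueBoolean": true },
      { "name": "match", "part": [
         { "name": "equivalence", "valueCode": "equivalent" },
         { "name": "concept", "valueCoding": {
            "system": "http://snomed.info/sct",
            "code": "1755008",
            "display": "Old myocardial infarction"
         }}
      ]}
   ]
}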


What if the wrong code system is used in a FHIR resource?

"code": {

   "resourceType" : "Medication",

   "coding": [{

      "system": "http://snomed.info/sct",

      "code": "322890005",

      "display": "Carbamazepine 100 mg chewable tablet"

   }]

}

What goes wrong – On the surface, the data may appear valid—complete with a code, display, and system. But under the hood, it may violate FHIR best practices or jurisdictional guidelines. In this example, SNOMED CT is used to represent a medication product, but in the U.S., RxNorm is the required vocabulary for Medication.code.

Using the wrong code system can result in:

  • Clinical Decision Support errors due to mismatched expectations

  • Billing or claims rejections based on incompatible terminology

  • Exchange failures if receiving systems validate against stricter FHIR profiles

These mistakes can go unnoticed during viewing or manual inspection but undermine interoperability and reliability when processed downstream.

How to prevent it – Use terminology validation that not only checks for valid codes, but also verifies that each code is sourced from the appropriate system for its context. Align with FHIR resource definitions and implementation guides (e.g., US Core) that specify required or preferred code systems. Tools like TermHub and AutoMap can enforce these constraints and help auto-correct misaligned terminology during ingestion.
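
A corrected sketch attaches the code to the Medication resource and draws it from RxNorm; the RxCUI below is a placeholder, so the actual concept identifier and normalized name should be looked up in RxNorm:

{
   "resourceType": "Medication",
   "code": {
      "coding": [{
         "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
         "code": "<RxCUI for the product>",
         "display": "carbamazepine 100 MG Chewable Tablet"
      }]
   }
}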

How can incorrect content break FHIR data integrity?

"code": {

   "resourceType" : "Procedure",

   "coding": [{

     "system": "http://loinc.org",

      "code": "12345-5"

   }]

}

What goes wrong – While the code may be technically valid and well-formed, it may still be semantically incorrect. In this example, LOINC is used to describe a procedure even though LOINC is designed for observations, not procedures. This kind of misuse can lead to subtle but serious downstream issues:

  • Analytics and reporting may fail to classify procedures accurately

  • Quality measures may underperform due to inconsistent terminology

  • Clinical workflows can break when the receiving system expects a different vocabulary (e.g., SNOMED CT)

Additional risks arise when using extensions of a code system that the other party doesn’t support. For example, sending a SNOMED CT US Extension concept to a system that only supports the international edition may result in silent failures or data loss.

How to prevent it – Apply terminology validation that not only checks for valid codes but also enforces vocabulary expectations for each FHIR element based on resource definitions and implementation guides. Use tools like TermHub and AutoMap to flag invalid uses of code systems and detect mismatches, ensuring semantic accuracy and preventing integrity issues across systems.
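
As an illustrative sketch (using a different, well-known procedure concept for illustration), a Procedure coded with SNOMED CT looks like this:

{
   "resourceType": "Procedure",
   "code": {
      "coding": [{
         "system": "http://snomed.info/sct",
         "code": "80146002",
         "display": "Appendectomy"
      }]
   }
}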

Can you use proprietary, non-standard, or local code systems in FHIR?

"code": {

   "coding": [{

      "system": "http://acme.org/acmed",

      "code": "12345"

      "display": "Acute Documentation Fatigue"

   }]

}

What goes wrong – FHIR technically allows the use of proprietary or local code systems, but that doesn’t guarantee others will understand them. In the example above, the code system acmed is private to a specific organization. While the code 12345 and display may be valid within that system, any external partner who lacks access to the underlying definition cannot interpret the concept, making the data effectively meaningless outside its source. The result?

  • Data becomes opaque to downstream systems

  • Interoperability breaks unless manual coordination occurs

  • Usage of data may fail or the data may be ignored entirely

Sharing the custom code system’s documentation helps, but it requires manual, partner-by-partner coordination, slows integration, and does not scale.

How to prevent it – To ensure long-term interoperability, map proprietary code systems to standardized terminologies (e.g., SNOMED CT and LOINC) wherever possible. Use terminology management tools like AutoMap to create and maintain mappings, and TermHub to expose both internal and standardized views—enabling consistent interpretation across systems without requiring one-off coordination.
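
A lightweight way to publish such a mapping is a FHIR ConceptMap. A minimal R4-style sketch (the standard target concept is left as a placeholder, since the local code above is fictional) might look like this:

{
   "resourceType": "ConceptMap",
   "status": "active",
   "group": [{
      "source": "http://acme.org/acmed",
      "target": "http://snomed.info/sct",
      "element": [{
         "code": "12345",
         "display": "Acute Documentation Fatigue",
         "target": [{
            "code": "<active SNOMED CT concept>",
            "display": "<standard display>",
            "equivalence": "equivalent"
         }]
      }]
   }]
}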

Why does valid FHIR data still cause interoperability issues?

What goes wrong – Every example we've explored uses technically valid FHIR data: well-formed, schema-compliant, and syntactically correct. But FHIR alone doesn't guarantee semantic correctness or consistent interpretation across systems.

Interoperability problems arise not because the format is wrong, but because the terminology behind the codes is misused, inconsistent, inactive, incomplete, or non-standard.

The downstream impact shows up in use cases such as:

  • AI and cohorting algorithms miss matches or behave unpredictably due to inconsistent coding.

  • Analytics inflate error rates and produce inaccurate KPIs based on poorly normalized data.

  • Clinical decision support alerts fail to fire—or fire inappropriately—due to subtle terminology mismatches.

How to prevent it – Ensure semantic consistency by pairing FHIR with robust terminology infrastructure. Use a terminology server like TermHub to validate, resolve, and manage code systems, value sets, and expansions. Apply semantic normalization with tools like AutoMap to align, translate, or map terminology across jurisdictions, editions, and proprietary systems. Interoperability depends not just on structure, but on ensuring the meaning inside the data remains accurate, current, and computable across systems.
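
For instance, most FHIR terminology servers support the standard $validate-code operation. A request such as GET [base]/CodeSystem/$validate-code?url=http://snomed.info/sct&code=155306008 returns a Parameters result that an ingestion pipeline can act on; a server configured to reject inactive content might respond roughly like this (the message text is illustrative):

{
   "resourceType": "Parameters",
   "parameter": [
      { "name": "result", "valueBoolean": false },
      { "name": "message", "valueString": "Code 155306008 is inactive in the current SNOMED CT edition" }
   ]
}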


How can one improve FHIR data quality and terminology accuracy?

What goes wrong – Even when FHIR resources are structurally valid, the quality of the terminology inside them often determines whether they’re usable for downstream analytics, decision support, and exchange. Without proper validation, normalization, and governance, FHIR payloads can silently propagate inactive, ambiguous, proprietary, or misaligned codes, making them unreadable, untrustworthy, or misleading across systems.

How to prevent it – Improving FHIR data quality requires a combination of infrastructure, processes, and community engagement:

  • Terminology Servers catch inactive, incorrect, or mismatched codes at ingestion by validating code systems and value sets. (e.g., TermHub)

  • Semantic Normalization maps legacy, proprietary, or invalid codes to active, standardized concepts, preserving meaning and ensuring computability. (e.g., AutoMap)

  • Data Governance & QA Pipelines enforce internal coding policies and detect issues before data reaches production.

  • Contributing to Standards Development helps close systemic gaps by submitting requests for missing or incorrect concepts in terminologies like SNOMED CT, LOINC, or ICD-10-CM.

Together, these practices build the foundation for FHIR data that is not only valid, but truly interoperable.


Coming Up in Part 3

We’ll show how semantic normalization is the only scalable path to interoperability and how TermHub/AutoMap makes it hands‑free.

Have an example that caused you challenges? Drop us a note. We may feature it (anonymously) in a future post!
