Blue Yonder's data model carries 20 years of backwards-compatible decisions. These are the tables that actually matter for load tracking.
Blue Yonder TMS (formerly JDA Transportation Manager, formerly Manugistics TMS, depending on how long you've been in the industry) is one of the more commonly encountered TMS platforms at enterprise shippers with complex transportation networks. It has real strengths in optimization and carrier management. Its data model reflects 20+ years of product evolution, with multiple generations of table structures coexisting in the same schema. Extracting clean load and tracking data from it requires knowing which tables are current and which are historical artifacts that the software still populates for backwards compatibility.
Blue Yonder TMS organizes its core data around three primary entities: Orders (the demand for transportation), Loads/Shipments (the planned or executed carrier movements), and Stops (intermediate points in a multi-stop route). Each of these has multiple tables in the schema, representing different generations of the data model and different integration patterns.
Transportation orders in Blue Yonder are stored in the ORDER_BASE table (header information: origin, destination, commodity, weight, required delivery date) and ORDER_RELEASE (the individual line items of an order). There's also an ORDER_MOVEMENT table that links orders to the loads they've been assigned to.
For data pipeline purposes, ORDER_MOVEMENT is the critical join table between orders and loads. An order may be split across multiple loads (partial shipment), combined with other orders on a single load (consolidation), or moved between loads if a load is cancelled and rebuilt. Using ORDER_MOVEMENT as the join table rather than trying to match orders to loads directly on address fields produces significantly cleaner data.
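A minimal sketch of the join, using an in-memory SQLite stand-in for the schema. The table names (ORDER_BASE, SHIPMENT, ORDER_MOVEMENT) come from the text; the column names are illustrative assumptions, not the actual Blue Yonder schema.

```python
import sqlite3

# In-memory stand-in for the Blue Yonder schema; columns are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ORDER_BASE (order_id TEXT PRIMARY KEY, origin TEXT, dest TEXT);
CREATE TABLE SHIPMENT   (shipment_id TEXT PRIMARY KEY, carrier TEXT);
CREATE TABLE ORDER_MOVEMENT (order_id TEXT, shipment_id TEXT);

INSERT INTO ORDER_BASE VALUES ('O1','ATL','DFW'), ('O2','ATL','DFW');
INSERT INTO SHIPMENT   VALUES ('S1','CARRIER_A'), ('S2','CARRIER_B');
-- O1 is split across two loads (partial shipment); O2 is consolidated onto S1
INSERT INTO ORDER_MOVEMENT VALUES ('O1','S1'), ('O1','S2'), ('O2','S1');
""")

# ORDER_MOVEMENT is the bridge: one row per order-to-load assignment,
# which handles splits, consolidations, and rebuilt loads uniformly.
rows = conn.execute("""
    SELECT ob.order_id, s.shipment_id, s.carrier
    FROM ORDER_BASE ob
    JOIN ORDER_MOVEMENT om ON om.order_id   = ob.order_id
    JOIN SHIPMENT s        ON s.shipment_id = om.shipment_id
    ORDER BY ob.order_id, s.shipment_id
""").fetchall()
```

Note that the split order O1 produces two result rows, one per load, which is exactly the grain a tracking pipeline wants; an address-based match would collapse or mismatch those rows.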
The legacy table to avoid: older Blue Yonder implementations have a SHIP_ORDER table from the Manugistics era. In some installations, it's populated in parallel with ORDER_BASE for backwards compatibility with legacy integrations. Do not build your pipeline on SHIP_ORDER — it may be deprecated in a future version and its structure diverges from ORDER_BASE in edge cases that cause join failures in multi-stop shipment scenarios.
The SHIPMENT table (confusingly named, since it represents what most people would call a "load" — a carrier movement that may contain multiple customer shipments) is the primary source for load-level data: carrier assignment, equipment type, planned departure, planned arrival, actual departure, actual arrival, and freight charges.
The distinction between planned and actual values in the SHIPMENT table deserves attention. Planned times (PLAN_DEPART_DATE, PLAN_ARRIVE_DATE) are set at load planning time. Actual times (ACTUAL_DEPART_DATE, ACTUAL_ARRIVE_DATE) are updated when status events are received — either from carrier EDI or from manual updates in the TMS UI. In many Blue Yonder installations, actual times are populated from carrier 214 EDI feeds that have been normalized (or not) through the TMS's EDI translation layer. The quality of the actual times in Blue Yonder depends entirely on how well the carrier EDI normalization is configured.
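A sketch of the planned-vs-actual delay calculation, with the null-handling that carrier-EDI-sourced actuals require. The column names PLAN_ARRIVE_DATE and ACTUAL_ARRIVE_DATE come from the text; the function is an illustrative helper, not a Blue Yonder API.

```python
from datetime import datetime

def arrival_delay_hours(plan_arrive, actual_arrive):
    """Delay vs PLAN_ARRIVE_DATE, in hours; None when no actual is posted.
    Actuals arrive via carrier 214 EDI or manual entry, so a missing value
    means 'not yet reported' -- never treat it as on-time."""
    if actual_arrive is None:
        return None
    return (actual_arrive - plan_arrive).total_seconds() / 3600.0

# Illustrative values; real timestamps come from the SHIPMENT columns.
plan = datetime(2024, 3, 1, 14, 0)
actual = datetime(2024, 3, 1, 17, 30)
delay = arrival_delay_hours(plan, actual)  # 3.5
```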
Multi-stop loads use the STOP table, which stores each stop on a load's route with stop sequence number, facility, planned times, and actual times. The SHIPMENT_STATUS table stores the status event history for each load — a time-ordered log of status codes received from carrier EDI or manually entered.
For tracking data pipelines, SHIPMENT_STATUS is more useful than the status columns on the SHIPMENT table itself. The status columns on SHIPMENT represent the current state; SHIPMENT_STATUS is the full event history. For on-time delivery analysis, exception analysis, and dwell time calculation, the event history is essential.
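Dwell time is a good illustration of why the event history matters: it needs both the arrival and departure events at a stop, which the current-state columns on SHIPMENT cannot provide. A sketch, assuming a time-ordered list of status events as SHIPMENT_STATUS would hold them; the status codes here are illustrative placeholders, not a definitive EDI 214 mapping.

```python
from datetime import datetime

# Time-ordered status events for one load; codes are illustrative.
events = [
    ("X3", datetime(2024, 3, 1, 8, 0)),    # arrived at pickup
    ("AF", datetime(2024, 3, 1, 10, 30)),  # departed pickup
    ("X1", datetime(2024, 3, 2, 6, 0)),    # arrived at delivery
    ("D1", datetime(2024, 3, 2, 7, 15)),   # completed delivery
]

def dwell_hours(events, arrive_code, depart_code):
    """Dwell = time between an arrival event and the matching departure."""
    arrive = next(t for c, t in events if c == arrive_code)
    depart = next(t for c, t in events if c == depart_code)
    return (depart - arrive).total_seconds() / 3600.0

pickup_dwell = dwell_hours(events, "X3", "AF")  # 2.5 hours at the shipper
```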
Blue Yonder installations that have been running for 5+ years often have legacy table patterns that were created for previous integration needs and were never cleaned up. Two are particularly common:
Many older Blue Yonder deployments were originally integrated with ERP or BI systems via scheduled flat file exports. To support these exports, custom database tables (often prefixed RPT_ or EXTRACT_) were created as staging areas, populated by nightly batch jobs that selected from the native Blue Yonder tables and wrote denormalized rows for easier file export processing.
When a data engineer first encounters a Blue Yonder deployment, these RPT_ tables may look like exactly what's needed — they're already denormalized, already contain the key fields for shipment analysis, and query fast because they're built for reads. The problem: they're populated nightly at best, they may not include all loads (some export jobs filter by status or date range), and they often reflect business rules from the original integration that may no longer be accurate.
Building a pipeline on RPT_ shadow tables means you're inheriting someone else's data selection logic without necessarily knowing what that logic excluded. Always validate shadow table data against the native tables for a representative sample before committing to them as a pipeline source.
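The validation can be as simple as a set comparison over a sample window. A sketch with hypothetical load IDs; the shadow table name and its filter behavior are assumptions, and in practice both sets would come from queries against SHIPMENT and the RPT_ table for the same date range.

```python
# Load IDs for one sample window: native SHIPMENT vs an RPT_ shadow table.
# Values are hypothetical; the goal is to quantify what the export excluded.
native_loads = {"S1", "S2", "S3", "S4", "S5"}   # from SHIPMENT
shadow_loads = {"S1", "S2", "S4"}               # from RPT_SHIPMENT_EXTRACT

missing = native_loads - shadow_loads   # loads the export job filtered out
extra = shadow_loads - native_loads     # loads only in the shadow table
coverage = len(shadow_loads & native_loads) / len(native_loads)

report = f"coverage={coverage:.0%} missing={sorted(missing)} extra={sorted(extra)}"
```

A coverage gap like the 60% in this sketch is exactly the inherited selection logic the text warns about; anything below 100% on a representative sample means the shadow table is filtering loads your pipeline would silently lose.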
Load tender tracking in Blue Yonder — the process of tendering a load to a carrier, receiving acceptance or refusal, and re-tendering if refused — creates a significant volume of event records. In some configurations, these events are stored in multiple places: the native TENDER_LOG table, a legacy CARRIER_TENDER_HIST table from a previous integration, and sometimes a TENDER_EVENTS table created for a specific reporting project.
For tender cycle time analysis (how long from initial tender to carrier acceptance, how many tenders before acceptance), use only the native TENDER_LOG table filtered to the relevant tender types. The other tables may have duplicate events, missing refusals, or incorrect timestamps from when they were populated.
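The two tender-cycle metrics the text names can be computed from TENDER_LOG rows in a few lines. A sketch with illustrative record shapes; the field names and response values are assumptions, not the actual TENDER_LOG schema.

```python
from datetime import datetime

# Tender events for one load, as TENDER_LOG might record them (fields assumed).
tenders = [
    {"carrier": "A", "tendered": datetime(2024, 3, 1, 9, 0), "response": "REFUSED"},
    {"carrier": "B", "tendered": datetime(2024, 3, 1, 11, 0), "response": "REFUSED"},
    {"carrier": "C", "tendered": datetime(2024, 3, 1, 13, 0), "response": "ACCEPTED",
     "responded": datetime(2024, 3, 1, 13, 45)},
]

def tender_cycle(tenders):
    """Return (hours from first tender to acceptance, number of tenders)."""
    first = min(t["tendered"] for t in tenders)
    accepted = next(t for t in tenders if t["response"] == "ACCEPTED")
    hours = (accepted["responded"] - first).total_seconds() / 3600.0
    return hours, len(tenders)

cycle_hours, tender_count = tender_cycle(tenders)  # (4.75, 3)
```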
Blue Yonder TMS typically contains three generations of freight charge data for every load. Understanding which one to use for different analytical purposes is important:
Estimated charges (EST_FREIGHT_COST on the SHIPMENT table) are calculated at load planning time using the rated carrier and lane rate, and are available as soon as the load is built. Tendered charges reflect the rate confirmed when the carrier accepts the tender. Audited charges are the final amounts after freight bill audit, and arrive last. For cost analysis that needs to be timely (weekly freight spend, monthly lane cost benchmarking), estimated or tendered charges are the practical data source. For accurate landed cost calculations and carrier contract compliance analysis, only audited charges are reliable. Most logistics analytics environments need both: estimated charges for operational dashboards, audited charges for financial reporting.
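The both-charges-per-purpose rule can be encoded directly in the extraction layer. A sketch; only EST_FREIGHT_COST is a column named in the text, and the tendered/audited field names are illustrative assumptions.

```python
# Select the freight charge appropriate to the analysis. Field names other
# than EST_FREIGHT_COST are assumed, not actual Blue Yonder columns.
def freight_charge(load, purpose):
    if purpose == "financial":            # landed cost, contract compliance
        return load.get("audited_cost")   # None until the audit completes
    # Operational dashboards: best available of tendered, else estimated.
    return load.get("tendered_cost") or load["EST_FREIGHT_COST"]

load = {"EST_FREIGHT_COST": 1180.0, "tendered_cost": 1225.0, "audited_cost": None}
ops_cost = freight_charge(load, "operational")  # 1225.0 (tendered)
fin_cost = freight_charge(load, "financial")    # None -- audit not complete
```

Returning None rather than falling back to the estimate for financial purposes is deliberate: a dashboard can tolerate an estimate, but financial reporting should surface the gap instead of silently mixing charge stages.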
Blue Yonder TMS and WMS systems share a shipment reference — typically the Bill of Lading number or a TMS shipment ID that's also present in the WMS outbound shipment record. Building this bridge correctly is a prerequisite to any analysis that combines transportation cost (from TMS) with fulfillment detail (from WMS).
The matching challenge: Blue Yonder generates its shipment reference at load build time. The WMS generates its shipment reference at pick completion or load confirmation. These two events happen at different times, and the reference numbers may not match directly — particularly if the WMS and TMS are running without a formal integration layer and instead use BOL number as the only common reference.
For environments where BOL is the join key, adding a date-proximity filter to the BOL match (accept a match only when the two records' ship dates fall within 48 hours of each other) reduces false positives from BOL number reuse across carriers in high-volume environments.
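The filtered join can be sketched as follows. The record shapes are illustrative assumptions; only the BOL-as-join-key pattern and the 48-hour window come from the text.

```python
from datetime import datetime, timedelta

# TMS loads and WMS shipments keyed by BOL; shapes are illustrative.
tms = [{"bol": "BOL100", "ship_date": datetime(2024, 3, 1)},
       {"bol": "BOL100", "ship_date": datetime(2024, 3, 20)}]  # BOL reused
wms = [{"bol": "BOL100", "ship_date": datetime(2024, 3, 2)}]

WINDOW = timedelta(hours=48)

# BOL equality alone would match both TMS rows; the date-proximity filter
# keeps only the March 1 load and drops the reused March 20 BOL.
matches = [
    (t, w)
    for t in tms
    for w in wms
    if t["bol"] == w["bol"] and abs(t["ship_date"] - w["ship_date"]) <= WINDOW
]
```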
Blue Yonder TMS contains rich transportation operations data that, when properly extracted and normalized, enables carrier performance analysis, freight cost optimization, and on-time delivery SLA management at levels of detail that most BI tools can't easily reach without a well-structured pipeline. The extraction challenges are real — legacy tables, shadow tables, and the estimated/tendered/audited charge distinction all require careful handling — but they're not insurmountable with good schema knowledge and appropriate extraction patterns.
The investment in understanding the native table layer rather than relying on pre-built extract views typically pays off in data completeness and accuracy that the shadow tables can't match.
MLPipeLab's Blue Yonder TMS connector uses native table extraction with pre-configured handling for legacy table patterns and freight charge stage differentiation. Request a demo to see it running on your TMS configuration.