Series 13 - The Data Debt Decision: Why Your ERP Migration Is Automating the Wrong Things

By: Ryigit

About this listen

You are spending millions to migrate to a modern ERP. The architecture will be cleaner. The processing will be faster. And every piece of legacy data debt you carry into the new system will now propagate at a velocity ECC never reached. The Data Debt Decision examines the most overlooked risk in enterprise ERP programmes: the data.

Hosted by Rıdvan Yiğit | Founder & CEO, RTC Suite
rtcsuite.com · ridvan.yigit@rtcsuite.com · linkedin.com/in/yigitridvan

Ryigit Economics
Episodes
  • Series 13 - The Deep Dive: Stop Automating Business Mistakes at Light Speed: The Complete Architecture of Data Strategy in ERP Migration
    Apr 13 2026

    There is a phrase that captures the most consequential risk in modern ERP migration programmes more precisely than any technical specification: you are not migrating your data. You are automating your mistakes at light speed.

    A legacy ERP operating on poor-quality data produces errors slowly, in batches, with the natural friction of the system providing some degree of human review opportunity. A modern S/4HANA or cloud ERP operating on the same poor-quality data produces errors continuously, in real time, propagated across every downstream system before any human review is possible. The speed that makes the new system valuable makes every data quality problem in it more consequential, more expensive, and more difficult to contain.

    This deep dive is the most comprehensive technical treatment available of what a genuine data strategy for ERP migration looks like — from the initial data audit through canonical data model design, master data governance architecture, the migration workstream structure, the quality gates that determine what enters the new system, and the post-go-live data operating model that maintains the quality of what was built.

    We begin with the data audit: what a genuine assessment of legacy data quality requires, how to characterise data debt by category and business impact, and what the audit's output needs to contain to inform both migration design decisions and the canonical data model specification.

    We then examine canonical data model design in depth: what standardisation across entity-level chart of accounts structures requires, how tax classification codes need to be normalised to support real-time CTC compliance, what master data completeness standards are required for AI agent deployment, and how the canonical model is documented and governed as a long-term asset rather than a one-time migration deliverable.

    We address the migration workstream structure: how data quality work is organised within the programme, who owns it, how it connects to the technical migration workstream, and what the quality gate architecture looks like — the specific validation rules that determine whether data is allowed to migrate, and the process for handling data that fails those gates.
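    A quality gate of the kind described here can be pictured as a set of per-record validation rules, with records that fail any rule quarantined for remediation rather than loaded. The sketch below is illustrative only — the rule names, field names, and country whitelist are assumptions, not any programme's actual specification.

    ```python
    # Illustrative migration quality gate: each rule checks one property of a
    # vendor master record; records failing any rule are quarantined instead of
    # being loaded into the new system. All field names are assumptions.

    def has_vat_number(record):
        return bool(record.get("vat_number"))

    def has_valid_country(record):
        return record.get("country") in {"DE", "TR", "GB", "FR"}  # illustrative whitelist

    def has_payment_terms(record):
        return record.get("payment_terms") is not None

    GATE_RULES = [
        ("vat_number_present", has_vat_number),
        ("country_code_valid", has_valid_country),
        ("payment_terms_present", has_payment_terms),
    ]

    def run_quality_gate(records):
        """Split records into (loadable, quarantined) based on the gate rules."""
        loadable, quarantined = [], []
        for record in records:
            failures = [name for name, rule in GATE_RULES if not rule(record)]
            if failures:
                quarantined.append({"record": record, "failed_rules": failures})
            else:
                loadable.append(record)
        return loadable, quarantined

    vendors = [
        {"id": "V001", "vat_number": "DE811234567", "country": "DE", "payment_terms": "NET30"},
        {"id": "V002", "vat_number": "", "country": "XX", "payment_terms": None},
    ]
    ok, bad = run_quality_gate(vendors)
    ```

    The point of the gate is that the failure list is an explicit programme artefact: each quarantined record carries the names of the rules it failed, which is what the remediation workstream works from.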

    We examine the compliance dimension in specific detail: the master data conditions that CTC compliance requires from day one of go-live — VAT registration completeness, tax code consistency, document reference chain integrity, business partner data currency — and what happens when those conditions are not met in a real-time validation environment.
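    The tax code consistency condition above can be illustrated as a simple referential check: every tax code used on transactions must resolve to an entry in the canonical tax classification table. The table contents and field names below are assumptions for the sketch, not any system's real schema.

    ```python
    # Illustrative tax-code consistency check: flag transactions whose tax code
    # does not resolve to the canonical tax classification table. Codes, rates,
    # and field names are assumptions made for this sketch.

    CANONICAL_TAX_CODES = {
        "V1": {"rate": 0.19, "kind": "domestic_standard"},
        "V0": {"rate": 0.00, "kind": "exempt"},
        "EU": {"rate": 0.00, "kind": "intra_community"},
    }

    def inconsistent_tax_codes(transactions):
        """Return transactions referencing tax codes absent from the canonical table."""
        return [t for t in transactions if t["tax_code"] not in CANONICAL_TAX_CODES]

    txns = [
        {"doc": "1900000001", "tax_code": "V1"},
        {"doc": "1900000002", "tax_code": "A9"},  # legacy code that was never normalised
    ]
    bad_txns = inconsistent_tax_codes(txns)
    ```

    In a batch-oriented legacy system, a dangling code like `A9` surfaces at month-end; in a real-time CTC validation environment, it rejects the document at the moment of submission — which is why these conditions must hold from day one of go-live.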

    We then address the AI readiness dimension: what data quality properties the Intelligence Hub architecture requires, why semantic decay makes data extracted from poorly maintained legacy systems unreliable for AI agent deployment, and how the canonical data model established during migration becomes the foundation on which every subsequent AI investment operates.

    Finally, we examine the post-go-live data operating model: the governance structures that maintain canonical data quality after go-live, the monitoring architecture that detects data quality degradation before it becomes a compliance or analytical problem, and the data stewardship roles that represent a new organisational capability in the finance and technology functions.


    Keywords: ERP data migration architecture complete guide, SAP S/4HANA data strategy deep dive, stop automating ERP data mistakes, SAP migration canonical data model, ERP data quality audit migration, S/4HANA master data completeness, CTC compliance data quality requirements, SAP migration data governance


    About the Host

    Rıdvan Yiğit is the Founder & CEO of RTC Suite — the world's first Autonomous Compliance and Payment Intelligence platform, built natively on SAP BTP and operating across 80+ countries.


    Connect with Rıdvan:

    🔗 linkedin.com/in/yigitridvan

    ✉ ridvan.yigit@rtcsuite.com

    📞 +90 545 319 93 44


    Learn more about RTC Suite:

    🌐 rtcsuite.com

    23 mins
  • Series 13 - The Debate: The $50 Million Data Debate: Should ERP Migrations Fix Data Quality — or Is That Someone Else's Problem?
    Apr 13 2026

    It is one of the most consequential arguments in enterprise technology programme management — and it is almost never framed explicitly as a debate. Instead, it is resolved by default, in scope management conversations that happen under timeline pressure, without adequate representation from the functions that will live with the consequences.

    The argument is this: should ERP migration programmes be responsible for data quality, or should data quality be treated as a separate initiative that precedes, follows, or runs in parallel with the migration?

    Both positions have serious arguments behind them, and this episode gives each its full hearing.

    The case for keeping data quality out of the migration scope: ERP migrations are already among the most complex, risky, and expensive programmes organisations undertake. Adding a data quality transformation to an already demanding programme increases scope, extends timeline, and creates additional failure modes that can jeopardise the go-live delivery that the entire investment is predicated on. Data quality is a business ownership problem, not a technology problem — the programme team should migrate what exists and let the business functions own the quality of their own data.

    The case for treating data quality as a migration workstream: the migration is the only moment when the organisation has a genuine mandate, a defined programme structure, and the full engagement of the data owners required to address data quality at the source. Post-migration remediation is dramatically more expensive and organisationally difficult than pre-migration cleansing, because it requires modifying data in a live production system that the business is already operating on. And the canonical data model — the standardised, cross-entity, AI-ready data foundation — can only be built effectively during the migration, when the data architecture is being redesigned anyway.

    We then examine a third position that the binary framing obscures: the bounded data quality approach that uses the migration as the moment to establish the data governance principles and the canonical model, without attempting to remediate every historical data quality issue before go-live. This approach has specific design requirements and specific limitations — but it resolves the tension between the two positions in a way that neither pure position achieves.

    What We Cover:

    00:00 — Introduction: The debate that gets resolved by default in scope meetings
    02:00 — The full case for keeping data quality outside migration scope
    05:00 — Where this position fails: what the post-migration remediation reality looks like
    08:00 — The full case for data quality as a migration workstream
    11:00 — Where this position fails: scope, timeline, and the complexity argument
    14:00 — The bounded approach: canonical model now, full remediation over time
    17:00 — What the bounded approach requires to work: design principles and governance
    20:00 — The AI readiness argument: why data quality decisions made now determine AI capability later
    23:00 — Programme governance: who should own the data quality decision
    25:30 — The economic model: pre-migration vs. post-migration remediation cost comparison
    28:00 — Closing verdict: conditions under which each approach is genuinely preferable

    Keywords: ERP migration data quality debate, SAP S/4HANA data quality programme, ERP data strategy decision, SAP migration scope data quality, ERP canonical data model migration, data quality ERP programme governance, SAP data remediation cost, ERP migration data ownership, S/4HANA data architecture debate, enterprise data migration strategy


    19 mins
  • Series 13 - The Critique: Stop Migrating Legacy Data Debt Into Your New ERP — The Structural Critique of How Enterprise Programmes Get Data Wrong
    Apr 13 2026

    The dominant model for data in ERP migration programmes is Extract, Transform, Load: take the data from the legacy system, apply whatever transformations are needed to make it structurally compatible with the new system, and load it. The model is technically competent. It reliably produces a new system populated with data. It does not reliably produce a new system populated with good data.

    The critique this episode makes is structural rather than executional. The ETL model applied to enterprise data migration is not failing because the tools are inadequate or the teams are insufficiently skilled. It is failing because it was designed to solve a different problem — moving data from one schema to another — and is being applied to a problem it was never designed to solve: improving the quality, consistency, and strategic utility of the enterprise's data asset at the moment of system transition.

    We examine three specific categories of legacy data debt that ETL-based migration consistently carries forward rather than resolving: master data inconsistency, which includes the duplicate vendor records, the inconsistently maintained tax classification codes, the chart of accounts structures that reflect historical decisions rather than current operating requirements; transactional data incompleteness, which includes the missing document reference chains, the unreconciled intercompany balances, and the open items that are open not because they represent genuine business positions but because nobody ever closed them; and structural data heterogeneity across entities, which makes multi-entity consolidation and group-level analytics structurally difficult regardless of what the new ERP is technically capable of producing.
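    The duplicate vendor records named in the first category are typically surfaced by normalising candidate match keys before comparison, so that cosmetic variants of the same vendor collide on one key. A minimal sketch, with illustrative fields and an assumed list of legal-form suffixes:

    ```python
    # Minimal duplicate-vendor detection: normalise name and VAT number into a
    # match key, then group records sharing a key. Fields, suffix list, and
    # sample data are assumptions made for this sketch.
    import re
    from collections import defaultdict

    def match_key(record):
        name = re.sub(r"[^a-z0-9]", "", record["name"].lower())
        # Strip common legal-form suffixes so "Acme GmbH" and "ACME G.m.b.H." collide.
        for suffix in ("gmbh", "ltd", "inc"):
            if name.endswith(suffix):
                name = name[: -len(suffix)]
        vat = re.sub(r"\s", "", record.get("vat_number", "")).upper()
        return (name, vat)

    def find_duplicates(records):
        groups = defaultdict(list)
        for r in records:
            groups[match_key(r)].append(r["id"])
        return [ids for ids in groups.values() if len(ids) > 1]

    vendors = [
        {"id": "V001", "name": "Acme GmbH", "vat_number": "DE811234567"},
        {"id": "V207", "name": "ACME G.m.b.H.", "vat_number": "de 811234567"},
        {"id": "V350", "name": "Beta Ltd", "vat_number": "GB123456789"},
    ]
    dupes = find_duplicates(vendors)
    ```

    Schema-level ETL transformation moves both `V001` and `V207` into the new system without complaint; it is exactly this kind of semantic matching that sits outside what the ETL model was designed to do.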

    For each category, the episode traces the downstream consequences: the compliance failures they create in real-time tax environments, the analytical limitations they impose on AI and business intelligence deployments, and the remediation costs they generate in the years after go-live when the debt becomes visible as operational problems rather than data quality statistics.

    The critique closes with a specific argument about programme governance: the organisations that migrate data debt are not making irrational decisions. They are making rational decisions within a programme governance framework that measures success by go-live delivery and does not measure the quality of what was delivered. Changing the outcome requires changing the measurement.


    Keywords: legacy data debt ERP migration, SAP data quality critique, ERP ETL migration failure, master data ERP migration, SAP S/4HANA master data inconsistency, ERP data migration governance, transactional data debt SAP, ERP data quality programme, S/4HANA canonical data model migration, ERP open items migration, multi-entity data heterogeneity ERP, SAP migration data failure, ERP analytics data quality, AI ERP data foundation, S/4HANA data governance



    17 mins