Enterprise Architecture for Scalable Innovation: From Architecture to Execution Across the Enterprise


Enterprise innovation rarely fails because teams lack ideas. It fails because the organization cannot move ideas into production safely, quickly, and repeatedly. In regulated, high-volume environments like financial services, that gap widens fast: new channels emerge, new markets open, demand peaks hit, and risk expectations rise—often all at once. In a conference session titled "From Architecture to Execution: Enabling Scalable Innovation Across the Enterprise," Yaron Yaniv, Senior Vice President and Global Chief Architect at GM Financial, outlined a pragmatic approach to enterprise architecture for scalable innovation that connects business outcomes to real technical execution.

Yaniv’s core message is simple: architecture is not the goal. Architecture is the enabler. When built intentionally, it helps the business make real-time decisions, accelerate delivery, and break down silos that slow execution. When built poorly—or treated as a static set of standards—it becomes friction. The path forward is not a big-bang transformation. It is a journey of iterative capability-building: establish the right “rails,” make a few big decisions, leave room for evolution, and continuously align architecture work to measurable business priorities.

"You do not build architecture for the sake of building good architecture. You build architecture to enable business strategy and positive business outcomes."

— Yaron Yaniv, Senior Vice President, Global Chief Architect, GM Financial


Tie enterprise architecture to business success and customer outcomes

Build for expansion, not today’s org chart

Yaniv described GM Financial as a captive auto-finance company supporting GM, spanning a portfolio that includes leasing, purchasing transactions, insurance and protection products, and a planned expansion into banking. The takeaway is broader than one company: enterprise architecture has to anticipate expansion across products, geographies, and channels.

A loan origination system might work for one market and one sales motion today. Then business conditions change—new countries, different selling models, new customer types. Architecture that is designed as a platform can serve multiple use cases without rewriting the foundation each time the business evolves.

Design for elasticity during peaks, not constant maximum capacity

Enterprises experience predictable demand spikes. Yaniv gave a simple business reality: there are peak events across the year, but it is wasteful to pay for peak compute capacity year-round when demand only surges for a portion of the calendar. The architectural implication is clear—elasticity is business-driven, not “nice to have.”

This also reframes cost conversations. Architecture decisions that enable scaling up for peak demand and scaling down afterward are not just technical optimizations. They protect margin and improve customer experience during high-pressure moments.
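The economics above can be made concrete with a target-tracking scaling rule. This sketch loosely mirrors the formula used by autoscalers such as the Kubernetes HorizontalPodAutoscaler; the function name, target utilization, and bounds are illustrative assumptions, not values from the session:

```python
def desired_replicas(current: int, utilization: float, target: float = 0.6,
                     floor: int = 2, ceiling: int = 20) -> int:
    """Target-tracking rule: size capacity to observed load, not the annual peak."""
    proposal = round(current * utilization / target)
    return max(floor, min(ceiling, proposal))

print(desired_replicas(4, 0.9))    # peak traffic: scale out
print(desired_replicas(10, 0.2))   # off-peak: scale in, but never below the floor
```

The floor and ceiling are the governance piece: elasticity stays bounded, so peak events are absorbed without paying for maximum capacity year-round.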

Use an architecture framework that connects capabilities to APIs

To connect business drivers to technical execution, Yaniv emphasized the importance of an architecture framework that maps business capabilities down to APIs. This creates shared language between business and engineering, clarifies what capabilities exist, reveals gaps, and makes impact analysis more reliable when new initiatives arrive.

In practice, capability-to-API mapping prevents the most common enterprise failure mode: teams launch projects without understanding downstream coupling. A framework makes it easier to see which APIs and domains will be affected when a new partnership, channel, or product line emerges.
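A minimal sketch of what such a mapping can enable (the capability and API names here are hypothetical, not GM Financial's actual model): given a capability-to-API map, impact analysis becomes a simple query rather than tribal knowledge.

```python
# Hypothetical capability-to-API map; real frameworks hold far richer metadata.
CAPABILITY_MAP = {
    "loan-origination": ["credit-check-api", "document-intake-api", "decision-api"],
    "customer-onboarding": ["identity-api", "document-intake-api"],
    "payments": ["payment-gateway-api", "ledger-api"],
}

def impacted_capabilities(api_name: str) -> list[str]:
    """Return every business capability that depends on a given API."""
    return [cap for cap, apis in CAPABILITY_MAP.items() if api_name in apis]

# A change to document intake touches two business capabilities, not one.
print(impacted_capabilities("document-intake-api"))
```

The value is the shared language: business stakeholders reason about capabilities, engineers reason about APIs, and the map keeps both views consistent when a new initiative arrives.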

Make big decisions, but avoid over-prescribing details

Architecture can either enable or constrain. Yaniv’s guidance was to make a few big decisions—key frameworks, major patterns, and strategic technology choices—without prescribing every detail up front. The reason is not philosophical. It is operational: things change, maturity evolves, and engineers contribute meaningfully to implementation patterns over time.

This approach also reduces the “architecture as gatekeeper” dynamic. Instead of forcing teams to comply with an exhaustive rulebook, architecture sets direction, creates rails, and stays flexible as the organization learns.

Flexibility is a core principle, especially as AI changes rapidly

Yaniv stressed that flexibility is a central architectural requirement because it is difficult to predict how the future will change. He pointed to AI as the clearest example of a domain moving at breakneck velocity. Locking into overly rigid commitments based on today’s patterns can create long-term drag.

Flexibility does not mean a lack of standards. It means making decisions that preserve optionality: modular designs, clear interfaces, and the ability to swap components as tooling and requirements evolve.

Enable real-time decisions with the right “pipes”

Real-time is becoming a necessity, not a feature

Yaniv’s position is direct: real-time is no longer optional. Not every workflow must be real-time, but treating real-time as a guiding principle changes decision-making across the architecture landscape. It pushes teams to design for speed, observability, and responsiveness by default.

This is not about chasing trends. Real-time decisioning directly affects business outcomes—risk decisions, operational awareness, and the ability to evaluate initiatives quickly.

Streaming, edge computing, and near real-time model integration

To make real-time possible, you need enablers—what Yaniv called the pipes. He referenced streaming platforms such as Kafka, edge computing approaches that avoid slow batch feedback loops, and integration patterns that keep AI/ML model usage closer to real time.

The practical point: if the “edge” only sends information back once per day to a centralized system, you will not achieve real-time flows. Architecting near real-time means designing data movement, processing, and decisioning as a continuous system, not a periodic transfer.
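To illustrate the difference, a toy in-memory stream processor (plain Python standing in for a platform like Kafka; the event shape and decision rule are invented) makes each decision as the event arrives rather than accumulating a daily batch:

```python
import queue
import threading

def stream_worker(events: queue.Queue, decisions: list) -> None:
    """Process each event as it arrives instead of waiting for a daily batch."""
    while True:
        event = events.get()
        if event is None:               # sentinel: no more events
            break
        # decisioning runs per event, so feedback is continuous, not periodic
        decisions.append({"score": event["score"], "approved": event["score"] > 600})

events: queue.Queue = queue.Queue()
decisions: list = []
worker = threading.Thread(target=stream_worker, args=(events, decisions))
worker.start()

for score in (580, 640, 710):           # events trickle in over time
    events.put({"score": score})
events.put(None)
worker.join()
print(decisions)
```

In a real deployment the queue is a durable streaming platform and the worker is a consumer group, but the architectural shape is the same: data movement, processing, and decisioning form one continuous system.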

Operational dashboards and fast feedback loops

Yaniv highlighted the need for operational data that helps teams understand outcomes quickly—whether a campaign succeeded, whether a feature achieved the adoption expected, and where friction is appearing.

This is where architecture intersects with execution. Real-time systems are only valuable if the business can interpret signals and act on them. Dashboards, metrics, and accessible operational data become part of the architectural product.

Observability and rapid incident response are part of the architecture

In production, failures happen. Yaniv’s framing is pragmatic: the question is not whether issues will occur, but how quickly you detect them, identify root cause, and restore service. That means observability cannot be bolted on later.

Architecture should include the instrumentation strategy, logging approach, and feedback mechanisms required to keep services resilient at enterprise scale.
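One way to make instrumentation part of the design rather than a bolt-on is to wrap operations so every call emits a structured log record. A minimal sketch; the service name, operation name, and log fields are assumptions for illustration:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("loan-service")   # service name is illustrative

def instrumented(operation: str):
    """Decorator that emits a structured log line with latency and outcome."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            status = "error"             # assume failure until the call succeeds
            try:
                result = fn(*args, **kwargs)
                status = "ok"
                return result
            finally:
                logger.info(json.dumps({
                    "op": operation,
                    "status": status,
                    "latency_ms": round((time.perf_counter() - start) * 1000, 3),
                }))
        return inner
    return wrap

@instrumented("score-applicant")
def score_applicant(income: float, debt: float) -> float:
    return income / max(debt, 1.0)

score_applicant(90_000.0, 30_000.0)
```

Because the record is structured JSON, detection and root-cause work can query latency and error rates directly instead of grepping free-form text.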

Security and governance for regulated environments (without slowing delivery)

Yaniv emphasized security and governance as foundational, especially in regulated industries. He referenced zero trust architecture as an approach that begins from a stance of minimal access, and he highlighted the importance of data lineage, observability, and explainability.

He also called out concrete practices: encrypt sensitive information when needed, use tokenization for credit card details, and meet regulatory requirements—sometimes through vendor capabilities that support compliance. The broader lesson is that governance must be designed into the rails so teams can move faster with confidence, not slower with fear.
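A simplified sketch of the tokenization idea for card details. An in-memory dictionary stands in for a hardened vault service here; a production system would never hold the mapping in process memory, and the function names are illustrative:

```python
import secrets

# In-memory stand-in for a token vault; real deployments use a hardened,
# access-controlled vault service (often a vendor capability).
_vault: dict[str, str] = {}

def tokenize(card_number: str) -> str:
    """Replace a card number with an opaque token; the mapping lives only in the vault."""
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = card_number
    return token

def detokenize(token: str) -> str:
    """Controlled reverse lookup, tightly access-governed in production."""
    return _vault[token]

token = tokenize("4111111111111111")
print(token)        # downstream systems only ever see this opaque value
```

The governance payoff is scope reduction: systems that store and pass tokens never touch the real card number, so teams can build on those rails quickly without expanding their compliance footprint.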

Integration patterns that keep data flowing across systems

Real-time decisioning depends on systems talking to each other. Yaniv pointed to open APIs on top of databases and systems, event-driven architecture, and approaches like data mesh or federated data stores so the organization is not trapped by a single “source database” bottleneck.

This is where architecture choices shape organizational speed. When integration is clean and well-governed, teams can build on shared capabilities rather than reinventing point-to-point connections.

Accelerate delivery with platforms, modularity, and an API-first mindset

Avoid the extremes: giant monoliths and uncontrolled microservices sprawl

Yaniv offered a grounded comparison: at one extreme, you have massive microservices environments that can become too complex at scale. He cited an example of Uber reaching thousands of microservices and later consolidating. At the other extreme, you have a monolith so large that parallel development is hard, regression testing is heavy, and scaling individual components is nearly impossible.

The goal is a path in between—progressively modularizing and building capabilities over time. This is not a switch you flip. It is a journey that requires investment and sequencing.

CI/CD maturity should match the enterprise, not copy hyperscalers

Yaniv referenced the aspiration of fast pipelines—change committed, merged, released, and deployed quickly. But he also made an important point: most enterprises do not need to operate like Google. At the same time, releasing only every few months is also not acceptable.

The architectural role is to design CI/CD “rails” that meet business velocity requirements while managing risk. That means incremental improvement rather than unrealistic targets that teams cannot sustain.

APIs reduce coordination overhead and speed up execution

Yaniv used the well-known Amazon internal principle: teams expose APIs and rely on contracts instead of meetings. The key insight is not that every enterprise should behave like Amazon. It is that API contracts reduce coordination drag.

When teams can integrate through stable interfaces, they spend less time negotiating dependencies and more time delivering outcomes. API-first architecture becomes a force multiplier for delivery speed.

Modular design enables parallel work and targeted scaling

Instead of jumping into hundreds of microservices, Yaniv advised starting with modularity: break the monolith thoughtfully, structure components so development and testing can happen in parallel, and scale services differently based on demand.

Modularity also supports flexibility. When components are decoupled, you can adapt faster—whether the driver is new channels, new markets, or new tooling.

Cloud-native decisions should be business-justified, but the journey matters

Yaniv acknowledged that cloud migration can be controversial. Some organizations do not move to the cloud; some went to the cloud and returned on-prem. Still, he noted that most enterprises continue migrating because cloud platforms provide scalable infrastructure and resilience that reduces the burden on constrained teams.

He also described a simple governance practice: in architecture review boards, if someone proposes an on-prem implementation, the first question is why not cloud. The point is not dogma. The point is to ensure decisions are intentional and aligned to long-term scalability and operational capability.

Microservices require operational readiness, not just containers

Yaniv warned against treating microservices as magic. Containers are only one part. Teams need logging, security, identity, and platform capabilities to run microservices reliably—managed services still require management. His guidance: start small, build the ecosystem, and scale only once the organization is ready.

This is delivery acceleration through sustainability. If you scale microservices before you have the rails, you trade short-term progress for long-term instability.

Break down silos with shared language, product mindset, and executive alignment

Enterprise taxonomy creates shared meaning across business and tech

To reduce misunderstanding and friction, Yaniv described building an enterprise taxonomy: definitions like what constitutes a loan, a term loan, a customer, and whether “customer” means dealer or end customer. This work takes time because it requires consensus across business, engineering, and architecture.

GM Financial leveraged an industry model for financial services called BIAM and customized it—removing what was not relevant while keeping a strong base. The key is not the model name. It is the approach: start from a proven framework when possible, and tailor it to your enterprise.

Treat architecture as a product with services and a backlog

Yaniv advocated for product mindset beyond customer-facing products. Architecture teams can define the services they provide: solution design support, concierge support for review boards, infrastructure design guidance, and more.

A product mindset creates a natural mechanism for backlog management and prioritization. Requests become visible, prioritized, and aligned—reducing the hidden queue that often drives frustration between teams.

Align priorities to OKRs to keep architecture tied to outcomes

Yaniv described using OKRs to connect architecture priorities to organizational goals. If the business has an objective—such as increasing loan volume in a specific segment—architecture can identify what it can deliver to support that outcome.

This keeps architecture from becoming disconnected from value. It also helps justify investment in platforms and rails because the linkage to measurable outcomes is explicit.

Use short, iterative cycles instead of long, delayed delivery

Yaniv emphasized that long delivery cycles often fail because needs change by the time the solution arrives. The alternative is iterative delivery that is agreed upon with stakeholders: deliver smaller increments to a limited group, learn, adjust, and expand.

He offered a practical example: enabling access for a small number of agents for a specific use case, then expanding based on results. This builds trust while reducing risk.
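The "small group first, then expand" pattern is commonly implemented with a deterministic rollout gate. This sketch uses one common hashing approach under invented names; it is an illustration of the pattern, not the mechanism Yaniv described:

```python
import hashlib

def in_rollout(agent_id: str, feature: str, percent: int) -> bool:
    """Deterministically bucket an agent: the same agent always gets the same answer."""
    digest = hashlib.sha256(f"{feature}:{agent_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

# Start with a small pilot, then raise the percentage as results come in.
pilot = [a for a in (f"agent-{i}" for i in range(1000))
         if in_rollout(a, "new-origination-ui", 5)]
print(f"{len(pilot)} of 1000 agents fall in the 5% pilot")
```

Because bucketing is deterministic, a pilot agent stays in the pilot across sessions, and widening the rollout is a one-line configuration change rather than a redeployment.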

Make “big, meaty” architecture decisions at the right level

Yaniv described creating an Architecture North Star Steering Committee—executive leaders meet regularly to discuss significant architectural decisions that affect large parts of the organization. Examples included enterprise strategy decisions and service maturity strategy across applications.

This is a structural approach to eliminating delays caused by indecision or late-stage disagreements. When executives align early, delivery teams avoid rework later.

Applied innovation at enterprise scale: the Innovation Lab and experimentation environments

Remove structural friction from experimentation

Yaniv described a recurring problem: the business and engineering teams wanted to test new capabilities—new data inputs for loan approval, decision engines, and other experiments—but the standard process was too slow. Procurement cycles took months. Provisioning hardened environments took months. Innovation was bottlenecked by default enterprise controls.

The solution was an Innovation Lab built around lightweight experimentation environments on a separate tenant, isolated from production. This created a controlled space where teams could try ideas without waiting for the full enterprise path designed for production.

Adoption requires automation, not just permission

The Innovation Lab was paired with a simplified, automated process. A ticket submission triggered automation that provisioned a new Azure subscription in minutes. Yaniv was clear: without automation, adoption would be low. Speed is not a memo. It is a system.

Reframe ROI expectations for high-beta experiments

Yaniv noted that GM is highly ROI-driven, and innovation does not always map cleanly to upfront ROI—especially for high-risk, high-return experiments. The team addressed this through education and controlled cost structures.

He described a model where a subscription might cost about 1,000 dollars, run for 30 days, and remain governed by rails. This made experimentation feel like a manageable, bounded investment rather than an open-ended spend.

Early results and a realistic success rate

In Q&A, Yaniv shared early outcomes: roughly 15 projects ran through the Innovation Lab after launch in early 2025, and about three were moved into the formal SDLC pipeline. That is about a 20 percent progression rate—early, but promising.

The deeper insight is not the percentage. It is that the enterprise created a repeatable mechanism for learning: fast experiments, controlled environments, and a clear path to scale when the signal is strong.

Modernizing legacy systems without big-bang risk

Yaniv also addressed how to decide between leaving a system as-is and modernizing. If a system runs occasionally and is low-risk, it might be best left alone. But if a system drives the majority of revenue and carries high operational risk due to legacy technology and lack of documentation, modernization becomes a business necessity.

He advised against big-bang replacement. Instead, carve out capabilities, build domain-driven APIs, pilot with a small user set, and scale gradually. He described an example approach: start with a small number of agents on the new system, while the broader population remains on legacy, enabling a controlled transition.
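This carve-out approach is often called the strangler-fig pattern. A minimal routing sketch, with invented function and agent names, shows how a small pilot group can be served by the new domain API while everyone else stays on legacy:

```python
# Pilot membership, route(), and both backends are invented names for illustration.
PILOT_AGENTS = {"agent-007", "agent-042"}

def legacy_lookup(agent_id: str) -> str:
    return f"legacy-response-for-{agent_id}"     # existing monolith path

def new_domain_api(agent_id: str) -> str:
    return f"modern-response-for-{agent_id}"     # carved-out domain API

def route(agent_id: str) -> str:
    """Send only the pilot group to the new capability; widen the set as confidence grows."""
    if agent_id in PILOT_AGENTS:
        return new_domain_api(agent_id)
    return legacy_lookup(agent_id)

print(route("agent-007"))   # served by the new domain API
print(route("agent-999"))   # still served by legacy
```

The routing layer is what makes the transition controlled: expanding the pilot, or rolling it back, is a membership change rather than a cutover event.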

Citizen development and AI: more builders, stronger rails

In another Q&A thread, Yaniv shared his perspective that citizen development will grow rapidly, driven by tools like Microsoft Copilot and Copilot Studio, which enable users to create agentic workflows and retrieval-based assistants using enterprise knowledge sources.

His caution was consistent with his broader architecture philosophy: as barriers fall, the need for rails rises. Identity, access, auditability, and data governance must be clear. Some categories of work should not be built by citizen developers in regulated contexts, while other lower-risk use cases can be appropriate.

He also noted the rise of natural-language prototyping tools that can generate working applications quickly, with an important boundary: prototype does not mean production-ready. Setting expectations early prevents the misunderstanding that “it took 30 minutes to generate” equals “it is safe to deploy.”

Conclusion

Enterprise architecture for scalable innovation is not a document, a committee, or a set of standards. It is an operating model that helps the enterprise move from ideas to outcomes repeatedly. Yaron Yaniv’s session underscored a practical blueprint: anchor architecture to business success, design for real-time decisioning, accelerate delivery through platforms and API-first principles, and break down silos with shared language, product mindset, and executive alignment. The consistent theme is progress over perfection—make a few big decisions, build flexible rails, and iterate.

The Innovation Lab story brings the approach to life: experimentation becomes feasible when environments are lightweight, isolated, automated, and governed with clear cost controls and security boundaries. The same mindset applies to modernization: avoid big bangs, carve out capabilities, pilot with small groups, and scale once confidence is earned.

For enterprise technology leaders, the next step is straightforward: identify where architecture is adding friction today, then redesign the rails so teams can move faster with less risk. The payoff is not just better systems—it is an enterprise that can innovate on demand.

Contributors:

  • Yaron Yaniv, Senior Vice President, Global Chief Architect, GM Financial
