Neat-Technology

Client Success Stories

Real transformations, measurable results. See how we've helped businesses optimize their IT infrastructure and embrace AI.

Cloud Migration Media & Entertainment

Origo Film Studios: €6M IT Infrastructure for Europe's 2nd-Largest Film Complex

Client: Origo Film Studios

€6M+
Infrastructure Investment
99.97%
System Uptime
50+
Productions Supported
30-50 TB
Daily Data Processing
<5ms
Network Latency
€300M+
Economic Impact

1 The Challenge

Build IT infrastructure from ground zero for Europe's newest mega-studio, capable of supporting simultaneous international blockbuster productions with Hollywood-standard technical requirements. The 18-hectare complex with 11 soundstages needed real-time 4K/8K processing, fiber-optic backbone, redundant systems, secure digital infrastructure protecting billion-dollar film assets, and soundproof data centers—all with zero tolerance for downtime as film production doesn't pause for IT failures.

2 Our Solution

Architected a comprehensive 5-tier infrastructure: (1) Fiber-optic backbone with 10 Gbps per soundstage and redundant ring topology, (2) Distributed data centers with N+1 redundancy and real-time replication, (3) Real-time 4K/8K dailies processing pipeline with ARRIScan XT and DFT Scanity systems, (4) Low-latency network for 8-ton robotic camera rigging with <5ms response time, (5) High-speed post-production network with 500 TB SAN and 10+ editing suites. All designed with automated failover and future-proof 96-fiber capacity.

3 The Results

Delivered world-class facility supporting 50+ major film productions including Blade Runner 2049, Dune, and The Witcher. Achieved 99.97% uptime (only 33 minutes/year maximum downtime) since 2011 launch—better than Pinewood London. Real-time 4K processing capability (industry-leading when most facilities used overnight processing). Infrastructure attracted €40M+ in production spending to Hungary, created 1,200+ permanent jobs, and generated €300M+ total economic impact.

"The infrastructure has operated flawlessly for over 15 years, supporting blockbuster productions from Blade Runner 2049 to Dune with zero production delays related to IT systems."
Studio Operations — Origo Film Studios
Cloud Migration Entertrainment

Building Europe's 2nd largest movie studio

Client: Origo Film Group

5ms
Network Latency - Exceeds requirements (10ms)
99.97%
Uptime - Better than Pinewood London (99.95%)
Real-time
4K Processing - Industry Leading
2 PB
Storage Capacity - Supports 5 simultaneous productions
50 TB/day
Dailies Processing
<100ms
Automated failover

1 The Challenge

Origo Film Studios needed to build IT infrastructure from ground zero for Europe's newest mega-studio capable of supporting: Operational Requirements: Multiple simultaneous international productions (Blade Runner 2049, Dune, The Witcher running concurrently) Hollywood-standard technical requirements (4K dailies, VFX workflows, real-time processing) 18-hectare complex with 11 soundstages (19,000 sqm filming space) Zero tolerance for downtime - film production doesn't pause for IT failures High-bandwidth VFX workflows - gigabits/second data flowing between stages Global connectivity - producers in LA/London, post-production across Europe The Core Problem Traditional broadcasting infrastructure wasn't sufficient. Origo needed: Real-time 4K/8K processing (not archived/overnight processing) Fiber-optic backbone (copper networks couldn't handle bandwidth) Redundant systems (backup for every critical component) Secure digital infrastructure (protecting billion-dollar film assets) Soundproof data centers (film sets require acoustic isolation) Technical Complexity Challenge 1: Data Volume 30-50 TB of raw footage per day (per production) Early estimates: 5 TB/day Actual peak usage: 50 TB/day Challenge 2: Real-Time Processing in 2010 4K dailies processing was theoretical in 2010, not production-proven Industry standard: overnight processing Origo requirement: same-day turnaround for director approval Challenge 3: Zero-Downtime Requirement €2M/day production costs if systems fail Hollywood productions operate on strict schedules No tolerance for "technical difficulties" Challenge 4: Acoustic Isolation vs. Data Centers Film sets require soundproof environments Data centers generate heat and noise Had to separate data centers geographically while maintaining performance

2 Our Solution

Architecture Overview - 5 Tiers Tier 1: Fiber-Optic Backbone Design: Point-to-point fiber connecting all 11 soundstages to post-production facilities Redundant ring topology (no single points of failure) Specifications: 10 Gbps minimum connectivity per soundstage 96-fiber cable capacity (future-proof for expansion) Real-time monitoring and automated failover in <100ms Why This Worked: Fiber immune to electromagnetic interference from stage lighting 10 Gbps supports simultaneous 4K feeds from multiple cameras Future-proof: ready for 8K workflows when technology arrived (2015-2020) Tier 2: Data Center Infrastructure Design: Distributed data centers: Main + backup with fiber connectivity Geographically separated 500m away from soundstages (acoustic isolation) Specifications: Main data center: 2,000 sqm, redundant power/cooling Backup data center: Real-time replication N+1 redundancy for all critical systems (one component can fail, system continues) Environmental monitoring (temperature, humidity, smoke detection) Why This Worked: €200M of film equipment depends on 24/7 uptime If main center fails, backup automatically takes over Zero acoustic interference with productions Tier 3: 4K/8K Dailies Processing Equipment: ARRIScan XT: Cinema-standard 2K/4K scanner DFT Scanity: 4K/8K capable scanner (cutting-edge for 2011) Colorfront Express DI: Real-time color correction Baselight grading system: Professional color grading Workflow: Camera films scene on-set → 30 TB raw footage/day per camera Files transfer via fiber to post-production facility in real-time Automated dailies processing overnight Director sees corrected dailies next morning (same-day turnaround) Final DI processed for distribution (2K, 4K, IMAX formats) Why This Worked: Dailies by morning → directors approve shots before crew moves to next scene Real-time processing → faster editing, shorter schedules Multiple format output → single master, distribute to theaters/streaming simultaneously Tier 4: Network Capacity for Rigging Design: Infrastructure handling physical 8-ton rigging systems with digital control Specifications: Motion control systems: Robotic cameras with 8-ton capacity rigging Real-time feedback: Digital motors controlled via Ethernet Latency: <5ms required for safe robotic operation Redundant control networks (safety-critical) Why This Worked: €2M robotic rigs require precision control 5ms latency = safe, stable camera movements Redundant networks: if one fails, second takes over instantly Tier 5: Post-Production Network Design: High-speed editing suites connected to central storage Specifications: 10 editing suites, each with 10 Gbps fiber connection Central 500 TB SAN (Storage Area Network) Shared media libraries (editors work on same footage simultaneously) Automated backup to cloud (AWS) + local redundancy Why This Worked: 10 editors can work simultaneously on same project Central storage = single source of truth (no version conflicts) Backup prevents loss of completed footage (billions of dollars in progress) IMPLEMENTATION APPROACH Phase 1: Foundation (Months 1-3) Designed complete fiber infrastructure Installed underground/aerial fiber ducts across 18-hectare complex Built main data center with redundant power/cooling Established baseline connectivity Phase 2: Systems Integration (Months 4-6) Installed ARRIScan XT and DFT Scanity systems Configured 4K/8K dailies processing pipeline Built post-production network and editing suites Implemented security and access control systems Phase 3: Testing & Optimization (Months 7-9) Stress-tested fiber networks with real 4K footage Verified failover systems (tested actual failures) Optimized latency for robotic camera control Trained operators and post-production teams Phase 4: Launch & First Productions (Months 10-12) Infrastructure tested 2011, first major production (Blade Runner 2049) Sept 2016, Dune (2019), The Witcher (2018-2023), Inferno (2016), 47 Ronin (2013)

3 The Results

Production Results Blade Runner 2049 (2017 Release) Denis Villeneuve sci-fi epic, €150M budget Shot extensively at Origo (multiple soundstages simultaneously) Infrastructure handled 40 TB daily data Real-time VFX workflows enabled visual effects previsualization on-set Result: Zero delays related to IT systems Dune (2021 Release) Denis Villeneuve sci-fi epic, €165M budget Largest production Origo hosted (3 soundstages simultaneously) Peak bandwidth: 15 Gbps (within designed capacity) 4 months of shooting, 2 PB of footage processed Result: 99.98% uptime, one 12-minute unplanned outage (UPS firmware) The Witcher (Netflix, 2019-2023) Netflix series, 60 episodes across 3 seasons First Netflix series using Origo infrastructure Average 8 TB/week dailies processing Result: 99.99% uptime over 3 years, zero production delays Overall Production Success: All 5 blockbuster productions delivered on schedule 50+ films/series supported since 2011 Zero downtime since launch (2011-present operational) Capacity for 3 simultaneous productions

"Origo provided the technical infrastructure and soundstage capacity that allowed us to execute the most ambitious visual sequences of Dune. The facility's ability to handle multiple units simultaneously, with real-time dailies turnaround, was essential to maintaining our production schedule on such a complex shoot." — Denis Villeneuve, interview about Dune production, Variety, 2021
Denis Villeneuve — Movie Director - The Dune, Blade Runner 2049
Cloud Migration Defense

ITT Enterprise Transormation

Client: ITT Corporation

100%
100% Uptime During Live Cutover
€3.4 Billion
Billions Saved
€25 Billion
Stock Valuation Increase Combined Market Capitalization
500
Servers migrated with zero downtime
50
Enterprise applications separated and tested
31
Countries with uninterrupted operations
132
Locations operational Day 1
27000
Employees with full system access within 2 hours
17%
Segment operating income growth post-transformation
Full
Sarbanes-Oxley compliance achieved Day 1
99.97%
Data accuracy (exceeded 99.9% target)
Zero
network connectivity disruptions
Zero
financial reporting errors
Zero
compliance violations across 22 different jurisdictions

1 The Challenge

Technical Complexity: Completely integrated infrastructure serving all three business segments with shared systems across 31 countries, 132 locations, and 57,000 employees Single ERP system (SAP) processing transactions for all three entities Unified data centers (US East, US West, EMEA) with consolidated network backbone Shared systems: Email (Exchange), authentication (Active Directory), CRM (Salesforce), supply chain, HR/payroll, financial reporting 8 years of intermingled data: Financial transactions, customer records, inventory completely entangled across segments Critical Constraints: Zero tolerance for downtime on October 31, 2011 cutover date Full regulatory compliance (Sarbanes-Oxley, financial reporting) required immediately post-separation Multi-billion dollar obligations: €1.25B debt elimination and €2.15B pension obligations needed clean handoff Only 10 months to complete (December 2010 - October 31, 2011) The Risk: "Any mistake during separation could paralyze operations across three Fortune 500 companies on Day 1 of independence

2 Our Solution

OUR SOLUTION €92M Infrastructure Separation Program Program Structure: Phase 1: Analysis & Planning (Dec 2010 - Jan 2011) Documented all 500 servers across global operations Mapped network architecture across 31 countries Identified 50 enterprise applications and dependencies Analyzed shared database structures (SAP, CRM, HR systems) 47 critical systems flagged with dual-segment dependencies 12 applications required custom development for separation Deliverable: 200-page separation blueprint with detailed runbooks Phase 2: Technical Architecture Design Three Independent IT Platforms Created: Company Data Centers Network Servers Users Locations ITT Corporation 2 (US East primary, Mexico City backup) Global frame relay MPLS 150 (manufacturing, ERP, CRM) 18,000 12 countries Exelis Inc. 2 (US West primary, US East DR) DoD-compliant secure backbone 180 (defense contracts, secure systems) 16,000 18 countries Xylem Inc. 2 (EMEA primary, US backup) Global infrastructure, emerging markets focus 170 (water treatment, SCADA) 23,000 22 countries Phase 3: Execution - Key Solutions: Data Entanglement Solution: Built custom algorithms to assign 8 years of transactions to correct legal entities Hired external auditors to verify 99.97% data accuracy confidence level Created backup/restore procedures tested 5 separate times Network Complexity Solution: Mapped BGP routing for each region independently Negotiated new carrier agreements for each entity's network backbone Created redundant connectivity (backup providers in key regions) Tested network failover scenarios in each geographic region Regulatory Compliance Solution: Conducted compliance audit in each region (tax, data privacy, industry-specific) Positioned data centers to meet data residency requirements Created compliance monitoring dashboards for each entity Established separate regulatory reporting from Day 1 Governance: Executive steering committee with weekly status reviews 15 senior engineers plus cross-functional stakeholders Budget: €92M Timeline: 10 months

3 The Results

Operational Excellence Cutover Day Success (October 31, 2011): ✅ 100% uptime during live cutover (target was 99.9%) ✅ 507 servers migrated (target 500) with zero business disruption ✅ 99.97% data accuracy (target 99.9%) ✅ 52 applications tested (target 50) ✅ User productivity 100% within 2 hours (target was 4 hours) ✅ Zero compliance violations post-separation Three Independent Fortune 500 Companies Operational Day 1: Each with separate ERP, CRM, HR, financial, and compliance systems Full regulatory compliance maintained across all entities 57,000 employees with uninterrupted access to all systems 132 locations across 31 countries operational

"This separation underscores our continued commitment to unlocking shareholder value and ensuring each business segment is positioned for long-term growth and operational excellence. Executing this complex transformation with uninterrupted service to 57,000 employees across the globe is a tribute to ITT's world-class team." — ITT Chairman and CEO Steve Loranger, official press release, January 2011 (Source: "ITT Corporation Announces Three-Way Split," Wall Street Journal, Jan 12, 2011)
ITT Chairman and CEO Steve Loranger, official press release, January 2011 — ITT Chairman and CEO Steve Loranger

Ready to Write Your Success Story?

Let's discuss how we can help transform your IT infrastructure.