Software Staff Engineer at international fintech company One Inc. explains how attackers managed to steal data from millions of users.
This spring, major cloud platform Snowflake Inc. found itself at the center of a massive data breach. Data from millions of users became publicly accessible, with ticket sales service Ticketmaster and international bank Santander hit particularly hard—attackers stole information on approximately 600 million customers.
The situation around Snowflake showed that security issues remain relevant even for the largest cloud platforms. Why exactly did the data breach occur and could it have been prevented? We asked these questions to Sergei Gasilov, Software Staff Engineer at international fintech company One Inc. Sergei is responsible for integrations from a technical perspective—he implemented a unified Payment Platform API and configured processes so the system could quickly process millions of transactions while maintaining high data protection.
In this interview, the expert discussed what mistakes by software architects lead to data breaches, how to build a performant and reliable system, and what awaits the cloud security industry after the Snowflake incident.
— Sergei, you’ve worked extensively with API standards and designing high-load systems. In your experience, what most often leads to data breaches in cloud platforms?
Over the years, I’ve seen the same pattern: major data breaches almost never result from sophisticated hacker attacks. The most common cause is basic architectural miscalculations that leave data poorly protected from the start. Single-factor authentication, lack of centralized identity management, identical passwords for all service accounts—in such systems, an attacker who compromises a single account can gain access to all customer data. That’s exactly what happened with Snowflake.
The second category of errors is what I call “configuration entropy”—when a system’s actual configuration gradually diverges from how it was designed. Open data repositories, logging disabled “to save costs,” unencrypted data—all this accumulates over months and years. Classic example: a developer temporarily opens a port for testing, forgets to close it, and six months later that port becomes an entry point for an attack.
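The "forgotten test port" scenario can be caught by continuously comparing a resource's actual state against its declared one. Below is a minimal sketch of such a drift check, assuming a simple desired-state dictionary; all field names and values are illustrative, not tied to any real cloud API:

```python
# Minimal configuration-drift check: compare the declared (desired)
# state of a resource against its actual state and report divergence.
# All field names here are illustrative.

DESIRED = {
    "bucket_public": False,     # storage must not be world-readable
    "logging_enabled": True,    # audit logging must stay on
    "open_ports": {443},        # only HTTPS is expected to be exposed
}

def find_drift(actual: dict) -> list[str]:
    """Return human-readable findings where actual != desired."""
    findings = []
    if actual.get("bucket_public") != DESIRED["bucket_public"]:
        findings.append("storage bucket is publicly accessible")
    if actual.get("logging_enabled") != DESIRED["logging_enabled"]:
        findings.append("audit logging is disabled")
    extra_ports = set(actual.get("open_ports", set())) - DESIRED["open_ports"]
    if extra_ports:
        findings.append(f"unexpected open ports: {sorted(extra_ports)}")
    return findings

# The "forgotten test port" scenario from the text: logging was turned
# off to save costs and port 8080 was left open after testing.
drifted = {"bucket_public": False, "logging_enabled": False,
           "open_ports": {443, 8080}}
print(find_drift(drifted))
```

Run on a schedule against every resource, a check like this turns slow "configuration entropy" into an alert within hours instead of a discovery months later.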
The third mistake concerns monitoring architecture. In Snowflake’s case, attackers used the platform’s own legitimate tools to exfiltrate data. Standard monitoring systems didn’t notice this because from a technical standpoint everything looked normal—an authorized user performing permitted operations.
— Many of these mistakes seem obvious in hindsight. How realistic is it to detect such risks in advance—at the architecture design stage, before the system goes into production?
Security must be built into the system from the very beginning. Adding it after an incident is like installing a lock after a burglary. That’s why risks need to be identified at the architecture design stage, with mechanisms in place to maintain data protection throughout all stages of the system’s operation.
To prevent errors, I recommend using a combination of tools: Cloud Security Posture Management (CSPM) systems provide continuous configuration monitoring, security checks during development catch problems before they reach production, and User and Entity Behavior Analytics (UEBA) systems detect anomalies in user actions. For example, if a user typically downloads megabytes of data but suddenly extracts terabytes—that’s a signal the system should flag. Of course, the larger the system, the harder it is to simultaneously ensure both performance and data protection.
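The megabytes-versus-terabytes signal can be made concrete with a toy behavioral check in the spirit of UEBA: build a per-user baseline of daily download volume and flag values far outside it. The z-score threshold and data shapes below are illustrative assumptions, not a production detector:

```python
# Toy behavioral-anomaly check: flag a day's download volume if it is
# far above the user's historical baseline. Threshold is illustrative.

import statistics

def is_anomalous(history_mb: list[float], today_mb: float,
                 z_threshold: float = 3.0) -> bool:
    """True if today's volume is more than z_threshold standard
    deviations above the user's historical mean."""
    mean = statistics.fmean(history_mb)
    stdev = statistics.pstdev(history_mb) or 1.0  # avoid division by zero
    return (today_mb - mean) / stdev > z_threshold

# A user who normally moves ~100 MB a day...
baseline = [120.0, 95.0, 140.0, 110.0, 130.0]
print(is_anomalous(baseline, 125.0))        # typical day
print(is_anomalous(baseline, 2_000_000.0))  # ~2 TB extraction
```

Real UEBA systems model many more dimensions (time of day, source IP, query patterns), but the core idea is the same: the baseline belongs to the user, not to a global rule.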
— Can you tell us about a situation where you spotted an error during testing that could have led to a serious breach?
Yes, there was such a case on one project. During testing, I noticed that one service account had overly broad permissions and authentication was single-factor—if compromised, this account could have provided access to all system data.
Instead of solving the problem alone, I brought in the team: we discussed the situation, conducted a joint audit of access rights and architecture, and developed a remediation plan. Thanks to a culture of open discussion and team verification of actions, we were able to identify and eliminate the potential vulnerability before going into production.
— The Snowflake case showed that as a system’s load grows, so does its vulnerability. You have practical experience working with modules that process hundreds of thousands of webhooks daily. How do you balance performance and data protection under such conditions?
Problems start the moment a system grows but the security approach stays the same. Teams scale functionality and load but forget that protective mechanisms must grow at the same rate. This gap is exactly what leads to critical vulnerabilities—as happened in the Snowflake case.
For example, growth in user numbers increases load on authentication services. If the architecture isn’t ready for this, checks start slowing the system down. The problem can be solved with distributed identity systems and token caching—preserving both security and process speed.
Encryption with large traffic volumes also creates noticeable infrastructure load—it’s important to use dedicated encryption services and hardware security modules from the start. The volume of security logs also increases, requiring centralized event processing and intelligent filtering to detect anomalies.
Access control is a separate topic. This problem is solved by caching access policies and moving some checks to the system boundary, so load is distributed evenly and doesn’t affect core functionality.
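Caching access policies at the boundary can look roughly like this: resolve a role's permissions from the policy store once, memoize the result, and answer subsequent authorization checks locally. The policy store and role names are illustrative stand-ins for a real database:

```python
# Sketch: memoize resolved access policies at the system boundary so
# each request does not re-query the policy store. Names illustrative.

from functools import lru_cache

POLICY_STORE = {  # stand-in for a database of role -> allowed actions
    "analyst": frozenset({"read"}),
    "admin": frozenset({"read", "write", "delete"}),
}

@lru_cache(maxsize=4096)
def allowed_actions(role: str) -> frozenset[str]:
    """One store lookup per role; later calls hit the cache."""
    return POLICY_STORE.get(role, frozenset())

def authorize(role: str, action: str) -> bool:
    return action in allowed_actions(role)
```

As with token caching, the cache must be invalidated when policies change; the win is that the hot path of every request stays in memory at the edge rather than touching the core system.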
— As the system grows, so does the number of integrations—with clients, partners, external services. You oversaw the integration architecture of the One Inc. fintech platform and worked with major Tier-1 clients. How do you maintain integration convenience in such an environment without increasing data risks? And what, in your view, went wrong with integrations in the Snowflake case?
Integrations give a business growth and flexibility, but they also expand the attack surface. In Snowflake’s case, the problem wasn’t the technologies themselves but the low maturity of the integration architecture: there were no unified standards, direct data access was used, and many checks were performed manually. In such a model, any error or credential compromise immediately led to serious consequences.
The key is to treat each integration as an extension of your own security perimeter to an external partner. Essentially, you’re giving a third party access to your data, and this needs to be treated as strictly as employee access to confidential information.
In practice, proven architectural approaches work well: a single entry point for integrations with mandatory authentication, permission verification, and rate limiting protects core systems from overload and abuse, while additional mechanisms like limits and request signing, as with Stripe, reduce damage even if keys are leaked.
It’s also important to follow the principle of data minimization: a partner should receive only what they actually need. Access control at the level of individual data fields and short-lived tokens greatly reduce risks—unlike Snowflake’s situation, where compromised credentials remained active for months.
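Data minimization is often enforced as a field allowlist applied at the boundary before a record leaves for a partner. A minimal sketch, with an entirely illustrative allowlist and record shape:

```python
# Minimal data-minimization filter: before returning a record to a
# partner, keep only the fields their integration is entitled to see.
# The allowlist and field names are illustrative.

PARTNER_ALLOWLIST = {"order_id", "amount", "currency"}

def minimize(record: dict, allowlist: set[str] = PARTNER_ALLOWLIST) -> dict:
    """Drop every field not explicitly allowed for this partner."""
    return {k: v for k, v in record.items() if k in allowlist}

full = {"order_id": "o-1", "amount": 100, "currency": "USD",
        "card_number": "4111111111111111", "email": "a@b.com"}
print(minimize(full))  # card number and email never cross the boundary
```

An allowlist (rather than a blocklist) fails safe: a newly added sensitive field is withheld by default until someone consciously grants access to it.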
— In your work, you’ve repeatedly faced critical security requirements—from PCI DSS to automating access to payment data. What would you advise companies and their architects to consider to avoid ending up in Snowflake’s situation?
The Snowflake incident isn’t just a data breach—it’s a textbook case of architectural and process failures at an industry-wide level. The first and main lesson: multi-factor authentication isn’t optional, it’s a mandatory requirement, so enable it by default. Snowflake provided the option to enable it but didn’t make it mandatory, so clients could choose convenience over security—and many did.
The second important lesson relates to the shared responsibility model in the cloud—it shouldn’t turn into shifting risks onto the client. Yes, a provider can offer security tools, but if their use isn’t mandatory, the system remains vulnerable. Cloud platforms must enforce a baseline level of protection, while clients consciously strengthen it.
The third lesson—credential management must be maximally automated. In Snowflake’s case, compromised passwords remained valid for months, which multiplied the damage. Modern systems must include monitoring for credential leaks, automatic secret rotation, temporary access, and the principle of least privilege.
— And one last question: how, in your opinion, will the approach to cloud security change after such major incidents?
The Snowflake incident has already become a turning point for the entire cloud storage industry: companies will start changing their architectural approaches, paying more attention to security—strengthening authentication, validating every request, using behavioral analytics and machine learning for early attack detection.
The role of automation will also grow—systems will automatically revoke access and isolate threats without waiting for manual intervention. In parallel, regulatory pressure will intensify: basic security measures will become mandatory, not merely recommended.
Organizations that don’t adapt their systems to these new realities will be next in the headlines about major breaches. The main thing we all need to remember: security isn’t a feature you can add later, it’s a fundamental property of architecture that must be built in from day one.