15 April 2025
“Security by design helps you stay one step ahead”
Ekaterina Rudina, Security Analysis Group Manager at Kaspersky, discusses the challenges of assessing the security of industrial facilities and the role of the professional community in their protection, the reasons behind security issues in rapidly evolving industries, and the impact of digitalization on society.

There has been considerable discussion lately, including at the government level, about the need to ensure the security of industrial facilities, including transportation, logistics, and smart buildings. In your opinion, what is currently the weakest link in securing such facilities?
The weakest link is the lack of responsibility for security, or rather, the lack of clear allocation of responsibility. Industrial facilities are complex systems whose protection falls within the responsibility and interests of many people, legal entities, and even the government. For example, a vendor of ICS (Industrial Control System) software could introduce errors and vulnerabilities, a system integrator could overlook technological shortcomings or an aging architecture, and an operator company could fail to procure protection tools to ensure information security. None of the parties involved bears full responsibility for a security incident. The contract might not include clauses on security update development and delivery, and even if the vendor releases patches on its own initiative, the operator company might not establish a process for their timely deployment: this could require additional testing, maintenance windows for installation and configuration, etc.
An attacker exploits a combination of factors that were overlooked or emerged accidentally. The Swiss cheese model doesn’t hold up against a deliberate adversary – the attacker finds just the right holes in the cheese maze and reaches the target. The key factor that turns a potential hazard into a real threat is people and the decisions they make. For instance, a car is an inherent hazard – it carries the risk of accidents, of harm to life and health. Modern vehicle safety systems are designed to protect the driver, passengers, and others on the road from technical failures or driving mistakes, but they can’t prevent deliberately dangerous human actions. In chemical production, safety concerns extend to both people and the environment. Functional safety is concerned with hazardous scenarios, such as common-cause malfunctions or failures, for which probabilities can be calculated. When assessing cybersecurity risks, we assume that someone may be motivated to cause deliberate harm, for example, by triggering a road accident or chemical contamination. Some might say, “No one would want to harm me on purpose”; others might try to save money on antivirus software; still others might count on antivirus protection being installed and fail to release the necessary patches.
People and the human factor play a role on both sides: the attacked party may be poorly protected and fail to coordinate a response effectively, while the attacker may take advantage of this. If a system is poorly designed, the operator organization may not be aware of this or may consider it the integrator’s responsibility. After all, “everything was done properly according to the contract.” In turn, the integrator might say, “Ensuring cyber resilience wasn’t part of the technical requirements.” This lack of alignment is the leading cause of subsequent problems. When an incident occurs, blame is often placed on scapegoats “with weak passwords.” However, this is not how responsibility should be allocated. Using weak passwords is indeed bad, but it’s the lesser of the evils in the chain. Failing to enforce a proper password policy at the facility under your care is a greater evil. Underestimating threats and not funding cybersecurity at the facility is an even greater evil. Systematically compromising on the quality, safety, and security of the product you develop – that’s an evil of the highest order.
The vulnerability of industrial facilities poses risks to vital functions, with disruptions having real consequences for human life. But to what extent can they impact a country’s economy?
It all comes down to the scale of an incident and its consequences. If a critical infrastructure facility experiences a failure that affects the well-being, lives, and health of people, for instance, a prolonged and widespread blackout or an environmental disaster, then yes, such disruptions can have a significant impact. This is precisely the kind of scenario considered under the NIS 2 Directive. The Directive imposes specific obligations on certain companies, including operators of essential services (OES) and digital service providers (DSPs), to implement robust cybersecurity measures and report incidents to national authorities. It clearly lists all the sectors and subsectors (industries) that need to comply with this European cybersecurity directive.
Part of the responsibility for securing such facilities lies with the government: national authorities assume responsibility for addressing certain types of threats through laws, regulations, and enforcement mechanisms.
For example, in the nuclear industry there is the concept of a “design basis threat” – a methodology developed by the IAEA for assessing the main types of nuclear security threats, set out in the document of the same name, Design Basis Threat. The document, adopted at the national level, outlines the criteria for threats and acts of sabotage that the government is responsible for protecting against. At a certain point, when the capabilities of potential internal or external perpetrators and the level of the emerging threat meet these criteria, the responsibility for taking appropriate measures shifts from the operator or facility owner to the government.
This document defines requirements not only for physical protection but also for cybersecurity. This cut-off is what keeps internal and external adversaries from being able to affect the national economy across entire sectors. The government defines when a cybersecurity issue becomes a matter of national security, clearly outlines the threats it will counter, and begins to enforce the relevant laws and regulations at its level, requiring compliance from citizens to guarantee national security.
It’s like traffic laws. For example, the speed limit in built-up areas is 60 km/h, and the government installs road signs and ensures compliance through traffic police. This doesn’t eliminate all accidents, but it does help maintain a certain level of safety. To avoid an accident, each driver monitors the road situation and adjusts speed according to the limit. Likewise, preventing cybersecurity incidents at industrial facilities – outside of threats to critical infrastructure – is the responsibility of businesses; it does not lie entirely with the government. If an industrial facility complies with the ISA/IEC 62443 series of standards, this does not automatically mean it is fully protected.
Is there any assessment of the vulnerability of industrial facilities in different countries across the globe? If there is, how closely is it correlated with the geopolitical situation?
There are methodologies for assessing an enterprise’s perceived readiness for attacks – its maturity level. There are many maturity models, and they may differ depending on the type of facility, industry, or country. Typically, these models don’t assess the current state of a facility’s vulnerability but instead focus on its maturity and preparedness for potential attacks. Such assessments must be based not only on technical vulnerabilities but also on the continuity of operations and maintenance of the facility. In other words, these should be multifactor assessments.
They can indeed correlate with the geopolitical situation. Many are now talking about the balkanization of markets, including the market for industrial devices and the Internet of Things. My colleague, Vladimir Dashchenko, Principal Security Researcher at Kaspersky, recently brought this up in relation to IoT device certification in the U.S., including aspects related to the country of origin. This is a voluntary certification, a type of “quality mark” known as the U.S. Cyber Trust Mark, which confirms compliance with relatively straightforward requirements, such as the ability to change passwords and install updates. However, the program does not apply to equipment from “unfriendly countries” or products made by companies on special lists, such as those compiled by the U.S. Department of Commerce or the U.S. Department of Defense, or those banned from federal procurement. This means that equipment from many major companies with strong reputations in the IoT market won’t receive certification, even if their products offer the highest level of security. Russia is also introducing reciprocal measures “to ensure technological sovereignty.” And that’s not necessarily a bad thing. What matters is that the concepts aren’t conflated: just because something is domestically produced doesn’t mean it’s secure or trusted.
Can individual countries be compared in terms of their vulnerability or readiness for attacks?
The situation in this area also varies, and it can be monitored through both attack statistics and an assessment of the maturity of regulatory frameworks and responsible agencies. Kaspersky ICS CERT publishes statistics regularly, and back in 2016, when our ICS CERT was starting out, we studied the state of security management in critical infrastructure facilities across various countries. That was one of the first publications on our website, outlining different levels of security management maturity across nations. The study was based on data from the International Telecommunication Union (ITU), which had published management maturity profiles for each country. We gathered the relevant information from individual profiles, summarized it, and correlated it with the observed levels of incidents across different countries. The study was interesting in its own right and served as a good starting point. Of course, much has changed in the nine years since, and a similar analysis could probably be repeated today – but this would involve collecting material from scratch, since the ITU has discontinued that program.
One of the challenges in comparing the vulnerability of facilities across countries is that security management models differ significantly, although they can be broadly categorized into several types. In some models, the regulator acts as the absolute authority across all industries; in others, its role is purely supportive or advisory, and each sector handles security internally. The allocation of responsibility for security between government bodies – and between the public and private sectors – also varies. The effectiveness of a given management model largely depends on the mindset. For example, in Japan, there has traditionally been a strong emphasis on quality and safety, as well as personal responsibility for these aspects. It’s common for people to work at the same plant for decades, and this is often considered a matter of pride. Workers may detect that equipment is malfunctioning literally by the sound it makes – an unusual hum – and escalate the issue. The Japanese take rules very seriously. In Japan’s foundational documents, such as the Basic Act on Cybersecurity, the opening paragraphs already contain the idea that every member of society is personally responsible for public safety, including information security. This cultural trait, a culture of compliance, has several implications. On the one hand, by nearly eliminating the human factor, Japan has achieved very low (though not record-low) levels of exposure to random threats in its industrial automation systems, as shown in the Kaspersky ICS CERT statistics mentioned earlier. On the other hand, Japan is among the top three countries with the highest number of cybersecurity incidents in industrial enterprises caused by targeted attacks – it turns out that when you face a targeted attack, relying solely on employee discipline isn’t enough; you need more active and proactive security measures. Moreover, as a global center of industrial production, Japan also exports insecurity along with its products – according to Kaspersky ICS CERT researchers, Japanese vendors tend to have a weaker cybersecurity culture, and their products can hardly be called cybersecure.
What you’re describing comes down to people’s beliefs and their general worldview, doesn’t it?
That’s right. And if you take a broader look, the very concept of “security” can differ significantly among nations and cultures – even at the linguistic level.
For example, in English, there are two distinct terms: safety and security, whereas in Russian, there is just one word – «безопасность». This linguistic difference influences how people subconsciously perceive and relate to various aspects of safety and security. For example, it’s much more common to hear Russian-speaking individuals say something like, “Why do we need your information security if we already prevent accidents at the process level?” We encountered such views especially frequently when we first started working on the security of industrial facilities.
Once, we were working on standardization within an IEEE[1] group alongside engineers from industrial equipment vendors. Those engineers told us, “In industry, we already have mechanisms for ensuring safety – we don’t need security; cybersecurity is excessive for us.” And indeed, there are dedicated systems designed to ensure safety – known as safety instrumented systems (SIS) – whose primary purpose is to prevent accidents at hazardous facilities. These systems operate independently of industrial control systems and monitor specific environmental and equipment parameters; if a dangerous condition arises, the SIS shuts everything down and brings the process to a safe default state.
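Conceptually, the behavior of such a system can be pictured as a simple monitoring loop that trips the process to its safe state whenever a monitored parameter leaves its permitted range. The sketch below is purely illustrative – the parameter names, limits, and helper functions are hypothetical, not any vendor’s actual SIS logic:

```python
# Purely illustrative sketch of SIS-style logic; parameter names, limits,
# and helper functions are hypothetical, not a real implementation.
import time

SAFE_LIMITS = {
    "reactor_pressure_bar": (0.0, 12.0),
    "reactor_temperature_c": (5.0, 180.0),
}

def read_sensor(name: str) -> float:
    """Placeholder for reading a value from field instrumentation."""
    raise NotImplementedError

def trip_to_safe_state() -> None:
    """Placeholder: de-energize outputs, close valves, halt the process."""
    print("TRIP: process brought to its safe state")

def sis_loop(poll_interval_s: float = 0.5) -> None:
    # The SIS runs independently of the control system: it only watches
    # the monitored parameters and trips when any of them leaves its range.
    while True:
        for parameter, (low, high) in SAFE_LIMITS.items():
            if not (low <= read_sensor(parameter) <= high):
                trip_to_safe_state()
                return
        time.sleep(poll_interval_s)
```

The engineers’ point was that this protective layer exists regardless of any cybersecurity controls; the question, as it turned out, is what happens when the SIS itself becomes the target.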
That piqued my interest, so I began studying how such safety enforcement mechanisms function and are integrated into industrial environments. As a result, we prepared a document for the International Telecommunication Union that we internally referred to as “Security for Safety”, which, translated literally into Russian, is «Безопасность для безопасности» (or “security for security,” since, in Russian, the same word is used for “security” and “safety”). One of the examples in the document addressed the safety instrumented systems themselves and the methods used to protect them, ensuring they can withstand attacks. The document went through editorial review and was published at the end of 2017. Almost simultaneously with the release of these recommendations by the ITU, news broke of the Triton attack on Schneider Electric’s Triconex safety systems. During this attack, the safety systems were disabled through an attempted modification of their operating logic. The attack looked more like “battlefield reconnaissance”, or an attempt to use the victim’s systems to develop, debug, and test malicious tools – an attempt that, fortunately, was detected. No accident was triggered in that case – the attackers’ sloppy actions caused the safety system to crash, which in turn led to the shutdown of the associated industrial process. But according to an analysis by cybersecurity experts, the attackers could have achieved far more serious results. This was not the only instance where our recommendations were published almost in parallel with the evolution of threats.
How much does the security situation for industrial facilities vary across regions of the world? Is it possible to draw comparisons over the course of a year?
A year is a relatively short time frame, especially when you consider that many sophisticated attacks can take years to carry out and are often only discovered long after the fact. There are landmark attacks – turning points – where theoretical threats become reality. I’m now going to say a word that no one likes to hear – Stuxnet – because this attack has been mentioned far too often in the context of cyberwarfare, or whenever someone wants to escalate the debate; but it really was Stuxnet that gave the initial impetus to the emergence of industrial cybersecurity. That attack gradually disabled more than 1,300 out of 5,000 centrifuges at a uranium enrichment facility in Natanz, Iran. The consequences were more than just physical – they were also strategic, resulting from the development course chosen by a specific state. The second such turning point was the Triton attack mentioned earlier. It demonstrated that harm to people and the environment, even destruction, can result from attacks despite the presence of purpose-designed safety systems. The third milestone incident was Sunburst, the 2020 attack involving SolarWinds software. The compromise of SolarWinds allowed a backdoor to be implanted into software used to monitor and manage IT infrastructure. The malicious code was then delivered as part of an update to privileged software. According to official data, several thousand users were compromised, and since government agencies and departments commonly use SolarWinds software, it was reported that critical infrastructure was also one of the attackers’ primary targets. As a result of that incident, additional chapters were added to NERC CIP – the set of security and reliability standards for North America’s power grid. These landmark attacks fundamentally changed the way we think about industrial cybersecurity. In essence, they confirmed that what we had imagined in theory was indeed possible in practice.
Of course, we could recall even earlier attacks on industrial enterprises, such as the Maroochy water breach. But until the early 2000s, there was no clear sign of one country taking a specific interest in cyberattacks on another country’s critical infrastructure. That interest, no doubt, is inherently geopolitical.
Is there a reason to believe today that attacks on critical infrastructure could trigger the start of a (cyber)war?
Some believe it’s already happening. But how do you define what is and isn’t a war? Nobody formally declares a cyberwar – it’s not as though one day it was cyberpeace and the next day it became cyberwar. Instead, we witness landmark events that alter the overall situation, which ties into the discussion surrounding the security threat model at the national level. The response to a cyberattack can involve various methods, including military, but so far, cyberattacks have not been met with a direct military response – and I hope it never comes to that.
What we are seeing now isn’t war – it’s more of a rising confrontation, an arms race, a show of capabilities – such as the attacks on Ukraine’s energy infrastructure in 2015 and 2016. Incidentally, the Balkanization of the internet is also part of this confrontation.
War implies confrontation at the national level. But the professional community of information technology and cybersecurity experts remains largely friendly across national lines. These are committees and international consortia whose members come from many different countries. I’m part of that community, and I don’t observe active confrontation from the other side. That may be due to the high level of professional ethics in the field, which emphasizes a responsible approach to vulnerability disclosure. These standards are accepted internationally and followed by cyber emergency response teams. When a vulnerability is identified, the information is first disclosed confidentially to the vendor, because it could be exploited in an attack. However, amid recent geopolitical shifts, some community members have begun to disregard these accepted ethical norms. One striking example is when vulnerabilities were discovered in Russian-made elevator equipment from Tekon-Automatics; an information security researcher published not only the findings but also the exploits, making them openly accessible. It looked like a call to action: “Let’s hack Russian elevators!” Of course, this sparked some backlash in the community, but at the same time, it shows that confrontation between people is very real.
Is there cooperation between cybersecurity professionals and the government to secure the most important industrial facilities, and if so, how does this cooperation work?
The community can play a significant role, particularly in addressing new challenges that governments have not previously encountered. This is especially true of cybersecurity. Companies engaged in cybersecurity research, as well as individual experts, help “put out fires” and prevent severe damage. Then comes an interesting phase: the model of state governance, which I mentioned earlier, begins to affect how this new domain is regulated. If the governance model is centralized, top-level officials are appointed, national laws are passed, and these laws are implemented through regulations applicable to the relevant entities. The community is involved in interpreting and implementing these provisions. If a subsidiary model is used, where each sector has its own body responsible for quality and safety, then the community may also become segmented into industry experts who make sector-level decisions. At a later stage, this jigsaw puzzle will eventually be assembled into a unified national strategy. These are natural processes that shouldn’t be hindered. Both governance models – centralized and subsidiary – as well as the hybrid models that fall between these two, have their strengths and weaknesses. The community’s influence on security-related decisions is typically indirect, operating within a well-established public-private partnership framework.
Is it possible to develop a unified approach and a set of common regulations to ensure the security of industrial facilities across all industries?
It is essentially impossible for general regulatory documents to fully account for the specifics of every industry. Security objectives and the corresponding criteria vary. For example, in nuclear energy, nuclear safety and radiation protection are paramount; in the financial sector, it’s economic security; in many other industries, it’s safety. These objectives are achieved using different methods, technologies, protocols, and technical tools. The introduction of standardized lists of typical sectoral facilities is an essential step toward industry-specific clarification. But overall, a unified approach across all industries is a utopian idea.
Is it possible to ensure that the security of industrial facilities is based on cyber immunity?
A cyber immune system can withstand attacks, including those that weren’t anticipated when the system was being created. This can be achieved using the security-by-design concept. When creating or upgrading a facility, the main goal is to eliminate hazardous situations. If we account not only for random failures but also for deliberate breaches, including those involving new technologies, we can stay one step ahead. That’s what cyber immunity is all about.
People often ask, “How do we implement this? We’ve segmented the network and installed a SIEM.” But effective attack prevention is not just about mechanisms – it’s about setting the right goals and defining the proper criteria. Requirements should be formulated from the standpoint of “how the system must behave,” rather than “what must not happen,” which means describing the boundaries of a safe state as a mathematical invariant that should hold under all conditions. Mechanisms should be configured based on these requirements.
It’s similar to the 60 km/h speed limit mentioned above: regardless of the circumstances, this rule remains in effect as an invariant. Within that limit, there may be local restrictions or requirements – but they are all aimed toward a common goal: maintaining a safe road environment.
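To make the idea concrete, here is a purely illustrative sketch of a requirement expressed as an invariant: a condition that must always hold, with the enforcement mechanism derived from it rather than the other way around. The state fields, the 60 km/h threshold, and the functions mirror the road analogy above and are hypothetical, not a real policy engine:

```python
# Illustrative sketch only: a requirement expressed as an invariant that must
# always hold, rather than as a list of things that "must not happen".
# The state fields and the 60 km/h threshold are hypothetical.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SystemState:
    speed_kmh: float
    in_built_up_area: bool

def safety_invariant(state: SystemState) -> bool:
    """The boundary of the safe state: a bound that holds under all conditions."""
    return not state.in_built_up_area or state.speed_kmh <= 60.0

def enforce(state: SystemState) -> SystemState:
    """A mechanism configured from the requirement: restore the invariant if violated."""
    if safety_invariant(state):
        return state
    return replace(state, speed_kmh=60.0)
```

Local restrictions can then be layered on top, but each of them is checked against the same invariant.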
Working with constraints and policies is an engineering discipline. Cyber immunity is achieved using engineering methods.
How relevant is it, in the context of securing industrial facilities, to discuss the need to cultivate a culture of information security at all levels – from professionals working directly with these systems to users of basic devices? In other words, is this also a matter of societal culture, awareness, and responsibility? And if so, how can this culture be cultivated?
It is certainly relevant – and necessary. Information security culture is less about technical literacy and more about conscious responsibility, about the refusal to rely on luck. This culture needs to be cultivated – deliberately and over the long term.
When discussing functional safety and industrial safety, it’s worth looking back at history. During the First Industrial Revolution, when the first factories and plants were being built, nobody gave production safety a thought. Workers suffered injuries, lost their ability to work, or even their lives. The emergence of trade unions helped enterprise owners realize the economic value of ensuring worker safety – before that, cynical as it sounds, investing in safety simply wasn’t considered profitable. Today, production safety is often taken for granted – it’s part of the culture – but building that culture took a considerable amount of time. It will be the same with cybersecurity culture.
Industries develop at different paces. It appears that sectors such as telecommunications, healthcare, and transportation evolve more rapidly than, for example, mining or metallurgy. Would it be fair to say that fast-developing sectors are more vulnerable or less vulnerable?
The need to keep up this pace of development often results in lapses in quality and security. One hot topic is the development of autonomous transport. In the rush to deliver results quickly, there are frequently insufficient resources or time to rethink and redesign the initial system architecture with security in mind. Transportation is inherently risky, and new technologies introduce additional threats. While there are methods for assessing hazards, threats, and risks, there are not enough professionals who can apply those methods and integrate them properly into the development processes. They are still learning. As a result, in rapidly evolving industries, security often lags behind.
Building security into a new device, such as a car, by design may require additional iterations of design and development, as well as more time. The electronic architecture of vehicles and the core technologies of onboard networks were not created with digital trust management in mind. There may be no place to store keys or certificates. This opens the door to compromise, allowing key functions or firmware in control units to be replaced. A small time-saving decision at the design stage can result in a fundamental flaw in the final product. And no one is going to fix that at the production stage, when the units and components are already on the assembly line. Only after the current generation of the product has been in use can the next generation be designed with those lessons in mind. Once again, in fast-moving industries, security often finds itself playing catch-up.
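For a sense of what that missing trust anchor costs, consider verifying a firmware update before it is installed in a control unit. The sketch below is purely illustrative – the file names, key provisioning, and the hypothetical flash() routine are assumptions – but the check it performs is only possible if the unit can securely store the vendor’s public key:

```python
# Illustrative sketch: verifying a firmware image against a detached signature.
# File names and key handling are hypothetical; the point is that the check
# depends on a trust anchor (the vendor's public key) stored securely in the unit.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_authentic(image_path: str, signature_path: str, pubkey_path: str) -> bool:
    with open(pubkey_path, "rb") as f:
        public_key = Ed25519PublicKey.from_public_bytes(f.read())
    with open(image_path, "rb") as f:
        image = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, image)  # raises InvalidSignature if tampered with
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: only flash the image if it verifies against the stored key.
# if firmware_is_authentic("ecu_update.bin", "ecu_update.sig", "vendor_pubkey.raw"):
#     flash("ecu_update.bin")
```

Without a protected place to keep that key, no such check can be performed, which is exactly the design-stage gap described above.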
In less dynamic industries, such as metallurgy, more resources are invested in long-term projects and infrastructure, equipping the enterprise with everything necessary. Facilities are designed for long-term operation, with a greater emphasis on quality control and safety. High investment levels contribute to increased reliability and quality, which indirectly boosts resilience to cyberattacks. So, less dynamic enterprises tend to be more resilient – although they, too, can suffer from targeted attacks.
What needs to happen for security to stop playing catch-up?
We need to apply a systematic approach to solving complex engineering problems – I mean the systems engineering approach. If we treat security as one of a system’s fundamental properties – providing it through a by-design approach – and recognize that information security affects both physical and functional safety, then we’ll be well-equipped to meet this challenge.
Is the cybersecurity community, overall, keeping pace with the rapid technological development?
I have the feeling that it’s not. Cybersecurity is a relatively new discipline, having emerged only a few decades ago. Within the community, there is a clear distinction between high-level experts, who possess fundamental scientific knowledge, and – let me call them this – engineering technicians, who work at the level of individual solutions, technologies, and sometimes specific industries. There are few experts who operate in the intermediate, integrative layer – those who can see the big picture behind the specifics, and the specifics behind the big picture. Only large companies or those with government support can afford to conduct applied research while also keeping “forward-thinking scientists” on staff. That said, consortia like The Open Group[2] or major universities are generally the most effective, although even they need field research, without which they end up analyzing the same old incidents, memory dumps, and outdated data for years. As a result, hands-on engineers tend to look down on science. And yet, vulnerabilities in specific solutions and technologies are often best viewed from a fundamental perspective. “Why don’t we abandon the familiar platform or protocol and approach the problem from a different angle? Perhaps we need to develop an entirely new operating system or move data and computation to the cloud?” This kind of thinking leads to breakthroughs in areas where stagnation used to prevail.
The Internet of Things is rapidly evolving. Today, there’s talk not just about smart devices but about smart environments. Is it reasonable to assume that the risks and potential consequences of disrupting such interconnected systems outweigh the benefits they offer?
It may be reasonable to think that, but that doesn’t mean someone will say, “Stop the wheels” and halt digitalization. When the first internal combustion engine appeared, no one could foresee the scale of its impact – yet it shaped the world we live in, with its oil production, dependence on natural resources, and the resulting geopolitical and cultural context. The same is true of digitalization: we may not even be able to see or assess its global risks today. However, we can say with confidence that our understanding and definition of security will shift. Take privacy, for example – the concept of personal privacy has changed dramatically over the past couple of decades.
People are more concerned than ever about the erosion of personal data protection, but they continue to use smart speakers and adopt connected cars. Corporations offer services – often unnecessary – in exchange for access to personal data. And it feels like a deal we can’t refuse.
These are valid examples. It’s important to understand that the moment we step into a connected car, we agree to share our data. If we don’t accept that condition, we shouldn’t even get into the car, let alone buy it. I recall the first time I saw a web browser at school. As soon as I started typing a query into the address bar, a warning popped up: “Everything you type here will be sent to the Internet.”
Privacy has one key trait that’s the opposite of security. A system designed to ensure security (of any kind) is intended to prevent harm from occurring. But a system that aims to preserve our privacy must do precisely one thing: ask for our consent to interact with it (or else we decline and don’t use it). It’s like clicking “Accept cookies” or leaving the website. What do we usually choose to do?
Naturally, this leads to a point where personal privacy becomes a myth. In the past, a freshly installed browser would warn users before sending data to the web. These days, the spontaneous activation of a smart speaker’s microphone is business as usual. We need to realize that even accidentally recorded moments from our private lives can be reviewed not just by machines, but by people, for the sake of improving speech recognition algorithms. For instance, in January 2025, Apple settled a class-action lawsuit for $95 million related to the unintentional activation of Siri. The company had used random recordings to train its voice assistant, with help from third-party contractors. A similar incident involved Amazon’s Alexa virtual assistant. Such incidents are becoming increasingly routine in corporate practice, and this inevitably leads to a new normal, where people are gradually conditioned to accept that digital technologies make their personal lives transparent.
True – but that doesn’t mean people trust technology. I read about a study where drivers were asked about their use of driving assistants, and many admitted they didn’t use them, because they didn’t trust them. If we extrapolate this to a global scale, we can draw some conclusions about how humans trust technology (which is virtually the same as the digital environment today). How is that trust formed, and what does it depend on?
Trust in technology has always been a relevant issue. It’s closely related to how people perceive risk and the cognitive biases that influence them, as studied by Daniel Kahneman, Amos Tversky, and Richard Thaler. The “Econ” from Thaler’s Nudge – the entirely rational individual (who is virtually non-existent in real life) – would, when offered the choice of using a driving assistant or taking a ride in a driverless taxi, thoroughly investigate the test results (“five stars from an independent agency”), the test methods, the reputation of the testing lab, the state of the equipment, and the insurance coverage – before making a decision. However, humans, being emotional and irrational, either unquestioningly trust five-star ratings or, suspecting something is wrong, disable driving assistants “just in case.” Some people delve deeply into the methodology and other factors, but there is always an area of assumptions and probabilities. Even an experienced engineer can’t mathematically calculate every outcome – not even when it concerns their own life and safety, or the lives of their loved ones. This is where perceptual errors come into play: selective perception (survivorship bias), confirmation bias, and risk aversion. The human psyche is far older than digitalization. The world will always have both reckless enthusiasts and neo-Luddites.
Let’s examine this from another perspective. It would be naïve to claim that the new technological revolution is all about the well-being of society. Most people and businesses struggle to keep up with technological progress, which can leave them behind. Could it be that the resistance to industrial enterprises – and therefore to technology – is a struggle for a more straightforward, more comprehensible, and perhaps more sustainable world?
The term “Fourth Industrial Revolution” is rarely used these days, but that revolution has taken place. A revolution never brings immediate well-being, and during the transitional period, there is always a sense of longing for the past, which is familiar and comprehensible. I often hear: “Many people will lose their jobs; AI will replace them. Perhaps we shouldn’t develop AI any further?” I’m sure similar thoughts were voiced during the First Industrial Revolution as well. For example, the invention of the Jacquard loom sparked concerns that the work of weavers and artists would become obsolete. The advent of agricultural machinery raised questions about the future of peasants and farmers. Well, here’s the news: there are more people today than ever before – and they don’t seem to be suffering from a lack of work.
Let’s take a break from cybersecurity for a moment. Right now, I’m reading about the origins of the Bauhaus. For a long time, I believed that this school emerged in direct connection with the Industrial Revolution – that its founders were advocates of industrial design and drew inspiration from industrialization. However, I was surprised to learn that the roots of the movement lay not in adapting artistic creativity to the needs of mass production, but rather in the opposite direction. John Ruskin and William Morris, who stood at the origins of the Arts & Crafts movement in Victorian England, opposed the “tastelessness” of mass machine production and championed a return to handcrafted work. Their ideas became the foundation of the Bauhaus and of industrial design as a whole. It’s fascinating that a movement initially created in opposition to the Industrial Revolution ended up becoming one of its products – and that the Bauhaus aesthetic became inseparable from mass production. The same is happening now: artists are concerned about the development of artificial intelligence and are trying to resist it, but it seems it may already be too late. It’s difficult to predict now what kind of boomerang effect the rejection of digitalization and artificial intelligence may have. But it’s clear that technological progress will lead to another transformation of social relations and values. Human society does have a certain resilience as a system. What’s truly fascinating is how society will change – and what our understanding of security will look like a few decades from now.
[1] Institute of Electrical and Electronics Engineers – a non-profit engineering association based in the United States that develops widely adopted global standards in radio electronics, electrical engineering, and computer system and network hardware. It is a professional community of engineers specializing in electrical engineering and electronics.
[2] The Open Group is an industry consortium created to establish neutral, open technology standards for computing infrastructure. Its members include both buyers and vendors from the information technology sector, as well as government agencies – for example, Fujitsu, Hitachi, Hewlett-Packard, IBM, NASA, and the U.S. Department of Defense.