Cyber Wars Are Here, Threatening Your Digital Assets

Udi Golan
Published in Dofinity
Feb 2, 2021 · 8 min read

Hacker attacks on computer systems are old news. In recent decades, we have seen major attacks on giant companies leading to rather grim outcomes.

However, those were mostly opportunistic attacks, driven mainly by financial or psychological motives.

Lately, we have been seeing a new type of attack — planned and coordinated as part of an organized war plan that aims to inflict widespread and significant damage on the computer systems of a given country. These are no longer sporadic attacks but ongoing, systematic, and sophisticated operations in which targets are chosen based on their nationality.

The age of cyberwarfare has just begun.

Even before this era, cyber-attacks had become more frequent. Data published by Corodata shows that in 2018, 46% of companies in the United States suffered a data breach. This is twice the number of companies recorded in 2017.

Nowadays there is an increase not only in the frequency of attacks but also in the quality and quantity of the data being stolen.

As someone who has been involved for many years in building secure digital platforms and integrating them into organizational infrastructures, it is clear to me that one can work endlessly to secure a company’s systems. The number of potential loopholes has grown exponentially with the number of systems in simultaneous use, and especially with the links between them.

These days, digital transformation, remote working, and the move to the cloud are prominent trends, and all of them enlarge the attack surface that the organization’s information security must cover.

Therefore, it is extremely important to identify the main weaknesses and to devote efforts accordingly. Such an analysis should be based on two aspects of the information — its accessibility and its sensitivity.

Accessibility

Almost every organization operates in a configuration that separates an internal network (intranet) from an external network (the internet).

There is a kind of trade-off in such a structure — the internal network should be more secure, while the external network must remain more accessible.

That is, the basic assumption is that while internal systems are relatively protected from the outside world, external systems must be more accessible (e.g., marketing websites, e-commerce platforms, CRM systems, etc.) and therefore the risk of hacking inevitably increases.

Sensitivity

When we come to examine the information from the perspective of sensitivity to leakage and the potential damage to the organization, we naturally focus on confidential information that is not exposed to the public.

Here too there are several levels of classification, based primarily on the potential damage.

Very often when it comes to businesses, the most sensitive information that the organization keeps in its databases concerns personally identifiable information (PII). This is because, usually, the most severe damage that can be done to an organization is the theft of customer information for the purpose of extortion and/or distribution.

In recent years, public awareness of this issue has been on the rise and, consequently, there has also been significant progress in legislation in Europe. This trend increases the extent of potential damage from lawsuits, and with it the level of risk associated with the leakage of such information. According to an article published by Norton, data breaches cost companies around the world an average of $4 million, while US companies forfeit an average of $8 million.
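
To make these levels concrete, here is a minimal sketch of how damage-based classification tiers might be encoded in an organization’s data inventory. The tier names, values, and example assets are illustrative assumptions, not an established standard:

```typescript
// A minimal sketch of damage-based classification levels.
// Tier names and example assets are illustrative assumptions.
enum Sensitivity {
  Public = 0,       // already exposed: marketing content, product catalogs
  Internal = 1,     // non-personal operational data
  Confidential = 2, // business secrets, contracts
  PII = 3,          // personally identifiable information: highest potential damage
}

interface DataAsset {
  name: string;
  classification: Sensitivity;
}

const inventory: DataAsset[] = [
  { name: "product-catalog", classification: Sensitivity.Public },
  { name: "supplier-contracts", classification: Sensitivity.Confidential },
  { name: "customer-records", classification: Sensitivity.PII },
];
```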

Vulnerability

We can, therefore, characterize the organization’s databases according to the degree of their accessibility to parties outside the organization and the sensitivity of the information they hold to exposure.

An accessibility- and sensitivity-based analysis allows us to better understand where the organization’s Achilles’ heel is. Databases that hold sensitive information on the one hand, and that allow for (relatively) high accessibility on the other, are the primary weak points.

On the hackers’ side of things, there is a mirror image — these databases are the most attractive to try and breach.

Simply put, when examining information security, the weakest links are databases that serve applications and websites visible on the internet and that contain personally identifiable information.

This is high-risk information.
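
To make this analysis concrete, here is a minimal sketch that scores databases along the two dimensions discussed above. The scales, weights, and database names are hypothetical; a real classification would be tailored to the organization:

```typescript
// Illustrative sketch: rank databases by combining accessibility and sensitivity.
// Scales and database names are hypothetical.
type Level = 1 | 2 | 3; // 1 = low, 3 = high

interface Database {
  name: string;
  accessibility: Level; // how reachable it is from outside the organization
  sensitivity: Level;   // potential damage if the data leaks
}

// A simple multiplicative score: high on both dimensions means highest risk.
const riskScore = (db: Database): number => db.accessibility * db.sensitivity;

const databases: Database[] = [
  { name: "internal-hr-records", accessibility: 1, sensitivity: 3 },
  { name: "marketing-site-cms", accessibility: 3, sensitivity: 1 },
  { name: "ecommerce-customers", accessibility: 3, sensitivity: 3 }, // the Achilles' heel
];

// Sort descending by risk to decide where to devote security efforts first.
databases
  .sort((a, b) => riskScore(b) - riskScore(a))
  .forEach((db) => console.log(db.name, riskScore(db)));
```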

High-risk Information

A great example of a large category of systems that (typically) hold sensitive information is e-commerce websites. Naturally, in the vast majority of cases an e-commerce website must be accessible outside the organization (meaning visible online) and must store data about customers — names, emails, phone numbers, addresses, purchase history, preferences, etc. The more sophisticated the site is, the better the personal experience it offers, which can only be based on the information collected about the user. That is, there is a clear motivation among marketing professionals to collect maximum behavioral and descriptive information about the users. Assuming that the collection of this information is in compliance with applicable legislation (for example, the Protection of Privacy Law in Israel and/or the European GDPR), the challenge facing the organization’s IT teams is the protection of information at high risk.

Given, then, that business growth relies more than ever on the exchange of information through digital channels, while the security risks keep rising, what practices can organizations use to reduce those risks?

Separation of Concerns

In some cases, and not necessarily for security-related reasons (for example, the DRY principle), an organization will decide that most of the high-risk information — and often all of it — will be kept on internal systems (ERP/CRM). In these instances, for an external system to still be able to make use of the internal information and, of course, to add new information, the organization must enable the external system to communicate (via APIs) with the internal systems.

This approach means integration work, which often results in a significant increase in the costs required to set up external systems and, of course, to maintain them over time. And even if there is financial justification for doing so, the organization must take into account that while the chance of data theft is smaller, because the data is stored on internal systems, such an integration means much greater damage in the event of a hack, because the hacker then breaks through the site into the organization’s core systems.
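
As a rough illustration of this pattern, here is a sketch of a public website that stores no PII itself and instead reads customer data from an internal CRM over an authenticated API. All URLs, tokens, and field names are hypothetical:

```typescript
// Hypothetical sketch: the public site keeps no customer database of its own;
// it proxies reads (and writes) to an internal CRM through a narrow API.
const CRM_API = "https://crm.internal.example.com/api"; // illustrative internal endpoint

async function getCustomerProfile(customerId: string, serviceToken: string) {
  // The external system holds only a short-lived service token,
  // never a copy of the customer records.
  const res = await fetch(`${CRM_API}/customers/${customerId}`, {
    headers: { Authorization: `Bearer ${serviceToken}` },
  });
  if (!res.ok) throw new Error(`CRM request failed: ${res.status}`);
  return res.json();
}
```

The trade-off described above is visible here: the public site is a thinner target because it stores nothing, but this channel reaches the organization’s core systems, so the API surface must be kept narrow and strictly authenticated.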

Necessary Minimum

Obviously, the risk in keeping information rises as the quantity and quality of the information collected increase. Therefore, it is extremely important to set a clear definition of an information retention policy and of how the information is processed in the context of privacy. In a marketing professional’s ideal world, every piece of information one can possibly obtain about individuals or segments is readily available. Without a well-defined policy in place, the organization will therefore quickly find itself with masses of information in its databases, which won’t necessarily be put to good use. For reasons of risk reduction, it is recommended to reduce the quantity of information collected and to keep only data that has a distinct benefit and value. To put it differently, the organization’s policy should state clearly that it keeps the bare minimum of information needed and nothing more.
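
One simple way to enforce such a policy in code is an allow-list: only fields that the policy explicitly names are ever persisted. The field names and the policy below are hypothetical examples:

```typescript
// Illustrative sketch of a "necessary minimum" policy enforced in code:
// only explicitly allow-listed fields are ever persisted.
const RETAINED_FIELDS = ["email", "country", "purchaseHistory"] as const;
type RetainedField = (typeof RETAINED_FIELDS)[number];

function minimize(raw: Record<string, unknown>): Partial<Record<RetainedField, unknown>> {
  const kept: Partial<Record<RetainedField, unknown>> = {};
  for (const field of RETAINED_FIELDS) {
    if (field in raw) kept[field] = raw[field];
  }
  return kept; // everything not on the allow-list (e.g., birthday, IP) is dropped
}

// Usage: minimize({ email: "a@b.com", birthday: "1990-01-01", ip: "10.0.0.1" })
// keeps only { email: "a@b.com" }.
```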

Server-Side Code Reduction

The more complex a software system is and the more lines of code it has, the higher the potential for security breaches. This is certainly the case when such a system is built up over time, sometimes by different development teams.

One of the architectures emerging today in the digital sphere is JAMStack. The main idea behind this architecture is a clear separation between the static parts of the site and its dynamic parts (including server-side code). In many cases, most of the pages of the site can be generated in advance, when content is edited, without needing to run server-side code to serve them. Under this approach, the components that are dynamic, for example, a cart on an e-commerce site, are loaded through designated interfaces.

Obviously, this may not be suitable for all cases; however, it does offer a significant reduction in security risks for websites with well-defined dynamic components that process high-risk information.
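
The sketch below shows the split in miniature: pages are written out as static HTML at build time, and only one narrow component (a cart) calls a server at runtime. The build and browser steps are shown together for brevity, and the endpoint and element names are assumptions:

```typescript
// --- Build time: render product pages to static files.
// No server-side code runs in production to serve these pages.
import { writeFileSync } from "fs";

const products = [{ slug: "blue-shirt", name: "Blue Shirt", price: 29 }];
for (const p of products) {
  writeFileSync(
    `dist/${p.slug}.html`,
    `<h1>${p.name}</h1><p>$${p.price}</p><div id="cart"></div>`
  );
}

// --- Run time (in the browser): only the cart touches a server,
// through a single, well-defined interface.
async function renderCart() {
  const res = await fetch("/api/cart"); // the only dynamic, attackable surface
  const cart = await res.json();
  document.getElementById("cart")!.textContent = `${cart.items.length} items`;
}
```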

Open Source and Standardization

More than a decade ago, a massive movement began among organizations around the world towards the use of open source. The reasons are diverse: cost savings, faster development, and also a higher level of security. Open source is indeed likely to offer a higher level of security, but it does not guarantee one.

What is the difference between open-source projects that offer a particularly high level of security and those that do not?

Well, there are, of course, several factors, but if we had to point out one, it would be the project’s ability to establish a clear set of standards and to have them fully adopted by the community. Standardization is especially important in open-source projects.

Unlike closed-source projects, where development is centralized, open-source projects are built by a large number of developers spread all over the world. Therefore, a valid and clear set of standards is crucial in order to ensure a well-organized software infrastructure, built according to a well-defined architecture that makes proper use of software engineering principles.

Without a set of standards, we get “spaghetti code” which is only as strong and impervious to attacks as the weakest developer working on the code (the weakest link). By contrast, a standards-based code does not depend only on the individual developer, but first and foremost on the quality of the standards.
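
As a small illustration of what a codified standard looks like in practice, consider a rule such as "never interpolate untrusted input into HTML; always go through the shared escaping helper". The helper below is a hypothetical example of such a rule; with it in place, security rests on the quality of the standard and its one audited implementation rather than on each individual developer:

```typescript
// A hypothetical codified standard: all untrusted output passes through
// one shared, audited escaping helper instead of ad-hoc per-developer code.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;") // must run first, before other entities are added
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Spaghetti style (the weakest link decides): `<p>${userComment}</p>`
// Standards-based: every rendering path uses the same vetted function.
const safeComment = `<p>${escapeHtml("<script>alert(1)</script>")}</p>`;
console.log(safeComment); // <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```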

An excellent example of an open-source project that has succeeded in creating a particularly strong system of standards is Drupal, which started as a content management system and evolved into an entire infrastructure (Application Framework).

One of the main reasons that large organizations, including government agencies, banks, academic institutions and large corporations choose Drupal is its particularly high level of security which relies on a robust set of coding standards.

Conclusions

We are on the threshold of a new era — the era of cyberwarfare.
In this age, organizations must be proactive in protecting all the information at their disposal and especially high-risk information.
Separation of concerns, server-side code reduction, and the use of open source and standardization are different architectural practices for the protection and security of information, and they can be used separately or combined. Either way, when an organization establishes external systems that hold sensitive information, it is imperative to examine the different architectural approaches and define a clear policy accordingly.

With the understanding that the threat is constantly increasing, organizations can no longer stand on the sidelines and remain passive about the issue. Proactive steps and measures must be taken in order to keep safe one of their most valuable assets — personal information.

This article is brought to you by Dofinity, a web development agency based in Israel. Check us out on our website or on LinkedIn.


Udi Golan
Dofinity

Born developer, raised manager, 360 IT professional, Business Development @Dofinity