Y-tech Blog

Shirbit Insurance Cyber Attack
03/16/2021


We are all familiar by now with the published story: a breach of sensitive information at Shirbit.
The whole incident began when the Capital Market Authority and the Cyber Authority issued an unusual announcement about a leak of Shirbit customers' information due to a cyber attack the company had suffered.
Shirbit is the insurance company that won the government's insurance tender, and one of the concerns was that a large amount of information about senior officials would be distributed openly.
Shirbit began operating as an insurance company in 2000. The company focuses on car insurance but is also active in other insurance sectors.

Leaked documents:
Car insurance applications, ID cards and more. The leaked information could allow impersonation of citizens in order to commit criminal offenses such as theft of funds.
A hacker group named Black Shadow claimed responsibility for the entire event, posting its announcements through a Twitter account that was eventually blocked.
Several hypotheses were raised as to the motive for the attack: activists, an Iranian attack, or Korean activists.
Cyber attacks in general are a matter of routine; during the COVID-19 period there was a worldwide increase in the volume of attacks, because the move to working from home in unsecured environments created many security vulnerabilities.

How do you respond to a cyber-attack?
An attack cannot be prevented; a person or organization determined to attack will carry out its plot and try every possible way to do so. The right question is therefore: how do you defend against a cyber attack?
A private or public organization, indeed virtually any active business, must equip itself with advanced defense systems to fend off attacks. The question is whether it is possible to build good, high-quality internal protection systems that will safeguard the company, protect its information and still enable day-to-day operations.
The answer is yes, but in most cases it is simply not practical: the cost for a single organization to build advanced defense systems capable of dealing with external threats is enormous, covering both the defense systems themselves (hardware and software) and the skilled human resources needed to manage the entire array in real time.
In practice, in most cases, decision-makers choose to apply minimal protections out of the belief that "it will not happen to us…"

What is the alternative?
The first option is a massive investment in in-house protection systems; another is to move the entire activity to a secure, closed cloud environment that includes a wide variety of mechanisms whose whole purpose is to repel threats as a matter of routine.
Y-tech Group provides comprehensive cloud and network solutions for companies and organizations, including a large-scale cyber defense array that protects the business from attack attempts.

Writer:
Shay Gindi, Vice President of Business Development and Sales at the Y-tech Group
Cloud security model - shared responsibility
03/16/2021


The shared responsibility model defines the limits of responsibility for cloud security in the provider/customer relationship. The division of responsibilities varies for the different types of services: IaaS, PaaS and SaaS.
IaaS - Infrastructure as a Service, for example: servers, network, and cloud storage

PaaS - Platform as a Service, for example: a managed (serverless) SQL service

SaaS - Software as a Service, for example: Salesforce service

You should know the differences between the solutions, the implications for security, and what you, the customer, are responsible for.

With so many providers, platforms, tools and services in the IaaS, PaaS and SaaS categories, organizations need to understand the distribution of responsibility for cloud security, which can be misleading and is not always clear. Organizations tend to assume that simply by moving to the cloud they become secure, without realizing that in most cases the responsibility for security, and for implementing appropriate solutions, remains theirs and not the cloud provider's. Regardless of the service model, organizations should ask themselves whose information they will be storing in the cloud; the answer is usually clear, and it also points to who is responsible for securing that information.
With traditional on-prem data centers, the company's IT team is fully responsible for all the data as well as the physical infrastructure. This makes the whole security process relatively "simple". Of course it is not truly "simple", and extensive, up-to-date knowledge is needed in order to provide an appropriate solution.
When moving to the cloud, IT teams have to deal with the cloud vendor's infrastructure and services, which tends to complicate the organization's overall security strategy. In addition, most cloud infrastructures are directly connected to the Internet, which further challenges the security requirements; in on-prem solutions these aspects are sometimes "easier" to understand and handle.
In a cloud solution, the vendor assumes some of the responsibility for securing the environment, but not necessarily the data held within it. This can be extremely helpful for many IT teams, but if administrators and developers do not secure the operating systems and applications, the organization may be exposed to many security threats, no matter how secure the cloud infrastructure is; the infrastructure may even meet security standards, but those standards cover only the infrastructure, not the organization's application environment and/or information.
The key to a successful cloud security strategy is understanding the shared responsibility model for both parties.

IaaS, PaaS and SaaS security models
Before diving into the details of the shared responsibility model in the cloud, IT teams must understand the security differences between the service models, since these models dictate what the provider is, and is not, responsible for.
With the IaaS model, the vendor is responsible for securing the physical data centers and the hardware that runs the infrastructure, including the "host" servers, storage and the "core" network. The customer must secure the data, operating systems, virtual network, software layer and applications that it runs in the cloud. In this model the customer is responsible for many aspects of the solution, but, depending on the provider and the options it makes available, the customer can also have the cloud provider manage the logical security of its virtual servers.
PaaS is an "intermediate" step in the shared responsibility model for cloud security and places more responsibility in the hands of the cloud provider. In this scenario the customer's IT team still manages the applications and the data, but the vendor secures the underlying infrastructure and also takes responsibility for the operating systems and the platform the customer uses, for example databases.
In the SaaS model, the customer has no control at all over the software layer and effectively shifts the entire responsibility to the vendor. A SaaS vendor is responsible for all levels of security in the solution it provides, from the infrastructure level to the software level. IT teams only need to manage permissions, using the tools provided by the vendor.
Keep in mind that in most cases SaaS vendors rely on the infrastructure of other IaaS vendors, which in effect creates an additional layer of responsibility for the security of the solution, shared between multiple vendors. A "software as a service" provider ultimately delivers a software service for which it is solely responsible end to end. If it hosts the software on another provider's IaaS infrastructure, the IaaS provider has a clear role in infrastructure security and may hold appropriate certifications, but it is not a partner in securing the software itself, that is, the product the customer buys from the SaaS provider. The infrastructure provider's compliance with standards for the IaaS service does not automatically extend to the SaaS provider, which is solely responsible for the software; the SaaS provider must therefore present appropriate certification specifically for the product it delivers, regardless of the infrastructure on which that product is hosted.
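To make this division concrete, here is a minimal illustrative sketch in Python that summarizes the split described above as a simple lookup table. The layer names and the provider/customer assignments are a simplification for discussion, not an official matrix from any specific cloud vendor.

```python
# Illustrative summary of the shared responsibility model described above.
# The "provider" / "customer" assignments are a simplification, not an
# official matrix from any specific cloud vendor.

RESPONSIBILITY = {
    "IaaS": {
        "physical data center & hardware": "provider",
        "host servers, storage, core network": "provider",
        "operating systems": "customer",
        "virtual network": "customer",
        "applications": "customer",
        "data": "customer",
    },
    "PaaS": {
        "physical data center & hardware": "provider",
        "operating systems & platform (e.g. databases)": "provider",
        "applications": "customer",
        "data": "customer",
    },
    "SaaS": {
        "infrastructure through software": "provider",
        "user permissions & data governance": "customer",
    },
}

def who_secures(model: str, layer: str) -> str:
    """Return which party is responsible for securing a given layer."""
    return RESPONSIBILITY[model][layer]

if __name__ == "__main__":
    print(who_secures("IaaS", "operating systems"))  # -> customer
```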

Enforcement of the model
Cyber attacks and security threats in the cloud are on the rise; some of us read about them in the press, and some of us even deal with such incidents on a regular basis. Most notable security breaches, which tend to be very costly for the affected organization, are caused by mistakes or misconfigurations on the customer side and/or at the software level, not by what the cloud provider supplies.
Working with a cloud provider may be unclear for some organizations when expectations are not aligned and the division of responsibilities is not understood. From the first day of the contract, a clear division of responsibilities between the cloud provider and the customer must be defined, and the teams must draw a clear outline that shows where the cloud provider's security ends, where the customer's responsibility begins, and where responsibility is shared and calls for joint discussion.
When it comes to a shared responsibility model in the cloud, there is no room for assumption and nothing should remain “in the air”. To ensure that there is a mutual understanding between the cloud provider and the customer, expectations must be accurately coordinated using a clear SLA document for both parties.
If the cloud provider also has an integration department, the process may be much easier for the customer: the provider's integration team is well versed in all the tools and options, can give the customer peace of mind, can advise and share professional knowledge, and can sometimes take various responsibilities off the customer's shoulders on the journey to the cloud.


Tomer Schwaitzer is the CEO of the Y-tech Group and Israel's representative on the World Standards Committee for Cloud Computing in ISO/IEC JTC 1.


Business Continuity – RPO/RTO
12/05/2021


A business continuity program is a broad program that aims to enable the organization to keep working as continuously as possible, in line with goals set in advance.

A Disaster Recovery Plan is part of the more extensive process of building business continuity.

Computing today is an integral part of any business activity. Some businesses use computer systems but can also survive without them for a period of time, while in other businesses the entire business core is based on computer systems, and even a few minutes, sometimes seconds, without the ability to work can have real business significance.

The prevailing approach today is to build smart systems that include continuous protection and the ability to keep operating and to recover from one disaster or another. In our world, the ways to damage computer systems are many and varied.

There are two essential concepts associated with disaster recovery that are worth knowing:

RTO – Recovery Time Objective

The period of time required to restart business activity: how long it will take the organization to return to regular, full operation and/or to the level defined in the business continuity plan.

The defined time can be zero, meaning the organization cannot tolerate any downtime at all and the systems must be built accordingly; or it can be several hours or days, depending on the business continuity needs and the ability to invest financially in the solution.

It is critically important to carry out a thorough examination at the characterization stage; the computer systems and the solution must then be adapted to the organization's needs so that the result meets the goal that was set.

RPO – Recovery Point Objective

The span of time over which accumulated data may be lost during a disaster event. The time frame varies and is defined according to the organization's needs.

For example, one organization may define a seven-hour recovery from disaster and consider it reasonable to lose up to three hours of work; another organization may accept a broader range of hours, or may be unable to tolerate any loss at all.
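As a rough illustration of how an RPO translates into an operational requirement, here is a minimal Python sketch assuming a simple periodic-backup model; the three-hour figure is taken from the example above and is not a recommendation.

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """With simple periodic backups, the worst case is losing everything
    written since the last backup, i.e. up to one full interval."""
    return backup_interval <= rpo

# The example above: an organization willing to lose up to three hours of work.
rpo = timedelta(hours=3)
for interval in (timedelta(hours=1), timedelta(hours=4)):
    status = "meets the RPO" if meets_rpo(interval, rpo) else "violates the RPO"
    print(f"Backing up every {interval} {status}")
```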

[Figure: RPO-RTO]



These metrics may affect:

How the computer systems are built from day one (the defined time can even be zero, meaning the organization cannot tolerate any downtime at all).
Which backup systems should be adopted for the customer, including a cold or hot, active or passive DR (disaster recovery) solution.
The level of service (SLA) required from the supporting systems, the IT staff, and the relevant vendors.


Example of a complete process:

At the stage of characterizing the solution for the customer, the following questions are addressed:
What uptime level is required for the system?
How long a total failure can the organization endure?
How much information (measured in time) can the organization afford to lose (RPO)?
How quickly must the organization be able to recover from a backup (RTO)?
Each of the above parameters may be relevant to all systems or determined individually for each system according to its importance level.



Depending on the answers, a solution will be characterized that will meet the above needs, for example:

A required uptime level of 99.99% per year – make sure that the power and infrastructure systems meet the standard and are able to meet the requirement (see the short calculation sketch after this list).
Total failure of up to 9 hours per year.
Loss of information for one working day.
Return from backup within 4 hours.
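To make the first requirement above tangible, the short sketch below converts an availability target into an annual downtime budget. It is a back-of-the-envelope illustration only; for instance, 99.99% availability leaves roughly 53 minutes of downtime per year, while 99.9% leaves close to 9 hours.

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def annual_downtime_budget_hours(availability_percent: float) -> float:
    """Maximum downtime per year (in hours) allowed by an availability target."""
    return (1 - availability_percent / 100) * HOURS_PER_YEAR

for target in (99.9, 99.99):
    hours = annual_downtime_budget_hours(target)
    print(f"{target}% availability -> {hours:.2f} hours (~{hours * 60:.0f} minutes) per year")
```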


The data will then be passed to an engineer at the company, who will recommend a cloud solution that can meet these exact requirements. It should be noted that any level of solution can be offered, from a standard cloud solution that includes redundancy up to a full hot DR solution that works in an Active-Active configuration.

The characterization phase is critical and essential, since the solution must match the customer's needs on the operational and technical side and, no less important, on the financial side.

Advice and guidance are an integral part of the process. It is vital to present to the client the pros and cons of each type of solution and to help him reach the right decision, guided by the recommendations of the technical staff.

Keep in mind that the system is built and kept ready for "doomsday", the day a failure actually occurs; that will be the real test, when everything must work. It is highly recommended to run drills with the customer from time to time, simulating several extreme scenarios and seeing how the system responds. These tests give the customer and the technical staff confidence that everything will work as planned when that day comes.



Summary:

No one can predict precisely when a disaster will occur and what it will affect in the existing reality.

It is certainly possible, and recommended, to prepare in advance in order to cope optimally as soon as disaster "knocks on the door". RTO and RPO values may vary from organization to organization, but they will always be a compromise between the availability the business needs and the budgetary investment required in IT.

These values should be determined jointly by management personnel, who understand the business need, the required availability and the possible damage in each scenario, and the IT experts, whose job is to reflect the technical risks, build the technical solution that addresses the business need, and from there converge on a suitable budget.

Eventually a decision will be made and handed over to the operating entity, whether internal or external to the company; from that moment onwards the burden of proof is on the solution provider, whose job is to perform periodic inspections and report the findings to the customer.

Another angle for understanding the issue: the whole solution can be likened to an insurance policy on a complex structure, say a large business tower in a commercial area. Once the agreement is signed it seems irrelevant, until the day the insurance company is required to provide an answer for one reason or another. If an accurate and correct characterization was made, followed by proper maintenance and testing, the answer will be provided as expected.

Shai Gindi, VP of Business Development at Y-tech
Information security in the cloud – How to do it in the right way (For partners)
10/18/2021


Cloud computing customers of all sizes and from all sectors expect to receive peace of mind from their cloud services provider. That is why they have to be sure that their data center's computing and communication infrastructure performs well. This is especially important for an integrator that wants to offer cloud services to its end clients, for example through Y-tech ICT Platform for Integrators – the ICT model offered by Y-tech. The integrator must know for certain that it and its end clients will receive information security at the highest level.

 

Perimeter and internal information security

When a cloud provider delivers virtual and private cloud services to its customers, it naturally aims to give each and every customer the computing resources that were promised and allocated to it. The cloud provider does not split its hardware infrastructure between clients; the hardware is shared. However, there is a clear separation between the computing resources allocated to the different companies that share the same cloud, and no resources leak between clients.

The cloud provider protects its system from the outside world using a general firewall built of several layers of protection, but it does not stop there. Every client enjoys its own firewall that protects it from external threats and also from its neighbors in the cloud. This approach further strengthens the tight ring that ensures information will not leak between companies hosted on the same physical system.

Business Continuity: Security at the cloud level

Beyond the security measures the cloud provider offers at the perimeter level, there is another security tool at the level of each client's internal network. Every customer that buys a hosting services package based on Y-tech ICT Platform enjoys a third layer of security. This layer protects the communication to all of the servers at Layer 2, the Data Link layer, at the point where the server connects to the network. The client can manage the different levels of security directly from Y-tech's management portal, including firewall rules, bandwidth allocations, antivirus, IDS, logs at different levels and anomaly tracking. The client can also choose to receive email alerts when the system identifies abnormal behavior or intrusion attempts.
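To give a feel for what such per-client controls can look like, here is a hypothetical Python sketch of a tenant-level security profile with firewall rules and alerting; the class and field names are invented for illustration and do not represent the actual Y-tech management portal API.

```python
# Hypothetical illustration of per-tenant security settings of the kind
# described above (firewall rules, bandwidth limits, alerting).
# Field names are illustrative only, not the actual Y-tech portal API.
from dataclasses import dataclass, field

@dataclass
class FirewallRule:
    action: str        # "allow" or "deny"
    protocol: str      # "tcp" / "udp"
    port: int
    source: str        # CIDR notation

@dataclass
class TenantSecurityProfile:
    tenant: str
    bandwidth_mbps: int
    email_alerts: bool
    rules: list[FirewallRule] = field(default_factory=list)

    def is_allowed(self, protocol: str, port: int) -> bool:
        """First matching rule wins; the default is deny."""
        for rule in self.rules:
            if rule.protocol == protocol and rule.port == port:
                return rule.action == "allow"
        return False

profile = TenantSecurityProfile(
    tenant="acme-ltd",
    bandwidth_mbps=200,
    email_alerts=True,
    rules=[
        FirewallRule("allow", "tcp", 443, "0.0.0.0/0"),   # public HTTPS
        FirewallRule("deny",  "tcp", 3389, "0.0.0.0/0"),  # block exposed RDP
    ],
)
print(profile.is_allowed("tcp", 443))   # True
print(profile.is_allowed("tcp", 3389))  # False
```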

Redundancy 

Y-tech, as an ICT cloud provider, promises redundancy at the highest level of its solution. The physical structure of the server farm, and the placement of the servers inside the farm, was designed in order to ensure the highest survivability.

Additionally, for every component that has a certain role in the system there is a parallel one that can do the same job, in order to achieve redundancy at the level of servers, communication, storage, electricity and cooling. When a fault occurs, the replacement component steps in to avoid a shutdown. This happens automatically, in most cases without the client even being aware of it. A system at this level gives the client the business continuity it needs.

And in the end, the client chooses what else to install

A high-quality ICT cloud service gives the client multi-dimensional protection on the one hand and simplicity of use on the other. In addition, the client can choose from a gallery of different products, including information security products; with a few clicks the client can add any additional security service it needs. Y-tech ICT Platform is built to provide a wide range of existing security systems as well as the ability to install customized services.

The process of adding services and choosing solutions can take place at any given time. At the outset, the integrator can choose the solutions it wants to include in the cloud from which it will provide services to its clients. Along the way, during the setup of any service, the integrator can simply choose the right tools for each customer.

The integrator can operate an infrastructure of shared services from which it provides a Security-as-a-Service solution built upon its shared firewall and other products. It can also build a private cloud for each client, or provide a combination of private environments with public ones from the integrator's cloud, such as shared mail security services, and more. It is important to note that the process is very flexible and everything can be changed throughout the service life cycle.

At the end of the day, in order to provide a high level of information security in the cloud offered by Y-tech, the company uses several layers of protection, some built into the systems and some that can be chosen by the integrator. The security components include internal and external information security at the infrastructure level, while the integrator itself can easily add any further information security measures it chooses.

 

Yossi Bar is an infrastructure and security engineer at Y-tech, responsible for the data center, communication and infrastructure security environments.


Smart cloud computing as a key to improving work speeds of integrators and end customers
08/01/2021


In the past, when hardware in general and servers in particular were physically located on the premises of organizations and companies, setting up new servers or upgrading existing ones was one of the toughest IT tasks. To do so, the IT team often had to coordinate a full shutdown of the system until the setup and tests were completed. A process of this type had its price: the inability of the company to work during the shutdown, the work hours required from the internal teams and the external integrators, and the addition of expensive extra work hours to the IT costs. Setting up servers and new work environments resulted in one-time costs and long lead times. It also required IT managers to purchase new hardware and licenses and to invest long hours in installing hardware.

In contrast, today, in the age of cloud computing, the technological infrastructure allows integrators to offer their customers cloud-based computing resources at the push of a button, without the need for large investments. Setting up a server takes several minutes, compared to whole days and weeks in the old model. Naturally, in order to achieve these savings, you have to rely on a high-quality, automated cloud environment, such as Y-tech ICT Platform, which is offered to integrators by Y-tech Group.

 

Work speed as a result of automation

In a competitive world, the speed of your actions and reactions is highly important. The new cloud infrastructure Y-tech is offering integrators is based on a combination of high quality virtualization systems and an automation system that runs on top of these systems. Such a system only requires the running of an installation file before moving on to the next phases. The automation system relies on predefined scripts and allows the integrator to quickly set up new servers while simplifying and accelerating IT procedures in general.

An integrator/customer that uses Y-tech ICT Platform is able to set up a new server for its end client within a few minutes, depending on the server's type. In addition, it can set up several servers simultaneously and save additional time.

The integrator/customer uses a pre-allocated amount of IT resources. The resources are purchased by the integrator through the platform, according to the integrator's needs, and can be expanded at any time, depending on the service model the integrator selects. Integrators can use the Pay As You Go model and manage allocations independently through the management dashboard. The result is a significant improvement in the speed of IT procedures performed by the integrator and the end customer alike.
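As a sketch of what script-driven provisioning against a pre-allocated resource pool might look like from the integrator's side, here is a hypothetical Python example; CloudClient and its methods are invented for illustration and are not the actual Y-tech ICT Platform API.

```python
# Hypothetical sketch of script-driven server provisioning of the kind
# described above. CloudClient and its methods are invented for illustration;
# they are not the actual Y-tech ICT Platform API.
from dataclasses import dataclass

@dataclass
class ServerSpec:
    name: str
    vcpus: int
    ram_gb: int
    disk_gb: int

class CloudClient:
    """Toy client that tracks a pre-allocated resource quota."""
    def __init__(self, vcpu_quota: int, ram_quota_gb: int):
        self.vcpu_quota = vcpu_quota
        self.ram_quota_gb = ram_quota_gb

    def provision(self, spec: ServerSpec) -> str:
        # Refuse the request if it would exceed the purchased allocation.
        if spec.vcpus > self.vcpu_quota or spec.ram_gb > self.ram_quota_gb:
            raise RuntimeError(f"Quota exceeded for {spec.name}; expand the allocation first")
        self.vcpu_quota -= spec.vcpus
        self.ram_quota_gb -= spec.ram_gb
        return f"{spec.name}: provisioned ({spec.vcpus} vCPU, {spec.ram_gb} GB RAM, {spec.disk_gb} GB disk)"

client = CloudClient(vcpu_quota=32, ram_quota_gb=128)
for spec in [ServerSpec("web-01", 4, 16, 100), ServerSpec("db-01", 8, 32, 500)]:
    print(client.provision(spec))
```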

 

Work speed achieved through automatic connection

When the preparation of the new servers for the end customer is complete, the servers connect automatically to the data center's network, automatically receive an IP address and are immediately connected to the internet. At the same time, security policies and bandwidth are defined and managed by the integrator.

At this stage, the security around the servers is already in place. The servers automatically connect to systems such as CPU, memory, network and storage consumption monitoring, in addition to extended security systems which include a server-level firewall and IDS. At the same time, the integrator and the end customer are also protected by the data center's general data security layers behind the integrator's main firewall.

After this infrastructure is set up (which takes around 5 minutes on average), most of the remaining work is to associate the customer in the system with his or her network environment inside the integrator's cloud. After that, it is possible to move on to installing applications on the servers according to need.

 

At this stage it is also possible to connect the customer's offices directly to his or her cloud environment inside the integrator's cloud, through a private communication line. This is done through Y-tech's communication services, which are directly connected to the cloud environment. To achieve this, the integrator receives full control over the medium between the customer's premises and the cloud servers. The connection is built over a private medium without an internet connection. As a result, the integrator provides the customer with high-level performance and security capabilities, while retaining end-to-end control over the entire solution, in addition to full monitoring capability.

 

Work speed achieved through system performance

The Y-tech cloud platform is equipped with the IT and communication resources required to provide the capabilities mentioned above. These resources are continuously expanded according to actual ongoing requirements. Y-tech offers physical systems equipped with the highest levels of computing, processing and memory capabilities, in addition to SSD-based storage systems that provide massive IOPS capabilities. The connection is achieved through a 10Gb fiber communication system with massive throughput and minimal latency.

The combination of the work speed achieved through the automation capabilities of the system and the work speed achieved through the high performance of the system itself delivers a winning solution. It allows the integrator to provide quick, reliable and high-quality services, while exposing the integrator to new capabilities and business opportunities.

Yossi Bar is a Senior Systems Engineer at Y-tech ICT. Bar is part of the team responsible for Y-tech ICT infrastructures and the creation of automation processes.

 


How can governments maximize the potential of cloud computing?
04/19/2021


According to recent reports, cloud computing applications were expected to account for 90% of mobile traffic worldwide by 2019, while already today 64% of the world's small and medium businesses use cloud-based applications.

The global business community has been devising and implementing cloud computing solutions and strategies for years. Nevertheless, in my opinion, governments around the world have still not been proactive enough in designing and carrying out comprehensive cloud computing strategies.

So what should governments do in order to plan cloud computing strategies for their countries? Let me offer 4 guidelines.

Collaboration at the national level

 

Cloud computing strategies have to be planned and implemented as national strategies, with the appropriate support and budgeting from the highest possible levels in each government. Senior government officials should manage the plan, in collaboration with each country's top business, technology and science leaders. Other prominent collaborators should include the leaders of academic institutions and of NGOs that deal with relevant fields such as consumer issues, freedom of information and privacy.

Extensive implementation in the public sector

 

The strategic plan should present clear ways to maximize the value of cloud computing in government departments and in the public sector as a whole, including municipalities and government-owned companies. Governments should require the various arms of the public sector to consider using cloud services whenever they discuss buying new IT solutions, and to consider migrating websites to appropriate cloud infrastructures.

Encouraging more organizations to use cloud computing

 

National cloud computing strategies should systematically encourage small businesses, NGOs and end users to adopt cloud computing services. Governments should set up an online knowledge resource center that will target key figures in various industries and deliver the best available knowledge and insights to facilitate the wide adoption of cloud computing. With that said, governments should "license" cloud providers that comply with their standards, so that those providers can offer their services with government approval and adoption. That brings me to the fourth guideline:

Cloud computing must be regulated

 

Governments must ensure that cloud computing providers will meet minimum levels of credibility, information security and service. The information that people and organizations entrust to the cloud providers is precious and should be treated as such. Additionally, cloud computing infrastructures should be subject to information security standards and protocols.

As the process moves forward, governments will see first-hand how cloud computing releases both the public sector and the private sector from infrastructure limitations and excessive capital expenses. As the process matures, governments will gain new and valuable knowledge and work tools. These assets will enable governments to strengthen innovation, tackle productivity issues creatively, save money and improve the services provided by each country's public sector.

Tomer Schwaitzer is the CEO and Founder of Y-tech

For further information:

Australia’s national cloud computing strategy:

http://www.finance.gov.au/sites/default/files/australian-government-cloud-computing-policy-3.pdf

Why the U.S. Government is Moving to Cloud Computing:

http://www.wired.com/insights/2013/09/why-the-u-s-government-is-moving-to-cloud-computing/

“Government as Platform” – Forbes:

http://www.forbes.com/sites/joemckendrick/2014/02/23/government-as-a-platform-how-cloud-computing-is-progressing-inside-the-beltway/#49e6e4e74987

Government cloud – market research:

http://www.marketsandmarkets.com/PressReleases/government-cloud.asp


Securing your cloud service - Tips for the Cloud Provider
01/16/2021


When you approach a potential customer with your cloud computing solution, he must feel safe enough to give you a chance.

After gaining his basic trust in you as a person and as a professional, you have to present a convincing way to protect the cloud computing service he is buying.

As I mentioned in my last post, your customer is often not an IT professional, let alone a cloud computing expert, but he does know when you, as a professional, and your security solutions give him peace of mind. The way to do that is to be sure that you yourself know you have all the required cloud security functions in place. Once you have belief and confidence in your solution, you will transfer the same feelings to your customer without even realizing it. Authenticity and transparency are all you need.

Delving into the specifics of all these functions is beyond this post’s scope, but Y-tech’s very extensive experience has allowed us to compile a checklist of several – surely not all – general guidelines:

Make sure you comply with relevant data security standards

You do not have to reinvent the world of data security in order to plan and implement an effective end-to-end cloud security solution. Make sure you comply with the highest industry standards for security, such as ISO/IEC 27001. This will help ensure that you supply your customer with the most critical security functions.

Don’t compromise on 24/7 monitoring and control

The best cloud security protections are useless without comprehensive monitoring and control. Make sure you monitor and control all the computing and networking components of your systems 24/7. Use the highest quality NOC (Network Operations Center) and make sure you receive alerts before, or as soon as, any cyber or information security threat becomes active.

Think twice before you expose servers to the internet

Since exposing the servers in your cloud service to the internet can maximize their functionality, you might feel tempted to do it very early on. The most common mistake in this respect is to do it before you have the required security mechanisms for these servers in place. Don't fall into that trap.

Make sure your firewalls cover the entire cloud

Putting firewalls in the most prominent locations, such as the entrances and exits of the cloud, is relatively easy, but it will not ensure your peace of mind regarding your cloud security. Make sure you go all the way: place the best firewalls you can afford and ensure they cover every location in the cloud, internal and external.
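A simple way to keep yourself honest on this point is a periodic coverage audit. The sketch below is a hypothetical Python example over an invented inventory format, not any specific vendor's tooling.

```python
# Hypothetical coverage audit: flag servers in the cloud inventory that are
# not behind any firewall. The inventory format is invented for illustration.

inventory = [
    {"name": "edge-proxy", "zone": "external", "firewalls": ["perimeter-fw"]},
    {"name": "app-01",     "zone": "internal", "firewalls": ["tenant-fw"]},
    {"name": "legacy-ftp", "zone": "internal", "firewalls": []},  # gap!
]

def uncovered(servers):
    """Return the names of servers with no firewall protecting them."""
    return [s["name"] for s in servers if not s["firewalls"]]

gaps = uncovered(inventory)
if gaps:
    print("Servers without firewall coverage:", ", ".join(gaps))
else:
    print("All servers, internal and external, are behind a firewall.")
```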

Strengthen your protection against DDoS attacks

There are countless types of cyber attacks today, but in my opinion one type of threat that is often not handled well enough by some cloud providers is DDoS (Distributed Denial of Service) attacks. Screen all the traffic in your cloud and make sure you use the best DDoS protection you can lay your hands on. For most organizations, basic protections will be enough and not that expensive, and providers can implement them as a standard service. If a customer wants more, make sure you can scale.

Protect your customer from his neighbors

You might feel that you are protecting your customer's cloud service against external attacks perfectly. However, if you are selling a cloud solution that hosts several customers, which you probably are, the neighbors' systems can be a source of threats for your new customer's cloud infrastructure. Make sure you protect your customer from his neighbors in the data center at least as well as you protect him from external threats. Sometimes, internal threats are even greater.

Tomer Schwaitzer, CEO, Y-tech

 

