Understanding ITIL v4 for Infrastructure and Platform Management

The IT Infrastructure Library (ITIL) is the de facto global standard guiding IT professionals in managing IT services. It is designed to be customer focused, quality driven, and economical. It evolved from a standard defined in Great Britain by the Central Computer and Telecommunications Agency (CCTA) back in the 1980s. The British government needed a unified standard to improve the quality of the IT services it received. The result was this compilation of best practices, which is now used by IT organizations around the world.

After several iterations, the ITIL standard is now on version 4, published in February 2019. The key components of ITIL 4 are the four dimensions model of service management and the ITIL Service Value System (SVS), illustrated in the diagram below:

ITILv4 Key Components (source YASM Wiki)

The four dimensions are:

  1. Organizations and People
  2. Information and Technology
  3. Partners and Suppliers
  4. Value Streams and Processes

A more detailed explanation of the model and its components is available on the YASM Wiki site.

For IT system administrators and managers, the ITIL 4 framework focuses on overseeing the infrastructure and platforms an organization uses, enabling monitoring of available technology solutions. It includes a model for managing vendors that provide Software as a Service (SaaS) and cloud computing environments, allowing flexible, on-demand expansion. It also outlines requirements for configurable computing resources and network access.

The ITIL 4 framework for IT infrastructure and platform management stems from the following:

  1. Understanding IT infrastructure components, whether physical (e.g., Dell, HP, or SPARC servers) or virtual (e.g., VMware, Citrix, AWS), including the technologies behind the scenes such as storage, networking, middleware applications (e.g., JBoss, Elasticsearch), and operating systems (e.g., Linux, Windows).
  2. Developing an implementation and administration strategy for infrastructure or platforms that is unique to the organization and fulfills its business and technical requirements.
  3. Designing communication methods, both among the organization’s own systems (cloud or on-premises) and with vendors, in a secure and efficient way.

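As a tiny illustration of the third point, secure communication with a vendor or between sites usually comes down to properly configured TLS. A minimal sketch in Python of a client-side TLS policy (the settings shown are a common baseline, not an ITIL prescription):

```python
import ssl

# Build a TLS context that verifies the peer's certificate chain and
# hostname -- the "secure" half of secure-and-efficient communication.
context = ssl.create_default_context()

# Refuse legacy protocol versions; vendors should support TLS 1.2+.
context.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() enables full verification out of the box:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

In practice this context would be passed to the HTTP client or socket wrapper that talks to the vendor endpoint; the point is that verification stays on and old protocol versions stay off.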
Of course, the above key concepts are an oversimplification of the actual implementation. Running a data center is difficult and can be costly. Software is not perfect and fails without proper maintenance. Cybersecurity is complex, in both social and technological contexts. And technology keeps changing, requiring constant training for IT staff to stay up to date.

These are some of the reasons why guidelines like ITIL 4 are valuable for IT managers and directors to be familiar with. The framework serves as a starting point for building IT infrastructure and platforms, but they need to apply the practices to their own organization. With proper investment, many deployment iterations, and lessons learned, an organization can achieve the stability and security it requires.

Be Prepared For An Outage

The top question among IT professionals is always this:

How prepared are we for an outage or data loss?

The typical follow-up questions would be:

  • What are the root causes?
  • How do we recover?
  • How do we prevent it from happening again?
  • What is the cost of the damage?

A Ponemon Institute study (2016) showed that the most common causes of outages are UPS power failure, cybercrime, and human error. If the research were conducted now (2019), cybercrime would probably be on top, as seen from today’s headlines, with many banks and corporations (including the manufacturing sector) hacked and their data breached. The report itself points that way: cybercrime’s share of root causes jumped from 2% in 2010 to 22% in 2016!

Ponemon Institute Research Report (2016)
Root Causes of Unplanned Outages

Preparing for inevitable disasters will certainly involve more investment in cybersecurity training and in updating outdated software and hardware. It also helps to keep things simple and not introduce unproven technology just for the sake of being trendy or on the “bleeding edge”.

This is easier said than done, but it’s not impossible. Management needs to be more aware that complicated business processes introduce more human errors. Deploying many systems can also expose weaknesses when IT teams try to connect them together to share data. Having multiple sites outside a traditional enterprise data center likewise exposes data to breaches, whether by external hackers or internal leaks.

Prevention is certainly the priority for many concerned IT experts. Knowing the common points of failure, adding checks and balances to data recovery services, and stressing security awareness among employees are important first steps. One can’t simply wait for the storm to come. Instead, prepare for the storm and budget accordingly.
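One concrete check-and-balance for data recovery is verifying that backups have not silently rotted: record a cryptographic hash when the backup is written, and compare against it before trusting a restore. A minimal sketch in Python (the backup file name is hypothetical):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup: Path, recorded_hash: str) -> bool:
    """True if the backup still matches the hash recorded when it was written."""
    return sha256_of(backup) == recorded_hash

# Example: write a stand-in backup, record its hash, then verify it later.
backup = Path("db_backup.tar.gz")      # hypothetical backup file
backup.write_bytes(b"pretend this is a database dump")
recorded = sha256_of(backup)           # stored at backup time
assert verify_backup(backup, recorded)
```

A scheduled job running a check like this (ideally against an actual test restore, not just the file) turns “we have backups” into “we have backups that work”.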

Can Anyone Get Rich from Open Source?

Open Source Initiative Logo

Can any company make money from open source?  The common idea is that open source work is like charity – a great service to the community, but not something that will make anyone rich like Bill Gates, Steve Jobs, or Larry Ellison.  That thought is both right and wrong.

One example is MySQL. It never managed to beat, or even compete with, Oracle Database.  However, it was the cheaper (free) solution for running websites for bloggers (like this one) or SMBs. Oracle went on to buy Innobase, the maker of MySQL’s InnoDB storage engine, because of the large install base. The same goes for Java: once touted by Sun Microsystems as the ideal open-source enterprise platform, it was acquired by default when Oracle bought Sun. No doubt Larry Ellison figured that with this many users, there was potential revenue to be made.

A decade ago, there was speculation that an open-source operating system like Linux could be a money maker.  Back then, enterprise customers were still mostly invested in Solaris (SPARC) and Windows (x86).  Red Hat was the biggest name in Linux distributions, making its money from support.  As Linux adoption kept climbing, it was only logical for IBM to acquire Red Hat, and the growing customer base along with it.

Linux adoption grew even more when Microsoft decided to ship Linux as part of the Windows 10 distribution and contributed a large chunk of its own code as open source.  The thinking is that contributing to a vibrant, open community brings a sort of likeability to a giant like Microsoft.  It’s no surprise Microsoft is now touted as a better technology innovator than Apple, Samsung, IBM, or even Google.

Speaking of likeability, or the “coolness” factor, another example is Elastic, which offers a solid product built on the open-source Lucene search engine. With customers like Uber and SpaceX adopting its search products, Elastic is poised to make plenty of revenue – so much so that it now faces competition from Amazon Web Services, which offers a service based on the same open-source Elasticsearch software. The potential revenue is definitely there for the taking.

Can anyone get rich from open source?  Absolutely.  With mass adoption, rich use cases, growing libraries, and plenty of community experts, open source is becoming the standard for technology adoption in enterprise environments. But companies will succeed in the open source game only if they can build a compelling product that works really well – and support it. The customers are there; just make them happy!

[EDIT 8/1/2019]: Wired has a nice write-up on how companies should take the “moral” high ground and seek mutual benefit when licensing open-source software. My thought: this can be tricky, because of the old saying, “It’s just business, nothing personal.” It’s nice to expect people to play nice, but making money is a dirty business.