Technology Listing (Week 4, Jan 2020)

1. DevOps

DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops), with the aim of shortening the systems development life cycle and providing continuous delivery with high software quality.

As DevOps is intended to be a cross-functional mode of working, those that practice the methodology use different sets of tools—referred to as "toolchains"—rather than a single one. These toolchains are expected to fit into one or more of the following categories, reflective of key aspects of the development and delivery process:

a) Coding – code development and review, source code management tools, code merging

b) Building – continuous integration tools, build status

c) Testing – continuous testing tools that provide quick and timely feedback on business risks

d) Packaging – artifact repository, application pre-deployment staging

e) Releasing – change management, release approvals, release automation

f) Configuring – infrastructure configuration and management, infrastructure as code tools

g) Monitoring – application performance monitoring, end-user experience

Some categories are more essential in a DevOps toolchain than others, especially continuous integration (e.g., Jenkins, GitLab CI, Bitbucket Pipelines) and infrastructure as code (e.g., Terraform, Ansible, Puppet).

Forsgren et al. found that IT performance is strongly correlated with DevOps practices like source code management and continuous delivery.

Relationship with Agile
Agile and DevOps serve complementary roles: several standard DevOps practices, such as automated build and test, continuous integration, and continuous delivery, originated in the Agile world, which dates informally to the 1990s and formally to 2001. Agile can be viewed as addressing communication gaps between customers and developers, while DevOps addresses gaps between developers and IT operations / infrastructure. DevOps also focuses on the deployment of developed software, whether it is developed via Agile or other methodologies.

The goals of DevOps span the entire delivery pipeline. They include:

- Improved deployment frequency;
- Faster time to market;
- Lower failure rate of new releases;
- Shortened lead time between fixes;
- Faster mean time to recovery (in the event of a new release crashing or otherwise disabling the current system).

DevOps automation 
DevOps automation can be achieved by repackaging platforms, systems, and applications into reusable building blocks through the use of technologies such as virtual machines and containerization.

Implementation of DevOps automation in an IT organization is heavily dependent on tools, which are required to cover different areas of the systems development lifecycle (SDLC); a small example of the test-automation area follows the list below:

1. Infrastructure as code (e.g. Ansible)
2. CI/CD
3. Test automation
4. Containerization (e.g. Docker)
5. Orchestration (e.g. Kubernetes)
6. Software deployment
7. Software measurement
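
As a minimal sketch of the test-automation area, the snippet below shows the kind of post-deployment smoke test a CI stage might run. It uses pytest and requests; the staging URL and the /health endpoint are hypothetical placeholders, not something prescribed by any particular DevOps tool.

```python
# Minimal smoke test of the kind a CI stage might run after a deployment.
# The base URL and the /health endpoint are hypothetical placeholders.
import requests

BASE_URL = "http://staging.example.com"  # assumed staging environment


def test_health_endpoint_returns_ok():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200


def test_health_endpoint_reports_version():
    body = requests.get(f"{BASE_URL}/health", timeout=5).json()
    assert "version" in body  # assumes the service reports its version
```

Run with a plain `pytest` invocation; a CI server such as Jenkins or GitLab CI would typically execute it as one stage of a pipeline.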

Ref: https://en.wikipedia.org/wiki/DevOps

2. Linux

Linux is a family of open source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged in a Linux distribution.

Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy.

Popular Linux distributions include Debian, Fedora, and Ubuntu. Commercial distributions include Red Hat Enterprise Linux and SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland, and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether, or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose.

Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Linux is the leading operating system on servers and other big iron systems such as mainframe computers, and the only OS used on TOP500 supercomputers (since November 2017, having gradually eliminated all competitors). It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US.

Linux also runs on embedded systems, i.e. devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, televisions, digital video recorders, video game consoles, and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives. Because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems.

Linux is one of the most prominent examples of free and open-source software collaboration. The source code may be used, modified and distributed—commercially or non-commercially—by anyone under the terms of its respective licenses, such as the GNU General Public License.

Commercial and popular uptake
Main article: Linux adoption

Adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such as NASA started to replace their increasingly expensive machines with clusters of inexpensive commodity computers running Linux. Commercial use began when Dell and IBM, followed by Hewlett-Packard, started offering Linux support to escape Microsoft's monopoly in the desktop operating system market.

Today, Linux systems are used throughout computing, from embedded systems to virtually all supercomputers, and have secured a place in server installations such as the popular LAMP application stack. Use of Linux distributions in home and enterprise desktops has been growing. Linux distributions have also become popular in the netbook market, with many devices shipping with customized Linux distributions installed, and Google releasing their own Chrome OS designed for netbooks.

Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being one of the most dominant operating systems on smartphones and very popular on tablets and, more recently, on wearables. Linux gaming is also on the rise with Valve showing its support for Linux and rolling out its own gaming-oriented Linux distribution. Linux distributions have also gained popularity with various local and national governments, such as the federal government of Brazil.

Design 
A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, access to the peripherals, and file systems. Device drivers are either integrated directly with the kernel, or added as modules that are loaded while the system is running.

The GNU userland is a key part of most systems based on the Linux kernel, with Android being the notable exception. The project's implementation of the C library works as a wrapper for the Linux kernel system calls that make up the kernel-userspace interface; its toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself); and its coreutils implement many basic Unix tools. The project also develops Bash, a popular CLI shell. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. More recently, the Linux community has been moving to Wayland as the new display server protocol in place of X11. Many other open-source software projects contribute to Linux systems.

Installed components of a Linux system include the following: 

- A bootloader, for example GNU GRUB, LILO, SYSLINUX, or Gummiboot. This is a program that loads the Linux kernel into the computer's main memory; it is executed when the computer is turned on, after firmware initialization is performed.

- An init program, such as the traditional sysvinit and the newer systemd, OpenRC and Upstart. This is the first process launched by the Linux kernel, and is at the root of the process tree: in other words, all processes are launched through init. It starts processes such as system services and login prompts (whether graphical or in terminal mode).

- Software libraries, which contain code that can be used by running processes. On Linux systems using ELF-format executable files, the dynamic linker that manages use of dynamic libraries is known as ld-linux.so. If the system is set up for the user to compile software themselves, header files will also be included to describe the interface of installed libraries. Besides the most commonly used software library on Linux systems, the GNU C Library (glibc), there are numerous other libraries, such as SDL and Mesa.

- The C standard library is the library needed to run C programs on a computer system, with the GNU C Library being the standard. For embedded systems, alternatives such as musl, EGLIBC (a glibc fork once used by Debian) and uClibc (which was designed for uClinux) have been developed, although the last two are no longer maintained. Android uses its own C library, Bionic.

- Basic Unix commands, with GNU coreutils being the standard implementation. Alternatives exist for embedded systems, such as the copyleft BusyBox, and the BSD-licensed Toybox.

- Widget toolkits are the libraries used to build graphical user interfaces (GUIs) for software applications. Numerous widget toolkits are available, including GTK+ and Clutter developed by the GNOME project, Qt developed by the Qt Project and led by Digia, and Enlightenment Foundation Libraries (EFL) developed primarily by the Enlightenment team.

- A package management system, such as dpkg and RPM. Alternatively, packages can be installed from binary tarballs or compiled from source tarballs.

- User interface programs such as command shells or windowing environments.
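
As a small illustration of how user-space programs reach these installed components, the sketch below loads the shared C library through the dynamic loader and calls into it. It assumes a typical glibc-based distribution where the library is exposed as libc.so.6; the exact library name is an assumption, not universal.

```python
# Load the GNU C Library via the dynamic loader and call one of its functions.
# Assumes a glibc-based Linux system where the C library is named libc.so.6.
import ctypes

libc = ctypes.CDLL("libc.so.6")   # resolved at runtime by the dynamic linker (ld-linux.so)
libc.printf(b"hello from glibc, pid %d\n", libc.getpid())
```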

Ref: https://en.wikipedia.org/wiki/Linux

3. JBoss EAP

The JBoss Enterprise Application Platform (or JBoss EAP) is a subscription-based/open-source Java EE-based application server runtime platform used for building, deploying, and hosting highly transactional Java applications and services. The JBoss Enterprise Application Platform is part of the JBoss Enterprise Middleware portfolio of software. Because it is Java-based, the JBoss application server operates across platforms; it is usable on any operating system that supports Java. The JBoss Enterprise Application Platform was developed by JBoss, now a division of Red Hat.

Product components and features
Key features:

- Eclipse-based Integrated Development Environment (IDE) is available using JBoss Developer Studio
- Supports Java EE and Web Services standards
- Enterprise Java Beans (EJB)
- Java persistence using Hibernate
- Object request broker (ORB) using JacORB for interoperability with CORBA objects
- JBoss Seam framework, including Java annotations to enhance POJOs, and including JBoss jBPM
- JavaServer Faces (JSF), including RichFaces
- Web application services, including Apache Tomcat for JavaServer Pages (JSP) and Java Servlets
- Caching, clustering, and high availability, including JBoss Cache, and including JNDI, RMI, and EJB types
- Security services, including Java Authentication and Authorization Service (JAAS) and pluggable authentication modules (PAM)
- Web Services and interoperability, including JAX-RPC, JAX-WS, many WS-* standards, and MTOM/XOP
- Integration and messaging services, including J2EE Connector Architecture (JCA), Java Database Connectivity (JDBC), and Java Message Service (JMS)
- Management and Service-Oriented Architecture (SOA) using Java Management Extensions (JMX)
- Additional administration and monitoring features are available using JBoss Operations Network

Key components:

- JBoss Application Server, the framework used to support the development and implementation of applications
- Hibernate, an object/relational mapping and persistence (ORM) framework
- JBoss Seam, a framework for building web applications
- JBoss Web Framework Kit, for building Java applications

Ref: https://en.wikipedia.org/wiki/JBoss_Enterprise_Application_Platform

4. Ansible

Ansible is an open-source software provisioning, configuration management, and application-deployment tool. It runs on many Unix-like systems, and can configure both Unix-like systems and Microsoft Windows. It includes its own declarative language to describe system configuration.

Ansible was written by Michael DeHaan and acquired by Red Hat in 2015. Ansible is agentless, temporarily connecting to managed nodes remotely via SSH or remote PowerShell to perform its tasks. A minimal playbook sketch follows the design-goal list below.

The design goals of Ansible include:

- Minimal in nature. Management systems should not impose additional dependencies on the environment.
- Consistent. With Ansible one should be able to create consistent environments.
- Secure. Ansible does not deploy agents to nodes. Only OpenSSH and Python are required on the managed nodes.
- Highly reliable. When carefully written, an Ansible playbook can be idempotent, to prevent unexpected side-effects on the managed systems. It is entirely possible to have a poorly written playbook that is not idempotent.
- Minimal learning required. Playbooks use an easy and descriptive language based on YAML and Jinja templates.
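
To give a rough idea of what such a playbook looks like, the sketch below builds a small, hypothetical playbook as a Python data structure and writes it out as YAML (using PyYAML). The host group, package and service names are placeholders; the generated file would then be applied with the ansible-playbook command.

```python
# Build a minimal, idempotent-style playbook as a data structure and emit YAML.
# Host group, package, and service names are hypothetical placeholders.
import yaml  # PyYAML

playbook = [
    {
        "name": "Ensure web servers are configured",
        "hosts": "webservers",
        "become": True,
        "tasks": [
            {
                "name": "Ensure nginx is installed",
                "package": {"name": "nginx", "state": "present"},
            },
            {
                "name": "Ensure nginx is running and enabled",
                "service": {"name": "nginx", "state": "started", "enabled": True},
            },
        ],
    }
]

with open("site.yml", "w") as f:
    yaml.safe_dump(playbook, f, sort_keys=False)

# The generated site.yml would be applied with:  ansible-playbook -i inventory site.yml
```

Because both tasks describe a desired state ("present", "started") rather than a sequence of commands, re-running the playbook leaves an already-configured node unchanged, which is the idempotency property mentioned above.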

Ref: https://en.wikipedia.org/wiki/Ansible_(software)

5. CentOS

CentOS (from Community Enterprise Operating System) is a Linux distribution that provides a free, community-supported computing platform functionally compatible with its upstream source, Red Hat Enterprise Linux (RHEL). In January 2014, CentOS announced that it was officially joining Red Hat while remaining independent from RHEL, under a new CentOS governing board.

The first CentOS release in May 2004, numbered as CentOS version 2, was forked from RHEL version 2.1AS. CentOS version 7.0 officially supports only the x86-64 architecture, while versions older than 7.0-1406 also support IA-32 with Physical Address Extension (PAE). As of December 2015, AltArch releases of CentOS 7 are available for the IA-32 architecture, Power ISA, and for the ARMv7hl and AArch64 variants of the ARM architecture. Building of CentOS 8 started in May 2019, and CentOS 8 was released on 24 September 2019.

Design 
RHEL is available only through a paid subscription service (or for development use in a non-production environment), which provides access to software updates and varying levels of technical support. The product is largely composed of software packages distributed under free software licenses, and the source code for these packages is made public by Red Hat.

CentOS developers use Red Hat's source code to create a final product very similar to RHEL. Red Hat's branding and logos are changed because Red Hat does not allow them to be redistributed. CentOS is available free of charge. Technical support is primarily provided by the community via official mailing lists, web forums, and chat rooms.

The project is affiliated with Red Hat but aspires to be more public, open, and inclusive. While Red Hat employs most of the CentOS head developers, the CentOS project itself relies on donations from users and organizational sponsors.

Ref: https://en.wikipedia.org/wiki/CentOS

6. Cloud native computing

Cloud native computing is an approach to software development that makes full use of cloud computing by deploying applications as microservices on an open source software stack. Typically, cloud native applications are built as a set of microservices that run in Docker containers, orchestrated in Kubernetes, and managed and deployed using DevOps and GitOps workflows. The advantage of using Docker containers is the ability to package all the software needed to run the application into a single executable package. The container runs in a virtualized environment, which isolates the contained application from its environment.

Ref: https://en.wikipedia.org/wiki/Cloud_native_computing

7. Hypervisor

A hypervisor (or virtual machine monitor, VMM) is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super-. The term dates to circa 1970; in the earlier CP/CMS (1967) system the term Control Program was used instead.

Classification 



Type-1 and type-2 hypervisors
In their 1974 article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor:

Type-1, native or bare-metal hypervisors
These hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems. For this reason, they are sometimes called bare metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Nutanix AHV, AntsleOs, Xen, XCP-ng, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, Xbox One system software, and VMware ESXi (formerly ESX).

Type-2 or hosted hypervisors
These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host. Type-2 hypervisors abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.

The distinction between these two types is not always clear. For instance, Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system to a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.
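
As a small, hedged illustration of working with such a KVM/QEMU setup programmatically, the sketch below uses the libvirt Python bindings to list the virtual machines known to a local hypervisor. It assumes the libvirt-python package is installed and that a local libvirtd is reachable via the qemu:///system URI; neither assumption comes from the text above.

```python
# List the virtual machines (domains) known to a local QEMU/KVM hypervisor via libvirt.
# Assumes the libvirt-python bindings and a running libvirtd at qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")
try:
    for domain in conn.listAllDomains():
        state = "running" if domain.isActive() else "shut off"
        print(f"{domain.name()}: {state}")
finally:
    conn.close()
```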

Ref: https://en.wikipedia.org/wiki/Hypervisor

8. x86

x86 is a family of instruction set architectures initially developed by Intel based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a fully 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors.

Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA Technologies and many other companies; there are also open implementations, such as the Zet SoC platform (currently inactive). Nevertheless, of those, only Intel, AMD, VIA Technologies and DM&P Electronics hold x86 architectural licenses, and from these, only the first two are actively producing modern 64-bit designs.

The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware; embedded systems, as well as general-purpose computers, used x86 chips before the PC-compatible market started, some of them before the IBM PC (1981) itself.

As of 2018, the majority of personal computers and laptops sold are based on the x86 architecture, while other categories—especially high-volume mobile categories such as smartphones or tablets—are dominated by ARM; at the high end, x86 continues to dominate compute-intensive workstation and cloud computing segments.

ARM, previously Advanced RISC Machine, originally Acorn RISC Machine, is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments.


Ref: https://en.wikipedia.org/wiki/X86

9. x86-64

x86-64 (also known as x64, x86_64, AMD64 and Intel 64) is the 64-bit version of the x86 instruction set. It introduces two new modes of operation, 64-bit mode and compatibility mode, along with a new 4-level paging mode. With 64-bit mode and the new paging mode, it supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors, allowing programs to store larger amounts of data in memory. x86-64 also expands the general-purpose registers to 64 bits, extends their number from 8 (some of which had limited or fixed functionality, e.g. for stack management) to 16 (fully general), and provides numerous other enhancements.

Floating-point operations are supported via mandatory SSE2-like instructions, and x87/MMX style registers are generally not used (but still available even in 64-bit mode); instead, a set of 16 vector registers, 128 bits each, is used. (Each can store one or two double-precision numbers or one to four single-precision numbers, or various integer formats.) In 64-bit mode, instructions are modified to support 64-bit operands and 64-bit addressing mode.

The compatibility mode allows 16- and 32-bit user applications to run unmodified, coexisting with 64-bit applications, if the 64-bit operating system supports them. As the full x86 16-bit and 32-bit instruction sets remain implemented in hardware without any intervening emulation, these older executables can run with little or no performance penalty, while newer or modified applications can take advantage of new features of the processor design to achieve performance improvements. Also, a processor supporting x86-64 still powers on in real mode for full backward compatibility, as x86 processors have done since the 80286.

The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel and VIA. The AMD K8 microarchitecture, in the Opteron and Athlon 64 processors, was the first to implement it. This was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was software-compatible with AMD's specification. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano.

The x86-64 architecture is distinct from the Intel Itanium architecture (formerly IA-64), which is not compatible on the native instruction set level with the x86 architecture. Operating systems and applications written for one cannot be run on the other.

Ref: https://en.wikipedia.org/wiki/X86-64

10. OpenShift

OpenShift is a family of containerization software developed by Red Hat. Its flagship product is the OpenShift Container Platform—an on-premises platform as a service built around Docker containers orchestrated and managed by Kubernetes on a foundation of Red Hat Enterprise Linux. The family's other products provide this platform through different environments: OKD serves as the community-driven upstream (akin to Fedora), OpenShift Online is the platform offered as software as a service, and OpenShift Dedicated is the platform offered as a managed service.

History 
OpenShift originally came from Red Hat's acquisition of Makara, a company with a proprietary PaaS solution based on Linux containers. Even though OpenShift was announced in May 2011, it was proprietary technology and did not become open source until May 2012. Up until v3, the container and container-orchestration layers used custom-developed technologies. This changed in v3 with the adoption of Docker as the container technology and Kubernetes as the container orchestration technology. The v4 product has many other architectural changes, a prominent one being the shift to CRI-O as the container runtime (and Podman for interacting with pods and containers) and Buildah as the container build tool, thus breaking the exclusive dependency on Docker.

Architecture
The main differentiator between OpenShift and vanilla Kubernetes is the notion of build-related artifacts being first-class Kubernetes resources upon which standard Kubernetes operations can apply. The OpenShift client program is "oc", which offers a superset of the capabilities offered by the "kubectl" client program of Kubernetes. Using this client, one can directly interact with the build-related resources using sub-commands (such as "new-build" or "start-build"). In addition, an OpenShift-native pod build technology called Source-to-Image (S2I) is available out of the box. For the OpenShift platform, this provides capabilities equivalent to what Jenkins can do.

Some other differences when OpenShift is compared to Kubernetes:

- The v4 product line uses the CRI-O runtime, which means that Docker daemons are not present on the master or worker nodes. This improves the security posture of the cluster.
- The out-of-the-box install of OpenShift comes included with an image repository.
- ImageStreams (a sequence of pointers to images which can be associated with deployments) and Templates (a packaging mechanism for application components) are unique to OpenShift and simplify application deployment and management.
- The "new-app" command which can be used to initiate an application deployment automatically applies the app label (with the value of the label taken from the --name argument) to all resources created as a result of the deployment. This can simplify the management of application resources.
- OpenShift introduced the concept of routes - points of traffic ingress into the Kubernetes cluster. The Kubernetes ingress concept was modeled after this.

OpenShift also provides added value by bundling various software solutions: application runtimes as well as infrastructure components from the Kubernetes ecosystem. For example, for observability needs, Prometheus, Hawkular, and Istio (and their dependencies) are included out of the box. The console UI includes an "Operator Hub" that serves as a marketplace from which publicly provided operator-based solutions can be downloaded and deployed.
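
As a hedged sketch of how OpenShift-specific resources such as routes can be reached with standard Kubernetes tooling, the snippet below uses the official Kubernetes Python client. It assumes a valid kubeconfig pointing at an OpenShift cluster that exposes the route.openshift.io/v1 API; this is an illustration, not the only or official way to script against OpenShift (the oc client covers the same ground).

```python
# List OpenShift Route objects through the Kubernetes custom-objects API.
# Assumes ~/.kube/config points at an OpenShift cluster exposing route.openshift.io/v1.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

routes = api.list_cluster_custom_object(
    group="route.openshift.io", version="v1", plural="routes"
)
for route in routes.get("items", []):
    meta = route["metadata"]
    print(meta["namespace"], meta["name"], route["spec"].get("host"))
```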

Ref: https://en.wikipedia.org/wiki/OpenShift

11. Docker

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and are thus more lightweight than virtual machines.

The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.

Components 
The Docker software-as-a-service offering consists of three components:

- Software: The Docker daemon, called dockerd, is a persistent process that manages Docker containers and handles container objects. The daemon listens for requests sent via the Docker Engine API. The Docker client program, called docker, provides a command-line interface that allows users to interact with Docker daemons.

- Objects: Docker objects are various entities used to assemble an application in Docker. The main classes of Docker objects are images, containers, and services.

  - A Docker container is a standardized, encapsulated environment that runs applications. A container is managed using the Docker API or CLI.
  - A Docker image is a read-only template used to build containers. Images are used to store and ship applications.
  - A Docker service allows containers to be scaled across multiple Docker daemons. The result is known as a swarm, a set of cooperating daemons that communicate through the Docker API.

- Registries: A Docker registry is a repository for Docker images. Docker clients connect to registries to download ("pull") images for use or upload ("push") images that they have built. Registries can be public or private. Two main public registries are Docker Hub and Docker Cloud. Docker Hub is the default registry where Docker looks for images. Docker registries also allow the creation of notifications based on events.
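
The daemon/client split described above can also be driven programmatically. The sketch below uses the Docker SDK for Python (the docker package) to pull an image from the default registry and run a short-lived container; it assumes a local Docker daemon is running, and the image and command are arbitrary examples.

```python
# Pull an image from the default registry (Docker Hub) and run a throwaway container.
# Assumes a running local Docker daemon and the docker Python package.
import docker

client = docker.from_env()                      # talks to dockerd via the Engine API
output = client.containers.run(
    "alpine:latest",                            # image (pulled if not already present)
    ["echo", "hello from a container"],
    remove=True,                                # remove the container when it exits
)
print(output.decode().strip())

for image in client.images.list():              # images cached by the local daemon
    print(image.tags)
```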

Tools 
- Docker Compose is a tool for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up process of all the containers with a single command. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, running containers that were stopped, and more. Commands related to image manipulation, or user-interactive options, are not relevant in Docker Compose because they address one container. The docker-compose.yml file is used to define an application's services and includes various configuration options. For example, the build option defines configuration options such as the Dockerfile path, the command option allows one to override default Docker commands, and more. The first public beta version of Docker Compose (version 0.0.1) was released on December 21, 2013. The first production-ready version (1.0) was made available on October 16, 2014.

- Docker Swarm provides native clustering functionality for Docker containers, which turns a group of Docker engines into a single virtual Docker engine. In Docker 1.12 and higher, Swarm mode is integrated with Docker Engine. The docker swarm CLI utility allows users to run Swarm containers, create discovery tokens, list nodes in the cluster, and more. The docker node CLI utility allows users to run various commands to manage nodes in a swarm, for example, listing the nodes in a swarm, updating nodes, and removing nodes from the swarm. Docker manages swarms using the Raft Consensus Algorithm. According to Raft, for an update to be performed, the majority of Swarm nodes need to agree on the update.

Ref: https://en.wikipedia.org/wiki/Docker_(software)

12. CRI-O

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as the container runtimes, but in principle any OCI-conformant runtime can be plugged in.

CRI-O supports OCI container images and can pull from any container registry. It is a lightweight alternative to using Docker, Moby or rkt as the runtime for Kubernetes.

Architecture 



The architectural components are as follows:

- Kubernetes contacts the kubelet to launch a pod. Pods are a Kubernetes concept consisting of one or more containers sharing the same IPC, NET and PID namespaces and living in the same cgroup.
- The kubelet forwards the request to the CRI-O daemon via the Kubernetes CRI (Container Runtime Interface) to launch the new pod.
- CRI-O uses the containers/image library to pull the image from a container registry.
- The downloaded image is unpacked into the container's root filesystem, stored in copy-on-write (COW) file systems, using the containers/storage library.
- After the rootfs has been created for the container, CRI-O generates an OCI runtime specification JSON file describing how to run the container using the OCI Generate tools.
- CRI-O then launches an OCI-compatible runtime using the specification to run the container processes. The default OCI runtime is runc.
- Each container is monitored by a separate conmon process. The conmon process holds the pty of the PID1 of the container process. It handles logging for the container and records the exit code for the container process.
- Networking for the pod is set up through use of CNI, so any CNI plugin can be used with CRI-O.

Components 
CRI-O is made up of several components that are found in different GitHub repositories.

- OCI compatible runtime
- containers/storage
- containers/image
- networking (CNI)
- container monitoring (conmon)
- security is provided by several core Linux capabilities

Ref: https://cri-o.io/

13. Elasticsearch

Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents. Elasticsearch is developed in Java. Following an open-core business model, parts of the software are licensed under various open-source licenses (mostly the Apache License), while other parts fall under the proprietary (source-available) Elastic License. Official clients are available in Java, .NET (C#), PHP, Python, Apache Groovy, Ruby and many other languages. According to the DB-Engines ranking, Elasticsearch is the most popular enterprise search engine followed by Apache Solr, also based on Lucene.

Features 
Elasticsearch can be used to search all kinds of documents. It provides scalable search, has near real-time search, and supports multitenancy. "Elasticsearch is distributed, which means that indices can be divided into shards and each shard can have zero or more replicas. Each node hosts one or more shards, and acts as a coordinator to delegate operations to the correct shard(s). Rebalancing and routing are done automatically". Related data is often stored in the same index, which consists of one or more primary shards, and zero or more replica shards. Once an index has been created, the number of primary shards cannot be changed.

Elasticsearch is developed alongside a data collection and log-parsing engine called Logstash, an analytics and visualisation platform called Kibana, and Beats, a collection of lightweight data shippers. The four products are designed for use as an integrated solution, referred to as the "Elastic Stack" (formerly the "ELK stack").

Elasticsearch uses Lucene and tries to make all its features available through the JSON and Java API. It supports faceting and percolating, which can be useful for notifying if new documents match registered queries. Another feature is called "gateway" and handles the long-term persistence of the index; for example, an index can be recovered from the gateway in the event of a server crash. Elasticsearch supports real-time GET requests, which makes it suitable as a NoSQL datastore, but it lacks distributed transactions.
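
A small sketch of this schema-free JSON model using the official Python client is shown below. It assumes an Elasticsearch node reachable on the default http://localhost:9200; the index name, document fields and query are made up for illustration.

```python
# Index a schema-free JSON document and run a full-text query against it.
# Assumes a local Elasticsearch node on the default http://localhost:9200.
from elasticsearch import Elasticsearch

es = Elasticsearch()                            # defaults to localhost:9200

es.index(index="articles", id=1, body={
    "title": "Week 4 technology notes",
    "body": "Elasticsearch provides distributed full-text search.",
})
es.indices.refresh(index="articles")            # make the document searchable immediately

result = es.search(index="articles", body={
    "query": {"match": {"body": "full-text search"}}
})
for hit in result["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```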

On 20 May 2019, Elastic made the core security features of the Elastic Stack available free of charge, including TLS for encrypted communications, file and native realm for creating and managing users, and role-based access control for controlling user access to cluster APIs and indexes. The corresponding source code is available under the “Elastic License”, a source-available license.

Ref: https://en.wikipedia.org/wiki/Elasticsearch

14. Fluent Bit

Fluent Bit is an open source and multi-platform Log Processor and Forwarder which allows you to collect data/logs from different sources, unify and send them to multiple destinations. It's fully compatible with Docker and Kubernetes environments.

Fluent Bit is written in C, has a pluggable architecture supporting around 30 extensions, is fast and lightweight, and provides the required security for network operations through TLS.

Ref: https://fluentbit.io/

15. Kibana

Kibana is an open source data visualization dashboard for Elasticsearch. It provides visualization capabilities on top of the content indexed on an Elasticsearch cluster. Users can create bar, line and scatter plots, or pie charts and maps on top of large volumes of data.

Kibana also provides a presentation tool, referred to as Canvas, that allows users to create slide decks that pull live data directly from Elasticsearch. 

The combination of Elasticsearch, Logstash, and Kibana, referred to as the "Elastic Stack" (formerly the "ELK stack"), is available as a product or service. Logstash provides an input stream to Elasticsearch for storage and search, and Kibana accesses the data for visualizations such as dashboards. Elastic also provides "Beats" packages which can be configured to provide pre-made Kibana visualizations and dashboards about various database and application technologies.

In December 2019, Elastic introduced the Kibana Lens product.

Ref: https://en.wikipedia.org/wiki/Kibana

16. Logstash

Logstash is a tool for managing events and logs. When used generically, the term encompasses a larger system of log collection, processing, storage and searching activities.

Logstash is a tool to collect, process, and forward events and log messages. Collection is accomplished via configurable input plugins including raw socket/packet communication, file tailing, and several message bus clients. Once an input plugin has collected data, it can be processed by any number of filters which modify and annotate the event data. Finally, Logstash routes events to output plugins which can forward the events to a variety of external programs including Elasticsearch, local files and several message bus implementations.

Centralize, transform & stash your data 
Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite "stash."
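
As a hedged example of the input side of such a pipeline, the snippet below pushes one JSON event to Logstash from Python over a plain TCP socket. It assumes Logstash has been configured with a tcp input (here on port 5000) using a JSON codec; the host, port and event fields are placeholders.

```python
# Send a single JSON log event to a Logstash TCP input (newline-delimited JSON).
# Assumes a Logstash pipeline configured with a tcp input on port 5000 and a json codec.
import json
import socket

event = {
    "message": "user login succeeded",
    "service": "auth",        # arbitrary illustrative fields
    "level": "INFO",
}

with socket.create_connection(("localhost", 5000), timeout=5) as sock:
    sock.sendall((json.dumps(event) + "\n").encode("utf-8"))
```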

Ref 1: https://www.elastic.co/products/logstash
Ref 2: https://wikitech.wikimedia.org/wiki/Logstash

17. ServiceNow

ServiceNow, Inc. (formerly Service-now.com; renamed in 2011) is an American cloud computing company with its headquarters in Santa Clara, California. It was founded in 2004 by Fred Luddy. ServiceNow is listed on the New York Stock Exchange and is a constituent of the Russell 1000 Index and S&P 500 Index.

Business model
ServiceNow is a software-as-a-service provider, providing technical management support, such as IT service management, to the IT operations of large corporations, including providing help desk functionality. The company's core business revolves around management of "incident, problem, and change" IT operational events. Their fee model was based on a cost per user (seat) per month, with that cost ranging down from US$100.

Ref: https://en.wikipedia.org/wiki/ServiceNow

18. Service Mesh

In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy.

Having such a dedicated communication layer can provide a number of benefits, such as providing observability into communications, providing secure connections, or automating retries and backoff for failed requests.

Implementations
- Consul
- Istio
- Kuma
- Linkerd
- Maesh

Ref: https://en.wikipedia.org/wiki/Service_mesh

19. Istio

Istio is an open-source platform for managing and securing microservices.

Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio supports managing traffic flows between microservices, enforcing access policies, and aggregating telemetry data, all without requiring changes to the microservice code. Istio gives you:

- Automatic load balancing for HTTP, gRPC, and TCP traffic.
- Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
- Secure service-to-service authentication with strong identity assertions between services in a cluster.
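
To make the "rich routing rules" above more concrete, here is a hedged sketch that builds a minimal Istio VirtualService (a 90/10 weighted traffic split between two subsets of a hypothetical reviews service) as a Python dictionary and applies it through the Kubernetes Python client's custom-objects API. The service name, namespace, subsets and API version are assumptions for illustration; in practice the same object is usually applied with kubectl or istioctl from a YAML file.

```python
# Create an Istio VirtualService that splits traffic 90/10 between two subsets.
# Assumes a cluster with Istio installed (networking.istio.io/v1alpha3 available),
# a valid kubeconfig, and hypothetical service/subset names.
from kubernetes import client, config

virtual_service = {
    "apiVersion": "networking.istio.io/v1alpha3",
    "kind": "VirtualService",
    "metadata": {"name": "reviews", "namespace": "default"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

config.load_kube_config()
client.CustomObjectsApi().create_namespaced_custom_object(
    group="networking.istio.io", version="v1alpha3",
    namespace="default", plural="virtualservices", body=virtual_service,
)
```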

Ref: https://stackoverflow.com/tags/istio/info

20. Conduit

Conduit is a software tool that provides service mesh communication. It was built by Buoyant (the company behind Linkerd) and was later merged into the Linkerd project as the basis of Linkerd 2.x.

Ref: https://stackoverflow.com/questions/51191291/differences-between-the-service-mesh-projects-istio-and-conduit

21. Jaeger Tracing

Jaeger, inspired by Dapper and OpenZipkin, is a distributed tracing system released as open source by Uber Technologies. It is used for monitoring and troubleshooting microservices-based distributed systems, including:

- Distributed context propagation
- Distributed transaction monitoring
- Root cause analysis
- Service dependency analysis
- Performance / latency optimization

Uber published a blog post, Evolving Distributed Tracing at Uber, where they explain the history and reasons for the architectural choices made in Jaeger.

Features 
- OpenTracing-compatible data model and instrumentation libraries in Go, Java, Node, Python and C++
- Uses consistent upfront sampling with individual per service/endpoint probabilities
- Multiple storage backends: Cassandra, Elasticsearch, memory.
- Adaptive sampling (coming soon)
- Post-collection data processing pipeline (coming soon)
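
A short sketch of the Python instrumentation mentioned above follows, using the jaeger-client package. It assumes a local Jaeger agent running with default settings; the service name, operation name and tags are illustrative placeholders.

```python
# Create a tracer that reports to a local Jaeger agent and emit one traced operation.
# Assumes the jaeger-client package; service and operation names are illustrative.
import time
from jaeger_client import Config

config = Config(
    config={"sampler": {"type": "const", "param": 1}, "logging": True},
    service_name="example-service",
    validate=True,
)
tracer = config.initialize_tracer()

with tracer.start_span("load-user-profile") as span:
    span.set_tag("user.id", "42")
    time.sleep(0.1)                      # stand-in for real work
    span.log_kv({"event": "profile loaded"})

time.sleep(2)                            # give the reporter time to flush spans
tracer.close()
```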

Software tracing 
Software tracing provides developers with information useful for debugging. This information is used both during development cycles and after the release of the software. Unlike event logging, software tracing usually does not have the concept of a "class" of event or an "event code". Other reasons why event-logging solutions based on event codes are inappropriate for software tracing include:

- Because software tracing is low-level, there are often many more types of messages that would need to be defined, many of which would only be used at one place in the code. The event-code paradigm introduces significant development overhead for these "one-shot" messages.
- The types of messages that are logged are often less stable through the development cycle than for event logging.
- Because the tracing output is intended to be consumed by the developer, the messages don't need to be localized. Keeping tracing messages separate from other resources that need to be localized (such as event messages) is therefore important.
- There are messages that should never be seen.
- Tracing messages should be kept in the code, because they can add to the readability of the code. This is not always possible or feasible with event-logging solutions.
- Another important consideration for software tracing is performance. Because software tracing is low-level, the possible volume of trace messages is much higher. To address performance concerns, it often must be possible to turn off software tracing, either at compile-time or run-time.

Other special concerns:

- In proprietary software, tracing data may include sensitive information about the product's source code.
- If tracing is enabled or disabled at run-time, many methods of tracing require the inclusion of a significant amount of additional data in the binary, which can indirectly hurt performance even when tracing is disabled.
- If tracing is enabled or disabled at compile-time, getting trace data for a problem on a customer machine depends on the customer being willing and able to install a special, tracing-enabled version of the software and then duplicating the problem.
- Many uses of tracing have very stringent robustness requirements. This is both in the robustness of the trace output but also in that the use-case being traced should not be disrupted.
- In operating systems, tracing is sometimes useful in situations (such as booting) where some of the technologies used to provide event logging may not be available.
- In embedded software, tracing requires special techniques.

Event logging vs. software tracing:

- Audience: event logging is consumed primarily by system administrators; software tracing is consumed primarily by developers.
- Level of detail: event logging records "high level" information (e.g. failed installation of a program); tracing records "low level" information (e.g. a thrown exception).
- Noise: event logs must not be too "noisy" (containing many duplicate events or information that is not helpful to the intended audience); tracing output can be noisy.
- Output format: a standards-based output format is often desirable, sometimes even required, for event logs; tracing has few limitations on output format.
- Localization: event log messages are often localized; localization is rarely a concern for tracing.
- Agility: addition of new types of events, as well as new event messages, need not be agile for event logging; addition of new tracing messages must be agile.

Ref 1: https://www.jaegertracing.io/docs/1.11/
Ref 2: https://en.wikipedia.org/wiki/Tracing_(software)

22. Kiali

Kiali is an observability console for Istio with service mesh configuration capabilities. It helps you to understand the structure of your service mesh by inferring the topology, and also provides the health of your mesh. Kiali provides detailed metrics, and a basic Grafana integration is available for advanced queries. Distributed tracing is provided by integrating Jaeger.

Ref: https://kiali.io/

23. Red Hat 3scale

3scale is an internet technology company, acquired by Red Hat in 2016, that develops API management software.

API management is the process of creating and publishing web application programming interfaces (APIs), enforcing their usage policies, controlling access, nurturing the subscriber community, collecting and analyzing usage statistics, and reporting on performance. API management components provide mechanisms and tools to support the developer and subscriber communities.

Components 
While solutions vary, components that provide the following functionality are typically found in API management products:

- Gateway: a server that acts as an API front-end, receives API requests, enforces throttling and security policies, passes requests to the back-end service and then passes the response back to the requester. A gateway often includes a transformation engine to orchestrate and modify the requests and responses on the fly. A gateway can also provide functionality such as collecting analytics data and providing caching. The gateway can provide functionality to support authentication, authorization, security, audit and regulatory compliance.

- Publishing tools: a collection of tools that API providers use to define APIs (for instance using the OpenAPI or RAML specifications), generate API documentation, manage access and usage policies for APIs, test and debug the execution of APIs (including security testing and automated generation of tests and test suites), deploy APIs into production, staging, and quality assurance environments, and coordinate the overall API lifecycle.

- Developer portal/API store: a community site, typically branded by an API provider, that can encapsulate in a single convenient place information and functionality for API users, including documentation, tutorials, sample code, software development kits, an interactive API console and sandbox to trial APIs, the ability to subscribe to the APIs and manage subscription keys such as OAuth2 Client ID and Client Secret, and the ability to obtain support from the API provider and the user community.

- Reporting and analytics: functionality to monitor API usage and load (overall hits, completed transactions, number of data objects returned, amount of compute time and other internal resources consumed, volume of data transferred). This can include real-time monitoring of the API with alerts being raised directly or via a higher-level network management system, for instance, if the load on an API has become too great, as well as functionality to analyze historical data, such as transaction logs, to detect usage trends. Functionality can also be provided to create synthetic transactions that can be used to test the performance and behavior of API endpoints. The information gathered by the reporting and analytics functionality can be used by the API provider to optimize the API offering within an organization's overall continuous improvement process and for defining software Service-Level Agreements for APIs.

- Monetization: functionality to support charging for access to commercial APIs. This functionality can include support for setting up pricing rules, based on usage, load and functionality, issuing invoices and collecting payments including multiple types of credit card payments.
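
To make the gateway component above more concrete, here is a deliberately minimal sketch of a pass-through proxy that enforces a naive per-client request quota before forwarding to a hypothetical backend. It uses Flask and requests (both assumed to be installed); real API gateways such as 3scale's do far more (authentication, analytics, transformation, caching).

```python
# A toy API gateway: enforce a naive per-client quota, then proxy to the backend.
# Flask and requests are assumed; the backend URL and quota are illustrative only.
from collections import defaultdict
from flask import Flask, request, Response
import requests

BACKEND = "http://localhost:8080"     # hypothetical upstream service
LIMIT = 100                           # max requests per client for this process lifetime

app = Flask(__name__)
counters = defaultdict(int)

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def proxy(path):
    client_id = request.headers.get("X-Api-Key", request.remote_addr)
    counters[client_id] += 1
    if counters[client_id] > LIMIT:                      # throttling policy
        return Response("rate limit exceeded", status=429)

    upstream = requests.request(                         # pass the request through
        request.method, f"{BACKEND}/{path}",
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        data=request.get_data(), params=request.args, timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=5000)
```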

Products
The wide adoption of APIs led to the emergence of off-the-shelf API management products, open-source projects, and SaaS offerings. Both Gartner and Forrester Research list a number of API management vendors in their reports. Companies listed by both as being active in the API management space, along with other organizations working in this area, include the following:

Open source
- WSO2
- 3scale

Proprietary
- Apigee (now owned by Google)
- Asseco
- Axway (acquired Vordel)
- CA API Management (formerly Layer 7, acquired by CA Technologies)
- DreamFactory
- IBM API Connect
- Jitterbit
- Kong Inc.
- Mashery (now owned by TIBCO Software)
- Microsoft (Azure API Management)
- MuleSoft
- New Relic
- NGINX (NGINX Controller)
- Oracle API Platform Cloud Service
- Rogue Wave Software (acquired Akana)
- Runscope
- Sensedia (part of CI&T)
- SmartBear
- Software AG
- ZUP API Manager

Ref 1: https://en.wikipedia.org/wiki/3scale
Ref 2: https://en.wikipedia.org/wiki/API_management

24. Swagger

Swagger is an open-source software framework backed by a large ecosystem of tools that helps developers design, build, document, and consume RESTful web services. While most users identify Swagger by the Swagger UI tool, the Swagger toolset includes support for automated documentation, code generation, and test-case generation.

Sponsored by SmartBear Software, Swagger has been a strong supporter of open-source software, and has widespread adoption.

Usage 
Swagger's open-source tooling usage can be broken up into different use cases: development, interaction with APIs, and documentation.

# Developing APIs
When creating APIs, Swagger tooling may be used to automatically generate an OpenAPI document based on the code itself. This is informally called code-first or bottom-up API development. While the software code itself can accurately represent the OpenAPI document, many API developers consider this to be an outdated technique, as it embeds the API description in the source code of a project and is typically more difficult for non-developers to contribute to.

Alternatively, using Swagger Codegen, developers can decouple the source code from the OpenAPI document and generate client and server code directly from the design. Though more involved, many industry experts regard this as a more modern API workflow, as it allows more freedom when designing the API by deferring the coding aspect.

# Interacting with APIs
Using the Swagger Codegen project, end users generate client SDKs directly from the OpenAPI document, reducing the need for human-generated client code. As of August 2017, the Swagger Codegen project supported over 50 different languages and formats for client SDK generation.

# Documenting APIs
When described by an OpenAPI document, Swagger open-source tooling may be used to interact directly with the API through the Swagger UI. This project allows connections directly to live APIs through an interactive, HTML-based user interface. Requests can be made directly from the UI and the options explored by the user of the interface.
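
For orientation, here is a minimal, hypothetical OpenAPI 3.0 description of a single endpoint, built as a Python dictionary and emitted as YAML so it could be loaded into Swagger UI or fed to Swagger Codegen. The paths, titles and schema are placeholders, not taken from any real API.

```python
# Emit a minimal OpenAPI 3.0 document describing one GET endpoint.
# Paths, titles and schemas are illustrative placeholders.
import yaml  # PyYAML

openapi_doc = {
    "openapi": "3.0.0",
    "info": {"title": "Pet service", "version": "1.0.0"},
    "paths": {
        "/pets/{petId}": {
            "get": {
                "summary": "Fetch a pet by id",
                "parameters": [{
                    "name": "petId", "in": "path", "required": True,
                    "schema": {"type": "integer"},
                }],
                "responses": {
                    "200": {
                        "description": "A single pet",
                        "content": {"application/json": {
                            "schema": {"type": "object",
                                       "properties": {"name": {"type": "string"}}},
                        }},
                    }
                },
            }
        }
    },
}

print(yaml.safe_dump(openapi_doc, sort_keys=False))
```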

Ref: https://en.wikipedia.org/wiki/Swagger_(software)

25. Keycloak

Keycloak is an open source software product that allows single sign-on with identity management and access management, aimed at modern applications and services. As of March 2018 this JBoss community project is under the stewardship of Red Hat, which uses it as the upstream project for its RH-SSO product. From a conceptual perspective the tool's intent is to make it easy to secure applications and services with little to no coding.

RH-SSO: Red Hat Single Sign-On

Features 
Keycloak's many features include:

- User Registration
- Social login
- Single Sign-On/Sign-Off across all applications belonging to the same Realm
- 2-factor authentication
- LDAP integration
- Kerberos broker
- Multitenancy with per-realm customizable skin

Components 
There are two main components of Keycloak:

- Keycloak server
- Keycloak application adapter
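
As a hedged sketch of how an application talks to the server component, the snippet below requests an OAuth2/OpenID Connect token from a Keycloak realm's token endpoint using the resource-owner password grant. The server URL, realm, client and credentials are placeholders, the /auth context path assumes a classic (pre-Quarkus) Keycloak distribution, and the flow assumes the client has direct access grants enabled.

```python
# Obtain an access token from a Keycloak realm's OpenID Connect token endpoint.
# Server URL, realm, client and credentials are hypothetical placeholders.
import requests

KEYCLOAK = "http://localhost:8080/auth"   # assumes the classic /auth context path
REALM = "demo"

token_url = f"{KEYCLOAK}/realms/{REALM}/protocol/openid-connect/token"
response = requests.post(token_url, data={
    "grant_type": "password",             # resource-owner password credentials grant
    "client_id": "my-app",
    "username": "alice",
    "password": "secret",
})
response.raise_for_status()
access_token = response.json()["access_token"]

# The token is then presented to protected services as a bearer credential.
api = requests.get("http://localhost:5000/api/resource",
                   headers={"Authorization": f"Bearer {access_token}"})
print(api.status_code)
```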

Ref: https://en.wikipedia.org/wiki/Keycloak

26. OpenStack

OpenStack is a free and open-source software platform for cloud computing, mostly deployed as infrastructure-as-a-service (IaaS), whereby virtual servers and other resources are made available to customers. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users either manage it through a web-based dashboard, through command-line tools, or through RESTful web services.

OpenStack began in 2010 as a joint project of Rackspace Hosting and NASA. As of 2012, it is managed by the OpenStack Foundation, a non-profit corporate entity established in September 2012 to promote OpenStack software and its community. More than 500 companies have joined the project.

Components 

OpenStack has a modular architecture with various code names for its components.

Compute (Nova)
OpenStack Compute (Nova) is a cloud computing fabric controller, which is the main part of an IaaS system. It is designed to manage and automate pools of computer resources and can work with widely available virtualization technologies, as well as bare metal and high-performance computing (HPC) configurations. KVM, VMware, and Xen are available choices for hypervisor technology (virtual machine monitor), together with Hyper-V and Linux container technology such as LXC.

It is written in Python and uses many external libraries such as Eventlet (for concurrent programming), Kombu (for AMQP communication), and SQLAlchemy (for database access). Compute's architecture is designed to scale horizontally on standard hardware with no proprietary hardware or software requirements and provide the ability to integrate with legacy systems and third-party technologies.

Due to its widespread integration into enterprise-level infrastructures, monitoring OpenStack performance in general, and Nova performance in particular, at scale has become an increasingly important issue. Monitoring end-to-end performance requires tracking metrics from Nova, Keystone, Neutron, Cinder, Swift and other services, in addition to monitoring RabbitMQ, which is used by OpenStack services for message passing. All these services generate their own log files which, especially in enterprise-level infrastructures, should also be monitored.
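
A brief sketch of driving Nova programmatically with the openstacksdk library is shown below. It assumes a clouds.yaml profile named "mycloud" with valid Keystone credentials; the image, flavor and network names are made-up examples, and a real deployment would use whatever exists in that cloud.

```python
# Connect to an OpenStack cloud and list/boot Nova compute instances via openstacksdk.
# Assumes a clouds.yaml profile called "mycloud"; image/flavor/network names are made up.
import openstack

conn = openstack.connect(cloud="mycloud")   # authenticates against Keystone

for server in conn.compute.servers():       # existing instances
    print(server.name, server.status)

server = conn.compute.create_server(
    name="demo-vm",
    image_id=conn.compute.find_image("cirros").id,
    flavor_id=conn.compute.find_flavor("m1.tiny").id,
    networks=[{"uuid": conn.network.find_network("private").id}],
)
server = conn.compute.wait_for_server(server)
print("booted:", server.name)
```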

Networking (Neutron)
OpenStack Networking (Neutron) is a system for managing networks and IP addresses. OpenStack Networking ensures the network is not a bottleneck or limiting factor in a cloud deployment, and gives users self-service ability, even over network configurations.

OpenStack Networking provides networking models for different applications or user groups. Standard models include flat networks or VLANs that separate servers and traffic. OpenStack Networking manages IP addresses, allowing for dedicated static IP addresses or DHCP. Floating IP addresses let traffic be dynamically rerouted to any resources in the IT infrastructure, so users can redirect traffic during maintenance or in case of a failure.

Users can create their own networks, control traffic, and connect servers and devices to one or more networks. Administrators can use software-defined networking (SDN) technologies like OpenFlow to support high levels of multi-tenancy and massive scale. OpenStack networking provides an extension framework that can deploy and manage additional network services—such as intrusion detection systems (IDS), load balancing, firewalls, and virtual private networks (VPN).

Block storage (Cinder)
OpenStack Block Storage (Cinder) provides persistent block-level storage devices for use with OpenStack compute instances. The block storage system manages the creation, attaching and detaching of the block devices to servers. Block storage volumes are fully integrated into OpenStack Compute and the Dashboard allowing for cloud users to manage their own storage needs. In addition to local Linux server storage, it can use storage platforms including Ceph, CloudByte, Coraid, EMC (ScaleIO, VMAX, VNX and XtremIO), GlusterFS, Hitachi Data Systems, IBM Storage (IBM DS8000, Storwize family, SAN Volume Controller, XIV Storage System, and GPFS), Linux LIO, NetApp, Nexenta, Nimble Storage, Scality, SolidFire, HP (StoreVirtual and 3PAR StoreServ families), INFINIDAT (InfiniBox) and Pure Storage. Block storage is appropriate for performance sensitive scenarios such as database storage, expandable file systems, or providing a server with access to raw block level storage. Snapshot management provides powerful functionality for backing up data stored on block storage volumes. Snapshots can be restored or used to create a new block storage volume.

Identity (Keystone)
OpenStack Identity (Keystone) provides a central directory of users mapped to the OpenStack services they can access. It acts as a common authentication system across the cloud operating system and can integrate with existing backend directory services like LDAP. It supports multiple forms of authentication including standard username and password credentials, token-based systems and AWS-style (i.e. Amazon Web Services) logins. Additionally, the catalog provides a queryable list of all of the services deployed in an OpenStack cloud in a single registry. Users and third-party tools can programmatically determine which resources they can access.

Image (Glance)
OpenStack Image (Glance) provides discovery, registration, and delivery services for disk and server images. Stored images can be used as a template. It can also be used to store and catalog an unlimited number of backups. The Image Service can store disk and server images in a variety of back-ends, including Swift. The Image Service API provides a standard REST interface for querying information about disk images and lets clients stream the images to new servers.

Glance adds many enhancements to existing legacy infrastructures. For example, if integrated with VMware, Glance introduces advanced features to the vSphere family such as vMotion, high availability and dynamic resource scheduling (DRS). vMotion is the live migration of a running VM, from one physical server to another, without service interruption. Thus, it enables a dynamic and automated self-optimizing datacenter, allowing hardware maintenance on underperforming servers without downtime.

Other OpenStack modules that need to interact with images, for example Heat, must communicate with the image metadata through Glance. Also, Nova can present information about the images, and configure a variation on an image to produce an instance. However, Glance is the only module that can add, delete, share, or duplicate images.

Object storage (Swift)
OpenStack Object Storage (Swift) is a scalable redundant storage system. Objects and files are written to multiple disk drives spread throughout servers in the data center, with the OpenStack software responsible for ensuring data replication and integrity across the cluster. Storage clusters scale horizontally simply by adding new servers. Should a server or hard drive fail, OpenStack replicates its content from other active nodes to new locations in the cluster. Because OpenStack uses software logic to ensure data replication and distribution across different devices, inexpensive commodity hard drives and servers can be used.

In August 2009, Rackspace started the development of the precursor to OpenStack Object Storage, as a complete replacement for the Cloud Files product. The initial development team consisted of nine developers. SwiftStack, an object storage software company, is currently the leading developer for Swift with significant contributions from HP, Red Hat, NTT, NEC, IBM and more.

Dashboard (Horizon)
OpenStack Dashboard (Horizon) provides administrators and users with a graphical interface to access, provision, and automate deployment of cloud-based resources. The design accommodates third party products and services, such as billing, monitoring, and additional management tools. The dashboard is also brand-able for service providers and other commercial vendors who want to make use of it. The dashboard is one of several ways users can interact with OpenStack resources. Developers can automate access or build tools to manage resources using the native OpenStack API or the EC2 compatibility API.

Orchestration (Heat)
Heat is a service to orchestrate multiple composite cloud applications using templates, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.

Workflow (Mistral)
Mistral is a service that manages workflows. A user typically writes a workflow in a YAML-based workflow language and uploads the workflow definition to Mistral via its REST API. The user can then start the workflow manually via the same API, or configure a trigger to start it on some event.

Telemetry (Ceilometer)
OpenStack Telemetry (Ceilometer) provides a single point of contact for billing systems, providing all the counters they need to establish customer billing across all current and future OpenStack components. The delivery of counters is traceable and auditable; the counters must be easily extensible to support new projects, and agents doing data collection should be independent of the overall system.

Database (Trove)
Trove is a database-as-a-service component that provisions relational and non-relational database engines.

Elastic map reduce (Sahara)
Sahara is a component to easily and rapidly provision Hadoop clusters. Users specify several parameters such as the Hadoop version number, the cluster topology type, node flavor details (defining disk space, CPU and RAM settings), and others. After a user provides all of the parameters, Sahara deploys the cluster in a few minutes. Sahara also provides the means to scale a preexisting Hadoop cluster by adding and removing worker nodes on demand.

Bare metal (Ironic)
Ironic is an OpenStack project that provisions bare metal machines instead of virtual machines. It was initially forked from the Nova Baremetal driver and has evolved into a separate project. It is best thought of as a bare-metal hypervisor API and a set of plugins that interact with the bare-metal hypervisors. By default, it will use PXE and IPMI in concert to provision and turn on and off machines, but Ironic supports and can be extended with vendor-specific plugins to implement additional functionality.

Messaging (Zaqar)
Zaqar is a multi-tenant cloud messaging service for Web developers. The service features a fully RESTful API, which developers can use to send messages between various components of their SaaS and mobile applications by using a variety of communication patterns. Underlying this API is an efficient messaging engine designed with scalability and security in mind. Other OpenStack components can integrate with Zaqar to surface events to end users and to communicate with guest agents that run in the "over-cloud" layer.

Shared file system (Manila)
OpenStack Shared File System (Manila) provides an open API to manage shares in a vendor-agnostic framework. Standard primitives include the ability to create, delete, and give or deny access to a share, and these can be used standalone or in a variety of different network environments. Commercial storage appliances from EMC, NetApp, HP, IBM, Oracle, Quobyte, INFINIDAT and Hitachi Data Systems are supported, as well as filesystem technologies such as Red Hat GlusterFS or Ceph.

DNS (Designate)
Designate is a multi-tenant REST API for managing DNS. This component provides DNS as a Service and is compatible with many backend technologies, including PowerDNS and BIND. It does not provide a DNS service as such; its purpose is to interface with existing DNS servers to manage DNS zones on a per-tenant basis.

Search (Searchlight)
Searchlight provides advanced and consistent search capabilities across various OpenStack cloud services. It accomplishes this by offloading user search queries from other OpenStack API servers and indexing their data into Elasticsearch. Searchlight is being integrated into Horizon and also provides a command-line interface.

Key manager (Barbican)
Barbican is a REST API designed for the secure storage, provisioning and management of secrets. It is aimed at being useful for all environments, including large ephemeral Clouds.

Container orchestration (Magnum)
Magnum is an OpenStack API service developed by the OpenStack Containers Team making container orchestration engines such as Docker Swarm, Kubernetes, and Apache Mesos available as first class resources in OpenStack. Magnum uses Heat to orchestrate an OS image which contains Docker and Kubernetes and runs that image in either virtual machines or bare metal in a cluster configuration.

Root Cause Analysis (Vitrage)
Vitrage is the OpenStack RCA (Root Cause Analysis) service for organizing, analyzing and expanding OpenStack alarms & events, yielding insights regarding the root cause of problems and deducing their existence before they are directly detected.

Rule-based alarm actions (Aodh)
Aodh is an alarming service that can trigger actions based on defined rules against metric or event data collected by Ceilometer or Gnocchi.

Ref: https://en.wikipedia.org/wiki/OpenStack

27. Cucumber 

Cucumber is a software tool used by computer programmers that supports behavior-driven development (BDD). Central to the Cucumber BDD approach is its plain language parser called Gherkin. It allows expected software behaviors to be specified in a logical language that customers can understand. As such, Cucumber allows the execution of feature documentation written in business-facing text. It is often used for testing other software. It runs automated acceptance tests written in a behavior-driven development (BDD) style.

Cucumber was originally written in the Ruby programming language and was initially used exclusively for Ruby testing as a complement to the RSpec BDD framework. Cucumber now supports a variety of different programming languages through various implementations, including Java and JavaScript. The open source port of Cucumber in .NET is called SpecFlow. For example, Cuke4php and Cuke4Lua are software bridges that enable testing of PHP and Lua projects, respectively. Other implementations may simply leverage the Gherkin parser while implementing the rest of the testing framework in the target language.
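
For illustration only (not drawn from any real project), a Gherkin feature file expresses expected behaviour as business-readable scenarios; step definitions written in the host language then automate each step:

Feature: Account withdrawal
  Scenario: Withdraw less than the balance
    Given an account with a balance of 100
    When the customer withdraws 40
    Then the remaining balance should be 60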

Stable release: 3.1.2 / 13 July 2018; 18 months ago
Written in: Ruby
Operating system: Cross-platform
Type: Behavior driven development framework / Test tool
License: MIT License
Website: cucumber.io

Ref: https://en.wikipedia.org/wiki/Cucumber_(software)

28. JUnit

JUnit is a unit testing framework for the Java programming language. JUnit has been important in the development of test-driven development, and is one of a family of unit testing frameworks collectively known as xUnit that originated with SUnit.

JUnit is linked as a JAR at compile-time; the framework resides under package junit.framework for JUnit 3.8 and earlier, and under package org.junit for JUnit 4 and later.

A research survey performed in 2013 across 10,000 Java projects hosted on GitHub found that JUnit (in a tie with slf4j-api) was the most commonly included external library. Each library was used by 30.7% of projects.

Stable release: 5.5.1 / July 20, 2019; 6 months ago
Repository: github.com/junit-team/junit5
Written in: Java
Operating system: Cross-platform
Type: Unit testing tool
License: Eclipse Public License (relicensed from CPL)
Website: junit.org

Ref: https://en.wikipedia.org/wiki/JUnit

29. Mockito

Mockito is an open source testing framework for Java released under the MIT License. The framework allows the creation of test double objects (mock objects) in automated unit tests for the purpose of test-driven development (TDD) or behavior-driven development (BDD).

The framework's name and logo are a play on mojitos, a type of drink.

Features 
Mockito allows developers to verify the behavior of the system under test (SUT) without establishing expectations beforehand. One of the criticisms of mock objects is that there is a tight coupling of the test code to the system under test. Mockito attempts to eliminate the expect-run-verify pattern by removing the specification of expectations. Mockito also provides some annotations for reducing boilerplate code.

Ref: https://en.wikipedia.org/wiki/Mockito

30. SonarQube

SonarQube (formerly Sonar) is an open-source platform developed by SonarSource for continuous inspection of code quality. It performs automatic reviews with static analysis of code to detect bugs, code smells, and security vulnerabilities in more than 20 programming languages. SonarQube offers reports on duplicated code, coding standards, unit tests, code coverage, code complexity, comments, bugs, and security vulnerabilities.

SonarQube can record metrics history and provides evolution graphs. SonarQube provides fully automated analysis and integration with Maven, Ant, Gradle, MSBuild and continuous integration tools (Atlassian Bamboo, Jenkins, Hudson, etc.).
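
As a brief illustration, a project is typically described to the scanner with a sonar-project.properties file along the following lines; the key, name, and server URL are placeholders, and available properties may vary by scanner version:

sonar.projectKey=my-project
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.host.url=http://localhost:9000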

Overview 
SonarQube includes support for the programming languages Java (including Android), C#, PHP, JavaScript, TypeScript, C/C++, Ruby, Kotlin, Go, COBOL, PL/SQL, PL/I, ABAP, VB.NET, VB6, Python, RPG, Flex, Objective-C, Swift, CSS, HTML, and XML. Some of these are only available via a commercial license.

SonarQube is available for free under the GNU Lesser General Public License. An enterprise version for paid licensing also exists, as well as a data center edition that supports high availability.

SonarQube integrates with Eclipse, Visual Studio, and IntelliJ IDEA development environments through the SonarLint plug-ins, and also integrates with external tools like LDAP, Active Directory, GitHub, and others. SonarQube is expandable with the use of plug-ins.

Stable release: 7.9.1 / July 10, 2019; 6 months ago
Repository: github.com/SonarSource/sonarqube
Written in: Java
Operating system: Cross-platform
Type: Static program analysis
License: GNU Lesser General Public License
Website: sonarqube.org

Ref: https://en.wikipedia.org/wiki/SonarQube

31. Fortify

Fortify Software, later known as Fortify Inc., is a California-based software security vendor, founded in 2003 and acquired by Hewlett-Packard in 2010 to become part of HP Enterprise Security Products.

Fortify offerings included Static Application Security Testing and Dynamic Application Security Testing products, as well as products and services that support Software Security Assurance. As of February 2011, Fortify sells Fortify OnDemand, a static and dynamic application testing service.

Type: Software Vendor
Industry: Computer software
Genre: Software Security Assurance
Founded: 2003
Founder: Ted Schlein of Kleiner Perkins Caufield & Byers, Mike Armistead, Brian Chess, Arthur Do, Roger Thornton
Headquarters: San Mateo, California, United States
Key people: John M. Jack (former CEO), Jacob West (head of Security Research Group), Brian Chess (former Chief Scientist), Arthur Do (former Chief Architect)
Owner: Micro Focus

Ref: https://en.wikipedia.org/wiki/Fortify_Software

32. Knative

Knative is a Kubernetes-based platform to deploy and manage modern serverless workloads.
 
Make your developers more productive 
Knative components build on top of Kubernetes, abstracting away the complex details and enabling developers to focus on what matters. Built by codifying the best practices shared by successful real-world implementations, Knative solves the "boring but difficult" parts of deploying and managing cloud native services so you don't have to.

Highlights 
- Focused API with higher level abstractions for common app use-cases.
- Stand up a scalable, secure, stateless service in seconds.
- Loosely coupled features let you use the pieces you need.
- Pluggable components let you bring your own logging and monitoring, networking, and service mesh.
- Knative is portable: run it anywhere Kubernetes runs, never worry about vendor lock-in.
- Idiomatic developer experience, supporting common patterns such as GitOps, DockerOps, ManualOps.
- Knative can be used with common tools and frameworks such as Django, Ruby on Rails, Spring, and many more.
 
Ref: https://knative.dev/

33. Rust

Rust is a multi-paradigm system programming language focused on safety, especially safe concurrency. Rust is syntactically similar to C++, but is designed to provide better memory safety while maintaining high performance.

Rust was originally designed by Graydon Hoare at Mozilla Research, with contributions from Dave Herman, Brendan Eich, and others. The designers refined the language while writing the Servo layout or browser engine, and the Rust compiler. The compiler is free and open-source software dual-licensed under the MIT License and Apache License 2.0.

Rust has been the "most loved programming language" in the Stack Overflow Developer Survey every year since 2016.

Paradigms: Multi-paradigm (concurrent, functional, generic, imperative, structured)
Designed by: Graydon Hoare
Developer: The Rust Project
First appeared: July 7, 2010; 9 years ago
Stable release: 1.40.0 / December 19, 2019; 30 days ago
Typing discipline: Inferred, linear, nominal, static, strong
Implementation language: Rust
Platform: ARM, IA-32, x86-64, MIPS, PowerPC, SPARC, RISC-V
OS: Linux, macOS, Windows, FreeBSD, OpenBSD, Redox, Android, iOS
License: MIT or Apache 2.0
Filename extensions: .rs, .rlib
Website: www.rust-lang.org
Influenced by: Alef, C#, C++, Cyclone, Erlang, Haskell, Limbo, Newsqueak, OCaml, Ruby, Scheme, Standard ML, Swift
Influenced: Crystal, Elm, Idris, Spark, Swift, Project Verona

Design

Rust is intended to be a language for highly concurrent and highly safe systems, and programming in the large, that is, creating and maintaining boundaries that preserve large-system integrity. This has led to a feature set with an emphasis on safety, control of memory layout, and concurrency.

Performance of idiomatic Rust 
Performance of idiomatic Rust is comparable to the performance of idiomatic C++.

Syntax 
The concrete syntax of Rust is similar to C and C++, with blocks of code delimited by curly brackets, and control flow keywords such as if, else, while, and for. Not all C or C++ keywords are implemented, however, and some Rust functions (such as the use of the keyword match for pattern matching) will be less familiar to those versed in these languages. Despite the superficial resemblance to C and C++, the syntax of Rust in a deeper sense is closer to that of the ML family of languages and the Haskell language. Nearly every part of a function body is an expression, even control flow operators. For example, the ordinary if expression also takes the place of C's ternary conditional. A function need not end with a return expression: in this case if the semicolon is omitted, the last expression in the function creates the return value.

Memory safety 
Rust is designed to be memory safe, and thus it does not permit null pointers, dangling pointers, or data races in safe code. Data values can only be initialized through a fixed set of forms, all of which require their inputs to be already initialized. To replicate the function in other languages of pointers being either valid or NULL, such as in linked list or binary tree data structures, the Rust core library provides an option type, which can be used to test if a pointer has Some value or None. Rust also introduces added syntax to manage lifetimes, and the compiler reasons about these through its borrow checker.

Memory management 
Rust does not use an automated garbage collection system like those used by Go, Java, or the .NET Framework. Instead, memory and other resources are managed through the resource acquisition is initialization (RAII) convention, with optional reference counting. Rust provides deterministic management of resources, with very low overhead. Rust also favors stack allocation of values and does not perform implicit boxing.

There is also a concept of references (using the & symbol), which do not involve run-time reference counting. The safety of using such pointers is verified at compile time by the borrow checker, preventing dangling pointers and other forms of undefined behavior.

Ownership 
Rust has an ownership system where all values have a unique owner, where the scope of the value is the same as the scope of the owner. Values can be passed by immutable reference using &T, by mutable reference using &mut T or by value using T. At all times, there can either be multiple immutable references or one mutable reference. The Rust compiler enforces these rules at compile time and also checks that all references are valid.

Types and polymorphism 
The type system supports a mechanism similar to type classes, called "traits", inspired directly by the Haskell language. This is a facility for ad hoc polymorphism, achieved by adding constraints to type variable declarations. Other features from Haskell, such as higher-kinded polymorphism, are not yet supported.

Rust features type inference, for variables declared with the keyword let. Such variables do not require a value to be initially assigned to determine their type. A compile-time error results if any branch of code fails to assign a value to the variable. Variables assigned multiple times must be marked with the keyword mut.

Functions can be given generic parameters, which usually require the generic type to implement a certain trait or traits. Within such a function, the generic value can only be used through those traits. This means that a generic function can be type-checked as soon as it is defined. This is in contrast to C++ templates, which are fundamentally duck typed and cannot be checked until instantiated with concrete types. C++ concepts address the same issue and are expected to be part of C++20 (2020).

However, the implementation of Rust generics is similar to the typical implementation of C++ templates: a separate copy of the code is generated for each instantiation. This is called monomorphization and contrasts with the type erasure scheme typically used in Java and Haskell. The benefit of monomorphization is optimized code for each specific use case; the drawback is increased compile time and size of the resulting binaries.

The object system within Rust is based around implementations, traits and structured types. Implementations fulfill a role similar to that of classes within other languages, and are defined with the keyword impl. Inheritance and polymorphism are provided by traits; they allow methods to be defined and mixed in to implementations. Structured types are used to define fields. Implementations and traits cannot define fields themselves, and only traits can provide inheritance. Among other benefits, this prevents the diamond problem of multiple inheritance, as in C++. In other words, Rust supports interface inheritance, but replaces implementation inheritance with composition; see composition over inheritance.

Ref: https://en.wikipedia.org/wiki/Rust_(programming_language)

34. Go

Go, also known as Golang, is a statically typed, compiled programming language designed at Google by Robert Griesemer, Rob Pike, and Ken Thompson. Go is syntactically similar to C, but with memory safety, garbage collection, structural typing, and CSP-style concurrency.

There are two major implementations:

- Google's self-hosting compiler toolchain, targeting multiple operating systems, mobile devices, and WebAssembly.
- gccgo, a GCC frontend.

A third-party transpiler, GopherJS, compiles Go to JavaScript for front-end web development.

Static type checking 
Static type checking is the process of verifying the type safety of a program based on analysis of a program's text (source code). If a program passes a static type checker, then the program is guaranteed to satisfy some set of type safety properties for all possible inputs.

Static type checking can be considered a limited form of program verification (see type safety), and in a type-safe language, can be considered also an optimization. If a compiler can prove that a program is well-typed, then it does not need to emit dynamic safety checks, allowing the resulting compiled binary to run faster and to be smaller.

Static type checking for Turing-complete languages is inherently conservative. That is, if a type system is both sound (meaning that it rejects all incorrect programs) and decidable (meaning that it is possible to write an algorithm that determines whether a program is well-typed), then it must be incomplete (meaning there are correct programs, which are also rejected, even though they do not encounter runtime errors). For example, consider a program containing the code:

if [complex test] then [do something] else [signal that there is a type error]

Even if the expression [complex test] always evaluates to true at run-time, most type checkers will reject the program as ill-typed, because it is difficult (if not impossible) for a static analyzer to determine that the else branch will not be taken. Conversely, a static type checker will quickly detect type errors in rarely used code paths. Without static type checking, even code coverage tests with 100% coverage may be unable to find such type errors. The tests may fail to detect such type errors, because the combination of all places where values are created and all places where a certain value is used must be taken into account.
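
A hedged sketch of the same point using Python type annotations (an assumption; the article's example is pseudocode, and the behaviour described is typical of an external checker such as mypy). The dead branch still fails static checking even though the program runs fine dynamically:

def describe(x: int) -> str:
    if x * x >= 0:                  # always true for ints at run time
        return "non-negative square"
    else:
        # A static checker such as mypy would typically reject this line
        # (int + str is not allowed), even though the branch is never taken.
        return x + " is impossible"

print(describe(3))   # prints "non-negative square"; the error exists only in dead code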

A number of useful and common programming language features cannot be checked statically, such as downcasting. Thus, many languages will have both static and dynamic type checking; the static type checker verifies what it can, and dynamic checks verify the rest.

Many languages with static type checking provide a way to bypass the type checker. Some languages allow programmers to choose between static and dynamic type safety. For example, C# distinguishes between statically-typed and dynamically-typed variables. Uses of the former are checked statically, whereas uses of the latter are checked dynamically. Other languages allow writing code that is not type-safe; for example, in C, programmers can freely cast a value between any two types that have the same size, effectively subverting the type concept.

For a list of languages with static type checking, see the category for statically typed languages.

Ref 1: https://en.wikipedia.org/wiki/Type_system#STATIC
Ref 2: https://en.wikipedia.org/wiki/Category:Statically_typed_programming_languages

Design 

Go was designed at Google in 2007 to improve programming productivity in an era of multicore, networked machines and large codebases. The designers wanted to address criticism of other languages in use at Google, but keep their useful characteristics:

- Static typing and run-time efficiency (like C++)
- Readability and usability (like Python or JavaScript)
- High-performance networking and multiprocessing
The designers were primarily motivated by their shared dislike of C++.


Go is influenced by C, but with an emphasis on greater simplicity and safety. The language consists of:

- A syntax and environment adopting patterns more common in dynamic languages:
 - Optional concise variable declaration and initialization through type inference (x := 0 not int x = 0; or var x = 0;).
 - Fast compilation times.
 - Remote package management (go get) and online package documentation.
- Distinctive approaches to particular problems:
 - Built-in concurrency primitives: light-weight processes (goroutines), channels, and the select statement.
 - An interface system in place of virtual inheritance, and type embedding instead of non-virtual inheritance.
 - A toolchain that, by default, produces statically linked native binaries without external dependencies.
- A desire to keep the language specification simple enough to hold in a programmer's head, in part by omitting features which are common in similar languages.

Ref: https://en.wikipedia.org/wiki/Go_(programming_language)

35. Ruby

Ruby is an interpreted, high-level, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan.

Ruby is dynamically typed and uses garbage collection. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. According to the creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, Basic, and Lisp.

Philosophy 

Matsumoto has said that Ruby is designed for programmer productivity and fun, following the principles of good user interface design. At a Google Tech Talk in 2008 Matsumoto further stated, "I hope to see Ruby help every programmer in the world to be productive, and to enjoy programming, and to be happy. That is the primary purpose of Ruby language." He stresses that systems design needs to emphasize human, rather than computer, needs:

 Often people, especially computer engineers, focus on the machines. They think, "By doing this, the machine will run fast. By doing this, the machine will run more effectively. By doing this, the machine will something something something." They are focusing on machines. But in fact we need to focus on humans, on how humans care about doing programming or operating the application of the machines. We are the masters. They are the slaves.

Ruby is said to follow the principle of least astonishment (POLA), meaning that the language should behave in such a way as to minimize confusion for experienced users. Matsumoto has said his primary design goal was to make a language that he himself enjoyed using, by minimizing programmer work and possible confusion. He has said that he had not applied the principle of least astonishment to the design of Ruby, but nevertheless the phrase has come to be closely associated with the Ruby programming language. The phrase has itself been a source of surprise, as novice users may take it to mean that Ruby's behaviors try to closely match behaviors familiar from other languages. In a May 2005 discussion on the newsgroup comp.lang.ruby, Matsumoto attempted to distance Ruby from POLA, explaining that because any design choice will be surprising to someone, he uses a personal standard in evaluating surprise. If that personal standard remains consistent, there would be few surprises for those familiar with the standard.

Matsumoto defined it this way in an interview:

 Everyone has an individual background. Someone may come from Python, someone else may come from Perl, and they may be surprised by different aspects of the language. Then they come up to me and say, 'I was surprised by this feature of the language, so Ruby violates the principle of least surprise.' Wait. Wait. The principle of least surprise is not for you only. The principle of least surprise means principle of least my surprise. And it means the principle of least surprise after you learn Ruby very well. For example, I was a C++ programmer before I started designing Ruby. I programmed in C++ exclusively for two or three years. And after two years of C++ programming, it still surprises me.

Features 
- Thoroughly object-oriented with inheritance, mixins and metaclasses
- Dynamic typing and duck typing
- Everything is an expression (even statements) and everything is executed imperatively (even declarations)
- Succinct and flexible syntax that minimizes syntactic noise and serves as a foundation for domain-specific languages
- Dynamic reflection and alteration of objects to facilitate metaprogramming
- Lexical closures, iterators and generators, with a block syntax
- Literal notation for arrays, hashes, regular expressions and symbols
- Embedding code in strings (interpolation)
- Default arguments
- Four levels of variable scope (global, class, instance, and local) denoted by sigils or the lack thereof
- Garbage collection
- First-class continuations
- Strict boolean coercion rules (everything is true except false and nil)
- Exception handling
- Operator overloading
- Built-in support for rational numbers, complex numbers and arbitrary-precision arithmetic
- Custom dispatch behavior (through method_missing and const_missing)
- Native threads and cooperative fibers (fibers are a 1.9/YARV feature)
- Support for Unicode and multiple character encodings.
- Native plug-in API in C
- Interactive Ruby Shell (a REPL)
- Centralized package management through RubyGems
- Implemented on all major platforms
- Large standard library, including modules for YAML, JSON, XML, CGI, OpenSSL, HTTP, FTP, RSS, curses, zlib and Tk

Paradigm: Multi-paradigm (functional, imperative, object-oriented, reflective)
Designed by: Yukihiro Matsumoto
Developer: Yukihiro Matsumoto, et al.
First appeared: 1995; 25 years ago
Stable release: 2.7.0 (December 25, 2019; 26 days ago)
Typing discipline: Duck, dynamic, strong
Scope: Lexical, sometimes dynamic
Implementation language: C
OS: Cross-platform
License: Ruby License, GPLv2, or 2-clause BSD license
Filename extensions: .rb
Website: www.ruby-lang.org
Major implementations: Ruby MRI, YARV, Rubinius, MagLev, JRuby, MacRuby, RubyMotion, Mruby, IronRuby
Influenced by: Ada, C++, CLU, Dylan, Eiffel, Lisp, Lua, Perl, Python, Smalltalk, Basic
Influenced: Clojure, CoffeeScript, Crystal, D, Elixir, Groovy, Ioke, Julia, Mirah, Nu, Ring, Rust, Swift

Ref: https://en.wikipedia.org/wiki/Ruby_(programming_language)

36. NumPy

NumPy (pronounced /ˈnʌmpaɪ/ (NUM-py) or sometimes /ˈnʌmpi/ (NUM-pee)) is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays. The ancestor of NumPy, Numeric, was originally created by Jim Hugunin with contributions from several other developers. In 2005, Travis Oliphant created NumPy by incorporating features of the competing Numarray into Numeric, with extensive modifications. NumPy is open-source software and has many contributors.

Features
NumPy targets the CPython reference implementation of Python, which is a non-optimizing bytecode interpreter. Mathematical algorithms written for this version of Python often run much slower than compiled equivalents. NumPy addresses the slowness problem partly by providing multidimensional arrays and functions and operators that operate efficiently on arrays; this requires rewriting some code, mostly inner loops, to use NumPy.
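
As a brief sketch (function names are made up), the same computation written as an explicit Python loop and as a single whole-array NumPy expression:

import numpy as np

# Pure-Python version: the loop is interpreted element by element.
def scale_and_shift_list(values, a, b):
    return [a * v + b for v in values]

# NumPy version: one whole-array expression; the loop runs in compiled code.
def scale_and_shift_array(values, a, b):
    return a * values + b

x = np.arange(1_000_000, dtype=np.float64)
print(scale_and_shift_array(x, 2.0, 1.0)[:3])   # [1. 3. 5.]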

Using NumPy in Python gives functionality comparable to MATLAB since they are both interpreted, and they both allow the user to write fast programs as long as most operations work on arrays or matrices instead of scalars. In comparison, MATLAB boasts a large number of additional toolboxes, notably Simulink, whereas NumPy is intrinsically integrated with Python, a more modern and complete programming language. Moreover, complementary Python packages are available; SciPy is a library that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides MATLAB-like plotting functionality. Internally, both MATLAB and NumPy rely on BLAS and LAPACK for efficient linear algebra computations.

Python bindings of the widely used computer vision library OpenCV utilize NumPy arrays to store and operate on data. Since images with multiple channels are simply represented as three-dimensional arrays, indexing, slicing or masking with other arrays are very efficient ways to access specific pixels of an image. The use of the NumPy array as the universal data structure in OpenCV for images, extracted feature points, filter kernels and many more vastly simplifies the programming workflow and debugging.

The ndarray data structure 
The core functionality of NumPy is its "ndarray", for n-dimensional array, data structure. These arrays are strided views on memory. In contrast to Python's built-in list data structure (which, despite the name, is a dynamic array), these arrays are homogeneously typed: all elements of a single array must be of the same type.
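
A short sketch of these properties: every element shares one dtype, and basic slicing yields a strided view on the same memory rather than a copy.

import numpy as np

a = np.array([1, 2, 3, 4], dtype=np.int32)   # homogeneously typed
print(a.dtype)                               # int32

view = a[1:3]        # basic slicing returns a view, not a copy
view[0] = 99
print(a)             # [ 1 99  3  4] -- the original array changed

# A Python list, by contrast, may hold arbitrarily typed objects.
mixed = [1, "two", 3.0]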

Such arrays can also be views into memory buffers allocated by C/C++, Cython, and Fortran extensions to the CPython interpreter without the need to copy data around, giving a degree of compatibility with existing numerical libraries. This functionality is exploited by the SciPy package, which wraps a number of such libraries (notably BLAS and LAPACK). NumPy has built-in support for memory-mapped ndarrays.

Limitations
Inserting or appending entries to an array is not as trivially possible as it is with Python's lists. The np.pad(...) routine to extend arrays actually creates new arrays of the desired shape and padding values, copies the given array into the new one and returns it. NumPy's np.concatenate([a1,a2]) operation does not actually link the two arrays but returns a new one, filled with the entries from both given arrays in sequence. Reshaping the dimensionality of an array with np.reshape(...) is only possible as long as the number of elements in the array does not change. These circumstances originate from the fact that NumPy's arrays must be views on contiguous memory buffers. A replacement package called Blaze attempts to overcome this limitation.
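
A short sketch of the routines mentioned above; each one returns a new array rather than growing or relinking the originals.

import numpy as np

a1 = np.array([1, 2, 3])
a2 = np.array([4, 5, 6])

padded = np.pad(a1, (1, 1), mode="constant", constant_values=0)  # [0 1 2 3 0]
joined = np.concatenate([a1, a2])                                # [1 2 3 4 5 6]
grid = np.reshape(joined, (2, 3))                                # 6 elements -> 2x3
# np.reshape(joined, (2, 4)) would raise ValueError: the element count cannot change

print(padded)
print(joined)
print(grid)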

Algorithms that are not expressible as a vectorized operation will typically run slowly because they must be implemented in "pure Python", while vectorization may increase memory complexity of some operations from constant to linear, because temporary arrays must be created that are as large as the inputs. Runtime compilation of numerical code has been implemented by several groups to avoid these problems; open source solutions that interoperate with NumPy include scipy.weave, numexpr and Numba. Cython and Pythran are static-compiling alternatives to these.

Ref: https://en.wikipedia.org/wiki/NumPy

37. Pandas

Original author(s): Wes McKinney
Developer(s): Community
Initial release: 11 January 2008; 12 years ago
Stable release: 0.25.1 / 21 August 2019; 4 months ago
Repository: github.com/pandas-dev/pandas
Written in: Python, Cython, C
Operating system: Cross-platform
Type: Technical computing
License: New BSD License
Website: pandas.pydata.org

In computer programming, pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software released under the three-clause BSD license. The name is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals.

Library features 
- DataFrame object for data manipulation with integrated indexing.
- Tools for reading and writing data between in-memory data structures and different file formats.
- Data alignment and integrated handling of missing data.
- Reshaping and pivoting of data sets.
- Label-based slicing, fancy indexing, and subsetting of large data sets.
- Data structure column insertion and deletion.
- Group by engine allowing split-apply-combine operations on data sets.
- Data set merging and joining.
- Hierarchical axis indexing to work with high-dimensional data in a lower-dimensional data structure.
- Time series-functionality: Date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging.
- Provides data filtration.
- The library is highly optimized for performance, with critical code paths written in Cython or C.

Dataframes 
Pandas is mainly used for machine learning in the form of DataFrames. Pandas allows importing data from various file formats such as CSV and Excel. Pandas allows various data manipulation operations such as groupby, join, merge, melt and concatenation, as well as data cleaning features such as filling, replacing or imputing null values.
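
An illustrative sketch of these operations on a small made-up table (real data would usually arrive via pd.read_csv or similar):

import pandas as pd

# A small made-up data set with one missing value.
sales = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "units":  [10, None, 7, 3],
    "price":  [2.5, 2.5, 3.0, 3.0],
})

sales["units"] = sales["units"].fillna(0)          # data cleaning: fill nulls

totals = sales.groupby("region")["units"].sum()    # split-apply-combine

managers = pd.DataFrame({"region": ["north", "south"],
                         "manager": ["Ann", "Bo"]})
report = sales.merge(managers, on="region")        # join on a shared key

print(totals)
print(report)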

Ref: https://en.wikipedia.org/wiki/Pandas_(software)

38. Node.js

Node.js is an open-source, cross-platform, JavaScript runtime environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to write command line tools and for server-side scripting—running scripts server-side to produce dynamic web page content before the page is sent to the user's web browser. Consequently, Node.js represents a "JavaScript everywhere" paradigm, unifying web-application development around a single programming language, rather than different languages for server- and client-side scripts.

Though .js is the standard filename extension for JavaScript code, the name "Node.js" doesn't refer to a particular file in this context and is merely the name of the product. Node.js has an event-driven architecture capable of asynchronous I/O. These design choices aim to optimize throughput and scalability in web applications with many input/output operations, as well as for real-time Web applications (e.g., real-time communication programs and browser games).

The Node.js distributed development project, governed by the Node.js Foundation, is facilitated by the Linux Foundation's Collaborative Projects program.

Corporate users of Node.js software include GoDaddy, Groupon, IBM, LinkedIn, Microsoft, Netflix, PayPal, Rakuten, SAP, Voxer, Walmart, and Yahoo!.

Initial release: May 27, 2009; 10 years ago
Stable release: 13.6.0 / January 7, 2020; 15 days ago
Repository: github.com/nodejs/node
Written in: C, C++, JavaScript
Operating system: Linux, macOS, Microsoft Windows, SmartOS, FreeBSD, OpenBSD, IBM AIX
Type: Runtime environment
License: MIT license
Website: nodejs.org

Overview 
Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of "modules" that handle various core functionalities. Modules are provided for file system I/O, networking (DNS, HTTP, TCP, TLS/SSL, or UDP), binary data (buffers), cryptography functions, data streams, and other core functions. Node.js's modules use an API designed to reduce the complexity of writing server applications.

JavaScript is the only language that Node.js supports natively, but many compile-to-JS languages are available. As a result, Node.js applications can be written in CoffeeScript, Dart, TypeScript, ClojureScript and others.

Node.js is primarily used to build network programs such as Web servers. The most significant difference between Node.js and PHP is that most functions in PHP block until completion (commands only execute after previous commands finish), while Node.js functions are non-blocking (commands execute concurrently or even in parallel, and use callbacks to signal completion or failure).

Node.js is officially supported on Linux, macOS and Microsoft Windows 7 and Server 2008 (and later), with tier 2 support for SmartOS and IBM AIX and experimental support for FreeBSD. OpenBSD also works, and LTS versions are available for IBM i (AS/400). The provided source code may also be built on similar operating systems to those officially supported or be modified by third parties to support others such as NonStop OS and Unix servers.

Platform architecture
Node.js brings event-driven programming to web servers, enabling development of fast web servers in JavaScript. Developers can create scalable servers without using threading, by using a simplified model of event-driven programming that uses callbacks to signal the completion of a task. Node.js connects the ease of a scripting language (JavaScript) with the power of Unix network programming.

Node.js was built on the Google V8 JavaScript engine since it was open-sourced under the BSD license. It is proficient with internet fundamentals such as HTTP, DNS, and TCP. JavaScript was also a well-known language, making Node.js accessible to the web development community.

Industry support
There are thousands of open-source libraries for Node.js, most of them hosted on the npm website. The Node.js developer community has two main mailing lists and the IRC channel #node.js on freenode. There are multiple developer conferences and events that support the Node.js community, including NodeConf, Node Interactive, and Node Summit as well as a number of regional events.

The open-source community has developed web frameworks to accelerate the development of applications. Such frameworks include Connect, Express.js, Socket.IO, Feathers.js, Koa.js, Hapi.js, Sails.js, Meteor, Derby, and many others. Various packages have also been created for interfacing with other languages or runtime environments such as Microsoft .NET.

Modern desktop IDEs provide editing and debugging features specifically for Node.js applications. Such IDEs include Atom, Brackets, JetBrains WebStorm, Microsoft Visual Studio (with Node.js Tools for Visual Studio, or TypeScript with Node definitions), NetBeans, Nodeclipse Enide Studio (Eclipse-based), and Visual Studio Code. Certain online web-based IDEs also support Node.js, such as Codeanywhere, Codenvy, Cloud9 IDE, Koding, and the visual flow editor in Node-RED.

Ref: https://en.wikipedia.org/wiki/Node.js

39. Quarkus

Quarkus: a next-generation Kubernetes native Java framework

Quarkus is a Kubernetes Native Java framework tailored for GraalVM and HotSpot, crafted from best-of-breed Java libraries and standards. The goal of Quarkus is to make Java a leading platform in Kubernetes and serverless environments while offering developers a unified reactive and imperative programming model to optimally address a wider range of distributed application architectures.

Ref: https://developers.redhat.com/blog/2019/03/07/quarkus-next-generation-kubernetes-native-java-framework/

40. Reactive Programming

In computing, reactive programming is a declarative programming paradigm concerned with data streams and the propagation of change. With this paradigm it is possible to express static (e.g., arrays) or dynamic (e.g., event emitters) data streams with ease, and also communicate that an inferred dependency within the associated execution model exists, which facilitates the automatic propagation of the changed data flow.

For example, in an imperative programming setting, a:=b+c would mean that a is being assigned the result of b+c in the instant the expression is evaluated, and later, the values of b and c can be changed with no effect on the value of a. On the other hand, in reactive programming, the value of a is automatically updated whenever the values of b or c change, without the program having to re-execute the statement a:=b+c to determine the presently assigned value of "a".
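
A minimal hand-rolled sketch of this behaviour in Python (illustrative only, not any particular reactive library): the dependent cell is recomputed whenever one of its inputs changes, without re-executing the assignment by hand.

class Cell:
    """Holds a value and notifies dependents when it changes (illustrative)."""
    def __init__(self, value=None):
        self.value = value
        self.observers = []

    def set(self, value):
        self.value = value
        for update in self.observers:
            update()

def reactive_sum(b, c):
    """Return a Cell that always holds b.value + c.value."""
    a = Cell()
    def update():
        a.set(b.value + c.value)
    b.observers.append(update)
    c.observers.append(update)
    update()          # compute the initial value
    return a

b, c = Cell(1), Cell(2)
a = reactive_sum(b, c)
print(a.value)   # 3
b.set(10)        # the change propagates automatically
print(a.value)   # 12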

Another example is a hardware description language such as Verilog, where reactive programming enables changes to be modeled as they propagate through circuits.

Reactive programming has been proposed as a way to simplify the creation of interactive user interfaces and near-real-time system animation.

For example, in a model–view–controller (MVC) architecture, reactive programming can facilitate changes in an underlying model that are reflected automatically in an associated view.

Ref: https://en.wikipedia.org/wiki/Reactive_programming

41. Vert.x

Eclipse Vert.x is a polyglot event-driven application framework that runs on the Java Virtual Machine.

Similar environments written in other programming languages include Node.js for JavaScript, Twisted for Python, Perl Object Environment for Perl, libevent for C, reactPHP and amphp for PHP and EventMachine for Ruby.

As of version 2.1.4, Vert.x exposes its API in Java, JavaScript, Groovy, Ruby, Python, Scala, Clojure and Ceylon.

As of version 3.7.0, Vert.x exposes its API in Java, JavaScript, Groovy, Ruby, Scala, Kotlin and Ceylon.

History 
Vert.x was started by Tim Fox in 2011 while he was employed by VMware.

Fox initially named the project "Node.x", a play on the naming of Node.js, with the "x" representing the fact that the new project was polyglot in nature, and didn't simply support JavaScript. The project was later renamed to "Vert.x" to avoid any potential legal issues as "Node" was a trademark owned by Joyent Inc. The new name was also a play on the name node, as a vertex is a synonym for a node in mathematics.

In December 2012, after he left their employment, VMware served legal papers on Tim Fox to take control of the Vert.x trademark, domain name, blog, GitHub account, and Google Group from the Vert.x community.

After much discussion with other parties, in January 2013, VMware was persuaded that it would be in the best interests of the Vert.x community to move the project and associated IP to the Eclipse Foundation, a neutral legal entity.

In August 2013, the core Vert.x project completed its move to the Eclipse Foundation. The other projects that make up the Vert.x stack did not migrate to Eclipse but continued to use the "Vert.x" trademark with tacit approval of the Eclipse Foundation.

In May 2014, Vert.x won the award for "Most Innovative Java Technology" at the JAX Innovation awards.

On January 12, 2016, Tim Fox stepped down as the lead of the Vert.x project, and Julien Viet, a long-time contributor, took his place.

Architecture
Vert.x uses the low-level I/O library Netty.

The application framework includes these features:

- Polyglot. Application components can be written in Java, JavaScript, Groovy, Ruby, Scala, Kotlin and Ceylon.
- Simple concurrency model. All code is single threaded, freeing the developer from the hassle of multi-threaded programming.
- Simple, asynchronous programming model for writing truly scalable non-blocking applications.
- Distributed event bus that spans the client and server side. The event bus even penetrates into in-browser JavaScript, allowing the creation of so-called real-time web applications.
- Actor model and public repository, to re-use and share components.

Ref: https://en.wikipedia.org/wiki/Vert.x

42. MicroProfile

The Eclipse MicroProfile standard is a specification geared towards microservices. Complementary to and based on Java EE, it aims to achieve portability of applications across different MicroProfile runtime environments. The economic driver is the increasing use of cloud computing resources by service providers. Three releases per year have been announced.

Core elements  
From Java EE, the specification adopts the individual specifications written for REST and JSON, along with the CDI programming model. While a classic Java EE runtime environment is designed to take over cross-cutting tasks (e.g. configuration, logging, monitoring) for several applications, many small runtime environments are an essential feature of a microservices software architecture. This is where the additional MicroProfile specifications come in, so that these cross-cutting tasks can be fulfilled even in such a structure.

Ref: https://de.wikipedia.org/wiki/MicroProfile

43. WildFly

WildFly, formerly known as JBoss AS, or simply JBoss, is an application server authored by JBoss, now developed by Red Hat. WildFly is written in Java and implements the Java Platform, Enterprise Edition (Java EE) specification. It runs on multiple platforms.

WildFly is free and open-source software, subject to the requirements of the GNU Lesser General Public License (LGPL), version 2.1.

On 20 November 2014, JBoss Application Server was renamed WildFly. The JBoss Community and other Red Hat JBoss products like JBoss Enterprise Application Platform were not renamed.

Ref: https://en.wikipedia.org/wiki/WildFly

44. Application binary interface

In computer software, an application binary interface (ABI) is an interface between two binary program modules; often, one of these modules is a library or operating system facility, and the other is a program that is being run by a user.

An ABI defines how data structures or computational routines are accessed in machine code, which is a low-level, hardware-dependent format; in contrast, an API defines this access in source code, which is a relatively high-level, hardware-independent, often human-readable format. A common aspect of an ABI is the calling convention, which determines how data is provided as input to or read as output from computational routines; examples are the x86 calling conventions.

Adhering to an ABI (which may or may not be officially standardized) is usually the job of a compiler, operating system, or library author; however, an application programmer may have to deal with an ABI directly when writing a program in a mix of programming languages, which can be achieved by using foreign function calls.

Description 
ABIs cover details such as:

- a processor instruction set (with details like register file structure, stack organization, memory access types, ...)
- the sizes, layouts, and alignments of basic data types that the processor can directly access
- the calling convention, which controls how functions' arguments are passed and return values are retrieved; for example, whether all parameters are passed on the stack or some are passed in registers, which registers are used for which function parameters, and whether the first function parameter passed on the stack is pushed first or last onto the stack
- how an application should make system calls to the operating system and, if the ABI specifies direct system calls rather than procedure calls to system call stubs, the system call numbers
- and in the case of a complete operating system ABI, the binary format of object files, program libraries and so on.

Ref: https://en.wikipedia.org/wiki/Application_binary_interface

45. Red Hat Fuse

Red Hat Fuse is an open source integration platform based on Apache Camel. It is a distributed integration platform that provides a standardized methodology, infrastructure, and tools to integrate services, microservices, and application components.

Red Hat Fuse is a distributed integration platform designed for agile integration with standalone, cloud, and Cloud-based integration deployment options so integration experts, application developers, and business users can independently develop connected solutions in the environment of their choosing. The unified platform lets users collaborate, business units self-serve, and organizations ensure governance.

Technology 
Red Hat Fuse supports Spring Boot, OSGi and Java EE for use in enterprise IT organizations. It has a pluggable architecture that allows individuals to use their preferred software services in a traditional service-oriented architecture (SOA) or a microservices-based architecture. Fuse components may be deployed on-prem or in public/private clouds.

Key features 
- Hybrid deployment - use Red Hat Fuse on-prem, in public/private clouds, or as a hosted service and have all integration infrastructure work seamlessly, allowing users to collaborate across the enterprise.
- Distributed infrastructure - integrations, built from predefined Enterprise Integration Patterns (EIPs) and over 2000 connectors, are deployed on container-native infrastructure to adapt easily and scale quickly.
- Low-code interface - tooling allows developers and non-technical users to drag and drop predefined services and integration patterns so business units can self-serve and continuously innovate.

Ref: https://en.wikipedia.org/wiki/Fuse_ESB

46. Apache Camel

Apache Camel is an open source framework for message-oriented middleware with a rule-based routing and mediation engine that provides a Java object-based implementation of the Enterprise Integration Patterns using an application programming interface (or declarative Java domain-specific language) to configure routing and mediation rules.

The domain-specific language means that Apache Camel can support type-safe smart completion of routing rules in an integrated development environment using regular Java code without large amounts of XML configuration files, though XML configuration inside Spring Framework is also supported.

Camel is often used with Apache ServiceMix, Apache ActiveMQ and Apache CXF in service-oriented architecture projects.

Tooling 
- Several Maven plugins are provided for validation and deployment.
- Graphical, Eclipse-based tooling is freely available from Red Hat. It provides graphical editing, debugging, and advanced validation.
- Eclipse-based tooling is also available from Talend.

Ref: https://en.wikipedia.org/wiki/Apache_Camel

47. Apache ActiveMQ

Apache ActiveMQ is an open source message broker written in Java together with a full Java Message Service (JMS) client. It provides "Enterprise Features", which in this case means fostering communication between more than one client or server. Supported clients include Java via JMS 1.1 as well as several other "cross language" clients. The communication is managed with features such as computer clustering and the ability to use any database as a JMS persistence provider, in addition to virtual-memory, cache, and journal persistence.
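For orientation, here is a minimal sketch of sending a message to an ActiveMQ queue through the JMS 1.1 API; the broker URL, queue name, and message text are illustrative.

    // Send a text message to an ActiveMQ queue over JMS 1.1 (names are illustrative).
    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueProducer {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("orders");
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("order #42 received");
            producer.send(message);
            connection.close();
        }
    }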

The ActiveMQ project was originally created by its founders from LogicBlaze in 2004, as an open source message broker, hosted by CodeHaus. The code and ActiveMQ trademark were donated to the Apache Software Foundation in 2007, where the founders continued to develop the codebase with the extended Apache community.

ActiveMQ employs several modes for high availability, including both file-system and database row-level locking mechanisms, sharing of the persistence store via a shared filesystem, or true replication using Apache ZooKeeper. A robust horizontal scaling mechanism called a Network of Brokers is also supported out of the box. In the enterprise, ActiveMQ is celebrated for its flexibility in configuration and its support for a relatively large number of transport protocols, including OpenWire, STOMP, MQTT, AMQP, REST, and WebSockets.

ActiveMQ is used in enterprise service bus implementations such as Apache ServiceMix and Mule. Other projects using ActiveMQ include Apache Camel and Apache CXF in SOA infrastructure projects.

Coinciding with the release of Apache ActiveMQ 5.3, the world's first results for the SPECjms2007 industry standard benchmark were announced. Four results were submitted to the SPEC and accepted for publication. The results cover different topologies to analyze the scalability of Apache ActiveMQ in two dimensions.

ActiveMQ is currently in major version 5, minor version 15. There is also a separate product called Apache ActiveMQ Artemis, a newer JMS broker based on the HornetQ code base previously owned by Red Hat, which brings the broker's JMS implementation up to the 2.0 specification.

Amazon Web Services offers a managed message broker service for Apache ActiveMQ called Amazon MQ.

Ref: https://en.wikipedia.org/wiki/Apache_ActiveMQ

48. YAML

YAML (a recursive acronym for "YAML Ain't Markup Language") is a human-readable data-serialization language. It is commonly used for configuration files and in applications where data is being stored or transmitted. YAML targets many of the same communications applications as Extensible Markup Language (XML) but has a minimal syntax which intentionally differs from SGML. It uses both Python-style indentation to indicate nesting and a more compact format that uses [] for lists and {} for maps, making YAML 1.2 a superset of JSON.

Custom data types are allowed, but YAML natively encodes scalars (such as strings, integers, and floats), lists, and associative arrays (also known as maps or dictionaries). These data types are based on the Perl programming language, though all commonly used high-level programming languages share very similar concepts. The colon-centered syntax, used for expressing key-value pairs, is inspired by electronic mail headers as defined in RFC 0822, and the document separator "---" is borrowed from MIME (RFC 2046). Escape sequences are reused from C, and whitespace wrapping for multi-line strings is inspired by HTML. Lists and hashes can contain nested lists and hashes, forming a tree structure; arbitrary graphs can be represented using YAML aliases (similar to XML in SOAP). YAML is intended to be read and written in streams, a feature inspired by SAX.
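As a small illustration, the sketch below parses a YAML document in Java using the SnakeYAML library (one common parser among many); the document's keys and values are purely illustrative.

    // Parse a YAML document with SnakeYAML; nested maps mirror the indentation structure.
    import java.util.Map;
    import org.yaml.snakeyaml.Yaml;

    public class YamlExample {
        public static void main(String[] args) {
            String doc =
                    "server:\n" +
                    "  host: example.com\n" +
                    "  ports: [8080, 8443]\n" +  // compact (flow) list syntax, as in JSON
                    "  tls: true\n";
            Map<String, Object> data = new Yaml().load(doc);
            System.out.println(data); // {server={host=example.com, ports=[8080, 8443], tls=true}}
        }
    }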

Support for reading and writing YAML is available for many programming languages. Some source-code editors such as Emacs and various integrated development environments have features that make editing YAML easier, such as folding up nested structures or automatically highlighting syntax errors.

The official recommended filename extension for YAML files has been .yaml since 2006.

In computer science, in the context of data storage, serialization (or serialisation) is the process of translating data structures or object state into a format that can be stored (for example, in a file or memory buffer) or transmitted (for example, across a network connection link) and reconstructed later (possibly in a different computer environment). When the resulting series of bits is reread according to the serialization format, it can be used to create a semantically identical clone of the original object. For many complex objects, such as those that make extensive use of references, this process is not straightforward. Serialization of object-oriented objects does not include any of their associated methods with which they were previously linked. This process of serializing an object is also called marshalling an object in some situations. The opposite operation, extracting a data structure from a series of bytes, is deserialization (also called unmarshalling).
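To make the round trip concrete, here is a minimal sketch using Java's built-in object serialization; the Point class and its fields are illustrative.

    // Serialize an object to bytes and deserialize a semantically identical clone.
    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;

    public class SerializationDemo {
        static class Point implements Serializable {
            private static final long serialVersionUID = 1L;
            int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        public static void main(String[] args) throws Exception {
            Point original = new Point(3, 4);

            // Serialization (marshalling): translate object state into a byte stream.
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);
            }

            // Deserialization (unmarshalling): reconstruct the object from the bytes.
            try (ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
                Point copy = (Point) in.readObject();
                System.out.println(copy.x + "," + copy.y); // prints 3,4
            }
        }
    }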

Ref 1: https://en.wikipedia.org/wiki/YAML
Ref 2: https://en.wikipedia.org/wiki/Serialization

49. Publish–subscribe pattern

In software architecture, publish–subscribe is a messaging pattern where senders of messages, called publishers, do not program the messages to be sent directly to specific receivers, called subscribers, but instead categorize published messages into classes without knowledge of which subscribers, if any, there may be. Similarly, subscribers express interest in one or more classes and only receive messages that are of interest, without knowledge of which publishers, if any, there are.

Publish–subscribe is a sibling of the message queue paradigm, and is typically one part of a larger message-oriented middleware system. Most messaging systems support both the pub/sub and message queue models in their API, e.g. Java Message Service (JMS).

This pattern provides greater network scalability and a more dynamic network topology, at the cost of reduced flexibility to modify the publisher and the structure of the published data.

Message filtering 
In the publish-subscribe model, subscribers typically receive only a subset of the total messages published. The process of selecting messages for reception and processing is called filtering. There are two common forms of filtering: topic-based and content-based.

In a topic-based system, messages are published to "topics" or named logical channels. Subscribers in a topic-based system will receive all messages published to the topics to which they subscribe. The publisher is responsible for defining the topics to which subscribers can subscribe.

In a content-based system, messages are only delivered to a subscriber if the attributes or content of those messages matches constraints defined by the subscriber. The subscriber is responsible for classifying the messages.

Some systems support a hybrid of the two; publishers post messages to a topic while subscribers register content-based subscriptions to one or more topics.
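The sketch below shows topic-based publish/subscribe through the JMS API (using ActiveMQ as the broker, as discussed above), with a message selector standing in for a simple content-based filter; the broker URL, topic name, and property names are illustrative.

    // Topic-based pub/sub over JMS; the selector filters on a message property.
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class PubSubExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = session.createTopic("price.updates");

            // Subscriber registers interest first; the selector keeps only ACME updates.
            MessageConsumer consumer = session.createConsumer(topic, "symbol = 'ACME'");

            // Publisher sends to the topic without knowing who, if anyone, is subscribed.
            MessageProducer producer = session.createProducer(topic);
            TextMessage message = session.createTextMessage("ACME 123.45");
            message.setStringProperty("symbol", "ACME");
            producer.send(message);

            // Non-durable subscribers only see messages published while they are subscribed.
            TextMessage received = (TextMessage) consumer.receive(2000);
            System.out.println(received != null ? received.getText() : "no message");
            connection.close();
        }
    }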

Ref: https://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern

50. Cloud Computing

Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. The term is generally used to describe data centers available to many users over the Internet. Large clouds, predominant today, often have functions distributed over multiple locations from central servers. If the connection to the user is relatively close, it may be designated an edge server.

Clouds may be limited to a single organization (enterprise clouds), or be available to many organizations (public cloud).

Cloud computing relies on sharing of resources to achieve coherence and economies of scale.

Advocates of public and hybrid clouds note that cloud computing allows companies to avoid or minimize up-front IT infrastructure costs. Proponents also claim that cloud computing allows enterprises to get their applications up and running faster, with improved manageability and less maintenance, and that it enables IT teams to more rapidly adjust resources to meet fluctuating and unpredictable demand, providing the burst computing capability: high computing power at certain periods of peak demand.

Cloud providers typically use a "pay-as-you-go" model, which can lead to unexpected operating expenses if administrators are not familiar with cloud-pricing models.

The availability of high-capacity networks, low-cost computers and storage devices, as well as the widespread adoption of hardware virtualization, service-oriented architecture, and autonomic and utility computing, has led to growth in cloud computing. By 2019, Linux was the most widely used operating system, including in Microsoft's offerings, and is thus described as dominant. The cloud service provider (CSP) monitors, maintains, and gathers data about the firewalls, intrusion detection and/or prevention systems, and data flows within the network.

Service models 

# Infrastructure as a service (IaaS)

"Infrastructure as a service" (IaaS) refers to online services that provide high-level APIs used to dereference various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. Containerisation offers higher performance than virtualization, because there is no hypervisor overhead. Also, container capacity auto-scales dynamically with computing load, which eliminates the problem of over-provisioning and enables usage-based billing. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.

The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."

IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.

# Platform as a service (PaaS)

The NIST's definition of cloud computing defines Platform as a Service as:

The capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages, libraries, services, and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, or storage, but has control over the deployed applications and possibly configuration settings for the application-hosting environment.

PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, programming-language execution environment, database, and web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying computer and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.

Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute, and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration and data-management products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.

# Software as a service (SaaS)

The NIST's definition of cloud computing defines Software as a Service as:

The capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through either a thin client interface, such as a web browser (e.g., web-based email), or a program interface. The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.

In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. It may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result, there could be unauthorized access to the data. Examples of applications offered as SaaS are games and productivity software like Google Docs and Word Online. SaaS applications may be integrated with cloud storage or file-hosting services, which is the case with Google Docs being integrated with Google Drive and Word Online being integrated with OneDrive.

# Mobile "backend" as a service (MBaaS)

In the mobile "backend" as a service (m) model, also known as backend as a service (BaaS), web app and mobile app developers are provided with a way to link their applications to cloud storage and cloud computing services with application programming interfaces (APIs) exposed to their applications and custom software development kits (SDKs). Services include user management, push notifications, integration with social networking services and more. This is a relatively recent model in cloud computing, with most BaaS startups dating from 2011 or later but trends indicate that these services are gaining significant mainstream traction with enterprise consumers.

# Serverless computing

Serverless computing is a cloud computing code execution model in which the cloud provider fully manages starting and stopping virtual machines as necessary to serve requests, and requests are billed by an abstract measure of the resources required to satisfy the request, rather than per virtual machine, per hour. Despite the name, it does not actually involve running code without servers. Serverless computing is so named because the business or person that owns the system does not have to purchase, rent or provision servers or virtual machines for the back-end code to run on.

# Function as a service (FaaS)

Function as a service (FaaS) is a service-hosted remote procedure call that leverages serverless computing to enable the deployment of individual functions in the cloud that run in response to events. FaaS is included under the broader term serverless computing, but the terms may also be used interchangeably.
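As one concrete FaaS illustration, the sketch below uses AWS Lambda's Java handler interface to define a single function that the platform would invoke in response to an event; the class name and greeting logic are illustrative, and Lambda is only one of several FaaS offerings.

    // A single cloud function: the FaaS platform invokes handleRequest per event;
    // the function's owner provisions and manages no servers.
    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    public class HelloFunction implements RequestHandler<String, String> {
        @Override
        public String handleRequest(String name, Context context) {
            return "Hello, " + name;
        }
    }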
