What is DevOps?
DevOps can be viewed from three angles: people, processes, and technologies. Seen through the prism of human relationships, DevOps is a philosophy that seeks to break down traditional organizational silos, especially the strict separation between teams engaged in application development and teams engaged in infrastructure management.
Lastly, DevOps involves applying a variety of technologies that enable more efficient operation and automation wherever possible. Git is used to manage source code, Gerrit for code review, and tools such as Puppet, Chef, Ansible, and SaltStack for configuration management. The technological basis for the DevOps model also includes VMware vRealize Code Stream for application release, as well as vRealize Automation and vRealize Operations for resource allocation and infrastructure monitoring.
Common practices that DevOps uses
Below are some of the most important practices that DevOps uses.
Automation
DevOps relies heavily on automation and on the tools that make it possible. If a task is performed frequently, in the same way, and produces identical results, there is no reason to waste time doing it manually each time. Automating it saves time and also avoids possible human error. Automation matters just as much on the development side. One of the most common tools is Vagrant, which makes it quick and easy to set up a development environment: it is of great value for every developer to get a ready-made image with a preinstalled system and all the necessary components, allowing them to focus on development within 15 minutes. Another tool that is standard among DevOps engineers is Jenkins, which enables the creation of so-called jobs for continuous integration, continuous delivery, continuous testing, and the like.
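A Jenkins job essentially runs a fixed sequence of stages and stops at the first failure. The following is a minimal sketch of that idea in Python; the stage names and functions are invented for illustration and do not reflect any real Jenkins API.

```python
# A minimal sketch of a CI "job" in the spirit of a Jenkins pipeline:
# named stages run in order, and the job stops at the first failing stage.

def run_job(stages):
    """Run (name, callable) stages in order.

    Returns (status, completed_stage_names)."""
    completed = []
    for name, step in stages:
        try:
            step()
        except Exception:
            return "FAILED at " + name, completed
        completed.append(name)
    return "SUCCESS", completed

def failing_tests():
    # Stand-in for a test stage that finds a defect.
    raise AssertionError("1 of 12 tests failed")

if __name__ == "__main__":
    demo = [
        ("checkout", lambda: None),   # e.g. fetch the source from Git
        ("build", lambda: None),      # e.g. compile and package
        ("test", failing_tests),      # automated tests run on every build
        ("deploy", lambda: None),     # never reached when the tests fail
    ]
    print(run_job(demo))  # -> ('FAILED at test', ['checkout', 'build'])
```

Because a failing stage aborts the job, broken code never reaches the deploy stage, which is the property the automation exists to guarantee.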
Continuous integration
Before continuous integration, development teams would write large amounts of code over three to four months and only then put it together for a release. The different versions of the code would diverge so much, and contain so many changes, that the actual integration step could take months. This unproductive process was characteristic of Waterfall development, which divides project activities into linear, sequential phases, each phase depending on the results of the previous one and corresponding to a task specialization.
Continuous integration is one of the most important practices in DevOps, and it essentially means committing, pushing, and merging the code we work on as often as possible. For each task, the developer creates a separate, so-called feature branch which, after the desired changes and at least basic testing on the developer's part, is first pushed to the remote repository, to make the branch visible to others, and then merged into the larger codebase. In agile development this cycle is repeated several times a day, depending on the volume of work in a particular task. It means that every developer needs to actively monitor what the rest of the team is doing in order to integrate their code seamlessly with the others; on the other hand, this approach significantly improves team communication and allows buggy code to be identified and corrected at the earliest possible stage.
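The feature-branch cycle above can be sketched by driving the git CLI from Python. The repository, branch name (`feature/login`), and file contents here are all invented for illustration, and the sketch merges locally instead of pushing, since it has no remote.

```python
# A sketch of one feature-branch round trip in a throwaway repository:
# branch off, commit a change, merge back into the base branch.
import pathlib
import subprocess
import tempfile

def git(repo, *args):
    """Run one git command in `repo` with a throwaway commit identity."""
    done = subprocess.run(
        ["git", "-c", "user.name=Dev", "-c", "user.email=dev@example.com", *args],
        cwd=repo, check=True, capture_output=True, text=True,
    )
    return done.stdout.strip()

def feature_branch_cycle():
    """Create a repo, do one feature-branch cycle, return the merged file."""
    with tempfile.TemporaryDirectory() as tmp:
        repo = pathlib.Path(tmp)
        git(repo, "init", "--quiet")
        (repo / "app.txt").write_text("base application\n")
        git(repo, "add", "app.txt")
        git(repo, "commit", "-q", "-m", "initial commit")
        base = git(repo, "rev-parse", "--abbrev-ref", "HEAD")  # main or master

        # 1. a separate feature branch for the task
        git(repo, "checkout", "-q", "-b", "feature/login")
        (repo / "app.txt").write_text("base application\nlogin feature\n")
        git(repo, "commit", "-q", "-am", "add login feature")

        # 2. in a real team the branch is pushed to the remote here so
        #    others can see it; then it is merged into the base branch
        git(repo, "checkout", "-q", base)
        git(repo, "merge", "-q", "feature/login")
        return (repo / "app.txt").read_text()

if __name__ == "__main__":
    print(feature_branch_cycle())
```

In practice the merge usually happens through a pull request or a review tool such as Gerrit rather than a direct local merge.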
Continuous testing
Scrum dictates that at least one week before each release is scheduled for testing the active features. However, as with the previous practices, testing should be a continuous collaboration between the developer and the QA engineer. Testing in a DevOps context also does not mean classic click-by-click testing: automation is the key. There is, of course, nothing wrong with manual testing, and every team should have QA people who continually go through the application and verify that it behaves according to the requirements. More recent practice, however, also includes test-driven development, that is, automated tests that run through a tool such as Jenkins, usually after every build. This makes it possible to identify a critical defect immediately and eliminate it from the codebase before any major damage occurs. And it is not only new features that are tested: load, stress, and other automated tests show how an application behaves when exposed to particular circumstances.
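The test-driven style mentioned above can be sketched with Python's standard `unittest` module. The `slugify` function is an invented example, not something from the original text; in TDD its tests would be written first, fail, and then drive the implementation, with a CI tool such as Jenkins re-running them after every build.

```python
# A minimal test-driven-development sketch: the tests below are written
# first and run automatically; the function exists to make them pass.
import re
import unittest

def slugify(title):
    """Turn a page title into a URL slug: lower-case, hyphen-separated."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

class SlugifyTest(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello, World!"), "hello-world")

    def test_only_punctuation(self):
        self.assertEqual(slugify("!!!"), "")

if __name__ == "__main__":
    # exit=False so the run reports results without terminating the process
    unittest.main(argv=["tdd_sketch"], exit=False, verbosity=2)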
Continuous delivery
This practice builds on the previous two and represents the deployment of stable and fully tested code to production, with the goal of keeping the time between two deployments as short as possible. That interval usually depends on the individual product's release plan. Typically, delivery happens at the end of each sprint, which means once a month or once every month and a half.
Continuous monitoring
After deployment and commissioning, the next step is monitoring. Again, it all comes down to automation and tools: custom tools, open-source tools, proprietary tools. The bottom line is that you have to constantly monitor the production environment and have measurable results that you can use to prevent many unwanted events, or to improve the environment in the next cycle. Monitoring is not solely the Ops team's responsibility, however: each team member can watch the production environment and, based on what it shows, conclude how the environment is behaving and what it is doing at that moment. For example, with a web application you can easily monitor the performance of all currently active servers and, if necessary, scale up or down to maintain the desired responsiveness at minimal cost. In addition, alerting should always be in place to notify the Ops team of downtime or problems at key points in the system (e.g. deadlock detection).
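The scale-up/scale-down and alerting decisions described above can be sketched as a single evaluation pass over server metrics. The thresholds, metric names, and action labels here are invented for illustration; a real system would use a monitoring stack rather than a hand-rolled function.

```python
# A sketch of one monitoring pass: given per-server CPU loads, decide
# whether to scale up, scale down, or stay steady, and flag servers
# that stopped reporting (possible downtime -> alert the Ops team).

def evaluate(cpu_loads, high=0.80, low=0.20):
    """cpu_loads: per-server CPU load in [0.0, 1.0], or None if a server
    stopped reporting. Returns (action, indexes_of_silent_servers)."""
    alerts = [i for i, load in enumerate(cpu_loads) if load is None]
    live = [load for load in cpu_loads if load is not None]
    if not live:
        return "alert-only", alerts      # nothing is reporting at all
    avg = sum(live) / len(live)
    if avg > high:
        return "scale-up", alerts        # add capacity to keep responsiveness
    if avg < low and len(live) > 1:
        return "scale-down", alerts      # shed capacity to save cost
    return "steady", alerts

if __name__ == "__main__":
    # two busy servers, one silent server
    print(evaluate([0.91, 0.88, None]))  # -> ('scale-up', [2])
```

The point of the sketch is the shape of the loop: measurable inputs, an automated decision, and an alert path that is independent of the scaling decision.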
Differences between Hardware and Software DevOps
• Software is easier to change than hardware; the cost of change is much higher for hardware than for software.
• Software products are developed through multiple releases, by adding new features and rewriting existing logic to support them. Hardware products consist of physical components that cannot be “rebuilt” after production and to which capabilities requiring hardware changes cannot be added.
• Designs for new hardware are often based on earlier-generation products, but usually rely on new-generation components that are not yet available.
• Hardware designs are limited by the need to use standard parts.
• Specialized hardware components can have a much longer acquisition time than software.
• Hardware design governs architectural decisions. More architectural work has to be done in advance compared to software products.
• Software development costs are relatively flat over time, whereas hardware development costs rise rapidly toward the end of the development cycle. Software testing usually requires the development of thousands of test cases; hardware testing involves far fewer tests.
• Software testing is done by specialist quality assurance (QA) engineers, while hardware testing is usually done by the product development engineers.
• Hardware must be designed and tested to work under specific timing and environmental conditions, which is not the case for software.