Top 128 DevOps Questions and Answers for Job Interviews:
1. How are Amazon Web Services and DevOps related?
Answer: Amazon Web Services (AWS) provides a set of services that help the user practice DevOps at his/her company, services that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.
2. What role does a DevOps engineer play in any organization?
Answer: No formal career track has been devised for becoming a DevOps engineer. Such engineers are usually either developers who get interested in deployment and network operations, or system administrators who have a passion for scripting and coding and hence move into the development side, where they can improve the planning of testing and deployment.
3. How does DevOps operate with Cloud Computing?
Answer: DevOps and Cloud Computing can be considered elements of the same concept: development and operations practices are inseparable and hence universally relevant. Cloud computing, Agile development, and DevOps are interlocking parts of a strategy for transforming IT into an enabler of business adaptability.
4. Mention the reasons for which AWS is used in combination with DevOps.
Answer: There are many advantages of using AWS in combination with DevOps:
• Built for Scale – The user can manage a single instance or scale to thousands using AWS services. These services help him/her make the most of flexible compute resources by simplifying provisioning, configuration, and scaling.
• Programmable – The user has the option to use each service via the AWS Command Line Interface or through APIs and SDKs. The user can also model and provision AWS resources and his/her entire AWS infrastructure using declarative AWS CloudFormation templates.
• Get Started Fast – Each AWS service is ready to use if the user already has an AWS account. There is no setup required or any additional software to install.
• Fully Managed Services – The AWS services help the user take advantage of AWS resources more quickly. The user can worry less about setting up, installing, and operating infrastructure independently. This lets him/her focus on the core product and improves the odds of success.
• Large Partner Ecosystem – AWS supports a large ecosystem of partners which integrate with and extend AWS services. The user can now implement the preferred third-party and open source tools with AWS to build an end-to-end solution.
• Pay-As-You-Go – With AWS, the user purchases services as he/she needs them and only for the period he/she plans to use them. AWS pricing has no upfront fees, termination penalties, or long-term contracts. The AWS Free Tier helps the user get started with AWS.
• Automation – AWS helps the user use automation so that he/she can build faster and more efficiently. Using AWS services, the user can automate manual tasks or processes such as deployments, development & test workflows, container management, and configuration management.
• Secure – Use AWS Identity and Access Management (IAM) to set user permissions and policies. This gives the user granular control over who can access the concerned resources and how they access those resources (see the sketch after this list).
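The sketch referenced above is a minimal AWS CLI illustration of the IAM point; the user name and the managed policy chosen here are illustrative assumptions, not prescriptions:
# Create an IAM user and grant read-only access to Amazon S3
aws iam create-user --user-name demo-dev
aws iam attach-user-policy --user-name demo-dev --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess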
5. Elaborate on DevOps Tooling with reference to AWS.
Answer: AWS provides services that help the user practice DevOps at his/her company and that are built first for use with AWS. These tools automate manual tasks, help teams manage complex environments at scale, and keep engineers in control of the high velocity that is enabled by DevOps.
6. How can the user handle Continuous Delivery and Continuous Integration in AWS DevOps?
Answer: The AWS Developer Tools help the user securely store and version the application’s source code and automatically build, test, and deploy the application to AWS or any on-premises environment. One can start with AWS CodePipeline to build a continuous integration or continuous delivery workflow that uses AWS CodeBuild, AWS CodeDeploy, and other tools, or use each service separately.
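As a hedged illustration, assuming a pipeline named demo-pipeline has already been created, the AWS CLI can trigger and inspect a release:
# Start a release of the pipeline and report the state of its stages
aws codepipeline start-pipeline-execution --name demo-pipeline
aws codepipeline get-pipeline-state --name demo-pipeline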
7. Do you know anything about AWS CodePipeline in AWS DevOps?
Answer: AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. CodePipeline builds, tests, and deploys the user’s code every time there is a code change, based on the release process models the user defines. This enables the user to rapidly and reliably deliver features and updates.
8. What do you know about AWS CodeBuild in AWS DevOps?
Answer: AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, the user doesn’t need to provision, manage, and scale his/her own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so that builds are not left waiting in a queue.
9. What do you know about AWS CodeDeploy in AWS DevOps?
Answer: AWS CodeDeploy is a service that automates code deployments to any instance, including Amazon EC2 instances and on-premises servers. AWS CodeDeploy makes it easier for the user to rapidly release new features, helps the user avoid downtime during application deployment, and handles the complexity of updating the concerned applications.
10. What do you know about AWS CodeStar in AWS DevOps?
Answer: AWS CodeStar enables the user to quickly develop, build, and deploy applications on AWS. AWS CodeStar provides a unified user interface, enabling the user to easily manage all the software development activities in one place. With AWS CodeStar, the user can easily set up the entire continuous delivery toolchain in minutes, allowing him/her to start releasing code faster.
11. How does the Instacart make efficient use of AWS DevOps?
Answer: Instacart is a grocery-delivery company that makes use of AWS CodeDeploy to automate deployments for both its front-end and back-end services. Using AWS CodeDeploy has permitted Instacart’s developers to focus on their product and worry less about deployment operations.
12. How does the lululemon athletica make use of AWS DevOps?
Answer: lululemon athletica uses a variety of AWS services to engineer a fully automated continuous integration and delivery system. Using AWS CodePipeline, lululemon athletica deploys artifacts distributed via Amazon S3. From this stage, the artifacts are deployed to AWS Elastic Beanstalk.
13. What do you know about the Amazon Elastic Container in AWS DevOps?
Answer: The Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container management service that supports Docker containers and allows the user to easily run applications on a managed cluster of Amazon EC2 instances.
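A minimal sketch with the AWS CLI, assuming the cluster name demo-cluster is free to use in the account:
# Create an ECS cluster and confirm that it is listed
aws ecs create-cluster --cluster-name demo-cluster
aws ecs list-clusters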
14. Do you know anything about the AWS Lambda in AWS DevOps?
Answer: AWS Lambda lets the user run his/her code without provisioning or managing servers. With Lambda, the user can run code for virtually any type of application or backend service – all with zero administration. The user just needs to upload the concerned code, and Lambda takes care of everything required to run and scale the code with high availability. The user pays only for the compute time he/she consumes – there is no charge when the user’s code is not running.
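The following sketch shows that upload-and-run flow with the AWS CLI; the function name, runtime, handler, and IAM role ARN are placeholders the user would replace with his/her own:
# Package the handler and create the function
zip function.zip app.py
aws lambda create-function --function-name demo-fn --runtime python3.9 --role arn:aws:iam::123456789012:role/lambda-exec-role --handler app.handler --zip-file fileb://function.zip
# Invoke the function and store the response
aws lambda invoke --function-name demo-fn response.json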
15. Talk about the AWS Developer Tools.
Answer: The AWS Developer Tools is a set of services designed to enable developers and IT operations professionals practicing DevOps to rapidly and safely deliver software. Together, these services help the user securely store and version-control the application’s source code and automatically build, test, and deploy the application to AWS or an on-premises environment. The user can use AWS CodePipeline to orchestrate an end-to-end software release workflow using these services and third-party tools, or integrate each service independently with his/her existing tools.
16. Do you know anything about CodeCommit in AWS DevOps?
Answer: AWS CodeCommit is a fully managed source control service that makes it easy for companies to host secure and highly scalable private Git repositories. The special feature of CodeCommit is that it eliminates the need to operate one’s own source control system or worry about scaling its infrastructure. The user can use CodeCommit to securely store anything from source code to binaries, and it works seamlessly with the user’s existing Git tools.
17. Mention the advantages of AWS CodeBuild when used in AWS DevOps.
Answer: AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, the user doesn’t need to provision, manage, and scale his/her own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so that builds are not left waiting in a queue. The user can get started quickly by using prepackaged build environments, or he/she can create custom build environments that use user-defined build tools. With CodeBuild, the user is charged by the minute for the compute resources he/she uses.
AWS CodeBuild Benefits:
• Pay as You Go – With AWS CodeBuild, the user is charged based on the number of minutes it takes to complete the build of his/her code.
• Extensible – The user can bring his/her own build tools and programming runtimes to use with AWS CodeBuild by creating customized build environments in addition to the prepackaged build tools and runtimes supported by CodeBuild.
• Fully Managed Build Service – AWS CodeBuild eliminates the need to set up, patch, update, and manage user-defined build servers and software. There is no software to install or manage.
• Continuous Scaling – AWS CodeBuild scales automatically to meet the build volume of the user’s code. It immediately processes each build the user submits and can run separate builds concurrently, which means all the user’s builds are not left waiting in a queue.
• Enables Continuous Integration and Delivery – AWS CodeBuild belongs to a family of AWS Code Services, which the user can use to create complete, automated software release workflows for continuous integration and delivery (CI/CD). The user can also integrate CodeBuild into the user’s existing CI/CD workflow.
• Secure – With AWS CodeBuild, the user can build artifacts that are encrypted with customer-specific keys managed by the AWS Key Management Service (KMS). CodeBuild is integrated with AWS Identity and Access Management (IAM), so that the user can assign user-specific permissions to all concerned build projects. A short CLI sketch follows this list.
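The CLI sketch referenced above, assuming a build project named demo-build-project already exists in the account:
# Start a build and list the project's build history
aws codebuild start-build --project-name demo-build-project
aws codebuild list-builds-for-project --project-name demo-build-project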
18. Do you know anything about the Amazon EC2 in AWS DevOps?
Answer: The Amazon Elastic Compute Cloud, commonly known as Amazon EC2, is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers.
19. What do you know about Amazon S3 in AWS DevOps?
Answer: The Amazon Simple Storage Service (Amazon S3) is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web.
20. Tell us something about Amazon RDS Services in AWS DevOps.
Answer: The Amazon Relational Database Service, popular as the Amazon RDS, makes it easier for the user to set up, operate, and scale a relational database in the cloud.
21. What do you know about Amazon QuickSight in AWS DevOps?
Answer: The Amazon QuickSight is a fast, cloud-powered business analytics service that makes it easier for the user to build visualizations, perform ad-hoc analysis, and quickly get business insights from all data concerned.
22. What is AWS IoT in AWS DevOps?
Answer: The AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices.
23. Mention the benefits of using AWS CodeDeploy in AWS DevOps.
Answer: AWS CodeDeploy is a service that automates software deployments to a variety of compute services including Amazon EC2, AWS Lambda, and instances running on-premises. AWS CodeDeploy makes it easier for the user to rapidly release new features, helps him/her avoid downtime during application deployment, and handles the complexity of updating the user’s applications.
AWS CodeDeploy Benefits:
• Centralized Control – The AWS CodeDeploy allows the user to easily launch and track the status of the user’s application deployments through the AWS Management Console or the AWS CLI. CodeDeploy gives the user a detailed report allowing him/her to view when and to where each application revision was deployed.
• Easy To Adopt – The AWS CodeDeploy is platform and language agnostic, works with any application, and provides the same experience whether the user is deploying to Amazon EC2 or AWS Lambda. The user can easily reuse the existing setup code. CodeDeploy can also integrate with the existing software release process or continuous delivery toolchain (e.g., AWS CodePipeline, GitHub, Jenkins).
• Automated Deployments – AWS CodeDeploy fully automates software deployments, allowing the user to deploy reliably and rapidly. The user can consistently deploy his/her application across development, test, and production environments, whether deploying to Amazon EC2, AWS Lambda, or instances running on-premises. The service scales with the user’s infrastructure so that the user can deploy to one Lambda function or thousands of EC2 instances.
• Minimize Downtime – The AWS CodeDeploy helps maximize the user’s application availability during the software deployment process. It introduces changes incrementally and tracks application health according to configurable rules. Software deployments can easily be stopped and rolled back if there are errors.
24. How can a user efficiently use CodeBuild to automate the release process?
Answer: CodeBuild is integrated with AWS CodePipeline. With this, the user can add a build action and set up a continuous integration and continuous delivery process that runs in the cloud.
25. What do you know about a ‘build project’ in AWS DevOps?
Answer: A build project is used to define how CodeBuild is supposed to run a build. It includes information such as where to get the source code, which build environment to use, the build commands to run, and where to store the build output. A build environment is the combination of operating system, programming language runtime, and tools used by CodeBuild to run a build.
26. How does a user configure a ‘build project’ in AWS DevOps?
Answer: A build project can be configured through the console or the AWS CLI. The user has to specify the source repository location, the runtime environment, the build commands, the IAM role assumed by the container, and the compute class required to run the build. Optionally, the user can also specify the build commands in a buildspec.yml file.
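A hedged AWS CLI sketch of this configuration step; the file name project.json is an illustrative choice:
# Generate a skeleton definition, fill in source, environment,
# artifacts, and service role, then create the build project
aws codebuild create-project --generate-cli-skeleton > project.json
aws codebuild create-project --cli-input-json file://project.json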
27. Mention the source repositories supported by the CodeBuild in AWS DevOps.
Answer: CodeBuild can connect to AWS CodeCommit, Amazon S3, and GitHub to pull source code for builds.
28. Mention the programming frameworks supported by CodeBuild in AWS DevOps.
Answer: The CodeBuild provides preconfigured environments for supported versions of Java, Ruby, Python, Go, Node.js, Android, and Docker. The user is given the freedom to customize his/her own environment by creating a Docker image and uploading it to the Amazon EC2 Container Registry or the Docker Hub registry. The user can then reference this custom image in his/her build project.
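A sketch of publishing such a custom image, assuming a recent AWS CLI; the account ID 123456789012, the us-east-1 region, and the repository name are placeholders:
# Build the custom build-environment image
docker build -t my-build-env .
# Create an ECR repository and log Docker in to it
aws ecr create-repository --repository-name my-build-env
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# Tag and push the image so a build project can reference it
docker tag my-build-env:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-build-env:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-build-env:latest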
29. Mention the different processes that take place when a build is run in CodeBuild in AWS DevOps.
Answer: CodeBuild creates a temporary compute container of the class defined in the build project, loads it with the specified runtime environment, downloads the source code, executes the commands configured in the project, uploads the generated artifact to an S3 bucket, and then destroys the compute container. During the build, CodeBuild streams the build output to the service console and Amazon CloudWatch Logs.
30. How is a user supposed to set up the first build in CodeBuild in AWS DevOps?
Answer: When setting up the first build in CodeBuild in AWS DevOps, the user is supposed to sign in to the AWS Management Console, create a build project, and finally, run a build.
31. How is the user supposed to make use of CodeBuild with Jenkins in AWS DevOps?
Answer: The CodeBuild Plugin for Jenkins can be used to integrate CodeBuild into Jenkins jobs. The build jobs are sent to CodeBuild, eliminating the need for provisioning and managing the Jenkins worker nodes.
32. How can a user view past build results in AWS CodeBuild?
Answer: The user can access his/her past build results through the console or the API. The results include outcome (success or failure), build duration, output artifact location, and log location.
33. How can the user debug a past build failure in AWS CodeBuild?
Answer: The user can debug a build by inspecting the detailed logs generated during the build run.
34. Mention the various types of applications that a user can build using the AWS CodeStar.
Answer: CodeStar can be used for building web applications, web services, and more. The applications developed using CodeStar run on Amazon EC2, AWS Elastic Beanstalk, or AWS Lambda. A number of project templates are available in several different programming languages, including Java, Node.js (JavaScript), PHP, Python, and Ruby.
35. How does the user add, remove or change users for any AWS CodeStar projects?
Answer: The user can add, change, or remove users for his/her CodeStar project through the “Team” section of the CodeStar console. The user can choose to grant the users Owner, Contributor, or Viewer permissions, and can remove users or change their roles at any time.
36. What is the similarity between users of AWS CodeStar and IAM?
Answer: All CodeStar users are IAM users that are managed by CodeStar to provide pre-built, role-based access policies across the user’s development environment. Because CodeStar users are built on IAM, the user still gets the administrative benefits of IAM. For example, if the user adds an existing IAM user to a CodeStar project, the existing global account policies in IAM are still enforced.
37. Can a user work on his/her AWS CodeStar projects directly from an IDE?
Answer: This is one of the greatest advantages for AWS CodeStar users. They can work on their projects directly from an IDE by installing the AWS Toolkit for Eclipse or Visual Studio. By doing so, the user gains the ability to easily configure his/her local development environment to work with CodeStar projects. Once the toolkit has been installed, developers can select from a list of available CodeStar projects and have their development tooling automatically configured to clone and check out their project’s source code, all from within their IDE.
38. How can a user configure his/her project dashboard?
Answer: Project dashboards can be configured to show the tiles the user wants, where he/she wants them. To add or remove tiles, the user just has to click on the “Tiles” drop-down on his/her project dashboard. To change the layout of the project dashboard, he/she has to drag the tile to the desired position.
39. Do you know anything about third party integrations that can be used with AWS CodeStar?
Answer: AWS CodeStar works with Atlassian JIRA to integrate issue management with the user’s projects.
40. Can a user make use of AWS CodeStar to help manage his/her existing AWS applications?
Answer: Unfortunately, such a thing cannot be accomplished using the features of AWS CodeStar. AWS CodeStar helps its customers to quickly start new software projects on AWS. Each project includes development tools, including AWS CodePipeline, AWS CodeCommit, AWS CodeBuild, and AWS CodeDeploy, that can be used on their own and with existing AWS applications.
41. Why has AWS DevOps become so popular?
Answer: Software and the Internet have transformed the world and its industries, from shopping to entertainment to banking. Software no longer just supports a business; it has slowly evolved into an integral component of every part of a business. Companies interact with their customers through software delivered as online services or applications and on all sorts of devices. They also use software to increase operational efficiencies by transforming every part of the value chain, such as logistics, communications, and operations. Just as physical goods companies transformed how they design, build, and deliver products using industrial automation throughout the 20th century, companies in today’s world must transform how they build and deliver software. This is where AWS DevOps comes in.
42. How is a user supposed to adopt an AWS DevOps model?
Answer: Transitioning to DevOps requires a change in culture and mindset. In its simplest form, DevOps is about eliminating the barriers between two conventionally siloed teams, development and operations. In some organizations, there may not even be separate development and operations teams, and engineers might have to do both. With DevOps, the two teams work together to optimize both the productivity of developers and the reliability of operations. The teams strive to communicate frequently, increase efficiencies, and improve the quality of the services they provide to customers. They take full ownership of their services, often beyond where their stated roles or titles have traditionally been scoped, by thinking about the end customer’s needs and how they can contribute to solving them. Quality assurance and security teams may also become tightly integrated with these teams. Organizations using a DevOps model, regardless of their organizational structure, have teams that view the entire development and infrastructure lifecycle as part of their responsibilities.
43. Mention some of the DevOps Practices.
Answer: There are a number of practices that help organizations innovate faster by automating and streamlining the software development and infrastructure management processes. Most of these practices are accomplished with proper tooling:
• One of the fundamental practices is to perform very frequent but small updates. This is how organizations innovate faster for their customers.
• These updates are usually more incremental in nature than the occasional updates performed under traditional release practices.
• Frequent but small updates make each deployment less risky. These updates help the collaborating teams address bugs faster because teams can identify the last deployment that caused the error.
• Although the cadence and size of updates will vary, the organizations using a DevOps model deploy updates more often than organizations using traditional software development practices.
• Organizations might also make use of a microservices architecture to make their applications more flexible and enable quicker innovation. The microservices architecture decouples large, complex systems into simple, independent projects.
• Applications are conveniently broken down into many individual components/services, with each service scoped to a single purpose or function and operated independently of its peer services and the application as a whole.
• This architecture reduces the coordination overhead of updating applications, and when each service is paired with small, agile teams who take ownership of each service, organizations can move more quickly.
However, the combination of microservices and increased release frequency leads to significantly more deployments which can present a variety of operational challenges. Thus, DevOps practices like continuous integration and continuous delivery solve these issues and let organizations deliver rapidly in a safe and reliable manner. Infrastructure automation practices, like infrastructure as code and configuration management, help to keep the computing resources elastic and responsive to frequent changes. Additionally, the use of monitoring and logging helps engineers track the performance of applications and infrastructure so they can react quickly to problems. Overall, these practices help organizations deliver faster and more reliable updates to their customers.
44. What do you mean by Continuous Integration in AWS DevOps?
Answer: Continuous integration is a software development practice where developers regularly merge their code changes into a central repository, after which automated builds and tests are run. The key goals of continuous integration are:
• To find and address bugs more quickly.
• To improve software quality.
• To reduce the time it takes to validate and release new software updates.
45. What do you mean by Continuous Delivery in AWS DevOps?
Answer: Continuous delivery is a software development practice where code changes are automatically built, tested, and prepared for release to production. It expands upon the concept of continuous integration by deploying all code changes to a testing environment and/or a production environment after the build stage. When continuous delivery is implemented properly, developers will always have a deployment-ready build artifact that has passed a standardized test process.
46. Do you know anything about Microservices in AWS DevOps?
Answer: The microservices architecture is a design approach that builds a single application as a set of small services. Each microservice runs in its own process and communicates with other services through a well-defined interface using a lightweight mechanism, usually an HTTP-based application programming interface (API). Microservices are built around business capabilities, with each service scoped to a single purpose. The user can use different frameworks or programming languages to write microservices and deploy them independently, as a single service, or as a group of services.
47. Do you know anything about Infrastructure as Code in AWS DevOps?
Answer: Infrastructure as code is a practice in which infrastructure is provisioned and managed using code and software development techniques, such as version control and continuous integration. The cloud’s API-driven model allows both developers and system administrators to interact with infrastructure programmatically, and at scale, instead of manually setting up and configuring resources. Engineers can therefore interface with infrastructure using code-based tools and treat infrastructure in a manner similar to application code. Because infrastructure is defined by code, infrastructure and servers can quickly be deployed using standardized patterns, updated with the latest patches and versions, or replicated in repeatable ways.
48. In AWS DevOps, what is AWS CloudFormation?
Answer: AWS CloudFormation is a service that gives developers and businesses an easy way to create a collection of related AWS resources and provision them in an orderly and predictable fashion.
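As a minimal sketch, assuming a template file template.yaml exists in the working directory and the stack name is free:
# Create the stack from a local template and check its status
aws cloudformation create-stack --stack-name demo-stack --template-body file://template.yaml
aws cloudformation describe-stacks --stack-name demo-stack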
49. Differentiate between AWS CloudFormation and AWS Beanstalk.
Answer: Both these services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily deploy and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for the user to manage the lifecycle of his/her applications.
AWS CloudFormation is a convenient provisioning mechanism for a broad range of AWS resources. It supports the infrastructure needs of many types of applications, such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources, and container-based solutions, including those built using AWS Elastic Beanstalk.
50. What are the key aspects or principles behind the development of DevOps?
Answer: The key aspects or principles behind DevOps are:
• Infrastructure as Code
• Continuous Integration
• Continuous Monitoring
• Security
• Continuous Deployment
• Automation
51. Mention some of the popular tools used with DevOps.
Answer: A number of popular tools are used with DevOps:
• Git
• Jenkins
• Ansible
• Puppet
• Nagios
• Docker
• ELK (Elasticsearch, Logstash, Kibana)
52. Do you know anything about the Version Control System or the VCS? If yes, please elaborate.
Answer: A Version Control System (VCS) is software that helps software developers work together and maintain a complete history of their work. Some of the features of a VCS are as follows:
• Allow developers to work simultaneously
• Maintain the history of every version.
• Does not allow developers to overwrite each other’s changes.
Currently two types of Version Control Systems are in use:
1. Centralized Version Control System, Ex: SVN
2. Distributed/Decentralized Version Control System, Ex: Git, Mercurial
53. Mention the features of Git in DevOps.
Answer: Git has a number of features:
• Git is a distributed/decentralized version control tool.
• Push and pull operations are fast.
• Git keeps a local repo with the full history of the whole project on every developer’s hard drive, so if there is a server outage, the user can easily recover from a teammate’s local Git repo.
• It belongs to the 3rd generation of version control tools.
• Commits can be done offline too.
• Client nodes can have the entire repository on their local systems.
• Nothing is shared automatically; the user decides what to push and when.
54. Mention the features of SVN in DevOps.
Answer: SVN has a number of features:
• SVN is a centralized version control tool.
• Push and pull operations are slower compared to Git.
• SVN relies only on the central server to store all the versions of the project files.
• It belongs to the 2nd generation of version control tools.
• Commits can be done only online.
• Version history is stored in a server-side repository.
• Work is shared automatically by commit.
55. What language has Git been written in?
Answer: Git has been written in the C language. Being written in C makes Git very fast and reduces the overhead of runtimes associated with higher-level languages.
56. Do you know anything about SubGIT?
Answer: SubGit is a tool for migrating SVN to Git. It creates a writable Git mirror of a local or remote Subversion repository, letting the user use both Subversion and Git for as long as he/she likes.
57. How can a user clone a GIT Repository via the Jenkins?
Answer: In order to clone a Git repository via Jenkins, the user must enter the e-mail and user name for the user’s Jenkins system. To do so, the user has to switch into the job directory and execute the “git config” command.
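A hedged sketch of those commands; the identity and repository URL are placeholders:
# Set the user name and e-mail Git should use on the Jenkins box
git config --global user.name "jenkins"
git config --global user.email "jenkins@example.com"
# Clone the repository the job will build
git clone https://github.com/example/demo-repo.git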
58. Mention the advantages of using Git.
Answer: Git offers a number of advantages:
• Only one .git directory per repository
• Superior disk utilization and network performance
• Data redundancy and replication
• High availability
• Collaboration friendly
• Git can be used with any sort of project.
59. What is the software Ansible used for?
Answer: Ansible is mainly used in IT infrastructure to manage systems or deploy applications to remote nodes. For example, if the user wants to deploy an application to hundreds of nodes by executing a single command, Ansible is the tool that comes into the picture, though the user should have some knowledge of Ansible scripts to understand and execute the deployment.
60. Mention the advantages of Ansible.
Answer: Ansible is one of the most powerful tools in DevOps and has the following features:
• Agentless, it doesn’t require any extra package/daemons to be installed
• Very low overhead
• Very Easy to learn
• Declarative not procedural
• Good performance
• Idempotent
61. How can a user see a list of all the variables used in Ansible?
Answer: Ansible by default collects “facts” about the system in use, and these facts can be accessed in playbooks and in templates. To see a list of all the facts that are available about a machine, the user can run the “setup” module as an ad-hoc action: ansible hostname -m setup. This command prints out a dictionary of all the facts that are available for that particular host.
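For illustration, assuming an inventory file inventory.ini containing a host called web01, the ad-hoc form and a filtered variant look like this:
# Dump every fact gathered for the host
ansible web01 -i inventory.ini -m setup
# Narrow the output with the setup module's filter argument
ansible all -i inventory.ini -m setup -a 'filter=ansible_distribution*'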
62. Do you know anything about Docker and Docker image? What do you know about the Docker Container?
Answer: Docker is a containerization technology that packages the user’s application and all its dependencies together in the form of containers to ensure that the application works seamlessly in any environment. A Docker image is the source of a Docker container; simply put, Docker images are used to create containers.
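A minimal sketch of the image-to-container flow, assuming a Dockerfile in the current directory; the image and container names are illustrative:
# Build an image, start a container from it, and list running containers
docker build -t demo-app:1.0 .
docker run -d --name demo -p 8080:80 demo-app:1.0
docker ps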
63. Mention some of the benefits one can attain by working with DevOps.
Answer: DevOps is gaining popularity because it is straightforward to adopt and delivers clear benefits. Some of the benefits of implementing DevOps practices are given below:
Release Velocity: DevOps enables organizations to achieve a great release velocity. The user can release code to production more often and with fewer problems.
Development Cycle: DevOps shortens the development cycle from initial design to production.
Defect Detection: With DevOps approach, the user can catch defects much earlier than releasing to production. It improves the quality of the software.
Collaboration: With DevOps, collaboration between development and operations professionals increases.
Performance-oriented: With DevOps, organization follows performance-oriented culture in which teams become more productive and more innovative.
Full Automation: DevOps helps to achieve full automation from testing, to build, release and deployment.
Deployment Rollback: In DevOps, the user can easily plan a rollback for any deployment failure due to a bug in code or an issue in production. This gives confidence in releasing features without worrying about downtime for a rollback.
64. What do you know about the typical DevOps workflow?
Answer: The typical DevOps workflow is as follows:
• Atlassian Jira is used for writing requirements and tracking tasks.
• Based on the Jira tasks, developers check the concerned code into the Git version control system.
• The code checked into Git is built using Apache Maven.
• The build process is automated with Jenkins.
• During the build process, a number of automated tests run to validate the code checked in by a developer.
• The code built on Jenkins is sent to the organization’s Artifactory.
• Jenkins automatically picks the libraries from Artifactory and deploys them to Production.
• During Production deployment, Docker images are used to deploy the same code on multiple hosts.
• Once the code is deployed to Production, monitoring tools like Nagios are used to check the health of the production servers.
• Splunk-based alerts inform the admins of any issues or exceptions in production.
65. Mention some of the tools of DevOps.
Answer: Here is a list of some of the most important DevOps tools:
• Git
• Jenkins, Bamboo
• Docker
• Monit
• ELK (Elasticsearch, Logstash, Kibana)
• Collectd/Collectl
• Selenium
• Puppet, BitBucket
• Chef
• Ansible, Artifactory
• Nagios
66. Elaborate on Gradle.
Answer: Gradle is an open-source build automation system that builds upon the concepts of Apache Ant and Apache Maven. Gradle uses a directed acyclic graph (DAG) to determine the order in which tasks can be run. Instead of an XML configuration file, Gradle uses a proper programming language for its build scripts, called Groovy.
Gradle was designed for multi-project builds, which can grow quite large. It supports incremental builds by intelligently determining which parts of the build tree are up to date; any task dependent only on those parts does not need to be re-executed.
67. Mention the advantages of Gradle.
Answer: Gradle offers a number of advantages to the user:
• Declarative Builds: One of the biggest advantages of Gradle is the provision of the Groovy language. Gradle provides declarative language elements, which provide build-by-convention support for Java, Groovy, Web and Scala projects.
• Structured Build: Gradle allows developers to apply common design principles to their build. It provides a perfect structure for the build, so that well-structured, comprehensible builds can be easily maintained.
• Deep API: Using this API, developers can monitor and customize Gradle’s configuration and execution behavior.
• Scalability: Gradle can increase productivity by a huge margin, from simple and single project builds to huge enterprise multi-project builds.
• Multi-project builds: Gradle supports multi-project builds and also partial builds.
• Build management: Gradle supports different strategies to manage project dependencies.
• First build integration tool: Gradle completely supports Ant tasks and the Maven and Ivy repository infrastructure for publishing and retrieving dependencies. It also provides a converter for turning a Maven pom.xml into a Gradle script.
• Ease of migration: Gradle can easily adapt to any project structure.
• Gradle Wrapper: Gradle Wrapper allows developers to execute Gradle builds on machines where Gradle is not installed. This is useful for continuous integration of servers.
• Free open source: Gradle is an open source project, licensed under the Apache Software License (ASL).
• Groovy: Gradle’s build scripts are written in Groovy, not XML. But unlike other approaches this is not for simply exposing the raw scripting power of a dynamic language. The whole design of Gradle is oriented towards being used as a language, not as a rigid framework.
68. Why is Gradle often preferred over Maven or Ant?
Answer: There isn’t sound support for multi-project builds in Ant and Maven; developers have to do a lot of coding to support multi-project builds. Having some build-by-convention often helps the developer and makes build scripts more concise, but Maven takes build by convention too far, leaving little room for the user to customize his/her build process. Maven also promotes every project publishing an artifact, and does not support subprojects being built and versioned together.
With Gradle, however, developers can have the flexibility of Ant and the build-by-convention of Maven. Groovy is easier to work with and cleaner to code than XML. In Gradle, developers can define dependencies between projects on the local file system without the need to publish artifacts to a repository.
69. Differentiate exclusively from Gradle and Maven.
Answer: On the basis of user experience: Maven has very good support for various IDEs. Gradle’s IDE support continues to improve quickly but is not as good as Maven’s. Although IDEs are important, a large number of users prefer to execute build operations through a command-line interface. Gradle provides a modern CLI that has discoverability features like gradle tasks, as well as improved logging and command-line completion.
Flexibility: Google implements Gradle as the official build tool for Android, not because build scripts are code, but because Gradle is modeled in a way that is extensible in the most fundamental ways. Both Gradle and Maven provide convention over configuration. However, Maven provides a very rigid model that makes customization tedious and sometimes impossible. While this can make it easier to understand any given Maven build, it also makes it unsuitable for many automation problems. Gradle, on the other hand, is built with an empowered and responsible user in mind.
Performance: Both Gradle and Maven employ some form of parallel project building and parallel dependency resolution. The biggest differences are Gradle’s mechanisms for work avoidance and incrementality. The following features make Gradle much faster than Maven:
• Incrementality: Gradle avoids work by tracking the inputs and outputs of tasks and only running what is necessary.
• Build Cache: It reuses the build outputs of any other Gradle build with the same inputs.
• Gradle Daemon: It is a long-lived process that keeps build information “hot” in memory.
Dependency Management: Both build systems provide built-in capability to resolve dependencies from configurable repositories. Both are able to cache dependencies locally and download them in parallel. As a library consumer, Maven allows one to override a dependency, but only by version. Gradle provides customizable dependency selection and substitution rules that can be declared once and handle unwanted dependencies project-wide. This substitution mechanism enables Gradle to build multiple source projects together to create composite builds. Maven has a few built-in dependency scopes, which forces awkward module architectures in common scenarios like using test fixtures or code generation; there is no separation between unit and integration tests. Gradle allows custom dependency scopes, which provides better-modeled and faster builds.
70. Elaborate on Gradle Build Scripts, the Gradle Wrapper, and the Gradle Build Script File Name.
Answer: Gradle uses a build script file for handling projects and tasks. Every Gradle build comprises one or more projects, and a project might represent a library JAR or a web application.
The Gradle Wrapper is a batch script on Windows and a shell script on other operating systems. It is the preferred way of starting a Gradle build: when a build is started via the wrapper, Gradle is automatically downloaded and used to run the build.
The Gradle build script file follows a specific naming convention, build.gradle, and is written in the Gradle scripting language (Groovy).
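As a sketch of the wrapper in practice, assuming Gradle is installed once to generate the wrapper scripts:
# Generate gradlew, gradlew.bat, and the wrapper JAR, then build through them
gradle wrapper
./gradlew build      # Linux/macOS
gradlew.bat build    # Windows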
71. What do you know about Dependency Configuration?
Answer: A dependency configuration is a set of external dependencies that need to be downloaded from the web and installed. Some of the key configurations are:
• Compile: The dependencies required to compile the production source of the project.
• Runtime: The dependencies required by the production classes at runtime; by default, this also includes the compile-time dependencies.
• Test Compile: The dependencies required to compile the test source of the project; by default, this also includes the compiled production classes and the compile-time dependencies.
• Test Runtime: The dependencies required to run the tests; by default, this also includes the compile, runtime, and test-compile dependencies.
72. Elaborate on the concept of Gradle Daemon.
Answer: A daemon is a computer program that runs as a background process rather than being under the direct control of an interactive user. Gradle runs on the Java Virtual Machine (JVM) and uses several supporting libraries that require a non-trivial initialization time; hence, it can sometimes seem a little slow to start. The solution to this problem is the Gradle Daemon: a long-lived background process that executes builds much faster than would otherwise be the case. It accomplishes this by avoiding the expensive bootstrapping process and leveraging caching, keeping data about the project in memory. Running Gradle builds with the Daemon is no different from running them without it.
73. Does the feature of Dependency Management exist in Gradle? If yes, elaborate.
Answer: Yes. Software projects rarely work in isolation: in most cases, a project relies on reusable functionality in the form of libraries or is broken up into individual components to compose a modularized system. Dependency management is a technique for declaring, resolving, and using the dependencies required by the project in an automated fashion. Gradle has built-in support for dependency management and is up to the task of fulfilling typical scenarios encountered in modern software projects.
74. Mention the benefits of using Daemon in Gradle.
Answer: There are a number of benefits of using Daemon in Gradle:
• It is very powerful and has a sound UX system.
• It is aware of the resources being used by the system and is enabled by default.
• It is well integrated with Gradle build scans.
75. What do you know about the ‘Gradle Multi-Project Build’?
Answer: A multi-project build helps with modularization and simplifies the entire project. It allows a user to concentrate on one area of work in a larger project, while Gradle takes care of dependencies from other parts of the project. A multi-project build in Gradle consists of one root project and one or more subprojects that may themselves have subprojects. While each subproject could configure itself in complete isolation from the other subprojects, it is common for subprojects to share common traits; it is then usually preferable to share configuration among projects, so the same configuration affects several subprojects.
76. What do you know about Gradle Build Task?
Answer: A Gradle build is made up of one or more projects, and a project represents what is being done with Gradle. Some key features of Gradle build tasks are:
1. Every task has lifecycle methods.
2. All build scripts are pieces of code.
3. The default tasks used throughout any project are very flexible, such as run, clean, etc.
4. Task dependencies can be defined using methods like dependsOn().
77. Mention the Gradle Build Life Cycle.
Answer: The Gradle build lifecycle consists of the following three phases:
• Initialization phase: In this phase, the projects participating in the build are determined and project objects are created.
• Configuration phase: In this phase, the tasks available for the current build are configured and a task dependency graph is created.
• Execution phase: In this phase, the tasks are executed.
78. What do you know about the concept of Dependency Configuration?
Answer: A set of dependencies is termed a dependency configuration; it contains the external dependencies that need to be downloaded and installed. Some of the key configurations are:
• Compile: The dependencies required to compile the production source of the project.
• Runtime: The dependencies required by the production classes at runtime.
• Test Compile: The dependencies required to compile the test source of the project.
• Test Runtime: The dependencies required to run the tests.
79. Explain ‘Groovy’.
Answer: Apache Groovy is an object-oriented programming language for the Java platform. It is both a static and dynamic language with features similar to those of Python, Ruby, Perl, and Smalltalk. It can be used as both a programming language and a scripting language for the Java platform, is compiled to Java virtual machine (JVM) bytecode, and interoperates seamlessly with other Java code and libraries. Groovy uses a curly-bracket syntax similar to Java’s and supports closures, multiline strings, and expressions embedded in strings. One of the most striking features of Groovy is its support for AST transformations, which are usually triggered through annotations.
80. Mention some of the features provided by Groovy which are causing a rapid gain in its popularity.
Answer: Groovy is an up-and-coming language that is gaining popularity at a rapid rate. Some of the features that make it unique are:
• It has a familiar OOP language syntax.
• It has an extensive stock of various Java libraries, which are well known by the developers and programmers who constantly use Java.
• It enables dynamic typing, i.e., it lets the user code more quickly.
• It has increased expressivity, i.e., the user has to type less to accomplish more.
• It provides a number of Closures.
• The Native associative array/key-value mapping support is probably one of the best. The user can create an associative array literal.
• Groovy enables String interpolation and the user can enjoy cleaner creation of strings displaying values.
81. Explain Thin Documentation in Groovy.
Answer: Groovy is documented rather thinly. The core documentation of Groovy is limited, and there is little information about the complex and run-time errors that happen. Developers are largely left to their own devices, and they normally have to figure out the explanations of internal workings by themselves.
82. Mention all the platforms where Groovy can be used.
Answer: Below is the list of the infrastructure components where the user can make use of Groovy:
• Application Servers
• All other Java-based platforms
• Servlet Containers
• Databases with JDBC drivers
83. Are any pre-requisites required to install or use Groovy on any system?
Answer: Installing and using Groovy is extremely easy and does not need any pre-requisites. Groovy does not have complex system requirements, is OS independent, and can perform optimally in almost every situation. There are many Java-based components in Groovy, which make it even easier to work with Java applications.
84. Do you know anything about Closure in Groovy?
Answer: A closure in Groovy is an open, anonymous block of code that can take arguments, return a value, and be assigned to a variable. A closure may reference variables declared in its surrounding scope. In opposition to the formal definition of a closure, a closure in the Groovy language can also contain free variables which are defined outside of its surrounding scope.
A closure definition follows this syntax:
{ [closureParameters -> ] statements }
Here [closureParameters->] is an optional comma-delimited list of parameters, and statements are 0 or more Groovy statements. The parameters look similar to a method parameter list, and these parameters may be typed or untyped.
When a parameter list is specified, the -> character is required and serves to separate the arguments from the closure body. The statements portion consists of 0, 1, or many Groovy statements.
85. Do you know anything about Maven?
Answer: Maven is a build automation tool used primarily for Java projects. Maven addresses two aspects of building software: it describes how software is built, and it describes its dependencies. Unlike earlier tools like Apache Ant, it uses conventions for the build procedure, and only exceptions need to be written down. An XML file describes the software project being built, its dependencies on other external modules and components, the build order, directories, and required plug-ins. It comes with pre-defined targets for performing certain well-defined tasks such as compilation of code and its packaging. Maven dynamically downloads Java libraries and Maven plug-ins from one or more repositories such as the Maven 2 Central Repository and stores them in a local cache. This local cache of downloaded artifacts can also be updated with artifacts created by local projects. Public repositories can also be updated.
86. Mention the benefits of using Maven.
Answer: There are a number of advantages of using Maven along with DevOps:
• Its design regards all projects as having a certain structure and a set of supported task work-flows.
• Maven offers a quick project setup with no complicated build.xml files: just a POM, and away you go.
• All developers in a project use the same jar dependencies, due to the centralized POM.
• In Maven, the user gets a number of reports and metrics for a project “for free”.
• It reduces the size of source distributions, because jars can be pulled from a central location.
• Maven lets developers fetch their package dependencies easily.
• With Maven there is no need to add jar files manually to the class path.
87. Mention the steps in the Build Life Cycle in Maven.
Answer: The build lifecycle is a list of named phases that can be used to give a certain order to goal execution. One of Maven’s standard lifecycles is the default lifecycle, which includes the following phases, in this order (an example invocation follows the list):
1) validate
2) generate-sources
3) process-sources
4) generate-resources
5) process-resources
6) compile
7) process-test-sources
8) process-test-resources
9) test-compile
10) test
11) package
12) install
13) deploy
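The example referenced above: invoking any phase runs every earlier phase in the default lifecycle, so these commands (run from a directory containing a pom.xml) do progressively more work:
mvn validate        # runs only validate
mvn package         # runs everything up to and including package
mvn clean install   # cleans first, then runs the lifecycle through install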
88. What is the Build Tool in DevOps when operating in accordance with Maven?
Answer: Build tools are programs that automate the creation of executable applications from source code. Building incorporates compiling, linking, and packaging the code into a usable or executable form. In small projects, developers will often manually invoke the build process; this is not practical for larger projects, where it is very hard to keep track of what needs to be built, in what sequence the processes need to run, and what dependencies there are in the building process. Using an automation tool like Maven, Gradle, or Ant makes the build process more consistent and saves the developer effort in his/her future steps and code.
89. Do you know anything about the Dependency Management Mechanism in Maven when used with DevOps?
Answer: Maven’s dependency-handling mechanism is organized around a coordinate system identifying individual artifacts. The artifacts can be anything from software libraries to modules. For example, if a project needs the Hibernate library, it simply declares Hibernate’s project coordinates in its POM. Maven automatically downloads the dependency, along with the dependencies that Hibernate itself needs, and stores them in the user’s local repository. The Maven 2 Central Repository is used by default to search for libraries, but developers can configure custom repositories to be used (e.g., company-private repositories) within the POM.
90. Are Plugins required in Maven?
Answer: Most of Maven’s functionality is in plugins. A plugin provides a set of goals that can be executed using the following syntax: mvn [plugin-name]:[goal-name]
For example, a Java project can be compiled with the compiler-plugin’s compile-goal by running mvn compiler:compile.
There are Maven plugins for building, testing, source control management, running a web server, generating Eclipse project files, and much more. Plugins are introduced and configured in a <plugins> section of a pom.xml file. Some basic plugins are included in every project by default, and they have sensible default settings.
91. Explain the concept of POM in Maven.
Answer: A Project Object Model (POM) provides all the configuration for a single project. The general configuration covers the project’s name, its owner and its dependencies on other projects. A user can also configure individual phases of the build process, which are implemented as plugins. For example, a user can configure the compiler-plugin to use Java version 1.5 for compilation, or specify packaging the project even if some unit tests fail.
Larger projects should be divided into several modules, or sub-projects, each with its own POM. One can then write a root POM through which one can compile all the modules with a single command. POMs can also inherit configuration from other POMs. All POMs inherit from the Super POM by default. The Super POM provides default configuration, such as default source directories, default plugins, and so on.
92. Elaborate on Maven Archetype and Maven Artifact.
Answer: Archetype is a Maven project templating toolkit. An archetype is defined as an original pattern or model from which all other things of the same kind are made.
In Maven, an artifact is simply a file, usually a JAR, that is deployed to a Maven repository. An artifact is identified by three things:
• Group ID
• Artifact ID
• Version string
These three together uniquely identify the artifact. All the project dependencies are specified as artifacts.
93. Do you know anything about Goal in Maven?
Answer: In Maven, a goal represents a specific task that contributes to the building and managing of a project. It may be bound to one or more build phases. A goal not bound to any build phase can be executed outside of the build lifecycle by direct invocation.
94. Elaborate on Build profile.
Answer: In Maven, a build profile is a set of configuration values used to define or override the default behavior of a Maven build. Build profiles help developers customize the build process for different environments; for example, a user can set profiles for Test, UAT, Pre-prod, and Prod environments, each with its own configuration.
95. Differentiate between Compile and Install.
Answer: Compile is used to compile the source code of the project.
Install is used to install the built package into the local repository, for use as a dependency in other projects locally.
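For example:
$ mvn compile
$ mvn install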
96. How does a user activate a Maven Build Profile?
Answer: A Maven Build Profile can be activated in the following ways:
• Using command line console input.
• Based on environment variables (both user and system variables).
• By using Maven settings.
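For example, a profile can be activated explicitly from the command line with the -P flag (the profile name test below is hypothetical):
$ mvn -P test package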
97. What is LINUX?
Answer: Linux is the best-known and most-used open source operating system. As an operating system, Linux is the software that sits underneath all of the other software on a computer, receiving requests from those programs and relaying those requests to the computer’s hardware. In many ways, Linux is similar to other operating systems such as Windows, OS X, or iOS. But Linux is also different from other operating systems in many important ways:
1. Linux is open source software. The code used to create Linux is free and available to the public to view, edit, and—for users with the appropriate skills—to contribute to.
2. The Linux operating system consists of three components, which are as below:
• Kernel: The Linux kernel is a monolithic kernel; it is free and open source software responsible for managing hardware resources for the users.
• System Library: The system library plays a vital role because application programs access the kernel’s features using system libraries.
• System Utility: System utilities perform specific, individual-level tasks.
98. Differentiate between UNIX and LINUX.
Answer: UNIX and Linux are similar in a number of ways. In fact, Linux was originally created in a manner similar to UNIX.
Both have similar tools for interfacing with the systems, programming tools, filesystem layouts, and other key components.
However, UNIX is not free. Over the years, a number of different operating systems have been created that attempted to be like UNIX or mimic UNIX in some ways, but Linux has been the most successful.
99. Do you know anything about BASH? If yes, do tell us.
Answer: BASH is the acronym for Bourne Again Shell. BASH is the UNIX shell for the GNU operating system. BASH is the command language interpreter through which the user enters input and retrieves information. Simply put, BASH is a program that understands the commands entered by the user, executes them, and gives back the output.
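A minimal illustrative BASH script (hypothetical) that reads the user’s input and prints output based on it:
#!/bin/bash
# read a value from the user into the variable "name"
read -p "Enter your name: " name
# execute a command and give back output
echo "Hello, $name"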
100. What do you know about CronTab?
Answer: The CronTab is short for “cron table” and is a list of commands that are scheduled to run at regular time intervals on a computer system. The crontab command opens the CronTab for editing, and lets the user add, remove, or modify scheduled tasks. The daemon which reads the CronTab and executes the commands at the right time is called Cron. The name comes from the Greek word chronos, meaning time.
The following is the command syntax:
crontab [-u user] file
crontab [-u user] [-l | -r | -e] [-i] [-s]
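For example, a (hypothetical) entry that runs a backup script every day at 2:30 AM would look like this; the five fields are minute, hour, day of month, month, and day of week:
30 2 * * * /home/user/backup.sh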
101. Is Daemon a part of LINUX?
Answer: Yes, Daemon forms an important part of LINUX. A daemon is a type of program on Linux operating systems that runs discreetly in the background, rather than under the direct control of a user, waiting to be activated by the occurrence of a specific event or condition. All Unix-like systems typically run numerous daemons, mainly to accommodate requests for services from other computers on a network, but also to respond to other programs and to hardware activity.
It is not necessary that the perpetrator of the action or condition be aware that a daemon is listening, although programs frequently will perform an action only because they are aware that they will implicitly arouse a daemon.
Some examples of actions or conditions that can trigger a daemon into activity are: a specific time or date, the passage of a specified time interval, a file landing in a particular directory, receipt of an e-mail, or a Web request made through a particular communication line.
102. Explain the concept of process in LINUX.
Answer:Daemons are usually instantiated as processes. A process is an executing (i.e., running) instance of a program. Processes are managed by the kernel (i.e., the core of the operating system), which assigns each a unique process identification number (PID).
There are three types of processes in Linux:
• Interactive: Interactive processes are run interactively by a user at the command line.
• Batch: Batch processes are submitted from a queue of processes and are not associated with the command line. They are well suited for performing recurring tasks when system usage is otherwise low.
• Daemon: Daemons are recognized by the system as any processes whose parent process has a PID of one.
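As a rough illustration, processes whose parent PID is 1 (candidate daemons) can be listed with standard tools:
$ ps -eo pid,ppid,comm | awk '$2 == 1'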
103. Does LINUX work with CLI?
Answer: Yes, LINUX does work with Command Line Interface (CLI). CLI (Command Line Interface) is a type of human-computer interface that relies solely on textual input and output.
The entire display screen, or the currently active portion of it, shows only characters (and no images), and input is usually performed entirely with a keyboard.
104. Elaborate on the concept of Kernel in LINUX.
Answer:A kernel is the lowest level of easily replaceable software that interfaces with the hardware in your computer. It is responsible for interfacing all of the user’s applications that are running in “user mode” down to the physical hardware, and allowing processes, known as servers, to get information from each other using inter-process communication (IPC).
There are three types of Kernels:
• Microkernel: A microkernel takes the approach of only managing what it has to: the CPU, memory, and IPC. Everything else in a computer can be seen as an accessory and can be handled in user mode.
• Monolithic Kernel: Monolithic kernels are the opposite of microkernels, because they cover not only the CPU, memory, and IPC, but also things like device drivers, file system management, and system server calls.
• Hybrid Kernel: Hybrid kernels have the ability to pick and choose what they want to run in user mode and what they want to run in supervisor mode. Since the Linux kernel is monolithic, it has the largest footprint and the most complexity of the three types of kernels. This was a design feature that was under quite a bit of debate in the early days of Linux, and the kernel still carries some of the design flaws that monolithic kernels inherently have.
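For example, the version of the running Linux kernel can be checked with:
$ uname -r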
105. What is the Root Account?
Answer: The root account is the system administrator account. It provides the user full access to and control of the system. The admin can create and maintain user accounts, assign different permissions to each account, and so on.
106. Differentiate between Cron and Anacron.
Answer: One of the main differences between Cron and Anacron is that Cron works on systems that are running continuously, while Anacron is used for systems that are not running continuously.
Cron jobs can run as often as every minute, but Anacron jobs can run at most once a day.
Any normal user can schedule Cron jobs, but Anacron jobs can be scheduled only by the super-user.
Cron should be used when a job needs to execute at a specific given time, while Anacron should be used when there is no restriction on the timing and the job can be executed at any time.
As for which one is ideal for servers or desktops, Cron should be used for servers, while Anacron should be used for desktops or laptops.
107. What do you mean by Swap Space?
Answer: Swap space is disk space that Linux uses as an extension of physical memory (RAM) to temporarily hold pages of concurrently running programs. Swapping usually occurs when the RAM does not have enough free memory to support all concurrently running programs. Memory management then involves swapping pages of memory to and from physical storage.
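For example, the configured swap areas and current usage can be checked with standard commands (swapon --show assumes a reasonably recent util-linux):
$ swapon --show
$ free -m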
108. What do you mean by LINUX distributors?
Answer: There are roughly six hundred Linux distributors. Some of the important ones are as follows:
• UBuntu: It is a well-known Linux Distribution with a lot of pre-installed apps and easy to use repositories libraries. It is very easy to use and works like the MAC operating system.
• Linux Mint: It uses the Cinnamon and Mate desktop. It works on Windows and is easy to grasp for use by newcomers.
• Debian: It is the most stable, quicker and user-friendly Linux Distributors.
• Fedora: It is less stable but provides the latest version of the software. It has GNOME3 desktop environment by default.
• Red Hat Enterprise: It must be used commercially and should be well tested before release. It usually provides the stable platform for a long time.
• Arch Linux: Every package is to be installed by the user and is not suitable for the beginners.
109. Mention the file permissions required in LINUX.
Answer: There are three types of permissions in Linux:
• Read: The user can read the file and list the directory.
• Write: The user can modify the file and write new files in the directory.
• Execute: The user can run the file and access the directory.
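For example, permissions can be viewed with ls -l and changed with chmod; the file name script.sh below is hypothetical:
$ ls -l script.sh
$ chmod u+x script.sh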
110. What do you know about Memory Management in LINUX?
Answer: It is always necessary to keep a check on memory usage in order to find out whether the user is able to access the server and whether the resources are adequate. There are five methods to determine the total memory used by Linux, which can be explained as below:
• Free command: This is the most simple and easy to use the command to check memory usage.
For example:
‘$ free -m’, where the option ‘-m’ displays all the data in MBs.
• /proc/meminfo: The next way to determine the memory usage is to read /proc/meminfo file.
For example:
‘$ cat /proc/meminfo’
• Vmstat: This command basically lays out the memory usage statistics.
For example:
‘$ vmstat -s’
• Top command: This command shows the total memory usage and also monitors the RAM usage in real time.
• Htop: This command also displays the memory usage along with other details.
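For example:
‘$ top’ or ‘$ htop’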
111. Tell us about the directory commands available in LINUX.
Answer: There are a number of directory commands available within LINUX. Some of the most popular ones have been explained here:
• pwd: It is a built-in command which stands for ‘print working directory’. It displays the full path of the directory you are currently in, starting with /.
• ls: This command lists out all the files and folders in the current directory.
• cd: This stands for ‘change directory’. This command is used to change from the present directory to the directory the user wants to work in. The user just needs to type cd followed by the directory name to access that particular directory.
• mkdir: This command is used to create an entirely new directory.
• rmdir: This command is used to remove a directory from the system.
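A short illustrative session using these commands (the directory name demo is hypothetical):
$ pwd
$ mkdir demo
$ cd demo
$ ls
$ cd ..
$ rmdir demo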
112. Elaborate on the concept of Shell Script in Linux.
Answer: A shell script is a file containing a series of commands. The shell reads this file and carries out the commands as though they had been entered directly on the command line. The shell is unique in that it is both a powerful command-line interface to the system and a scripting language interpreter. Most of the things that can be done on the command line can be done in scripts, and most of the things that can be done in scripts can be done on the command line. The shell also provides a set of features usually used when writing programs.
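A minimal illustrative shell script (saved, for example, under the hypothetical name envs.sh):
#!/bin/bash
# loop over a fixed list of environment names and print each one
for name in dev test prod; do
  echo "Environment: $name"
done
It can then be made executable and run with:
$ chmod +x envs.sh
$ ./envs.sh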
113. What are the tools that have been provided within LINUX in order to report statistics?
Answer: Some of the popular and frequently used system resource statistics-generating tools available on the Linux platform are:
• vmstat
• netstat
• mpstat
• iostat
• ifstat
These commands are used for reporting statistics from different system components such as virtual memory, network connections and interfaces, CPU, input/output devices and more.
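For example, vmstat can report statistics at a fixed interval (here every 5 seconds, 3 times):
$ vmstat 5 3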
114. Do you know anything about DSTAT in LINUX?
Answer: The dstat tool is a powerful, flexible, and versatile tool for generating Linux system resource statistics; it can serve as a replacement for all the tools mentioned in the question above. It comes with extra features and counters, and it is highly extensible; users with Python knowledge can build their own plugins.
Salient Features of dstat:
1. Joins information from vmstat, netstat, iostat, ifstat and mpstat tools.
2. Displays statistics simultaneously.
3. Supports colored output and indicates different units in different colors.
4. Shows exact units and limits conversion mistakes as much as possible.
5. Supports exporting of CSV output to Gnumeric and Excel documents.
6. Supports summarizing of grouped block/network devices.
7. Displays interrupts per device.
8. Works on accurate timeframes without any timeshifts when a system is stressed.
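For example, CPU, disk, and network statistics can be displayed together, refreshed every 5 seconds (assuming dstat is installed):
$ dstat -cdn 5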
115. Mention the different types of processes in LINUX.
Answer: There are primarily two types of processes in Linux:
• Foreground processes – Also referred to as interactive processes, these are initialized and controlled through a terminal session. In other words, a user has to be connected to the system to start such processes; they are not started automatically as part of the system functions/services.
• Background processes – Also referred to as non-interactive/automatic processes, these are processes not connected to a terminal and they don’t expect any user input.
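For example, appending & to a command starts it as a background process, and the jobs builtin lists the background jobs of the current shell:
$ sleep 100 &
$ jobs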
116. Mention the steps involved in creating a new process in LINUX while working with DevOps.
Answer: A new process is normally created when an existing process makes an exact copy of itself in memory. The child process will have the same environment as its parent, but only the process ID number is different. There are two conventional ways used for creating a new process in Linux:
• By using the system() function – This method is relatively simple; however, it is inefficient and carries certain security risks.
• By using the fork() and exec() functions – This technique is a little more advanced but offers greater flexibility, speed, and security.
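As a small illustration in bash (which itself uses fork() and exec() to start external commands), a subshell makes the parent/child copy visible; only the process ID differs ($BASHPID requires bash 4 or later):
$ echo "parent PID: $$"; ( echo "child PID: $BASHPID" )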
117. Mention the different states of processes while executing in LINUX with DevOps.
Answer: During execution, a process changes from one state to another depending on its environment/circumstances. In Linux, a process has the following possible states:
• Running – The process is either running, i.e., it is the current process in the system or it’s ready to run, i.e., it’s waiting to be assigned to one of the CPUs.
• Waiting – In this state, a process is waiting for an event to occur or for a system resource. Additionally, the kernel also differentiates between two types of waiting processes.
Interruptible waiting processes – can be interrupted by signals.
Uninterruptible waiting processes – are waiting directly on hardware conditions and cannot be interrupted by any event/signal.
• Stopped – In this state, a process has been stopped, usually by receiving a signal. For instance, a process that is being debugged.
• Zombie – In this state, a process is dead, i.e., it has been halted, but it still has an entry in the process table.
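For example, the state of each process is visible in the STAT column of ps (R = running, S = interruptible sleep, D = uninterruptible sleep, T = stopped, Z = zombie):
$ ps -eo pid,stat,comm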
118. What are the disadvantages of GIT in DevOps?
Answer: GIT has a few disadvantages, mainly scenarios in which GIT is difficult to use. Some of these situations are:
• Binary Files: If we have a lot of binary (non-text) files in our project, then GIT becomes very slow, e.g., projects with a lot of images or Word documents.
• Slow remote speed: Sometimes the use of remote repositories is slow due to network latency. Still, GIT is better than other VCSs in speed.
• Steep Learning Curve: It takes some time for a newcomer to learn GIT. Some of the GIT commands are non-intuitive to a fresher.
119. What do you know about Continuous Integration?
Answer: Continuous Integration is the process of continuously integrating the code, often multiple times per day. The purpose is to find problems quickly and deliver fixes more rapidly. CI is a best practice for software development. It is done to ensure that after every code change there is no issue in the software.
120. Do you know anything about Build Automation?
Answer: Build automation is the process of automating the creation of a software build and the associated processes. This includes compiling computer source code into binary code, packaging binary code, and running automated tests.
121. Explain the process of Automation Deployment.
Answer: Automated Deployment is the process of consistently pushing a product to various environments on a “trigger”. It enables the user to know what to expect every time he/she deploys to an environment, with much faster results. Combined with Build Automation, this can save development teams a significant number of hours. Automated Deployment saves clients from being extensively offline during development and allows developers to build while “touching” fewer of a client’s systems. An automated system also reduces human error; when an error does occur, developers are able to catch it before live deployment, saving time and headaches. The user can even automate the contingency plan and make the site roll back to a working previous state as if nothing ever happened.
Clearly, this automated feature is super valuable in allowing applications and sites to continue during fixes. Additionally, contingency plans can be version-controlled, improved and even self-tested.
122. Explain how Continuous Integration process works.
Answer: Whenever a developer commits changes to the version control system, the Continuous Integration server detects that changes have been committed and goes through the following process:
• The Continuous Integration server retrieves the latest copy of the changes.
• It builds the code with the new changes using build tools.
• If the build fails, it notifies the developer.
• After the build passes, it runs the automated test cases; if any test cases fail, it notifies the developer.
• It creates a package for the deployment environment.
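A toy sketch of these steps as a shell script (hypothetical; a real CI server such as Jenkins automates and extends this, and the branch name main is an assumption):
#!/bin/bash
# retrieve the latest copy of the changes
git pull origin main || exit 1
# build the code and run the automated tests; notify on failure
if ! mvn package; then
    echo "Build or tests failed - notifying developer"
    exit 1
fi
# on success, the artifact in target/ is the package for deployment
echo "Build passed - package created"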
123. Mention the software required to implement the Continuous Integration process.
Answer: Here are the minimum tools that any user needs to achieve Continuous Integration:
• Source code repository: To commit code and changes; for example, GIT.
• Server: The Continuous Integration software; for example, Jenkins and TeamCity.
• Build tool: It builds the application in a particular way; for example, Maven and Gradle.
• Deployment environment: On which application will be deployed.
124. What is the Jenkins Software?
Answer: Jenkins is a self-contained, open source automation server used to automate all sorts of tasks related to building, testing, and delivering or deploying software. It is one of the leading open source automation servers available. Jenkins has an extensible, plugin-based architecture that has enabled developers to create 1,400+ plugins to adapt it to a multitude of build, test, and deployment technology integrations.
125. Why should one use Jenkins?
Answer: Jenkins is an open-source continuous integration software tool written in the Java programming language for testing and reporting the various isolated changes in a larger code base in real time. The Jenkins software allows developers to find and solve faults in a code base rapidly and to automate testing of their builds.
126. Why are Pipelines an integral part of Jenkins?
Answer: Pipeline adds a powerful set of automation tools onto Jenkins, supporting use cases that extend from simple continuous integration to comprehensive continuous delivery pipelines. By modeling a series of related tasks, users can take advantage of the many features of Pipeline:
• Code: Pipelines are implemented in code and typically checked into source control, giving teams the ability to edit, review, and iterate upon their delivery pipeline.
• Versatile: Pipelines support complex real-world continuous delivery requirements, including the ability to fork/join, loop, and perform work in parallel.
• Extensible: The Pipeline plugin supports custom extensions to its DSL and multiple options for integration with other plugins.
• Durable: Pipelines can survive both planned and unplanned restarts of the Jenkins master.
• Pausable: Pipelines can optionally stop and wait for user input or approval before continuing the Pipeline run.
127. Can a multibranch pipeline be created in Jenkins?
Answer: Yes. The Multibranch Pipeline project type enables the user to implement different Jenkinsfiles for different branches of the same project. In a Multibranch Pipeline project, Jenkins automatically discovers, manages, and executes Pipelines for branches which contain a Jenkinsfile in source control.
128. Mention the importance of Buffer in AWS with DevOps.
Answer: An Elastic Load Balancer ensures that the incoming traffic is distributed optimally across various AWS instances. A buffer synchronizes different components and makes the arrangement more elastic to a burst of load or traffic. Individual components may otherwise receive and process requests at unstable, uneven rates. The buffer creates an equilibrium between the various components and makes them work at the same rate, supplying faster services.
DevOps is one of the fastest-emerging development methodologies and is being adopted by many organizations. DevOps is a culture that promotes collaboration between development teams and operations teams. It has become almost the default methodology for new development projects. Organizations recognize DevOps as a tool for better collaboration between technical and business teams, helping to develop the final solution according to end-user needs. Hence, there is a bright future and growing demand for DevOps specialists. The 128 top DevOps interview questions given above will help you crack the interview successfully.