Monday, February 1, 2021

DevOps Interview Questions (RU)

 

Interview Questions

Kubernetes

General questions

- Please introduce yourself and your professional experience
- What would be your approach to troubleshooting a network connectivity issue?
- How would you design a web frontend microservice to be highly available and scalable?
- What do the pipelines you build look like?


- What's a Pod?
+ A Pod is the primary deployment object and the main logical unit in K8s. A Pod is a set of one or more containers deployed together on a node. Grouping containers of different types is needed when they are interdependent and must run on the same node; this allows for faster interaction between them.
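A minimal Pod manifest sketch (the name and image are hypothetical, for illustration only):

apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.21    # any container image
      ports:
        - containerPort: 80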

- What's Deployment?
+ A Deployment is an object that stores the description of pods, the number of replicas, and the algorithm for replacing them when their parameters change. The Deployment controller provides declarative updates (you describe the desired state) for objects such as Pods and ReplicaSets.
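A minimal Deployment sketch (names are hypothetical), declaring the desired replica count and the pod template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.21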

- Image pull policy?

The imagePullPolicy and the tag of the image affect when the kubelet attempts to pull the specified image (a short manifest snippet follows the list below).

  • imagePullPolicy: IfNotPresent: the image is pulled only if it is not already present locally.
  • imagePullPolicy: Always: every time the kubelet launches a container, the kubelet queries the container image registry to resolve the name to an image digest. If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet downloads (pulls) the image with the resolved digest, and uses that image to launch the container.
  • imagePullPolicy is omitted and either the image tag is :latest or it is omitted: Always is applied.
  • imagePullPolicy is omitted and the image tag is present but not :latest: IfNotPresent is applied.
  • imagePullPolicy: Never: the image is assumed to exist locally. No attempt is made to pull the image.
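A sketch of where this policy is set; the image name is hypothetical and the tag is non-:latest, so IfNotPresent would apply by default anyway:

apiVersion: v1
kind: Pod
metadata:
  name: pull-policy-demo
spec:
  containers:
    - name: app
      image: myregistry/app:1.0       # hypothetical, non-:latest tag
      imagePullPolicy: IfNotPresent   # pull only if the image is not cached locally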

- What's Service?
+ A Service is a means for publishing an application as a network service. It is also used to balance traffic and load between pods.
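A minimal ClusterIP Service sketch (selector and names are hypothetical) that balances traffic across pods labelled app: web:

apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web
  ports:
    - port: 80          # port exposed by the Service
      targetPort: 80    # port the pods listen on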

- What's Ingress?
+ Ingress is a resource for adding rules that route traffic from external sources to services in a K8s cluster. Ingress rules must be created in the same Namespace in which the target services are deployed; an Ingress object cannot direct traffic to a service in another Namespace.
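A minimal Ingress sketch (host and service name are hypothetical), routing external HTTP traffic to a Service in the same namespace:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: example.mycompany.com      # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service      # Service in the same namespace
                port:
                  number: 80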

- What types of Ingress do you know?
+ The Ingress Controller is usually a proxy service deployed in the cluster. It is nothing more than a Kubernetes Deployment of a service.
Ingress Controller types:
1. NGINX Ingress Controller (community version & the one from NGINX Inc)
2. Traefik
3. HAProxy
4. Contour
5. GKE Ingress Controller

- What does an Ingress Controller do?
+A Kubernetes Ingress controller is a specialized load balancer for Kubernetes environments. Kubernetes is the de facto standard for managing containerized applications. For many enterprises, moving production workloads into Kubernetes brings additional challenges and complexities around application traffic management. An Ingress controller abstracts away the complexity of Kubernetes application traffic routing and provides a bridge between Kubernetes services and external ones.

Kubernetes Ingress controllers:
  • Accept traffic from outside the Kubernetes platform, and load balance it to pods (containers) running inside the platform
  • Can manage egress traffic within a cluster for services which need to communicate with other services outside of a cluster
  • Are configured using the Kubernetes API to deploy objects called “Ingress Resources”
  • Monitor the pods running in Kubernetes and automatically update the load‑balancing rules when pods are added or removed from a service
- What's a context?
+ A context in Kubernetes is a group of access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl: all kubectl commands run against that cluster. Each context that has been used is kept in your kubeconfig file, so you can switch between clusters.

- Kubernetes SideCar?
+ A Sidecar container is a second container added to the Pod definition. It is placed in the same Pod because it needs to share resources (such as the network namespace and volumes) used by the main container.
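A sketch of a Pod with a hypothetical log-shipping sidecar sharing a volume with the main container (all names and images are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  containers:
    - name: app                  # main container writes logs
      image: myapp:1.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper          # sidecar reads the same logs
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/app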

- How to terminate SSL for services in EKS?
apiVersion: v1
kind: Service
metadata:
  name: echo-service
  annotations:
    # Note that the backend talks over HTTP.
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    # TODO: Fill in with the ARN of your certificate.
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:{region}:{user id}:certificate/{id}
    # Only run SSL on the port named "https" below.
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
+How do I terminate HTTPS traffic on Amazon EKS workloads with ACM?

- What's init container? What for is used?
+ Init Containers are containers that run before the main container of your containerized application starts. They normally contain setup scripts that prepare the environment for your application, and they also ensure the wider environment is ready for your application to run.
You can find a detailed list of what these containers can be used for in the official Kubernetes documentation.
Practice
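A sketch of a Pod with an init container (image, command and service name are hypothetical) that waits for a dependency before the main container starts:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
    - name: wait-for-db          # must finish before "app" starts
      image: busybox
      command: ["sh", "-c", "until nslookup db-service; do sleep 2; done"]
  containers:
    - name: app
      image: myapp:1.0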

- What's a ReplicaSet?
+ Use a Deployment when you need the higher-level abstraction; use a ReplicaSet if you need a more granular definition.
Replica Set is an object responsible for describing and controlling multiple instances (replicas) of pods created on a cluster. Having more than one replica can improve fault tolerance and scalability of your application. In practice, the Replica Set is created using Deployment.
  A ReplicaSet ensures that a specified number of pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with a lot of other useful features. Therefore, we recommend using Deployments instead of directly using ReplicaSets, unless you require custom update orchestration or don't require updates at all. Link

- What's a StatefulSet?
+ Like other objects such as a ReplicaSet or a Deployment, a StatefulSet allows you to deploy and manage one or more pods. But unlike them, pod identities have predictable and persistent values across restarts.
Link

- What other object types do you know in Kubernetes?
+ Daemon Set. A DaemonSet ensures that all (or some) Nodes run a copy of a Pod. As nodes are added to the cluster, Pods are added to them. As nodes are removed from the cluster, those Pods are garbage collected. Deleting a DaemonSet will clean up the Pods it created.
+Jobs. A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate. As pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (ie, Job) is complete. Deleting a Job will clean up the Pods it created.
+Garbage Collection. The role of the Kubernetes garbage collector is to delete certain objects that once had an owner, but no longer have an owner.
+CronJob. A CronJob creates Jobs on a repeating schedule. One CronJob object is like one line of a crontab (cron table) file. It runs a job periodically on a given schedule, written in Cron format.
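A minimal CronJob sketch (schedule, name and image are hypothetical) that runs a container every five minutes:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup-job
spec:
  schedule: "*/5 * * * *"          # standard cron format
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: busybox
              command: ["sh", "-c", "echo running cleanup"]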

- How to send logs from Kubernetes to a logging system?
+COLLECTING APPLICATION LOGS ON KUBERNETES
+You can get the logs from multiple containers using labels 
kubectl logs --selector app=yourappname
In case you have a pod with multiple containers, the above command is going to fail and you'll need to specify the container name:
kubectl logs --selector app=yourappname --container yourcontainername
or
kubectl --namespace kube-system logs yourpodsname

- What are Affinity and Anti-Affinity?
+ Affinity is of 3 types: node affinity, pod affinity, and pod anti-affinity. It's important to read these as properties of a pod. Link

- What are Taints and Tolerations?
+ A taint is a mark set on a node that must be matched by a toleration on the Pod side for the Pod to be scheduled onto that node. Link
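A sketch, assuming a node was tainted with `kubectl taint nodes node1 dedicated=gpu:NoSchedule`; only pods carrying a matching toleration can land on it (names and values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"   # matches the taint's effect
  containers:
    - name: app
      image: myapp:1.0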

- How to make pods deploy on only 2 nodes of the cluster (out of 5, for example)?
+ With nodeSelector, Node Affinity, or Inter-Pod Affinity. Link
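The simplest option is nodeSelector; a sketch assuming the two target nodes were labelled with `kubectl label nodes <node> pool=frontend` (the label key/value is hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod
spec:
  nodeSelector:
    pool: frontend        # only nodes carrying this label are eligible
  containers:
    - name: app
      image: myapp:1.0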

- What Kubernetes objects do you know?
+ Pod, Services, Volumes, Namespaces, Controllers

- What does the Scheduler do?
+ A scheduler watches for newly created Pods that have no Node assigned. For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the best Node for that Pod to run on. The scheduler reaches this placement decision taking into account the scheduling principles described in the Kubernetes documentation.

- What's etcd ?
+ Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data. If your Kubernetes cluster uses etcd as its backing store, make sure you have a back up plan for those data.

- What's Kube Proxy ?
+ The kube-proxy service is responsible for the rules for routing packets between Service and Pod, which can work in one of three modes - user space proxy mode, iptables proxy mode and IPVS proxy mode. Link

- What Service types do you know?
+ ClusterIP: provides access to the service on an internal IP address of the cluster (the service is available only within the cluster). The ClusterIP type is used by default;
+ NodePort: provides access to the service on the IP address of each node of the cluster, on a static port (from the range 30000-32767). A service of the ClusterIP type is also created automatically, and requests from the NodePort are routed to it. You can also reach the service from outside the cluster using <NodeIP>:<NodePort> as the address (see the NodePort sketch after this list);
+ LoadBalancer: provides access to the service through the load balancer of the cloud provider. Services of the NodePort and ClusterIP types are created automatically as well, and requests from the balancer are routed to them;
+ ExternalName: a special case that maps the service name to the content of the externalName field (for example, foo.bar.example.com), returning a CNAME record. No proxying occurs.
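A NodePort sketch (names and port values are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80            # ClusterIP port inside the cluster
      targetPort: 80      # container port
      nodePort: 30080     # static port opened on every node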

- How to restrict CPU and RAM for a pod?
+ Use resource requests and limits in the container spec. Link
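A sketch of per-container requests and limits (the values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: limited-pod
spec:
  containers:
    - name: app
      image: myapp:1.0
      resources:
        requests:
          cpu: "250m"      # guaranteed share, used for scheduling
          memory: "128Mi"
        limits:
          cpu: "500m"      # container is throttled above this
          memory: "256Mi"  # container is OOM-killed above this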

- Can two containers inside the same pod ping each other? If yes, how?
+ Containers in the same pod act as if they are on the same machine. Every container in a pod shares the same IP and network namespace, so they are both localhost to each other and can reach one another via localhost:<port>. Discovery between components works like this: Component A's pods -> Service of Component B -> Component B's pods. Services have domain names of the form servicename.namespace.svc.cluster.local, and the DNS search path of pods includes that by default, so a pod in namespace Foo can find a Service bar in the same namespace Foo by connecting to `bar`.

- Kubernetes Components ?

- Kubernetes Readiness/Liveness/Startup probes
Configure Liveness, Readiness and Startup Probes
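A sketch of liveness and readiness probes on a hypothetical HTTP container (paths and ports are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
    - name: app
      image: myapp:1.0
      livenessProbe:             # restart the container if this fails
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 10
      readinessProbe:            # remove the pod from Service endpoints if this fails
        httpGet:
          path: /ready
          port: 8080
        periodSeconds: 5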

- Kubernetes ConfigMaps - what is it ?
+ A ConfigMap is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable. link
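A sketch of a ConfigMap and a pod consuming it as environment variables (keys and names are hypothetical):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: myapp:1.0
      envFrom:
        - configMapRef:
            name: app-config   # every key becomes an env variable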

- How networking in Kubernetes works ?
+ In Kubernetes, every pod has its own routable IP address. Kubernetes networking, through the network plug-in that is required to be installed (e.g. Calico, Flannel, Weave…), takes care of routing all requests internally between hosts to the appropriate pod. External access is provided through a service, load balancer, or ingress controller, which Kubernetes routes to the appropriate pod.
Kubernetes networking uses iptables to control the network connections between pods (and between nodes), handling many of the networking and port forwarding rules. This way, clients do not need to keep track of IP addresses to connect to Kubernetes services. Also, port mapping is greatly simplified (and mostly eliminated) since each pod has its own IP address and its container can listen on its native port.

- PVC Class Types?
- How to create a K8s multi-cluster infrastructure?

ISTIO

- What's Istio?
+ Cloud platforms provide a wealth of benefits for the organizations that use them. However, there’s no denying that adopting the cloud can put strains on DevOps teams. Developers must use microservices to architect for portability, meanwhile operators are managing extremely large hybrid and multi-cloud deployments. Istio lets you connect, secure, control, and observe services. At a high level, Istio helps reduce the complexity of these deployments, and eases the strain on your development teams. It is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, including APIs that let it integrate into any logging platform, or telemetry or policy system. Istio’s diverse feature set lets you successfully, and efficiently, run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.

ServiceMesh

- What's Service Mesh?
+    Istio addresses the challenges developers and operators face as monolithic applications transition towards a distributed microservice architecture. To see how, it helps to take a more detailed look at Istio’s service mesh.
   The term service mesh is used to describe the network of microservices that make up such applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.
   Istio provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications.

Docker

- What's the difference between CMD, RUN, EntryPoint?
+ Commands such as CMD, RUN and ENTRYPOINT are often confused when you are writing a Dockerfile to create a Docker image. If you have just started using Docker or you don't have enough hands-on experience with these commands, they can cause a lot of confusion. The link discusses all three commands in depth with practical examples.

- What's the difference between ADD, COPY ?
+ When creating a Dockerfile, there are two instructions that you can use to copy files/directories into the image: ADD and COPY. Although there are slight differences in the scope of their function, they essentially perform the same task. So why do we have two commands, and how do we know when to use one or the other? The link explains each command, compares ADD vs COPY, and tells you which one to use.

- What's EXPOSE and is it mandatory? Can you use other ports than the ones you wrote in EXPOSE?
+ Let's see what the official docs for the EXPOSE instruction say. In short, EXPOSE does not actually publish the port; it acts as documentation between the image author and the person running the container. It is not mandatory, and you can publish and use ports other than the ones listed in EXPOSE.

- How to optimize Docker images size ?
Docker image size optimization
- Rootless Container?
Rootless containers are containers that can be created, run, and managed by users without admin rights. ... They allow multiple unprivileged users to run containers on the same machine (this is especially advantageous in high-performance computing environments)


Jenkins

- How Sonarqube is used in the Pipelines?
SonarQube Integration with Jenkins Using Pipelines

- How to continue running the pipeline if a stage failed?
Jenkins declarative pipeline continue on failure

GitlabCI

- How to continue running the pipeline if a stage failed?
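A sketch of a .gitlab-ci.yml fragment using allow_failure, so the pipeline continues even when that job fails (job names and scripts are hypothetical):

stages:
  - test
  - deploy

lint:
  stage: test
  script:
    - ./run-lint.sh
  allow_failure: true    # a failure here does not block later stages

deploy:
  stage: deploy
  script:
    - ./deploy.sh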

AWS

- What's the difference between an ACL and a Security Group?
Link
- What's the difference between a Classic Load Balancer and an ALB? How do they work?
+ AWS offers three types of load balancers, adapted for various scenarios: Classic Load Balancers, Application Load Balancers, and Network Load Balancers. Obviously, all AWS load balancers distribute incoming requests to a number of targets, which can be either EC2 instances or Docker containers. They all implement health checks, which are used to detect unhealthy instances. They are all highly available and elastic (in AWS parlance: they scale up and down within a few minutes according to workload). --> Link

Terraform

- What are Modules used for?
Link
- What's the purpose of the State file ?
Link
- Where can you store State file ?
Terraform backend
- How to use locking in Terraform, and what for?
State Locking

Ansible

- What's Role ?
+ An Ansible role is a set of tasks to configure a host to serve a certain purpose, like configuring a service. Roles are defined using YAML files with a predefined directory structure. A role directory structure contains the directories: defaults, vars, tasks, files, templates, meta, handlers. Each directory must contain a main.yml file with the relevant content. Let's look a little closer at each directory.

  • defaults: contains default variables for the role. Variables in default have the lowest priority so they are easy to override.
  • vars: contains variables for the role. Variables in vars have higher priority than variables in defaults directory.
  • tasks: contains the main list of steps to be executed by the role.
  • files: contains files which we want to be copied to the remote host. We don’t need to specify a path of resources stored in this directory.
  • templates: contains file template which supports modifications from the role. We use the Jinja2 templating language for creating templates.
  • meta: contains metadata of role like an author, support platforms, dependencies.
  • handlers: contains handlers which can be invoked by “notify” directives and are associated with service.

- What's Playbook ?
+ Ansible Playbooks are the files where Ansible code is written. These files are written in YAML, which stands for "YAML Ain't Markup Language."
Playbooks contain one or more Plays. Plays map a group of hosts to a set of well-defined Tasks (and Roles). A Task is a single Ansible action to be executed.
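A minimal playbook sketch (host group, package and variable names are illustrative assumptions):

---
- name: Configure web servers
  hosts: webservers
  become: true
  vars:
    http_port: 80
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
        enabled: true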

- Types of variables? (where can they be defined)
Link

- In what order will they be applied?
Variable precedence (later values overwrite previous ones):

  • Values of variables in roles (tasks in roles will see their own values. Tasks that are defined outside of a role will see the latest values of role variables)
  • variables in inventory file
  • variables for host group in inventory file
  • variables for hosts in inventory file
  • variables in the group_vars directory
  • variables in the host_vars directory
  • host facts
  • script variables (play)
  • script variables that are requested via vars_prompt
  • variables that are passed to the script via vars_files
  • variables obtained through the register parameter
  • set_facts
  • variables from role and placed via include
  • block variables (overwrite other values for the block only)
  • task variables (overwrite other values ​​for task only)
  • variables that are passed when the playbook is called via the --extra-vars parameter (always the highest priority)

GIT

- What's the difference git fetch / git pull
+ When using pull, git tries to do everything for you: it fetches the new commits from the remote and merges them into the branch you are currently working on. The pull command merges commits automatically, without letting you review them first. If you don't follow the branches closely, running this command can lead to frequent conflicts.

When using fetch, git collects all commits from the target branch that are not in the current branch and stores them in the local repository. However, it does not merge them into the current branch. This is especially useful if you need to keep your repository up to date, but you are working on functionality that, if implemented incorrectly, could negatively affect the project as a whole. To merge commits into the main branch, you need to use merge. Roughly speaking, by default, git pull is a shortcode for a sequence of two commands: git fetch (fetching changes from the server) and git merge (merging into a local copy).

- What's rebase?
git rebase with examples
- What's squash ?
+ In Git, squash means combining several previous commits into one. It is not a standalone command but a technique (done, for example, with an interactive rebase or git merge --squash), and it is an excellent way to group related changes before forwarding them to others.
Squash commits into one with Git
- What's cherry-pick?
+ Cherry picking is the act of picking a commit from a branch and applying it to another. git cherry-pick can be useful for undoing changes. For example, say a commit is accidentally made to the wrong branch. You can switch to the correct branch and cherry-pick the commit to where it should belong.

Vault

- What is Vault?
+ HashiCorp Vault is a secrets management solution that brokers access for both humans and machines, through programmatic access, to systems. Secrets can be stored, dynamically generated, and, in the case of encryption, keys can be consumed as a service without the need to expose the underlying key material. HashiCorp Vault (RU)
- How does it work?
+ It can be used to store sensitive values and at the same time dynamically generate access for specific services/applications on lease. Plus, Vault can be used to authenticate users (machines or humans) to make sure they’re authorized to access a particular file. Authentication can either be via passwords or using dynamic values to generate temporary tokens that allow you to access a particular path. Policies written using HashiCorp Configuration Language (HCL) are used to determine who gets what access.
- How to setup integration with Consul?
 How to set up High Availability Vault with Consul backend? Consul Secrets Engine
- How to use it with Docker, Kubernetes deployments?
Docker Credential Helper for Vault-stored Credentials


How to setup Vault with Kubernetes

Answer on habr.com
envconsul
katacoda
2015 USENIX Container Management Summit (UCMS '15)

Consul

- What it is and how it works ?
Habr part 1
Habr part 2

Python

- Types of variables? 
Python - Variable Types
- What's GIL ?
+ The Python Global Interpreter Lock or GIL, in simple words, is a mutex (or a lock) that allows only one thread to hold the control of the Python interpreter.
This means that only one thread can be in a state of execution at any point in time. The impact of the GIL isn’t visible to developers who execute single-threaded programs, but it can be a performance bottleneck in CPU-bound and multi-threaded code.
Since the GIL allows only one thread to execute at a time even in a multi-threaded architecture with more than one CPU core, the GIL has gained a reputation as an “infamous” feature of Python.
Link
- What modules do you know?
Modules


SQL

- How does replication work in an SQL cluster?

Linux

- How to find the process that uses a specific file?
+ You can use the fuser command:
# fuser file_name
or lsof:
# lsof file_name
- How to recover a deleted file that is still held open in memory by a process?
- How to get TCPDump?
TCPDump
- How to see open ports?
How to check if port is in use on Linux or Unix
- How to list open connections?
+ For open (established) tcp connections, try:
# netstat -tn
To additionally get the associated PID for each connection, use:
# netstat -tnp

- How to find which process uses a lot of CPU?
ps stands for process status; it displays information about the active/running processes on the system.
It provides a snapshot of the current processes along with detailed information such as username, user id, CPU usage, memory usage, process start date and time, command name, etc.
# ps -eo pid,ppid,%mem,%cpu,cmd --sort=-%cpu | head
Details of the above command:
  • ps : This is a command.
  • -e : Select all processes.
  • -o : To customize a output format.
  • --sort=-%cpu : Sort the output based on CPU usage.
  • head : To display first 10 lines of the output
  • PID : Unique ID of the process.
  • PPID : Unique ID of the parent process.
  • %MEM : The percentage of RAM used by the process.
  • %CPU : The percentage of CPU used by the process.
  • Command : Name of the process
If you only want to see the command name instead of the absolute path of the command, use the ps command format below.
# ps -eo pid,ppid,%mem,%cpu,comm --sort=-%cpu | head
- How to find a memory leak? And how to fix it?

- What's the difference between PUT and POST REST API methods?
- What's the difference between Authentication and Authorization?

DNS

- What DNS routing policies do you know?
- What DNS record types do you know?
- How does DNS work?

Prometheus

Overview
- How it works?
- How to send data to Prometheus?
