• B

    For any code you write, there are several steps needed to transform it from a set of text files to a certain release artifact or a running service. You go through these steps manually at first, but sooner or later you decide to automate them. And this is how the CI/CD pipeline of a project is born.

    But there are different ways to organize the automation.

    The classical approach (CI System) is formed around a standalone CI system (e.g. Jenkins, Buildbot, Bamboo..). In the CI system you configure scenarios (jobs, build factories, plans..) and triggers. You can also add dependencies, manual triggers, parametrization, post-build hooks, dashboards and much more. The CI system usually becomes a world of its own, with a limited number of almighty administrators managing multiple interdependent projects at once.

    There is also a newer, "postmodern" way of setting up CI, essentially inspired by Travis CI and its integration with GitHub. And if we were to follow the common trend, we'd call it the System-less CI Pipeline.

    In this approach the recipes, triggers and actions of the CI/CD pipeline are stored in a configuration file together with the project code. The CI system (yes, there still is one) is hidden somewhere behind your repository frontend. You do not interact with the CI system directly; rather, you see it responding to your actions on the codebase: pull requests, merges, review comments, tags - every action can trigger a certain recipe configured inside the project. Feedback from the CI system is also provided via the repository frontend in the form of comments, tags and labels.
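    As an illustration, such a project-level configuration file might look like this (a hypothetical Travis-style sketch; the file name, commands and keys are made up for the example):

```yaml
# .ci.yml (hypothetical name), stored next to the project code.
# The hidden CI system reads it and reacts to repository events.
language: python

# Recipe triggered by pushes and pull requests:
script:
  - pip install -r requirements.txt
  - pytest

# Recipe triggered only by tags: publish a release artifact.
deploy:
  on:
    tags: true
  script:
    - python setup.py sdist
```

    The point is not the exact syntax, but that the whole lifecycle lives in the repository itself.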

    To highlight the difference between these setups, let us consider multiple projects (see picture).
    (picture: CI system vs pipeline)

    As soon as the code enters the CI system, it is no longer owned by the project. It becomes part of a larger infrastructure and follows the lifecycle defined on the system level. This allows complex dependency graphs, common integration gates and cross-project communication.

    In the pipeline case, the pipeline carries the project code through a set of steps defined on the project level. Each project gets an isolated, independent lifecycle, which generally doesn't interact with the outside world.

    And which one is better?

    You probably can guess my answer by now: yes, you need both.

    But let's dive into a common discussion point.

    CI system silo

    Silo is one of the scariest words in the modern IT environment, and also one of the deadliest weapons. The argument goes as follows:

    "Standalone CI system creates a silo, effectively cutting out the dev team from the project lifecycle. Pipeline is better because it lets developers to keep the full control of the project"

    As a Continuous Quantum Integration Engineer I would love to dismiss this argument altogether, as in the end it compares apples to oranges, but it does have a point. Or, better to say, I can see where it comes from:

    Widely known and widely adopted classical CI systems, like Jenkins, were not designed for collaboration and therefore are extremely bad at it. There are no "native" tools for code review, tests, approvals or even readable configuration. Projects like Jenkins Job Builder, which address some of these issues, are still considered alien to the CI ecosystem. A high entry barrier, outdated and unusable visualizations, lack of code reuse, no naming policies or common practices.. together with the generic code-centric development culture (in which CI is not worth any effort) this leads to complex, legacy CI systems. And none of us wants to be there.
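    For reference, Jenkins Job Builder is exactly an attempt at readable configuration: jobs are described as YAML kept under version control and then pushed into Jenkins. A minimal sketch (the project name and URL are made up):

```yaml
# jjb/jobs.yaml: a Jenkins job defined as data instead of hand-edited XML
- job:
    name: my-project-unit-tests
    description: Run the unit test suite on every change
    scm:
      - git:
          url: https://example.com/my-project.git
    builders:
      - shell: |
          tox
```

    Reviewing a diff to such a file is far easier than clicking through the Jenkins web UI - which is precisely why it feels "alien" to the classical ecosystem.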

    Thus, from a developer's point of view the choice looks as follows: either you work with some unpredictable, unknown third-party system, or you bring every piece of the infrastructure into the project and manage it yourself.

    (picture: dev point of view)

    Now, given that I admit the problem, I've got some good news and some bad news:

    The bad news is that moving to a CI Pipeline doesn't make you immune to the silo issue: it actively encourages you to create one. In fact, the pipeline is a silo by the very definition of the word:

    An information silo, or a group of such silos, is an insular management system in which one information system or subsystem is incapable of reciprocal operation with others that are, or should be, related. - Wikipedia

    As a project developer you might have a feeling that the pipeline improves the situation, while the only thing that has improved is your position with respect to the silo wall: you are inside now.
    (picture: pipelines silo)

    The good news is: there is another way!

    To solve the collaboration problem you don't give every collaborator a personal isolated playground, you give collaborators tools and processes to work in a shared environment. And that is exactly what a good CI system is supposed to be.


    It is an integration system, after all.

    posted in CI Book read more
  • B

    Let's take a look at Nomad CLI:

    $ nomad --help
    Available commands are:
       agent                 Runs a Nomad agent
       agent-info            Display status information about the local agent
       alloc-status          Display allocation status information and metadata
       client-config         View or modify client configuration details
       deployment            Interact with deployments
       eval-status           Display evaluation status and placement failure reasons
       fs                    Inspect the contents of an allocation directory
       init                  Create an example job file
       inspect               Inspect a submitted job
       job                   Interact with jobs
       keygen                Generates a new encryption key
       keyring               Manages gossip layer encryption keys
       logs                  Streams the logs of a task.
       node-drain            Toggle drain mode on a given node
       node-status           Display status information about nodes
       operator              Provides cluster-level tools for Nomad operators
       plan                  Dry-run a job update to determine its effects
       run                   Run a new job or update an existing job
       server-force-leave    Force a server into the 'left' state
       server-join           Join server nodes together
       server-members        Display a list of known servers and their status
       status                Display the status output for a resource
       stop                  Stop a running job
       validate              Checks if a given job specification is valid
       version               Prints the Nomad version

    Now you are probably not familiar with Nomad concepts, so let's run through them:

    There is a job. When a job is submitted, it creates a deployment. Whenever you update the job configuration and submit it to the cluster again, a new deployment starts.

    A deployment creates a group of allocations. An allocation is an instance of the app you are deploying, so a deployment may consist of N allocations.

    And an evaluation is the process of deciding where to place the allocations for a certain deployment.
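    Under these definitions, a minimal Nomad job file ties the concepts together (a sketch following the Nomad job specification; the names and the image are made up to match this post):

```hcl
# my-app.nomad: submitting this job creates a deployment,
# which in turn creates 3 allocations (instances of the app).
job "my-app" {
  datacenters = ["dc1"]

  group "web" {
    count = 3  # the deployment is a group of 3 allocations

    task "server" {
      driver = "docker"
      config {
        image = "bookwar/my-ip:0.0.1"
      }
    }
  }
}
```

    Each time you change this file and submit it, the scheduler runs an evaluation and rolls out a new deployment.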

    Now, back to the interface.

    • Why is there a nomad alloc-status and not nomad alloc status?
    • Why nomad node-drain and nomad node-status, and not nomad node [status,drain,..]?
    • Why nomad server-members and not nomad server status?
    • plan and run operate on jobs, so why are they on the top level?
    • and fs and logs operate on allocations

    And now compare this with kubectl's uniform scheme:

    kubectl [command] [TYPE] [NAME] [flags]
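    With that grammar, every resource type gets the same verbs, for example (a sketch, assuming a running cluster):

```shell
kubectl get pods                 # [command] [TYPE]
kubectl describe node my-node    # [command] [TYPE] [NAME]
kubectl delete job my-job        # same verbs work across resource types
```

    Once you learn the pattern for one resource, you know it for all of them.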

    posted in Main Track read more
  • B

    Once you get exposed to the Cloud, DevOps and Everything-as-a-Service world, you realize that almost every piece of software you might need has already been written by someone (or is provided as a service). Why isn't it all rainbows and unicorns then?

    Because you do not need a tool, a software or a service. You need an infrastructure.

    Start with the right problem

    The most common issue in dealing with the infrastructure setup is that you keep forgetting about the task.

    Of course it was hanging there somewhere in the beginning, when you gathered the initial list of tools which might be worth looking at. But then, once you get your hands on the list, you ask yourself:

    Which one is better - DockerHub, Quay, Artifactory or Nexus?

    And you are doomed.

    As soon as you ask that question, you start looking into feature lists and marketing materials for the mentioned products. And then you build your entire choice strategy on figuring out how to get more while paying less.

    The truth is: you do not need more. Usually you need just the right amount of stuff to do the right task. And you can not replace one feature with five others you got on a summer sale.

    So let's do our homework first and get back to the task.

    The task

    Again, we start with the question:

    We need a Docker Registry, how do we get one?

    And again, we are wrong.

    No one needs a Docker Registry for the sake of having a Docker Registry alone. It is an implementation detail which has slipped into our problem statement. So let's remove it and find out what we are truly looking for:

    the infrastructure to build, store and distribute Docker images

    Now this is something to talk about.


    Docker images are widely known as a delivery mechanism. And every delivery can work in two ways: it might be you who packages and delivers the software to the customer (or target environment), or it might be some third-party which packages the software and delivers it to you.

    In the latter case you might get by just fine without the need to build a single Docker image on your own. So you can skip this part and focus on storing and distributing. Even better, you may forget about Docker images entirely: treat them as any other external binary artifact - download the tarball, install, back up and watch for updates.

    If building Docker images is your thing, then welcome to the club and keep reading.


    Let's say you use Docker images for packaging your apps to deploy them to the production environment. You have Dockerfiles stored in your source code and CI/CD pipelines set up to build them in your build service. And your release schedule is close to one release per hour.

    With that kind of setup your Docker infrastructure is, in fact, stateless. If for some reason you lose all the Docker images currently available, you can rebuild them one by one directly from sources. Then you do not really need the storage function and can focus on building and distributing.

    If, on the other hand, you need a long-term storage for Docker images, for example those which you have released and delivered to your customers, things get slightly more complicated.

    In real life you are most likely to need both: there are base images you update once per year, there are images which you can not rebuild at all (there shouldn't be, but there usually are), and there are images which you build daily in the hundreds.

    And it totally makes sense to manage them differently. Maybe with different tools?


    In the end it all comes down to distributing images to the target environments. And a target environment could be a developer workstation, a CI system, production or a customer environment. And these environments are usually distributed all over the globe.

    Now the Docker registry should be as close as possible to all of these environments at once. Can you find such a place for it? I guess not. And here is where caching, mirroring and proxying come into play.

    With all that in mind, the natural question to ask is: how many registries do you actually need? It appears, more than one.

    Build, store and distribute, shaken, not stirred

    Let's set up the context and consider the following situation: we develop an application, package it into container images, and then deploy them to several remote production environments (for example eu and us).

    Registries and their users

    Users working within the infrastructure can be divided into three groups: contributors (Dev), build and test systems (CI), and the customer or consumer (Prod).

    Dev users create and share random containers all day long. They do not have rules and restrictions. Dev containers should never go anywhere but the workstations and temporary custom environments. These containers are generally disposable and do not need a long-term storage.

    CI users are automated and reliable, thus they can follow strict rules on naming, tagging and metadata. Similarly to Dev, CI users generate and exchange huge amounts of disposable images. But as CI provides the data to make decisions (be it decision to approve a pull-request or decision to release the product), CI images must be verifiable and authoritative.

    Prod users do not create containers, they work in a read-only mode and must have a strict policy on what is provided to them. The prod traffic is much lower in terms of variety of images consumed, while it might be high in terms of the amount of data fetched.

    We could have tried to implement these use cases in one Docker registry: set up naming rules, configure user and group permissions, enforce a strict policy for adding new repositories, enable per-repository cleanup strategies.. But unless you invest a huge amount of time in setting up the workflows, it is going to be complex and error-prone. It is also going to be fragile and hard to maintain. Simply imagine an update of this single service blocking your entire infrastructure.

    The other way to do it is to set up several registries with separate scopes, concerns and SLAs.

    Dev users get the Sandbox registry: no rules, free access, easy to set up, easy to clean, easy to redeploy.

    CI users get the CI Pool registry: high traffic, readable by everyone, writable by CI only. It should be located close to the CI workers as it is going to be heavily used by CI builds and test runs.

    There should also be a Prod registry: only CI users can publish to it, via a promotion pipeline. This is the registry which you need to "export" to remote locations. It is also the registry you probably want to back up.
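    As a sketch, the promotion pipeline from the CI Pool to Prod could be a single manually-triggered CI job (the registry hostnames, job name and variable here are hypothetical):

```yaml
promote-to-prod:
  stage: release
  when: manual        # promotion is an explicit decision, not an automatic step
  script:
    - docker pull ci-pool.example.com/my-app:${VERSION}
    - docker tag ci-pool.example.com/my-app:${VERSION} prod.example.com/my-app:${VERSION}
    - docker push prod.example.com/my-app:${VERSION}
```

    The image bits never change on promotion; only the registry they live in does.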

    Depending on your workflows, you might also want to split the CI Pool registry into CI Tests and CI Snapshots. CI Tests would be used for images you build in pre-merge checks and in test runs, while CI Snapshots would hold the latest snapshot builds for each of the applications you develop, working or not.

    In the following example of the infrastructure layout we've also added a remote Archive registry. Its primary purpose is to be a replica of certain important DockerHub images. It allows you to be flexible in keeping and overriding upstream images:

    (picture: docker registry infrastructure)


    Finally, we come to that tool question again. But now we are equipped with the knowledge of how we want to use them.

    We have a high network throughput requirement for the CI registries, thus we probably don't want a managed service priced by network usage. For random images in the Sandbox we do not want to pay per repository created (in Docker, an image name represents a repository). For caching instances we'd like an easy-to-maintain and easy-to-test open-source solution which we can automate and scale. For the archive we may want an independent remote registry with low network requirements but reliable storage options.

    We now can choose and build a hybrid solution which is tailored to our needs, rather than go for the largest and most feature-rich service available and then unavoidably get into building up a pile of workarounds and integration connectors around it.


    It looks scary at first: you just wanted a place to put the image, and now you get multiple registries, scopes, network connections and interactions. But the truth is: we have not created additional complexity, we've exposed the layout which has been there in your infrastructure from day one. Instead of hiding the complexity under the hood, we are making it explicit and visible. And by that we can design an infrastructure which is effective, maintainable and solves the problem at hand, rather than one which merely uses those nice tools everyone is talking about.

    posted in CI Book read more
  • B

    In this demo-like tutorial I am not going to explain how Kubernetes works. Instead I will show how you work with Kubernetes.

    For the overview of the Kubernetes key concepts, try this talk https://www.youtube.com/watch?v=v3T9ctJkxJg

    Note: All shell commands prefixed by $ are executed locally on a dev machine without admin rights.


    Things you need to set up to run through this tutorial:

    • Kubernetes cluster
    • kubectl command line tool
    • Docker registry

    Kubernetes cluster

    If you do not have a cluster available, download minikube utility and initiate the local cluster via:

    $ ./minikube start

    It will fetch a virtual machine image with a preconfigured one-node Kubernetes cluster and run it on your local machine.


    You need to install kubectl on your local machine and configure it to work with the Kubernetes cluster you got in the previous step.

    For example on Fedora:

    $ sudo dnf install kubernetes-client

    kubectl reads its configuration from ~/.kube/config file.

    Minikube generates kubectl configuration file automatically. It should look similar to the following:

    apiVersion: v1
    kind: Config
    preferences: {}
    clusters:
    - cluster:
        certificate-authority: /home/bookwar/.minikube/ca.crt
      name: minikube
    users:
    - name: minikube
      user:
        client-certificate: /home/bookwar/.minikube/apiserver.crt
        client-key: /home/bookwar/.minikube/apiserver.key
    contexts:
    - context:
        cluster: minikube
        user: minikube
      name: minikube
    current-context: minikube

    If you use a remote cluster, you need to create or update this file manually with the relevant credentials.

    Verify the configuration by running

    $ kubectl cluster-info
    Kubernetes master is running at

    Access to Docker registry

    To work with containerized applications you need a registry of container images. In this tutorial we need to push images to the registry from the dev machine and pull them from the cluster. While one can (and, I believe, should) set up a private registry for this purpose, it is way out of scope for this simple tutorial.

    Thus, in this tutorial we are going to use DockerHub, which is configured by default in minikube.

    To be able to upload images to DockerHub, sign up through its web interface. Then log in to the registry from the local machine by running:

    $ docker login 


    Here is what we are going to do:

    • write an application and test it,
    • package it into a docker image, test it and publish to registry
    • deploy the application to the Kubernetes cluster and test it,
    • roll out an update and, you get this by now, test it.

    Step 1: Application

    Properly containerized applications should work transparently and should never depend on a particular container instance. Thus you would never rely on the local IP address or hostname of a container in real life. But for the purpose of this demo we will use an application which exposes the container internals.

    We create a very simple Python application which listens on port 5000 and replies with the list of host IP addresses.


    Create file ./app.py with a Flask application :

    from flask import Flask
    import subprocess

    app = Flask(__name__)

    @app.route("/")
    def hello():
        # `hostname` is available on Fedora (and installed in the image below)
        ip_a = subprocess.check_output(["hostname", "--all-ip-addresses"]).decode().split()
        return "IP information: " + " ".join(ip_a) + "\n"

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000, debug=True)


    Oh, come on, it is Python.


    Run it:

    $ python app.py
    * Running on (Press CTRL+C to quit)
    * Restarting with stat
    * Debugger is active!
    * Debugger pin code: 190-291-556

    Access http://localhost:5000 to see it working:

    $ wget -qO- http://localhost:5000
    IP information:

    Step 2: Container Image

    Create ./Dockerfile

    We start from the fedora:latest image, install runtime dependencies, then copy our app.py application from the host
    into the container, and finally define the container entrypoint to run the command python app.py on start.

    FROM fedora:latest
    MAINTAINER bookwar "alpha@bookwar.info"
    RUN dnf install -y python-flask hostname && dnf clean all
    COPY ./app.py /app/
    WORKDIR /app
    ENTRYPOINT ["python"]
    CMD ["app.py"]

    Build and tag the image:

    Let's use name local/my-ip and version 0.0.1:

    $ docker build -t local/my-ip:0.0.1 .

    Test image

    Run container locally

    $ docker run -p 8888:5000 local/my-ip:0.0.1

    Note that while the application uses port 5000 inside the container, we bind it to port 8888 on the host network.

    Thus we can now access http://localhost:8888 and see the report with the internal IP address of the container:

    $ wget -qO- http://localhost:8888
    IP information:

    Push container to the registry.

    Tag it as bookwar/my-ip (here bookwar is my username at DockerHub, use yours):

    $ docker tag local/my-ip:0.0.1 bookwar/my-ip:0.0.1

    Push to the registry

    $ docker push bookwar/my-ip:0.0.1

    Now the image is available at DockerHub and anyone can use it via the bookwar/my-ip:0.0.1 name.

    Step 3: Deployment

    Create a deployment object

    We call it my-ip and set replica counter to 5 pods.

    $ kubectl run --image=bookwar/my-ip:0.0.1 --replicas=5 my-ip

    Check that 5 pods were created:

    $ kubectl get pods
    NAME                     READY     STATUS    RESTARTS   AGE
    my-ip-3794442940-1m52n   1/1       Running   0          3m
    my-ip-3794442940-2642d   1/1       Running   0          3m
    my-ip-3794442940-61lqf   1/1       Running   0          3m
    my-ip-3794442940-97nx1   1/1       Running   0          3m
    my-ip-3794442940-fdpvv   1/1       Running   0          3m

    Pods are listening on the local network inside the cluster and are not accessible from the outside.

    Create an exposed service:

    $ kubectl expose deployment my-ip --type=NodePort --name=my-ip-service --port 5000

    Find out the service node port

    Now there is a service which redirects every request it receives to port 5000 of one of the pods in the my-ip deployment group. This
    service has type NodePort, which means that it is exposed as a port on every cluster node. To find out the exact value
    of the NodePort, we can check the service details via the describe subcommand.

    $ kubectl describe service my-ip-service
    Name: my-ip-service
    Namespace: default
    NodePort: <unset> 30346/TCP

    Note the 30346 NodePort assigned to the service.
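    If you prefer a machine-readable answer, the same value can also be extracted with a jsonpath query (a sketch, assuming the service from the previous step exists):

```shell
kubectl get service my-ip-service -o jsonpath='{.spec.ports[0].nodePort}'
```

    This form is handier in scripts than parsing the describe output.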

    Check the service

    Now it is easy to reach the service from the outside by accessing the node IP at the NodePort:

    $ wget -qO-
    IP information:

    Here we use the same IP which we got from the kubectl cluster-info command earlier.

    Let us also test it with a debugging pod.

    Run the pod:

    $ kubectl run -i --tty busybox --image=busybox --rm --restart=Never

    Using the shell prompt inside the pod, call the service several times:

    / # wget -qO- my-ip-service.default:5000
    IP information:
    / # wget -qO- my-ip-service.default:5000
    IP information:

    Note that we are using the DNS name and internal port 5000 as we work with internal cluster network.

    You also get different IP addresses in response, as requests are balanced across the different pods behind the service.

    Step 4: Rolling out an update

    Deployment objects in Kubernetes come with the update strategy. By default, it is set to RollingUpdate.

    $ kubectl describe deployment my-ip
    Name:      my-ip
    StrategyType:    RollingUpdate
    RollingUpdateStrategy:  1 max unavailable, 1 max surge
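    The same strategy can be set explicitly in the deployment manifest. A fragment of what you would see in the object's yaml (field names per the Deployment spec):

```yaml
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 pod may be down during the rollout
      maxSurge: 1         # at most 1 extra pod may be created on top of 5
```

    This is what guarantees the service keeps serving traffic while pods are being replaced one by one.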

    Let's update the base container image for our my-ip pods.

    Edit the app.py

    We add a "Hello, world!" string:

        from flask import Flask
        import subprocess

        app = Flask(__name__)

        @app.route("/")
        def hello():
            ip_a = subprocess.check_output(["hostname", "--all-ip-addresses"]).decode().split()
            return "Hello, world! Here is my IP information: " + " ".join(ip_a) + "\n"

        if __name__ == '__main__':
            app.run(host='0.0.0.0', port=5000, debug=True)

    Build, tag and push new 0.0.2 version of the image

    $ docker build -t local/my-ip:0.0.2 .
    $ docker tag local/my-ip:0.0.2 bookwar/my-ip:0.0.2
    $ docker push bookwar/my-ip:0.0.2

    Bump version of an image used in deployment object

    The image version is stored in the configuration of our deployment object. There are several ways to change it; let's
    use the interactive edit of the deployment:

    $ kubectl edit deployment my-ip

    Running the command will open an editor with the yaml configuration of the my-ip deployment. Find the container spec
    block and edit the image version:

             - image: bookwar/my-ip:0.0.2

    Save and exit. The rollout procedure will start immediately.
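    As a one-liner alternative to the interactive edit, kubectl also has a set image subcommand (a sketch; the first my-ip names the deployment, the second names the container, which by default matches the deployment name):

```shell
kubectl set image deployment/my-ip my-ip=bookwar/my-ip:0.0.2
```

    This is easier to use from scripts and CI pipelines than an interactive editor.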

    Verify the update

    Call the service via external port again:

    $ wget -qO-
    Hello, world! Here is my IP information:

    The new version is rolled out and we've got a different string.

    posted in Knowledge Base read more
  • B

    In the previous article I explained the underlying idea behind the Continuous Integration concept. Now let's get a bit more practical and talk about how this idea appears in software development.

    CI starts with the code

    CI is a development practice. This is the official meaning of the term, and you can find it all over the hardcore Software Architecture and DevOps conferences, in books and in high-level discussions. Unfortunately it is rarely known as such among real-world programmers, even though they make up its target audience.

    Therefore, when introducing CI, the hardest obstacle you face is rarely technical (in the end, any tech problem can be solved with the right amount of Python scripting). The true complexity comes from the understanding that CI is not something for the infrastructure team to automate, it is something for the development team to follow.

    Let me elaborate.

    CI needs good code structure

    The possibility to merge changes into the mainline at least once per day does not come for granted. If every change you make to the codebase has a footprint of hundreds of files, the only thing you get at the end of the day is a pile of merge conflicts.

    Tight coupling, huge files, mixed concerns and responsibilities.. everyone knows these are signs of bad code quality. CI exposes them and makes them harder to ignore.

    If you find that you bump into someone else's conflicting changes all the time, do not blame the CI, consider it to be a good reason to change the code structure.

    CI needs good discipline

    CI practice assumes that you keep the mainline in a releasable state. Every time there is a concern about the current state of the codebase, one should stop doing anything else and deal with the issue.

    No matter how important it is for you to merge your particular change, it must be delayed.

    More than that, do not try to fix the issue in place by adding more untested code on top of the broken state. Revert to the known working state and unblock others.

    If master is failing, do not blame the CI; revert to the previous known working state and deal with the commit which introduced the regression separately.
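    As a sketch of that flow with git (the commit hash here is a placeholder), reverting keeps the mainline releasable while the real fix is prepared on the side:

```shell
# Create a new commit that undoes the offending one, without
# rewriting published history, then push it to unblock everyone.
git revert --no-edit abc1234
git push origin master
```

    The author of abc1234 can then reproduce and fix the regression on a branch, at their own pace.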

    CI needs good communication

    The CI practice needs you to be able to work independently. One part of it is to reduce the footprint of your changes; the other is to make sure that you do not overlap with someone else's work.

    You don't want to spend time writing that helper function just to find out that your colleague has just merged it into master.

    So when you find yourself implementing a new feature on top of functionality which has just been removed by someone else's refactoring work, don't blame CI, but rather introduce a code and design review process and improve your communication channels.

    CI needs good management

    CI practice requires you to merge unfinished functionality into the code.

    • It saves the time spent on integration efforts as integration is done by small steps in a controlled way.
    • It helps to track the current progress of a feature implementation, as there is no unpredictable integration explosion waiting in the very end.
    • It also gives you the flexibility to release a product on time even if planned features were delayed or canceled.

    The downside is that every unfinished feature introduces technical debt. And once you've decided to cancel development of a certain subproject, you can not simply get out and move on to the next one. There is a cleanup you must perform.

    If your code is full of deprecated feature toggles, do not blame the CI, but create a policy to track feature implementations and find time for regular cleanups.

    CI is not magic

    CI can not magically make your life easier, your development faster, your code better, your tests greener or your salary higher. It provides you with the tools to do that yourself.

    So, generally, just stop blaming the CI :)

    posted in CI Book read more
  • B

    I'd like to add:

    People tend to use @channel as a code for everyone, while in fact it is paired with a notification alert. Thus it actually means everyone, right now!
    Same for @here: it translates to everyone online, listen to me right now!

    Generally, there is a huge difference between important and urgent. These are two independent properties, two different dimensions.

    Important messages should go to the #important channel, but without an alert. They are important and supposed to be checked on a regular basis, but not immediately. Alerts are for urgent only.

    And as a rule of thumb, one should always mentally add 'right now' to the message before sending an alert. If it fits, then the notification is justified.

    If Slack developers made this clearer in the user interface, people would use the broadcasting options less often and more to the point.

    posted in Knowledge Base read more
  • B

    Let's talk about Slack best practices. Or should I say worst practices?

    In particular about @channel and @here

    For those who don't know: @channel in Slack alerts everyone in the channel, and @here alerts everyone in the channel who is online. Both commands are widely used, but both are, in fact, harmful.

    The Problem

    There is a team, A-Team, and a channel, #a-team-channel. The audience of the channel is: 10 members of the A-Team itself, and 500 members of other teams who came to the channel to discuss issues with the A-Team or just to browse what is going on there.

    Now you come to the channel and want to get the attention of one of the A-Team members.

    If you use the @channel keyword, you alert 510 people at once, distracting them from whatever they are doing. If you use @here, you alert about 300 people, and maybe no one from the A-Team itself. This is extremely counter-productive and leads to people disabling Slack notifications altogether.

    What are the alternatives?

    Slack supports two more alerting mechanisms: keywords and groups.

    User Groups

    Groups provide a more formal way of managing group casts.

    Pros: You can create a group and add people to it. Group members themselves don't need to do anything on their side.

    Cons: Group management requires admin rights; it is not flexible and can not be self-managed. If you change your current role from duty engineer to research engineer for a week, you can not simply leave the 'on-duty' group yourself and must contact a Slack admin to do that for you.


    Keywords

    With keywords you can configure per-user alerts and get a notification whenever someone mentions a specific word in chat. For example, I have an alert set for whenever someone mentions the word 'jenkins'.

    But keywords can also be used in a more organized way. If the A-Team chooses its own keyword, like a-team, and every team member subscribes to it, then this keyword can be used instead of the 'channel' and 'here' casts. Alerts become more direct and reach exactly the people you actually need.

    Pros: Fully flexible. Team members decide and manage which keywords they care about.
    Cons: Requires self-discipline. If you are the duty engineer, you must go and subscribe to the "on-duty" keyword to get alerted.

    Critical remark

    Note that the crucial step here is to promote the keyword or user group to people outside of the team. It needs to be discoverable the same way as an e-mail address or a Jira project. Add it to any landing page your team owns.

    We also found it most effective to announce important keywords in the channel topic. Simply add the line “To contact A-Team use keyword a-team” to the #a-team-channel topic, and after some time people will learn to use it.


    The groups/keywords approach reduces fragmentation, as you don't need to create separate channels with smaller numbers of participants just to limit the alerting power.

    It increases cross-team presence, as you can join more channels in a browsing mode without increasing your alert stream. You can watch and recap what has happened in the A-Team channel during the day, even if you are not the person directly responsible and don't get alerted.

    It improves the overall usage of Slack as alerts become more specific and go directly to the people you want to target.

    Example 1: Support Channel

    Suppose we have Datacenter Engineers and Office IT Engineers teams. Each team has its own responsibilities, but they also share a lot of common knowledge.

    The Classical Way says we create #dc-team and #it-team channels. A typical dialog then looks as follows:

    The Support Struggle

    Act 1. In #dc-team

      user: @here Server ABC is not responding, please check!!   # Alert includes ~50 server users
      dc-eng: logs?
      user: ..
      dc-eng: ask at #it-team

    Act 2. In #it-team

      user: @here Can not reach server ABC, please check!!        # Alert includes ~100 office users
      user: dc-team sent me here
      it-eng: logs?
      user: .. same logs again..
      it-eng: some more logs?
      user: ...
      it-eng: ask at #dc-team

    Act 3. In #dc-team again

      angry user: @here @channel PLEASE HELP!!!
      angry user: it-team sends me back here, because they said.. (and here follows a complete misinterpretation of what it-team actually said)

    Here the user is bounced between channels and forced to repeat the entire context of the discussion. I'll leave it up to you to calculate the number of people alerted in these conversations.

    Now the Keyword Way says we should have just one channel: #support. DC Engineers respond to the dc-team keyword, and Office IT Engineers to it-team.

    The above dialog would look as follows:

    The better way

    In #support

      user: dc-team, server ABC is not responding, please check!!   # Alert ~5 people
      dc-eng: logs?
      user: ..
      dc-eng: it-team, ^^                                         # Alert ~7 people
      it-eng: some more logs?
      user: ...
      it-eng: @dc-eng ^^                                           # Alert 1 person
      <here the it-eng and dc-eng engineers start to dig in together>

    Here the two teams can be pulled into the conversation independently, but they share the context of the discussion. They can also talk to each other directly, which eliminates the misinterpretation problem.

    Example 2: Development Channel

    Suppose there is an ABC project with Dev, QA and Ops teams working on it.

    The common pattern here is to create #abc-dev, #abc-qa and #abc-ops channels. The issues that come from such a division are well known, and I won't even bother you with a play to show them.

    The better way

    Create one #abc channel and add abc-qa, abc-dev and abc-ops keywords covering different aspects of the project. When the project fails due to network issues in the datacenter, alert abc-ops; when nightly tests start failing, use abc-qa; when there is a new feature planned, discuss it with abc-dev. But keep it all in the same channel.

    Even if you haven’t gone full DevOps yet, you can add a lot of transparency by reorganizing your communication channels.


    When using keywords and groups in Slack, you do not need to align channels to the team structure. The team structure is covered by user groups or keywords, while channels can be aligned to the content. This gives you flexibility, reduces fragmentation and increases overall cross-team collaboration based on projects and topics rather than formal structure.

    posted in Knowledge Base
  • B

    The OpenShift team created a nice way to describe application deployments.


    The only question is: how to autogenerate this from a set of YAML files.

    posted in Main Track
  • B

    @akaRem said in FAQ:

    Q: What is quantum integration and why does it fit better?

    Choose the reason you like more :wink:

    1. Continuous Integration works with small changes. So it is discrete (quantum) by definition.
    2. The term CI is overloaded and often misused; most often it is used as a synonym for test automation infrastructure, which is plain wrong. So we need a new one.
    3. Continuous Integration is just one of the possible methodologies. The term quantum allows us to extend our scope, and go beyond the classical theory of continuous integration to the yet undefined areas.
    4. And it sounds good

    posted in Forum Help
  • B

    The Ultimate Source of Truth, Wikipedia, defines continuous integration as the practice of merging all developer working copies to a shared mainline several times a day.

    My version goes a bit deeper:

    Definition: Continuous Integration (CI) is the practice of reaching a goal by doing small changes one at a time, while keeping the main artifact in a releasable state at all times.

    Why do I need a different definition? Let's take a closer look.

    First of all, by this definition CI is not limited to the area of programming or software development. Indeed, one can and should consider the CI practice applied to any kind of production workflow:

    Production starts with an artifact. It can be a software application, but it may as well be a book, a picture or a building. The artifact has a certain current state, for example, the concept of a book written on a napkin. And then there is a goal: the target state of the artifact we plan to reach (e.g. 450 pages in a hard cover).

    The production workflow is the process of modifying the artifact, carrying it through a sequence of intermediate states to the predefined goal.

    To add continuous integration to the picture we need a certain notion of quality: a way to differentiate a releasable state of the artifact from an unreleasable one. For a book we might say, for example, that the book is in a releasable state if all its chapters are complete. Continuous integration of a book would then be printing each new chapter as soon as it is ready.
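    The definition can be sketched as a toy model (a sketch of mine, not an implementation; names like `is_releasable` are purely illustrative): changes are applied one at a time, and a change that would break the releasable-state invariant is rejected rather than accumulated.

```python
def integrate(artifact, changes, is_releasable):
    """Apply small changes one at a time, keeping the artifact releasable.

    Each change is a function artifact -> new artifact. A change that
    would leave the artifact unreleasable is rejected and reported,
    mirroring a failed merge gate in software CI.
    """
    rejected = []
    for change in changes:
        candidate = change(artifact)
        if is_releasable(candidate):
            artifact = candidate      # integrate: the mainline moves forward
        else:
            rejected.append(change)   # do not accumulate broken state
    return artifact, rejected

# The book example: a chapter may only be "printed" once it is complete.
book = ["ch1 complete"]
add_good = lambda b: b + ["ch2 complete"]
add_bad = lambda b: b + ["ch3 draft"]
releasable = lambda b: all(c.endswith("complete") for c in b)
result, rejected = integrate(book, [add_good, add_bad], releasable)
```

    Note that nothing here mentions compilers or test runners: the invariant check is the essence, and automated tests are just one way to implement `is_releasable` for software.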

    Remark: As you may see, the nature of continuous integration is actually quantum. As follows from the definition, continuous integration is performed by applying small atomic changes, "quanta". And all continuous integration workflows work with discrete chunks of data rather than continuous streams.

    Another important note is that the above definition doesn't imply a specific implementation of the production workflow.

    In day-to-day conversations, "doing CI" often means setting up a build service which runs certain tasks, build scripts and tests, triggered by certain events. But the CI concept itself does not require deployment automation, test coverage or Slack notifications about failed builds.

    These technicalities are indeed useful and come naturally from applying the CI approach to software development, but they are helper methods, which should always be measured by and aligned to the general idea.

    In other words: while running automated tests is a good practice when implementing continuous integration in development, the continuous integration approach can not be reduced to just running automated tests.

    The third thing I want to point out is that the definition says nothing about a third party, another actor, that you need to integrate with. There is a simple explanation for that: I believe that even when there is one and only one acting entity in the workflow (one developer working on the codebase, one author writing the book..) there is still room for integration. Generally speaking, whenever you are involved in any kind of long-term process, you should treat "yourself today", "yourself yesterday" and "yourself tomorrow" as independent third parties, which might work in parallel on different parts of the project and should exchange their work via documentation, code review and comments just as usual collaborators do.

    posted in CI Book
