DevOps & SRE practice; Cattle not Pets

Rajinder Joat
2 min read · Apr 22, 2021

CREATE A PYTHON MICROSERVICE

  • encapsulated with a Dockerfile
  • and requirements.txt
  • orchestrated by Kubernetes
  • with a performance profile for expected transaction volume & resiliency
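A minimal sketch of such a microservice, using only the standard library (the endpoint names and port are illustrative assumptions, not anything the article prescribes): a `/health` route for Kubernetes probes and a trivial work route, with the routing logic split into a pure function so the CI unit tests described later can exercise it.

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


def handle_path(path):
    """Pure routing logic, separated out so it is unit-testable."""
    if path == "/health":
        return 200, {"status": "OK"}
    if path == "/work":
        return 200, {"result": sum(range(1000))}
    return 404, {"error": "not found"}


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, payload = handle_path(self.path)
        body = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


def serve(port=8080):
    # A Dockerfile would EXPOSE this port; requirements.txt stays empty
    # for a stdlib-only service like this one.
    ThreadingHTTPServer(("0.0.0.0", port), Handler).serve_forever()
```

Packaged with a `Dockerfile` and `requirements.txt`, this is the unit Kubernetes would schedule and scale against the expected transaction volume.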

MAKE IT MONITORED

  • by auto-generated telemetry (percentage-threshold) alerts
  • and monitoring (OK, WARN, CRIT, DOWN) alerts
  • based on:
    - workload type (expected (sporadic?) CPU-bound or (sporadic?) I/O-bound?)
    - expected cost of running the stack's resources for the expected amount of time, charted by day, week, month, and year
    - expected latency for CPU, RAM, disk I/O, net I/O, SAN I/O, PIOPS assigned to volumes, and PIOPS assigned to other resources
      - for this microservice AND all running services that support it
    - application log data
      - regex event-trigger words
      - changes in rates of transaction types
      - predicted or previously recorded resource utilization by transaction type
      - changes in the expected volume of time-based baseline deviations
    - host log data
      - with varying verbosity levels, which can be
      - tweaked by eBPF code that is dynamically updated by the OBSERVABILITY system
      - exposed by dynamically updated Kubernetes configurations
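Two of the pieces above can be sketched in a few lines of Python. The thresholds and trigger words are illustrative assumptions, not values from this article: a classifier mapping a telemetry percentage onto the OK/WARN/CRIT/DOWN states, and a regex scan for event-trigger words in application logs.

```python
import re


def alert_state(utilisation_pct, reachable=True, warn=75.0, crit=90.0):
    """Map a telemetry percentage onto the four monitoring states.

    Threshold defaults are placeholders; real values would come from
    the workload's performance profile.
    """
    if not reachable:
        return "DOWN"
    if utilisation_pct >= crit:
        return "CRIT"
    if utilisation_pct >= warn:
        return "WARN"
    return "OK"


# Hypothetical event-trigger words for the application-log scan.
TRIGGER = re.compile(r"\b(ERROR|FATAL|Traceback)\b")


def log_triggers(lines):
    """Return the log lines that contain a trigger word."""
    return [line for line in lines if TRIGGER.search(line)]
```

In practice the thresholds and patterns would themselves live in source control and be tuned by the observability system, per the eBPF/Kubernetes point above.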

DEPLOY IT USING INFRASTRUCTURE-AS-CODE, executed by running jobs in Jenkins CI/CD pipelines. AD/LDAP groups manage which dev or ops teams can execute which jobs, and RBAC IAM policies and roles are applied to the stacks those Jenkins jobs generate. Have a plan to interconnect the two (Okta as a third-party SAML provider?).

NOTE ABOUT STACKS:
“Stacks” are a collection of different resource types. Stacks are created and managed by AWS CloudFormation (and CodePipeline), or whichever equivalent web-services exist in the public or private cloud ecosphere API of your choice. For example:
- An EC2 instance
- An AKS container
- A Lambda function
- An RDS instance (optimized reads possible)
- A DynamoDB table (wire-speed writes)
- An S3 bucket (an efficient, cost-effective object store and CDN backer)
- A Hadoop cluster that map-reduces large data-processing jobs across distributed worker nodes, which crunch a fairly assigned workload amongst themselves, intelligently, like AWS EMR. This allows log analytics to be summarized for efficient delivery to an ML/AI prediction engine that feeds back into Monitoring, Observability, and Orchestration (MOO) systems.
ALWAYS KNOW WHY YOUR COWS MOO; cattle, not pets.
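As an illustrative sketch of the "stack" idea, a CloudFormation template can be built as plain Python data and handed to the CreateStack API (e.g. via boto3) from a Jenkins job. The resource name and properties here are assumptions for the example, not part of the article:

```python
import json


def stack_template(bucket_name):
    """Return a tiny CloudFormation template describing one resource.

    A real stack would declare several of the resource types listed
    above; a single S3 bucket keeps the sketch short.
    """
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "LogsBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }


# json.dumps(stack_template("my-logs")) is the TemplateBody a Jenkins
# job could pass to CloudFormation's CreateStack call.
```

Generating templates from code (rather than hand-editing JSON/YAML) is what lets the same pipeline stamp out identical, disposable stacks: cattle, not pets.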

PUT EVERYTHING IN SOURCE REVISION CONTROL

With webhooks on commits triggering Jenkins pipeline CI jobs that run unit tests; their cumulative success then triggers CD jobs that make cloud API calls, permitted only to the appropriate users in your organization, using the same SSO AD/LDAP for Git as for Jenkins and for AWS IAM.
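The webhook-to-CI hand-off can be sketched as follows. Jenkins jobs can be triggered remotely by a POST to `<base>/job/<name>/build` (with crumb/token authentication in practice); the job name and base URL below are placeholders. Building the URL as a pure function keeps it unit-testable without a network:

```python
from urllib.parse import quote


def jenkins_build_url(base, job):
    """Build the remote-trigger URL for a Jenkins job.

    A Git server's commit webhook handler could POST to this URL
    (with an API token) to kick off the CI pipeline.
    """
    return f"{base.rstrip('/')}/job/{quote(job)}/build"


# e.g. requests.post(jenkins_build_url(base, job), auth=(user, api_token))
```

Because the same AD/LDAP backs Git, Jenkins, and IAM, the identity that pushed the commit is the identity the pipeline acts as.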

Create your code, see how much it costs, and optimize it, considering the cost of the specific resources that support it and the performance profiles caused by structural code-flow and optimization decisions within it or its supporting systems.
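The "see how much it costs" step reduces to simple arithmetic once per-resource rates are known. A toy model, with made-up placeholder rates rather than real cloud pricing:

```python
def monthly_cost(hourly_rates, hours=730):
    """Sum per-resource hourly rates over an average month (~730 h).

    hourly_rates: mapping of resource name -> USD per hour
    (placeholder figures, not real pricing).
    """
    return round(sum(hourly_rates.values()) * hours, 2)
```

Charting this by day, week, month, and year (as the monitoring section suggests) turns cost into just another telemetry signal to alert on.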

Come up with a less offensive metaphor than treating animals as food or fun.
