Categories
Hybrid Clouds In detail OpenSource

Deploying PaaS in one "click"


Following up on our post about Infrastructure as Code (IaC) with Terraform, here is a new tutorial for deploying the Rancher PaaS platform in a fully automated way using RKE.

RKE stands for Rancher Kubernetes Engine, a Kubernetes installer written in Go. It is easy to use and requires little preparation on the user's side to get started.
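For reference, RKE drives the whole installation from a single cluster.yml file. A minimal sketch (the node address, user and key path below are placeholders, not values from this tutorial):

```yaml
# Minimal RKE cluster.yml sketch -- address, user and key path are placeholders.
nodes:
  - address: 10.0.0.10
    user: centos
    ssh_key_path: ~/.ssh/id_rsa
    role: [controlplane, etcd, worker]
```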

As in the previous tutorial we will use the Terraform provider for OpenNebula; this time, however, we will use an improved version of the provider developed by the BlackBerry team.

Finally, a reminder that on November 12 and 13 a new edition of the OpenNebulaConf takes place, this time in Amsterdam. Some members of CloudAdmins will be there, presenting the talk: Hybrid Clouds: Dancing with "Automated" Virtual Machines.

Tutorial

Install Terraform

To install Terraform, find the appropriate package for your system and download it

$ curl -O https://releases.hashicorp.com/terraform/0.11.10/terraform_0.11.10_linux_amd64.zip

After downloading Terraform, unzip the package

$ sudo mkdir /bin/terraform
$ sudo unzip terraform_0.11.10_linux_amd64.zip -d /bin/terraform

After installing Terraform, verify the installation worked by opening a new terminal session and checking that terraform is available.

$ export PATH=$PATH:/bin/terraform
$ terraform --version
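Checks like this can be scripted when you bootstrap several tools at once. A small sketch (the `check_tool` helper is ours, not part of Terraform):

```shell
# Return non-zero (and say so) if a required CLI is not on the PATH.
check_tool() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}

# Example probe against a tool that is always present:
check_tool sh && echo "sh found"
```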

Add Terraform providers for Opennebula and RKE

You need to install go first: https://golang.org/doc/install
Install Prerequisites

$ sudo apt install bzr

Use the wget command and the link from Go to download the tarball:

$ wget https://dl.google.com/go/go1.10.linux-amd64.tar.gz

The installation of Go consists of extracting the tarball into /usr/local:

$ sudo tar -C /usr/local -xvzf  go1.10.linux-amd64.tar.gz

We will call our workspace directory projects, but you can name it anything you would like. The -p flag for the mkdir command will create the appropriate directory tree

$ mkdir -p ~/projects/{bin,pkg,src}
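A quick illustration of what `-p` buys you, in a throwaway directory (brace expansion is avoided here so the sketch stays POSIX-portable):

```shell
# mkdir -p builds the whole tree in one call and is a no-op when re-run.
ws=$(mktemp -d)
mkdir -p "$ws/projects/bin" "$ws/projects/pkg" "$ws/projects/src"
mkdir -p "$ws/projects/bin"   # re-running is harmless
ls "$ws/projects"             # -> bin  pkg  src
```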

To execute Go like any other command, we need to append its install location to the $PATH variable.

$ export PATH=$PATH:/usr/local/go/bin

Additionally, define the GOPATH and GOBIN Go environment variables:

$ export GOBIN="$HOME/projects/bin"
$ export GOPATH="$HOME/projects/src"

After go is installed and set up, just type:

$ go get github.com/blackberry/terraform-provider-opennebula
$ go install github.com/blackberry/terraform-provider-opennebula

Post-installation Step

Copy your terraform-provider-opennebula binary to a folder such as /usr/local/bin:

$ sudo cp ~/projects/bin/terraform-provider-opennebula /usr/local/bin/terraform-provider-opennebula

For the RKE provider, download the binary and copy it to the same folder:

$ wget https://github.com/yamamoto-febc/terraform-provider-rke/releases/download/0.5.0/terraform-provider-rke_0.5.0_linux-amd64.zip
$ sudo unzip terraform-provider-rke_0.5.0_linux-amd64.zip -d /usr/local/bin

Then declare both plugin paths in ~/.terraformrc:

providers {
  opennebula = "/usr/local/bin/terraform-provider-opennebula"
  rke        = "/usr/local/bin/terraform-provider-rke"
}

Install Rancher

This repository provides a TF file to install Rancher in a high-availability configuration. The goal is to easily install Rancher on machines running CentOS 7.
Clone this repo:

$ git clone https://github.com/mangelft/terraform-rke-paas.git
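For orientation, the core of such a TF file is an `rke_cluster` resource from the RKE provider. A sketch only: the node address, user and key are placeholders, and attribute names should be checked against the provider's 0.5.0 documentation:

```hcl
resource "rke_cluster" "rancher" {
  nodes {
    address = "10.0.0.10"                  # a CentOS 7 VM created on OpenNebula
    user    = "centos"
    ssh_key = "${file("~/.ssh/id_rsa")}"
    role    = ["controlplane", "etcd", "worker"]
  }
}
```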

Create infrastructure

First we have to initialize terraform simply with:

$ terraform init

This will read your configuration files and install the plugins for your provider.
We let terraform create a plan, which we can review:

$ terraform plan

The plan command lets you see what Terraform will do before actually doing it.
Now we execute:

$ terraform apply

kubectl is the CLI tool for interacting with the Kubernetes cluster. Please make sure it is installed and available.
To make sure it works, run a simple get nodes command.

$ kubectl get nodes

That's it! You should have a functional Rancher server. Point a browser at the hostname: https://rancher.my.org.

Categories
General In detail OpenSource

Continuous infrastructure integration: Terraform & ONE

Infrastructure as Code (IaC) is becoming one of the key elements of Agile teams, since it means infrastructure is no longer the bottleneck in our CI/CD pipeline.
One of the tools available is Terraform. This application lets you codify infrastructure according to the service's needs, in a way that is agnostic to the cloud environment where it runs. IaC can therefore help us speed up the creation and maintenance of infrastructure in an automated way.
Within the community of the open cloud computing platform OpenNebula, Runtastic has developed an OpenNebula provider for Terraform that leverages the OpenNebula XML-RPC API. This provider can create the main OpenNebula resources, such as a virtual machine, a template, a virtual network or a disk image.
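As an illustration of the kind of resource definitions this enables, a hypothetical sketch follows. The resource and attribute names here are our assumptions, not taken from the provider's documentation, so check its README for the real schema:

```hcl
# Hypothetical sketch only -- attribute names may differ in the real provider.
resource "opennebula_template" "k8s_node" {
  name        = "k8s-node"
  description = <<EOF
CPU    = "2"
MEMORY = "4096"
EOF
  permissions = "600"
}
```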
The following tutorial details how to install the tool and use it with OpenNebula to deploy a Kubernetes cluster on Docker in a fully automated way with Terraform and Ansible.
Finally, a reminder that on May 24 the "OpenNebula TechDay" returns to Barcelona, with a hands-on workshop where the platform will be presented, installed, and its features and uses demonstrated.
You can already register for the event at the following link! The agenda for the day will also be available shortly.

Tutorial

Deploying a Kubernetes Cluster to ONE with Ansible and Terraform

 Installing Terraform

To install Terraform, find the appropriate package for your system and download it

$ curl -O https://releases.hashicorp.com/terraform/0.11.4/terraform_0.11.4_linux_amd64.zip

After downloading Terraform, unzip the package

$ sudo mkdir /bin/terraform
$ sudo unzip terraform_0.11.4_linux_amd64.zip -d /bin/terraform

After installing Terraform, verify the installation worked by opening a new terminal session and checking that terraform is available.

$ export PATH=$PATH:/bin/terraform
$ terraform --version

Installing the Terraform provider for OpenNebula

You need to install go first: https://golang.org/doc/install

Install Prerequisites

$ sudo apt install bzr

Use the wget command and the link from Go to download the tarball:

$ wget https://dl.google.com/go/go1.10.linux-amd64.tar.gz

The installation of Go consists of extracting the tarball into /usr/local:
$ sudo tar -C /usr/local -xvzf  go1.10.linux-amd64.tar.gz

We will call our workspace directory projects, but you can name it anything you would like. The `-p` flag for the `mkdir` command will create the appropriate directory tree

$ mkdir -p ~/projects/{bin,pkg,src}

To execute Go like any other command, we need to append its install location to the $PATH variable.

$ export PATH=$PATH:/usr/local/go/bin

Additionally, define the GOPATH and GOBIN Go environment variables:

$ export GOBIN="$HOME/projects/bin"
$ export GOPATH="$HOME/projects/src"

After go is installed and set up, just type:

$ go get github.com/runtastic/terraform-provider-opennebula
$ go install github.com/runtastic/terraform-provider-opennebula

Optional post-installation Step

Copy your terraform-provider-opennebula binary to a folder such as /usr/local/bin, and point to it in ~/.terraformrc:

$ sudo cp ~/projects/bin/terraform-provider-opennebula /usr/local/bin/terraform-provider-opennebula

Example for /usr/local/bin:

providers {
  opennebula = "/usr/local/bin/terraform-provider-opennebula"
}
Install Ansible

We can add the Ansible PPA by typing the following command:

$ sudo apt-add-repository ppa:ansible/ansible

Next, we need to refresh our system’s package index so that it is aware of the packages available in the PPA. Afterwards, we can install the software:

$ sudo apt-get update
$ sudo apt-get install ansible

Deploy a Kubernetes cluster

Terraform code is written in a language called HCL, in files with the extension ".tf". It is a declarative language: you describe the infrastructure you want, and Terraform figures out how to create it. This repository provides an Ansible playbook to build a Kubernetes cluster with kubeadm. The goal is to easily install a Kubernetes cluster on machines running CentOS 7.

$ git clone https://github.com/mangelft/terransible-kubernetes-cluster.git
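The playbook needs to know which hosts play which role. A hypothetical inventory shape, just to show the idea (group names and addresses are ours, not taken from the repository):

```ini
[master]
10.0.0.10

[workers]
10.0.0.11
10.0.0.12
```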

First, initialize Terraform for your project. This will read your configuration files and install the plugins for your provider:

$ terraform init


In a terminal, go into the folder where you created main.tf, and run the terraform plan command:

$ terraform plan

The plan command lets you see what Terraform will do before actually doing it. To actually create the instance, run the terraform apply command:

$ terraform apply


You can access Dashboard using the kubectl command-line tool by running the following command:

$ kubectl proxy --address $MASTER_IP --accept-hosts='^*$'


The last step, to complete the cluster life cycle, is to remove your resources:

$ terraform destroy

Source: https://github.com/mangelft/terransible-kubernetes-cluster

Have a good flight!

Categories
General In detail

Cloud Simulators.. Why, and what for?


Developing applications on the cloud, or even optimizing the cloud itself, without testing up to the task? Using real testbeds certainly limits experiments and makes reproducing results rather difficult in this kind of environment.

A good alternative is to use simulation tools, since they open up the possibility of evaluating a hypothesis before developing the software, in an environment where tests can be reproduced without hurting your wallet. As we well know, in cloud computing access to infrastructure means paying by card; simulation-based approaches therefore offer significant benefits, since they allow customers to test their services/applications in a repeatable and controllable environment at a cost that tends to zero. In other words, if we aim to tune performance and avoid future bottlenecks before deploying to real public clouds, they can be very useful.

On the provider side, simulation environments make it possible to evaluate different resource "rental" scenarios with various workload configurations, and from there even to tune or set their prices. In the absence of this kind of simulation platform, cloud customers and providers have to rely on theoretical assumptions, where trial-and-error approaches can lead to inefficient service delivery and, consequently, an impact on revenue.

In short, integration tests that resemble the production environment allow us to:

  1. Quickly validate assumptions.
  2. Work with volumes of resources we could not otherwise obtain.
  3. Save time.
  4. [plus whichever one you are surely thinking of..]

Some examples of available frameworks:

CloudSim http://www.cloudbus.org/cloudsim/ – Univ. of Melbourne

  • Dedicated to cloud environments (IaaS & PaaS)
  • Java
  • Establishing itself as the standard in its field



SimGrid http://simgrid.gforge.inria.fr/ – Inria / Univ. Lorraine

  • Established in the scientific community (very versatile)
  • C, with several bindings (Java included)
  • Refocused toward cloud by the ANR via the Songs project





GreenCloud http://greencloud.gforge.uni.lu/ – Univ. Luxembourg

  • Aimed at improving energy efficiency.
  • Simulation of virtualization environments and private clouds.
  • Strongly focused on networking.

And that concludes our review of the reference tool ecosystem. As usual, each option has its advantages and drawbacks, and which one to apply will be determined by each case.
By way of conclusions:

  • Simulators are undoubtedly very useful for validating and testing algorithms for optimizing/automating processes and for managing the life cycle of cloud resources. A real challenge, where applications (the layer above) and infrastructure (the layer below) gain efficiency by aligning with each other and thus making better use of the underlying resources.
  • Using simulators allows researchers and industry developers to concentrate on the specific system-design questions they want to investigate, without being concerned with the details of the underlying infrastructure and base services the cloud offers.

Finally, and as a recommendation, don't forget that simulation can sometimes drift away from reality if timings are not taken into account (provisioning 50k VMs via a simulator takes 10 seconds; on your favorite cloud provider, a bit longer… 😉).

Categories
General Guide

Homebrew – Cloud SSH Service


I am Aitor Roma, CEO of RedAven.com, a company dedicated to systems administration, and from now on a new member of CloudAdmins.org.
Before writing this post I ran some polls on Facebook and on the CloudAdmins lists (which I recommend you join, since very interesting debates come up there) to gauge interest in several articles I had in mind. This is the article that won the vote!

Categories
General Guide

Kyle Rankin’s DevOps Troubleshooting


In all seriousness, cloudadmins.org readers certainly will be interested to know that Kyle Rankin has written a new book titled DevOps Troubleshooting: Linux Server Best Practices. The purpose of DevOps is to give developers, QA and admins a common set of troubleshooting skills and practices so they can collaborate effectively to solve Linux server problems and improve IT performance, availability and efficiency. Kyle walks readers through using DevOps techniques to troubleshoot everything from boot failures and corrupt disks to lost e-mail and downed Web sites. They'll also master indispensable skills for diagnosing high-load systems and network problems in production environments. Addison-Wesley Professional is the publisher (and royalty provider) for DevOps.
http://www.informit.com

Categories
General Social

Chef, devops… future of system administration

Some thoughts for the holidays… Best Regards!  Cloudadmins team.
by Julian Dunn, 2012

Last night, at a meeting of NYLUG, the New York City Linux Users’ Group, I watched Sean O’Meara whip through a presentation about Chef, the system configuration management (CM) tool. I was impressed. The last time(s) I tried to play with automation tools like cfengine and Puppet, I got very frustrated at their complexity. The folks at Opscode have definitely succeeded at bringing simplicity (as much as can be had) to the CM space.
But what struck me after hearing Sean had nothing to do with Chef. Instead, I came to the conclusion that pure systems administration is eventually going to die out as a profession. The developer is now king (or queen), and that’s not a bad thing.
Let’s step back for a minute and talk about CM tools in general. Traditional CM tools, to the extent that they existed before cfengine et al., know nothing about the underlying semantics of what you ask them to do. At CBC, we had a set of elaborate shell and Perl scripts written in-house, collectively known as ASC (Application Server Control), to do so-called configuration management of the origin infrastructure. ASC’s sole job was to revision-control configurations, perform deploy and rollback operations, and perhaps do some auditing. But it was prescriptive, not descriptive. Most of the time I spent monkeying with ASC was debugging how it was doing things.
Enter Chef (or Puppet, LCFG, cfengine, BCFG2; pick your poison). These are all configuration management tools that allow you to describe your infrastructure in a fourth-generation language (4GL) way. You describe the features that certain hosts should have, and the tools, using canned recipes, make it happen. (“Make me a MySQL server,” for instance.) Another advantage of these tools is that they (can) keep track of the state of your infrastructure, and you can query that database to make decisions about new deployments. “How many MySQL servers do I have?” for example. Or even “Which node is the MySQL master?” and then kicking off another job on a new MySQL slave to automatically start replicating from the right server.
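The "Make me a MySQL server" style looks roughly like this in Chef's recipe DSL. A sketch, not a complete cookbook, and package/service names vary by platform:

```ruby
# Declarative: state what should be true; Chef converges the node to it.
package "mysql-server"

service "mysqld" do
  action [:enable, :start]   # start now and on every boot
end
```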
Had it not been for the development of IaaS — infrastructure as a service — everything that I’ve told you would not be particularly noteworthy. But IaaS, or “cloud computing”, now allows anyone to provision new (virtual) servers inexpensively. No more waiting around for the system administrator to order a couple servers from Dell, wait a few weeks for them to arrive, rack them up, configure them, etc. Developers, armed with a tool like Chef and its huge cookbook of canned recipes for making many standard infrastructure components, can fire up everything they need to support their application themselves. Therein lies the demise of system administration as a standalone profession and the rise of “devops”.
I admit that when I first heard the concept of “devops”, I snickered. “Give developers the keys to the infrastructure and they’ll surely break it beyond repair and expect the sysadmins to fix it,” I thought. But it’s finally dawned on me that “devops” isn’t just some buzzword concept that someone has thought up to make sysadmins’ lives hell. It’s the natural evolution of both professions. By bringing development and system administration closer together, it does two things. First, it makes developers operationally accountable for their code, because they are the ones that get paged in the middle of the night, not some “operations team” upon whom they can offload that responsibility. And secondly, it makes those on the systems side of the house better at their jobs, because they can use newly-acquired programming skills to manage infrastructure resources in a more natural way.
So will IaaS and sophisticated configuration management tools kill the system administrator? I believe so — but that’s not a bad thing. System administrators have got to stop thinking of servers/disk/memory/whatever as “their resources” that “they manage”. Cloud computing has shown us that all of that stuff is just a service, dedicated to nothing more than serving up an application, which is what really matters. If sysadmins want to remain relevant, they’ll get on board and start learning a bit more about programming.
Source: http://www.juliandunn.net/2012/01/13/chef-devops-and-the-death-of-system-administration/
 

Categories
Social

Chef or Puppet + IaaS = No More Sysadmins?


The end of days is nigh for the profession of systems administration according to Julian Dunn, a digital media systems designer and architect.  At a recent meeting of the New York City Linux Users’ Group, a presentation using the configuration management tool, Chef, led him to the conclusion that

IaaS, or “cloud computing”, now allows anyone to provision new (virtual) servers inexpensively. No more waiting around for the system administrator to order a couple servers from Dell, wait a few weeks for them to arrive, rack them up, configure them, etc. Developers, armed with a tool like Chef and its huge cookbook of canned recipes for making many standard infrastructure components, can fire up everything they need to support their application themselves. Therein lies the demise of system administration as a standalone profession and the rise of “devops”. —Julian Dunn

Traditional CM tools have been composed of elaborate prescriptive, rather than descriptive, scripts that are unaware of the underlying semantic meaning of user requests, according to Dunn. The tools that we’re now associating with DevOps (Puppet, Chef, etc.) allow you to describe your infrastructure in what Dunn refers to as a ‘4th Generation Language’ way. That simplified process for creating reusable, canned recipes for configuration, paired with IaaS (where you don’t need to worry about the physical setup and configuration), is what could make the Sysadmin, as we know it, obsolete. This is Dunn’s theory.
With these new opportunities in cloud computing, Dunn sees the benefits of bringing development and system administration together, and suggests that “if sysadmins want to remain relevant, they’ll get on board and start learning a bit more about programming.”  Good point, Dunn, but let’s not forget that, even with cloud services, code ultimately runs on servers/disk/memory. The hardware is never truly “virtual”.
Source:  http://server.dzone.com/articles/cm-tools-and-end-systems
Memfis (Cloud Admin)