[04:19] <coolmariorocks> hello
[04:19] <coolmariorocks> i have a question
[04:19] <coolmariorocks> if i may ask in here
[04:19] <pleia2> coolmariorocks: you probably want to ask in #ubuntu (this channel is for classes)
[04:20] <coolmariorocks> ok thanks pleia2
[05:10] <chinnappan> HI
[05:11] <chinnappan> evolution + exchange 2010 is not showing folders? please help me
[05:14] <chinnappan> do you have any documentation for file server in linux?
[12:57] <showkat> when will the cloud session start
[13:22] <rwh> ?
[13:22] <rwh> help
[13:27] <rwh> cloud session starts at 16:00 UTC
[15:08] <Wordpad2> #ubuntu-classroom-cha
[15:08] <Wordpad2> #ubuntu-classroom-chat
[15:08] <Wordpad2> Sorry...
[15:27] <HugoKuo> test
[15:59] <Hugo> test
[16:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
[16:00] <kim0> Hello, Good morning, good evening and good afternoon
[16:01] <kim0> Welcome to Ubuntu Cloud Days!
[16:01] <kim0> This is the second UCD ever
[16:01] <kim0> This event will be run for two days (today and tomorrow)
[16:01] <kim0> You can find more information regarding the event on https://wiki.ubuntu.com/UbuntuCloudDays/
[16:02] <kim0> It would be great to spread the news and let your friends join in
[16:02] <kim0> This is a great chance to get introduced to new Ubuntu-related server and cloud technologies
[16:02] <kim0> as well as a chance to connect to developers and active community members
[16:02] <kim0> Alright ..
[16:03] <kim0> Let's get started then
[16:03] <kim0> At any time you can "ask a question"
[16:04] <kim0> this is done by prepending your question with QUESTION: .. example ..  "QUESTION: what is xxx?"
[16:04] <kim0> a bot will pick up the question, and the instructor will answer it at a suitable time
[16:05] <kim0> So .. this session is for Ensemble
[16:05] <kim0> Take a moment to check out:  https://ensemble.ubuntu.com/
[16:06] <kim0> Ensemble is a cloud orchestration framework
[16:06] <kim0> Since the cloud layers an API over compute resources
[16:06] <kim0> compute resources such as servers are more and more being regarded as disposable
[16:06] <kim0> people fire up servers, use them for an hour and destroy them
[16:07] <kim0> this is valid for both public and private clouds
[16:07] <kim0> as such, it would be pretty good to think at a higher level than a "server"
[16:07] <kim0> namely to think at the "service" level
[16:07] <kim0> this is one of the main concepts of Ensemble
[16:08] <kim0> Let's quickly discuss a few key concepts about Ensemble
[16:08] <kim0> 1- Ensemble focuses on the higher level concept of "Services" rather than "servers"
[16:08] <kim0> Examples of a "service" would be
[16:08] <kim0> - MySQL
[16:08] <kim0> - Memcached cluster
[16:09] <kim0> - Munin: as a monitoring service
[16:09] <kim0> - Bacula: as a backup service
[16:09] <kim0> and so on
[16:09] <kim0> 2- The second important concept is that Ensemble completely "encapsulates" those services
[16:09] <kim0> that is, if you have no idea how to get munin running
[16:10] <kim0> if you ask ensemble to deploy it, you would have it running in a minute or two
[16:10] <kim0> and you can connect it (read: relate it) to other services
[16:10] <kim0> and it would start graphing performance metrics from all around your infrastructure
[16:10] <kim0> you do not need to know how to control munin, it is encapsulated
[16:11] <kim0> 3- The third important concept, is that with Ensemble services are "composable"
[16:11] <kim0> that is, services have well defined interfaces
[16:11] <kim0> such that you can connect/relate many services together .. to form a large infrastructure
[16:11] <kim0> you can replace infrastructure components with others .. such as replace mysql with pgsql if you so wish
[16:11] <kim0> as long as both of them implement the same interface!
[16:12] <kim0> so ..
[16:13] <kim0> Ensemble enables layering a high level API over "services" and allows composing sophisticated infrastructures from that .. easily, consistently and without worrying about any details!
[16:13] <kim0> If you have any questions
[16:13] <kim0> now would be a good time to ask
[16:13] <kim0> remember to prepend any question with "QUESTION:"
[16:14] <kim0> I will now prepare the demo environment, that should clear up things a bit
[16:15] <kim0> For anyone wanting to follow along with the demo
[16:15] <kim0> Please ssh as user guest to the following machine
[16:15] <kim0> ssh guest@ec2-50-19-23-213.compute-1.amazonaws.com
[16:15] <kim0> password: guest
[16:16] <kim0> you will get a read-only view of a shared screen session
[16:17] <kim0> I will start the demo
[16:17] <kim0> I will be pasting commands and output text in this session as well, for archival purposes
[16:18] <kim0> The very first step we do is:
[16:18] <kim0> $ ensemble bootstrap
[16:18] <kim0> 2011-07-25 16:17:22,569 INFO Bootstrapping environment 'sample' (type: ec2)...
[16:18] <kim0> 2011-07-25 16:17:23,637 INFO 'bootstrap' command finished successfully
[16:18] <kim0> What "ensemble bootstrap" does is start a "management node", if you will
[16:18] <kim0> that is used to control our cloud deployment
[16:19] <kim0> let's check out the files available in the current directory
[16:19] <kim0> $ ls
[16:19] <kim0> byobu-classroom  drupal  mysql
[16:19] <kim0> byobu-classroom: setup scripts for the shared screen session you are seeing .. This is not related to Ensemble
[16:19] <kim0> drupal: Ensemble drupal formula
[16:19] <kim0> mysql: Ensemble mysql formula
[16:20] <kim0> What is a formula you ask ?
[16:20] <kim0> A formula holds instructions for Ensemble on how to install and manage a service
[16:21] <kim0> that is .. the drupal formula, tells Ensemble how to install drupal, how to connect it to the database, how to create DB tables, how to configure a drupal website behind a load balancer ...etc
[16:21] <kim0> It is the experience of devops .. distilled .. into a "formula" that everyone can use
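As a rough illustration of what "a formula" is as a file on disk (this layout is a hedged sketch inferred from the hook scripts discussed later in this session; the exact file names in the principia repository may differ):

```
drupal/
  metadata.yaml          # service name, description, and the interfaces
                         # it provides/requires (e.g. a "db" interface)
  hooks/
    install              # how to install the service
    start                # how to start it
    db-relation-changed  # what to do when the db relation's settings change
```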
[16:21] <kim0> This is one of the great reasons "why use Ensemble" ..
[16:22] <kim0> Your deployment not only becomes fast and repeatable, but you also get the experience of the Ensemble community
[16:22] <kim0> all working for you .. without you even knowing about it (if you so choose)
[16:22] <kim0> alright ..
[16:22] <kim0> Let's deploy MySQL
[16:22] <kim0> jump to the screen session
[16:24] <kim0> The command to deploy a production mysql database is
[16:24] <kim0> $ ensemble deploy --repository=. mysql mydb
[16:24] <kim0> Let's break down this command and understand what it does
[16:24] <kim0> ensemble deploy → asks Ensemble to deploy a service
[16:25] <kim0> --repository=. → tells Ensemble that the formulas are available in the current directory
[16:25] <kim0> mysql mydb → deploy the formula "mysql" as a service called "mydb"
[16:25] <kim0> let's quickly paste the output of the command
[16:25] <kim0> $ ensemble deploy --repository=. mysql mydb
[16:25] <kim0> 2011-07-25 16:22:51,307 INFO Connecting to environment.
[16:26] <kim0> 2011-07-25 16:22:54,857 INFO Formula deployed as service: 'mydb'
[16:26] <kim0> 2011-07-25 16:22:54,859 INFO 'deploy' command finished successfully
[16:26] <kim0> So .. deploy .. finished successfully
[16:26] <kim0> similarly .. let's deploy the "drupal" formula .. as "mywebsite"
[16:26] <kim0> $ ensemble deploy --repository=. drupal mywebsite
[16:26] <kim0> 2011-07-25 16:23:04,117 INFO Connecting to environment.
[16:26] <kim0> 2011-07-25 16:23:05,167 INFO Formula deployed as service: 'mywebsite'
[16:26] <kim0> 2011-07-25 16:23:05,168 INFO 'deploy' command finished successfully
[16:26] <kim0> This should be very familiar
[16:27] <kim0> Let us check the status of our deployment
[16:27] <kim0> We use the "ensemble status" command for that
[16:28] <kim0> Here is the command and its output
[16:28] <kim0> $ ensemble status
[16:28] <kim0> 2011-07-25 16:27:37,395 INFO Connecting to environment.
[16:28] <kim0> machines:
[16:28] <kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
[16:28] <kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
[16:28] <kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
[16:28] <kim0> services:
[16:28] <kim0>   mydb:
[16:28] <kim0>     formula: local:mysql-98
[16:28] <kim0>     relations: {}
[16:28] <kim0>     units:
[16:28] <kim0>       mydb/0:
[16:28] <kim0>         machine: 1
[16:28] <kim0>         relations: {}
[16:28] <kim0>         state: started
[16:28] <kim0>   mywebsite:
[16:28] <kim0>     formula: local:drupal-9
[16:28] <kim0>     relations: {}
[16:28] <kim0>     units:
[16:28] <kim0>       mywebsite/0:
[16:28] <kim0>         machine: 2
[16:28] <kim0>         relations: {}
[16:29] <kim0>         state: started
[16:29] <kim0> 2011-07-25 16:27:38,635 INFO 'status' command finished successfully
[16:29] <kim0> Let's try to understand this output
[16:29] <kim0> In the "machines" section
[16:29] <kim0> We have 3 machines deployed
[16:29] <kim0> 0 1 and 2
[16:29] <kim0> 0 is always the very first "bootstrap" node
[16:29] <kim0> 1 and 2 are the machines running mysql and drupal ..
[16:29] <kim0> Looking at the "services" section
[16:29] <kim0> we understand that we just deployed the service "mydb" .. remember this is the name we chose
[16:30] <kim0> the mydb service is running on machine "1"
[16:30] <kim0> and it is "started"
[16:30] <kim0> that is .. mysql has been installed and it is "ready" to be used
[16:30] <kim0> the same for drupal .. it is running on machine 2 and is started as well
[16:30] <kim0> It is interesting to note
[16:30] <kim0> that "relations: {}"
[16:31] <kim0> is empty
[16:31] <kim0> what this really means is
[16:31] <kim0> that the services deployed "mysql" and "drupal":
[16:31] <kim0> have not been "coupled" yet ..
[16:31] <kim0> i.e. mysql does not have the drupal database created yet ..etc
[16:32] <kim0> the magic of Ensemble and the very cool part .. is when you start connecting infrastructure pieces together
[16:32] <kim0> watching how all the pieces come together and a bigger system is created
[16:32] <kim0> let's connect those two components
[16:33] <kim0> The command to connect them (read: relate them) is
[16:33] <kim0> $ ensemble add-relation mydb:db mywebsite
[16:33] <kim0> We are adding a relation between mydb (our instance of mysql) and mywebsite (an instance of drupal)
[16:34] <kim0> It is extremely interesting what is happening at this instant
[16:34] <kim0> once this relation is established
[16:34] <kim0> both services start communicating and collaborating towards creating that bigger infrastructure
[16:34] <kim0> so .. mysql creates a database for drupal
[16:35] <kim0> it "sends over" the dabase details "username, password, DB name...etc" to the machine running drupal
[16:35] <kim0> drupal gets this configuration information
[16:35] <kim0> rewrites its configuration files to use this DB
[16:35] <kim0> creates its tables and configures the DB
[16:35] <kim0> the services have now been coupled!
[16:36] <kim0> Let's check the status
[16:36] <kim0> $ ensemble status
[16:36] <kim0> 2011-07-25 16:36:08,453 INFO Connecting to environment.
[16:36] <kim0> machines:
[16:36] <kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
[16:36] <kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
[16:36] <kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
[16:36] <kim0> services:
[16:36] <kim0>   mydb:
[16:36] <kim0>     formula: local:mysql-98
[16:36] <kim0>     relations: {db: mywebsite}
[16:36] <kim0>     units:
[16:36] <kim0>       mydb/0:
[16:36] <kim0>         machine: 1
[16:36] <kim0>         relations:
[16:36] <kim0>           db: {state: up}
[16:36] <kim0>         state: started
[16:36] <kim0>   mywebsite:
[16:37] <kim0>     formula: local:drupal-9
[16:37] <kim0>     relations: {db: mydb}
[16:37] <kim0>     units:
[16:37] <kim0>       mywebsite/0:
[16:37] <kim0>         machine: 2
[16:37] <kim0>         relations:
[16:37] <kim0>           db: {state: up}
[16:37] <kim0>         state: started
[16:37] <kim0> 2011-07-25 16:36:09,646 INFO 'status' command finished successfully
[16:37] <kim0> Notice how the "relations:" field now relates each component to the other
[16:37] <kim0> of course this could be a much larger system
[16:37] <kim0> i.e. there could be a load balancer front end service, a backup service, a monitoring service ...etc
[16:37] <kim0> But fundamentally .. it's the same
[16:38] <kim0> You deploy components .. connect them together and you're good to go!
[16:38] <kim0> So .. our drupal instance is ready .. why not pay it a visit
[16:39] <kim0> Since drupal is running on machine 2 .. from the machines section .. this is the machine we need: ec2-50-16-71-235.compute-1.amazonaws.com
[16:39] <kim0> Go ahead and visit
[16:39] <kim0> http://ec2-50-16-71-235.compute-1.amazonaws.com/ensemble/
[16:39] <kim0> Indeed drupal is there waiting for us! (woohoo) that was easy
[16:39] <kim0> Note how I was able to deploy drupal without really knowing anything about how it needs to be deployed
[16:40] <kim0> and yet .. the deployment is done according to best practices of the Ensemble formula writers community
[16:40] <kim0> Awesome .. let's create a tiny first post
[16:41] <kim0> Alright .. we now have some content
[16:41] <kim0> Just refresh http://ec2-50-16-71-235.compute-1.amazonaws.com/ensemble/
[16:42] <kim0> Now .. here comes another (OMG this is awesome) moment
[16:42] <kim0> What if your blog (or whatever service) suddenly becomes popular
[16:42] <kim0> you're slashdotted
[16:42] <kim0> You want to scale out
[16:42] <kim0> surely this has to be complex, right?
[16:43] <kim0> let's check out how we can get this done
[16:43] <kim0> This is what we need
[16:43] <kim0> $ ensemble add-unit mywebsite
[16:43] <kim0> Yes that's it .. we have scaled out
[16:43] <kim0> let's quickly understand this command
[16:44] <kim0> add-unit : Adds a service unit to "mywebsite"
[16:44] <kim0> remember mywebsite is the name of our instance of the drupal formula
[16:44] <kim0> So
[16:44] <kim0> A new ec2 instance is created
[16:44] <kim0> It is important to note .. that Ensemble uses plain "vanilla" ubuntu images
[16:44] <kim0> everything is installed and configured on the fly
[16:44] <kim0> the new node is configured as type "mywebsite"
[16:45] <kim0> what is really awesome is
[16:45] <kim0> since this new node, is of type mywebsite .. it already "knows" how to hook up to the surrounding services!
[16:45] <kim0> In this case .. only mysql .. but could be much more sophisticated
[16:45] <kim0> This is the DRY: Don't Repeat Yourself .. concept
[16:46] <kim0> let's again quickly check out status
[16:46] <kim0> $ ensemble status
[16:46] <kim0> 2011-07-25 16:46:17,368 INFO Connecting to environment.
[16:46] <kim0> machines:
[16:46] <kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
[16:46] <kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
[16:46] <kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
[16:46] <kim0>   3: {dns-name: ec2-50-16-175-35.compute-1.amazonaws.com, instance-id: i-73a50912}
[16:46] <kim0> services:
[16:46] <kim0>   mydb:
[16:46] <kim0>     formula: local:mysql-98
[16:46] <kim0>     relations: {db: mywebsite}
[16:46] <kim0>     units:
[16:46] <kim0>       mydb/0:
[16:47] <kim0>         machine: 1
[16:47] <kim0>         relations:
[16:47] <kim0>           db: {state: up}
[16:47] <kim0>         state: started
[16:47] <kim0>   mywebsite:
[16:47] <kim0>     formula: local:drupal-9
[16:47] <kim0>     relations: {db: mydb}
[16:47] <kim0>     units:
[16:47] <kim0>       mywebsite/0:
[16:47] <kim0>         machine: 2
[16:47] <kim0>         relations:
[16:47] <kim0>           db: {state: up}
[16:47] <kim0>         state: started
[16:47] <kim0>       mywebsite/1:
[16:47] <kim0>         machine: 3
[16:47] <kim0>         relations:
[16:47] <kim0>           db: {state: up}
[16:47] <kim0>         state: started
[16:47] <kim0> 2011-07-25 16:46:18,907 INFO 'status' command finished successfully
[16:47] <kim0> "mywebsite" now has two service unit instances mywebsite/0 and mywebsite/1
[16:47] <kim0> the new unit is running on machine "3" which is ec2-50-16-175-35.compute-1.amazonaws.com
[16:49] <kim0> which means .. visiting http://ec2-50-16-175-35.compute-1.amazonaws.com/ensemble/ .. You should see the second drupal instance
[16:49] <kim0> of course if you'd like to further scale out .. you just keep adding more units .. that's all it takes
[16:49] <kim0> The mysql formula supports adding "slave" nodes
[16:50] <kim0> so you can scale your DB via adding more slave nodes
[16:50] <ClassBot> There are 10 minutes remaining in the current session.
[16:50] <kim0> alright .. time flies when you're having fun
[16:50] <kim0> What is really cool is that formulas can be written in ANY language
[16:50] <kim0> so bash, php, python .. whatever you fancy!
[16:51] <kim0> I will open the drupal formula in vim in the screen session
[16:51] <kim0> Let me take any questions quickly
[16:51] <ClassBot> rwh asked: is there already a formula repo, or is this a service that's planned for the future?
[16:51] <kim0> great question
[16:52] <kim0> right now .. You can see formulas over at https://code.launchpad.net/principia
[16:52] <kim0> however a more integrated version is coming very soon ..
[16:52] <kim0> where you'll be able to search and install formulas just like you do with PPAs
[16:53] <ClassBot> TeTeT asked: how much effort is it to write these relations? Isn't this more complicated than configuring the services themselves, e.g. how many units do I need to have so the initial investment in Ensemble pays off
[16:53] <kim0> Great question as well ..
[16:53] <kim0> It is pretty simple to write those relations
[16:54] <kim0> I just opened the db-relation-changed script for my drupal formula
[16:54] <kim0> as you can see it's a pretty simple bash script
[16:54] <kim0> that gets the database configuration details from ensemble .. then simply uses "sed" to render a template configuration file
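As an illustration of what such a hook might look like (a hypothetical sketch, not the actual formula's script; the database values are hard-coded here to stand in for what Ensemble would provide, so only the sed templating step is shown):

```shell
# Hypothetical db-relation-changed hook sketch (illustrative names only).
# In a real hook these values would come from Ensemble's relation settings;
# they are hard-coded so the snippet is self-contained.
db_host="10.0.0.5"
db_user="drupal"
db_password="secret"

# a minimal configuration template with placeholders
cat > settings.tmpl <<'EOF'
$db_url = 'mysql://DB_USER:DB_PASS@DB_HOST/drupal';
EOF

# render the template by substituting the placeholders, as described above
sed -e "s/DB_HOST/$db_host/" \
    -e "s/DB_PASS/$db_password/" \
    -e "s/DB_USER/$db_user/" settings.tmpl > settings.php

cat settings.php   # prints: $db_url = 'mysql://drupal:secret@10.0.0.5/drupal';
```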
[16:55] <kim0> I really like the fact that I do not have to wrestle with learning a new DSL configuration language
[16:55] <ClassBot> There are 5 minutes remaining in the current session.
[16:55] <kim0> I'll use the remaining minutes to let you know that you can find the Ensemble community at
[16:55] <kim0> #ubuntu-ensemble
[16:55] <kim0> all developers, formula writers and community members hang out there
[16:56] <kim0> our goal is to cover all of free software with Ensemble formulas
[16:56] <kim0> such that you're able to ensemble deploy whatever you fancy .. just like you apt-get install whatever you want today
[16:56] <kim0> Please join in .. and start writing and contributing formulas
[16:57] <kim0> it's very easy .. there is no special language to learn, and the community is extremely helpful
[16:57] <kim0> you can ask me (or others ) any questions in #ubuntu-ensemble (or #ubuntu-cloud) at any time
[16:57] <kim0> I hope this was useful and fun .. see you in the next session
[16:58] <kim0> Next session will be for cloud-init .. an Ubuntu originated cloud technology
[16:58] <kim0> the two sessions afterwards will be for Orchestra and its integration with Ensemble .. both great technologies being developed this cycle
[16:58] <kim0> and the final session will be for Eucalyptus v3 .. I hope you will enjoy the first day of UCD
[16:58] <kim0> Good bye
[17:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
[17:00] <koolhead17> hello everyone
[17:01] <koolhead17> cloud-init is the Ubuntu cloud technology that enables a cloud instance to bootstrap and customize itself
[17:02] <koolhead17> we can perform many operations on our instance as it boots up
[17:04] <koolhead17> it's like adding an extra layer with more content
[17:04] <koolhead17> let's talk about an example
[17:05] <koolhead17> say you want an instance to have apache installed automatically when it boots up
[17:05] <koolhead17> you can simply use
[17:05] <koolhead17> packages:
[17:05] <koolhead17>  - apache2
[17:05] <koolhead17> and if you are using the amazon ec2 web interface you can pass the parameter when launching the instance.
[17:06] <koolhead17> cloud-init works for openstack as well as eucalyptus
[17:06] <koolhead17> i will try to show you a demo of this at the end, if possible
[17:07] <koolhead17> let's say you want your instance to have a specific timezone every time it boots
[17:07] <koolhead17> you can simply define that using
[17:07] <koolhead17> timezone: US/Eastern
[17:08] <koolhead17> as a parameter in the file you will be passing
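Putting the two examples from this session together, a complete user-data file might look like this (a minimal illustrative cloud-config; the `#cloud-config` first line is what tells cloud-init to treat the data as cloud-config):

```yaml
#cloud-config
# install apache2 when the instance first boots
packages:
 - apache2
# set the instance timezone
timezone: US/Eastern
```

You would pass this file as user data when launching the instance, e.g. via the EC2 web interface's user-data field or via the euca tools' user-data-file option on the command line.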
[17:10] <koolhead17> now lets move to the example file we have http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
[17:11] <koolhead17> line 7 is apt_update: false
[17:11] <koolhead17> which means that when the instance launches, no automatic apt update will happen
[17:12] <koolhead17> you can change it to apt_update: true and pass it when booting the instance to enable updates
[17:12] <koolhead17> similarly, there is "apt_upgrade" to enable/disable upgrades
[17:13] <koolhead17> in the next lines you can see how to add a repository. you can add your own custom repository as well.
[17:14] <koolhead17> doing this will save some bandwidth in a data-centre-like environment :)
[17:14] <koolhead17> i will skip some of the examples from there :D
[17:16] <koolhead17> you can even run commands
[17:16] <koolhead17> line 205
[17:16] <koolhead17> bootcmd:
[17:16] <koolhead17> - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
[17:17] <koolhead17> you can run commands like :
[17:17] <koolhead17> runcmd: - [ ls, -l, / ]
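To make the difference between the two concrete, here is a small illustrative cloud-config combining both (per cloud-init's documented behaviour, bootcmd entries run early on every boot, while runcmd entries run only once, on the first boot):

```yaml
#cloud-config
# bootcmd runs early on every boot; append a mirror entry to /etc/hosts
bootcmd:
 - echo 192.168.1.130 us.archive.ubuntu.com >> /etc/hosts
# runcmd runs once, on first boot only
runcmd:
 - [ ls, -l, / ]
```

Note the `>>` here: the original `>` in the example above would overwrite /etc/hosts entirely rather than append to it.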
[17:18] <koolhead17> one of the features i find most exciting (and am still fighting with) is debconf_selections: |
[17:19] <koolhead17> byobu_by_default: system
[17:20] <koolhead17> enables byobu for all users by default once they log in
[17:22] <koolhead17> cloud-init is available on all the cloud environments i am working with (openstack, eucalyptus, ec2)
[17:23] <koolhead17> you can find more info and detailed instructions on kim0's blog http://foss-boss.blogspot.com/search/label/cloud-init
[17:26] <koolhead17> cloud-init comes pre-installed if you are using ec2
[17:26] <koolhead17> in case of openstack you need to install the package when preparing your cloud image
[17:28] <koolhead17> you can use euca tools in case of eucalyptus and openstack
[17:29] <koolhead17> on ec2 you can use the web interface as well as the command line
[17:29] <koolhead17> so let's recap what we have discussed so far
[17:30] <koolhead17> Some of the things cloud-init configures are:
[17:30] <koolhead17> setting the hostname
[17:30] <koolhead17> generating ssh private keys
[17:31] <koolhead17> *which i forgot to cover earlier :(
[17:31] <koolhead17> adding ssh keys to a user's .ssh/authorized_keys so they can log in
[17:31] <koolhead17> setting up ephemeral mount points
[17:32] <koolhead17> executing commands with runcmd:
[17:32] <koolhead17> automatic package update and upgrade
[17:33] <koolhead17> timezone setup
[17:33] <koolhead17> package installation
[17:33] <koolhead17> like apache2
[17:34] <koolhead17> you can also see https://help.ubuntu.com/community/CloudInit
[17:35] <koolhead17> you can all take a break now
[17:35] <koolhead17> the next session is about Orchestra
[17:35] <koolhead17> and it will be presented by 2 members from the server engineering team
[17:36] <koolhead17> thanks
[17:37] <koolhead17> it would have been more interesting with the demo, which i was unable to do :(
[17:40] <koolhead17>  /msg classbot !q
[17:41] <koolhead17> !y
[17:41] <ClassBot> Guest32626 asked: is cloud-init available for other linux distros?
[17:42] <koolhead17> Guest32626: it is available for Amazon Linux.
[17:42] <koolhead17> which is similar to fedora ..
[17:42] <koolhead17> it has been adopted by Amazon
[17:42] <koolhead17> and can easily be ported to other linux distributions
[17:44] <ClassBot> Guest86346 asked: Is that possible to configure route table after retrieving metadata with cloud-init ?
[17:45] <koolhead17> yes, it's very much possible with a script :)
[17:45] <koolhead17> runcmd
[17:46] <koolhead17> one more important thing
[17:46] <koolhead17> we are all available at #ubuntu-cloud , our official cloud support channel for ubuntu. join us and hang out with us
[17:47] <koolhead17> and one more thing: the mega session is coming next
[17:47] <koolhead17> about Orchestra and Ensemble .. Two pillar technologies for Ubuntu server in 11.10
[17:47] <koolhead17> :)
[17:48] <koolhead17> Good bye .. and that's all :)
[17:50] <ClassBot> There are 10 minutes remaining in the current session.
[17:55] <ClassBot> There are 5 minutes remaining in the current session.
[18:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
[18:00] <smoser> OK, let's get started
[18:01] <smoser> Hi, I'm Scott Moser, an Ubuntu Developer working on the Ubuntu Server Team.
[18:01] <smoser> If you're not familiar with the way classroom works, please see https://wiki.ubuntu.com/Classroom/ClassBot
[18:01] <smoser> hint: join #ubuntu-classroom-chat
[18:02] <smoser> Much of the Server Team's focus this cycle has been on deployment.  That deployment really falls into 2 different categories
[18:02] <smoser>  * ensemble: deploying and managing services on top of existing Ubuntu installs (or new cloud instances)
[18:02] <smoser>  * orchestra: deploying Ubuntu onto "bare metal".
[18:03] <smoser> A few weeks ago, it was decided that we wanted to make Orchestra a "provider" for Ensemble.
[18:03] <smoser> What this means is that we wanted to allow Ensemble to deploy and manage "bare metal" machines the same way that it originally knew how to manage EC2 instances.  Andres [RoAkSoAx] will talk more about that in the next session.
[18:03] <smoser> Like anybody else, we don't have enough hardware, and even less hardware with remotely controllable power switches and fast networks.
[18:03] <smoser> In order to get ourselves an environment that we could develop the "orchestra provider" for ensemble I put together "cobbler-devenv".
[18:04] <smoser> That can be found at http://bazaar.launchpad.net/~smoser/+junk/cobbler-devenv/files/head:/cobbler-server ,
[18:04] <smoser> or via 'bzr branch lp:~smoser/+junk/cobbler-devenv'
[18:04] <smoser> cobbler-devenv allows you to very easily set up a cobbler development environment using libvirt.  That environment:
[18:04] <smoser>  * includes an Orchestra server and 3 "nodes"
[18:04] <smoser>  * includes a dhcp server and dns server
[18:04] <smoser>  * will not interfere with physical networks *or* other libvirt networks.
[18:05] <smoser> The code there is currently focused on deploying cobbler and an ensemble provisioning environment, but not much of it is really specific to that purpose.
[18:05] <smoser> If you've not already done so, go ahead and open the cobbler-server url above or branch it.  The HOWTO explains how to set all this up.  I'll largely walk through that here with more explanation of what is going on than is in that file.
[18:06] <smoser> anyone have any questions so far?
[18:06] <smoser> ok then.
[18:06] <smoser> == Some configuration first ==
[18:07] <smoser> as prereqs, you'll need to
[18:07] <smoser> $ apt-get install genisoimage libvirt-bin qemu-kvm
[18:07] <smoser> In order to interact with libvirt, you have to be in the libvirtd group, and in order to use kvm acceleration you have to be in the 'kvm' group.  So:
[18:07] <smoser>  $ sudo adduser $USER kvm
[18:07] <smoser>  $ sudo adduser $USER libvirtd
[18:08] <smoser> $ sudo apt-get install python-libvirt
[18:08] <smoser> Also, note that libvirt does not work when images are in a private home directory.  The images must be readable by the libvirt user.
[18:09] <smoser> this cost me a fair amount of time once trying to debug why my VMs were getting "permission denied" when they were clearly readable (but the path to the images was not)
[18:09] <smoser> the first step in the HOWTO document is to build a cobbler server.  To do that, we utilize build-image like:
[18:09] <smoser> $ ./build-image -vv --preseed preseed.cfg oneiric amd64 8G
[18:10] <smoser> # oops, we have somewhat changed sections of my talk here, we're now in
[18:10] <smoser> == building the Orchestra server VM ==
[18:10] <smoser> please feel free to pipe-in with questions if you have them
[18:10] <smoser> Note, the above command won't actually work right now. :-(
[18:10] <smoser> bug 815962 means that that doesn't currently work, and won't until the next upload of debian-installer.
[18:11] <smoser> it should be fixed in 24 hours or so, though
[18:11] <smoser> That command will take quite a while to run, probably heavily based on network speed as it is doing a network install.  Locally, with my local mirror, a natty build just took 12 minutes.
[18:11] <smoser> You can have it use a mirror by editing preseed.cfg.
[18:11] <smoser> It wraps all the following:
[18:12] <smoser>  * grab the current mini-iso for oneiric
[18:12] <smoser>  * extract the kernel and ramdisk using isoinfo
[18:12] <smoser>  * repack the ramdisk so it has 'preseed.cfg' inside it, and set up the 'late_command' in installer to do some custom configuration for us (see 'late_command.sh').
[18:12] <smoser>  * after install is done, boot the system again to do some final config that 'late_command' laid down.
[18:12] <smoser> It does this via kvm and the kvm user net, so you can build this entirely without libvirt or root access.
[18:13] <smoser> I'm particularly proud of not needing root for this.
[18:13] <smoser> or any network access other than to the archive.
[18:13] <smoser> This basic setup could be used for automated building of virtual machines (as it is here)
[18:13] <smoser> The result is that you now have a disk image that is ready to boot.  We've built the Orchestra virtual server that will be in charge of provisioning the nodes.
[18:14] <smoser> $ ls -lh natty-amd64.img
[18:14] <smoser> -rw-r--r-- 1 libvirt-qemu kvm 1.3G 2011-07-25 12:39 natty-amd64.img
[18:14] <smoser> Now we just we need to set up a libvirt network, and put that image on it.
[18:15]  * smoser pauses a bit for questions
[18:15] <smoser> sees that there are some and is looking
[18:16] <ClassBot> TeTeT asked: do we setup a virtual environment to boot bare metal servers and install them?
[18:16] <smoser> TeTeT, sorry to be unclear
[18:16] <smoser> the goal of cobbler-devenv is to have a purely virtual environment that models a typical hardware setup
[18:17] <smoser> we'll end up with a cobbler server vm, and 3 "node" vms attached to a network where the cobbler server will be able to turn on the nodes and control their PXE boot via tftp
[18:17] <ClassBot> alexm asked: is it necessary to have 8G for the server? i _just_ have 8G in total in my desktop
[18:18] <smoser> alexm, I used 8G, though it is a bit large.  as you can see above, the total space *used* will be much less.
[18:18] <smoser> qcow is a sparse format.  I would guess you can get by with 4G, but with all the installed components in the server, much less is going to be really tight.
[18:19] <ClassBot> m_3 asked: so './build-image -vv --preseed preseed.cfg natty amd64 8G' should work, but oneiric won't?
[18:19] <smoser> build-image with 'natty' "should work"
[18:19] <smoser> i verified the install went fine, but ran into bug https://launchpad.net/bugs/804267
[18:19] <smoser> which meant I couldn't 100% test that path today
[18:20] <ClassBot> TeTeT asked: so if it's for a virtual environment, this means a non cloud environment, as otherwise installing OS is a non-issue, at least with euca and openstack?
[18:20] <smoser> TeTeT, right. it is for a virtual environment, and "non-cloud"
[18:21] <smoser> the initial reason I developed this was to ease the development of the "orchestra provider" for ensemble
[18:21] <smoser> through that provider, ensemble will be able to install "bare metal" systems.
[18:21] <smoser> we're just creating a virtual network that is like a physical network and systems you would have access to, but it's easier to work with the virtual.
[18:22] <smoser> the primary goal of "bare metal provisioning" for ensemble, is actually to provision a cloud
[18:22] <ClassBot> kim0 asked: What would it take to install real physical boxes out of that dev-env
[18:23] <smoser> to install real machines off of the cobbler vm, you'd have to set bridging up differently than I have it, and have your dhcp server point next-server to the cobbler system
[18:23] <smoser> ok...
[18:23] <smoser> moving on a bit
[18:24] <smoser> == Setting up Libvirt resources ==
[18:24] <smoser> please feel free to ask questions. if you say 'smoser' in #ubuntu-classroom-chat i'm more likely to see it.
[18:24] <smoser> Now, back at the top level directory of cobbler-devenv we have a 'settings.cfg' file [http://bazaar.launchpad.net/~smoser/+junk/cobbler-devenv/view/head:/settings.cfg]
[18:24] <smoser> The goal is that this file defines all of our network settings.  It has sections for 'network', 'systems' (static systems like the Orchestra Server) and 'nodes'.
[18:25] <smoser> the only static system we have is 'cobbler', but there could be more described there.
[18:25] <smoser> We create the libvirt resources by running './setup.py' (which should probably be renamed to something that does not look like it came from python-distutils)
[18:26] <smoser> that script interacts with libvirt via python bindings
[18:26] <smoser> $ ./setup.py libvirt-setup
[18:26] <smoser> That will put some output to the screen indicating that it created a 'cobbler-devnet' network, a 'cobbler' domain, and 3 nodes named 'node01' - 'node03'.
[18:27] <ClassBot> skrewler asked: Is support for Chef in the roadmap?  Or is it possible to substitute puppet for another CM tool, like cfengine or Chef?
[18:28] <smoser> skrewler, well, there is no real CM tool involved here.  The initial goal was to get Ensemble up, but it would take very little changes to make the setup able to use chef, cfengine or puppet.
[18:28] <smoser> Those things would primarily be configured through cobbler kickstart templates (preseed templates).
[18:29] <smoser> i'm not really interested in that, though; this was really just to get a test environment up for ensemble, but it definitely could be utilized to test out other management/bootstrapping tools.
[18:30] <smoser> so.... above, we created the cobbler-devnet and 3 nodes
[18:30] <smoser> The libvirt xml is based on the libvirt-domain.tmpl and libvirt-network.tmpl files, which are parsed as Cheetah template files.
[18:30] <smoser> The end result is that we have a 'cobbler-devnet' network at 192.168.123.1 with statically configured dhcp entries for our cobbler server and 3 nodes, so when they DHCP they'll get fixed IP addresses.
[18:31] <smoser> the cobbler-devnet network looks something like:
[18:31] <smoser> http://paste.ubuntu.com/651906/
[18:31] <smoser> notice how we have MAC addresses in the network setup that will match with our mac addresses in the nodes
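In the real tree those files are Cheetah templates; as a minimal stand-in, Python's string.Template shows how per-node name/MAC/IP triples get rendered into the static-DHCP host entries of the network XML. The MAC addresses and exact template text here are illustrative, not copied from cobbler-devenv.

```python
from string import Template

# Minimal stand-in for the Cheetah-based libvirt-network.tmpl:
# each node's MAC/IP pair becomes a static-DHCP <host> entry, so the
# same MAC appears in both the network definition and the node domain.
HOST_TMPL = Template('      <host mac="$mac" name="$name" ip="$ip"/>')

nodes = [
    {"name": "cobbler", "mac": "52:54:00:12:34:02", "ip": "192.168.123.2"},
    {"name": "node01",  "mac": "52:54:00:12:34:11", "ip": "192.168.123.11"},
    {"name": "node02",  "mac": "52:54:00:12:34:12", "ip": "192.168.123.12"},
    {"name": "node03",  "mac": "52:54:00:12:34:13", "ip": "192.168.123.13"},
]

hosts = "\n".join(HOST_TMPL.substitute(n) for n in nodes)
network_xml = f"""<network>
  <name>cobbler-devnet</name>
  <ip address="192.168.123.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.123.2" end="192.168.123.254"/>
{hosts}
    </dhcp>
  </ip>
</network>"""
print(network_xml)
```

The rendered XML is what you would feed to libvirt's net-define; the paste at http://paste.ubuntu.com/651906/ shows the real thing.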
[18:32] <smoser> now our network is set up, so let's put the cobbler server on it
[18:32] <smoser> We build a qcow "delta" image off of the pristine server image we built above so we can easily start fresh.
[18:32] <smoser> $ virsh -c qemu:///system net-start cobbler-devnet
[18:32] <smoser> Network cobbler-devnet started
[18:32] <smoser> $ qemu-img create -f qcow2 -b cobbler-server/natty-amd64.img  cobbler-disk0.img
[18:32] <smoser> $ virsh -c qemu:///system start cobbler
[18:33] <smoser> Domain cobbler started
[18:33] <smoser> That will take some time to boot, but after a few minutes you should be able to ssh to the cobbler system using its IP address:
[18:33] <smoser>  $ ssh ubuntu@192.168.123.2
[18:33] <smoser> (the password is 'ubuntu', obviously you should change that)
[18:33] <smoser> While you're there, you can verify that 'cobbler' works by running:
[18:34] <smoser>  $ sudo cobbler list
[18:34] <smoser> that should show you that there were some images imported for network install of Ubuntu.
[18:34] <smoser> At this point you can also get to the web_ui of cobbler at: http://192.168.123.2/cobbler_web and poke around there.
[18:34] <smoser> generally, we've got a fully functional cobbler server just waiting for something to install!
[18:34] <smoser> Then, back on the host system we populate the cobbler server with the 3 nodes that we've created.
[18:35] <smoser> $ ./setup.py cobbler-setup
[18:35] <smoser> That uses the cobbler xmlrpc api to set up our nodes.  Now, a 'cobbler list' will show our nodes.
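For the curious, a sketch of what "uses the cobbler xmlrpc api" looks like from Python. The `/cobbler_api` endpoint and the login/new_system/modify_system/save_system calls are cobbler's standard remote API; the host, credentials, and profile name are the devenv defaults and should be treated as assumptions. The registration call itself needs the cobbler VM running, so it is left commented out.

```python
import xmlrpc.client

# Cobbler exposes its remote API over XML-RPC at /cobbler_api.
COBBLER_URL = "http://192.168.123.2/cobbler_api"
server = xmlrpc.client.ServerProxy(COBBLER_URL)

def register_node(srv, name, mac, profile="natty-x86_64"):
    """Register one node system (requires a live cobbler server)."""
    token = srv.login("cobbler", "cobbler")      # devenv default creds
    handle = srv.new_system(token)
    srv.modify_system(handle, "name", name, token)
    srv.modify_system(handle, "profile", profile, token)
    srv.modify_system(handle, "modify_interface",
                      {"macaddress-eth0": mac}, token)
    srv.save_system(handle, token)

# register_node(server, "node01", "52:54:00:12:34:11")  # only with the VM up
```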
[18:35] <smoser> It also configures those nodes to be controllable by the "virsh" power control (that is like ipmi, but for virtual machines).  We've got one more thing to do though before that can happen.
[18:35] <smoser> On the host system we need to run:
[18:35] <smoser>  $ socat -d -d \
[18:35] <smoser>      TCP4-LISTEN:65001,bind=192.168.123.1,range=192.168.123.2/32,fork \
[18:35] <smoser>      UNIX-CONNECT:/var/run/libvirt/libvirt-sock
[18:35] <smoser> socat is a useful utility, and the above command tells it to listen for ip connections on port 65001 and forward those to the unix socket that libvirt listens on.
[18:36] <smoser> basically this makes libvirtd listen on a tcp socket
[18:36] <smoser> Before you go screaming how horrible that is (it would be)
[18:36] <smoser> notice that we've limited connections to the guest network's IP range and told socat to listen only on the guest-facing interface, so it is mildly secure. Definitely much better than just listening on all interfaces.
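The `range=192.168.123.2/32` filter is doing the heavy lifting there: only one source address is ever forwarded. A tiny model of that check with the stdlib ipaddress module (the addresses are the devenv's):

```python
import ipaddress

# Model of socat's range=192.168.123.2/32 option: only connections whose
# source address falls inside the allowed network get forwarded to the
# libvirt unix socket; everything else is refused.
ALLOWED = ipaddress.ip_network("192.168.123.2/32")

def connection_allowed(client_ip):
    return ipaddress.ip_address(client_ip) in ALLOWED

print(connection_allowed("192.168.123.2"))  # the cobbler VM -> True
print(connection_allowed("10.0.0.5"))       # anything else  -> False
```

A /32 network contains exactly one host, so in practice only the cobbler VM can drive libvirt through this forwarder.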
[18:37] <smoser> Once that is in place, you can turn the nodes on and off via the cobbler web_ui.
[18:38] <smoser> Basically, at this point, we have modeled a lab with IPMI power control of node systems from the cobbler system.
[18:38] <smoser> nodes can be turned on and off, and their network boot controlled via the cobbler vm system.
[18:38] <smoser> I should have pointed out above, that our libvirt xml for the Node systems has them network booting.
[18:38] <smoser> If you configure 'network-boot' for a node, and then start it, it should begin to install itself.
[18:38] <smoser> You can try that out, and then (from the host system) watch the install with:
[18:39] <smoser>  $ vncviewer $(virsh vncdisplay node01)
[18:39] <smoser> It should actually walk through a fully automated install.
[18:39] <smoser> questions?
[18:40] <smoser> Well, that's basically all I have.
[18:40] <smoser> after all of that, what we have is a well configured network with a single cobbler server that is ready to install some nodes.
[18:41] <smoser> The nodes actually have functional static-dhcp addresses and can communicate with one another via hostnames (node01, cobbler, node02...)
[18:42] <smoser> In the next session, Andreas will talk about how we can use ensemble to control the cobbler server and provision the nodes.
[18:42] <smoser> That way, ensemble can control our bare metal servers just like it can request new EC2 nodes.
[18:43] <smoser> (here, we're just pretending that those VMs are real hardware, but ensemble doesn't actually know the difference)
[18:43] <smoser> so...
[18:44] <smoser> kim0, you could have executed examples yesterday...
[18:44] <smoser> so, yeah, i hope you can tomorrow.
[18:44] <smoser> if you want to just play with cobbler some, this is a really nice way to see how it fits all together
[18:44] <smoser> without having 2 or 3 spare systems sitting around.
[18:45] <smoser> i know that that was a big barrier to entry for me.
[18:45] <ClassBot> kim0 asked: So Ensemble would request powering on the hardware and installing it, then orchestrating it .. Is that advantageous to having all boxes installed and "waiting" for Ensemble ?
[18:45] <smoser> we've not shortcut that, but you could.
[18:46] <smoser> in the real world scenario, though, a node will be re-provisioned once ensemble is done with it.
[18:46] <smoser> that ensures that they're "clean".
[18:47] <smoser> save some of your questions for RoAkSoAx, but I'm guessing that, end to end on cable modem speed, you could have a cobbler vm built and then a node deployed on it via ensemble in 3 hours or so at this point.
[18:48] <ClassBot> kim0 asked: Is installing the cobbler server planned as a CD boot option
[18:48] <smoser> kim0, i'm not sure how it will be exposed, but yeah, the goal is to make that *very* easy.
[18:49] <smoser> alexm said: smoser: note that cache=unsafe in build-image is unsupported in maverick's qemu, i just changed it to writeback
[18:49] <smoser> Thanks alexm . 'writeback' is the right value there.
[18:50] <ClassBot> There are 10 minutes remaining in the current session.
[18:51] <smoser> in the minutes before this session I tried to see if I could get this all to go inside an ec2 guest
[18:51] <smoser> it "should work", but something was going wrong.
[18:55] <ClassBot> There are 5 minutes remaining in the current session.
[19:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
[19:00] <RoAkSoAx> howdy
[19:00] <RoAkSoAx> alright then lets continue with the presentation
[19:02] <RoAkSoAx> argh
[19:02] <RoAkSoAx> sorry
[19:02] <RoAkSoAx> wrong channel
[19:02] <RoAkSoAx> Hi all. My name is Andres Rodriguez, and I'm an Ubuntu Developer working on the Ubuntu Server Team as well.
[19:02] <RoAkSoAx> As Scott already mentioned today, we have been working on getting Ensemble to work with Orchestra. We've been using smoser's devenv to achieve this result. Today I'm going to show you how this work can be tested as a proof of concept, as this is still work in progress.
[19:02] <RoAkSoAx> But first, let's learn a little bit more about the idea behind Orchestra and Ensemble integration
[19:02] <RoAkSoAx> The main idea behind this was to basically use Ensemble with Orchestra/Cobbler the same way it's been used with ec2. However, on ec2 we can request instances easily and add more and more, but in Orchestra/Cobbler we can't. This is a limitation; however, the approach taken in this case was to simply pre-populate the Orchestra server with "systems" (in terms of Cobbler). A system is a physical system that is somewhere in the network and that cobbler can deploy. So, we have a list of available machines ready to be deployed via PXE.
[19:03] <RoAkSoAx> So we could say that we will have to do two things with ensemble 1. Bootstrap and 2. Deploy, in the same way we would do with ec2.
[19:04] <RoAkSoAx> Bootstrapping is when we tell ensemble to start creating the whole environment. In this case, bootstrapping means starting a machine to be the zookeeper machine, which will interface between a client machine from where we are issuing commands, and the provider (Orchestra), to deploy machines and create the relations between them.
[19:04] <RoAkSoAx> The process here was to simply select a "system" provided by Orchestra/Cobbler. This system will then be powered on via whatever power management interface the hardware has configured (IPMI, WoL, virsh, etc.). When this machine boots up, it will find a PXE server on the network (Cobbler) and will start the installation process. Once the machine has finished installing, it will use cloud-init to install ensemble
[19:04] <RoAkSoAx> In case of the development environment, we use virsh as the power management interface
[19:05] <RoAkSoAx> As smoser already explained, the cobbler devenv provides machines that are ready to be deployed via PXE
[19:05] <RoAkSoAx> when we bootstrap with ensemble, it simply tells cobbler to start a machine
[19:05] <RoAkSoAx> cobbler uses virsh to start it
[19:05] <RoAkSoAx> and when the machine starts it searches for a PXE server, and installs the OS
[19:06] <RoAkSoAx> So, as mentioned, the bootstrap process will start a new machine that we are gonna call the zookeeper
[19:06] <RoAkSoAx> once the zookeeper is up and running, we can start deploying machines
[19:07] <RoAkSoAx> m_3: will get to that in a min ;)
[19:07] <RoAkSoAx> So, when deploying, Ensemble will tell the zookeeper to deploy a machine with a specific service. The zookeeper will talk to the orchestra server in the same way it did when bootstrapping and will deploy a machine. It will also use cloud-init to install everything necessary to deploy the service.
[19:08] <RoAkSoAx> Now, since obviously ec2 is different from Orchestra/Cobbler, we needed to make some changes in the approach taken to make things work (such as providing the meta-data for cloud-init). We needed a few things:
[19:09] <RoAkSoAx> 1. Provide methods in ensemble to interface with Cobbler using its API
[19:09] <RoAkSoAx> 2. Provide a custom preseed to be used when deploying machines through ensemble.
[19:09] <RoAkSoAx> 3. Provide a method to pass cloud-init meta-data, and be populated before first boot so that cloud-init can do its thing.
[19:09] <RoAkSoAx> So, how did we achieve this
[19:09] <RoAkSoAx> 1. As already explained, ensemble uses cobbler as a provider communicating with it via the cobbler API.
[19:10] <RoAkSoAx> 2. Since ec2 instantiates a VM really quickly, it was easy to pass all the necessary values through cloud-init; in our case we needed to do something similar, and the conclusion was to do it via a modified preseed to deploy whatever was needed the same way
[19:11] <RoAkSoAx> 3. We figured out a method to pass the cloud-init meta-data through the preseed
[19:11] <RoAkSoAx> so basically the changes in Cobbler were to provide a custom preseed to deploy the OS
[19:12] <RoAkSoAx> this preseed contains what we call a late_command
[19:12] <RoAkSoAx> this late_command will execute a script that will generate the cloud-init meta-data so after first boot, cloud-init will do its thing
[19:13] <RoAkSoAx> so what we did is generate the cloud-init meta-data with ensemble as it was always done, but we had to figure out how to get it into the preseed
[19:14] <RoAkSoAx> here, we generated text that was later encoded in base64.
[19:14] <RoAkSoAx> This text was basically a shell script containing the information to populate cloud-init's meta-data
[19:15] <RoAkSoAx> so the late command in reality was to decode the base64 text and then write the script and execute it
[19:15] <RoAkSoAx> this decoding and writing was done by the preseed, right after finishing installing the system and before booting
[19:15] <RoAkSoAx> so when the machine restarted, cloud-init would do its thing
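The encode/decode round trip described above can be sketched in a few lines. The script body and the exact d-i command line here are illustrative, not ensemble's actual output; `preseed/late_command`, `in-target`, and `base64 -d` are standard debian-installer/coreutils pieces.

```python
import base64

# Sketch of the late_command trick: ensemble renders the cloud-init
# seed as a shell script, base64-encodes it, and the preseed's
# late_command decodes and runs it in the target before first boot.
seed_script = """#!/bin/sh
mkdir -p /var/lib/cloud/seed/nocloud
cat > /var/lib/cloud/seed/nocloud/meta-data <<EOF
instance-id: node01
EOF
"""

encoded = base64.b64encode(seed_script.encode()).decode()

# What would land in the preseed (base64 keeps it one safe line):
late_command = (
    "d-i preseed/late_command string "
    f"echo {encoded} | base64 -d > /target/tmp/seed.sh; "
    "in-target sh /tmp/seed.sh"
)

# Round trip: decoding recovers the script byte-for-byte.
assert base64.b64decode(encoded).decode() == seed_script
```

Base64 is what makes this robust: the script survives the preseed's one-line, shell-quoted late_command without any escaping headaches.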
[19:16] <RoAkSoAx> so that was done by making ensemble interface with cobbler, and once the late command was generated, ensemble told cobbler "This is your late command" and cobbler simply executed it
[19:16] <RoAkSoAx> once the machine finished installing, we had a fully functional zookeeper (or service)
[19:17] <RoAkSoAx> so basically, we wanted to achieve the same as with ec2, but we just had to figure out how to do it with the preseed
[19:17] <RoAkSoAx> and now, it works in a very similar way
[19:17] <RoAkSoAx> so the only things to consider were to 1. start a machine. 2. deploy the machine using the preseed. 3. ensure to pass the late_command
[19:18] <RoAkSoAx> and this way we simulate the way cloud-init data is passed to instances in the cloud
[19:18] <RoAkSoAx> other than that, ensemble works pretty much exactly the same as it would with ec2
[19:18] <RoAkSoAx> but using orchestra
[19:18] <RoAkSoAx> Now, another change: when working on ec2,
[19:18] <RoAkSoAx> ensemble used S3 to store some information used to identify machines and to place the formula meta-data
[19:19] <RoAkSoAx> instead, we use a WebDAV service on the apache2 server installed by cobbler
[19:19] <RoAkSoAx> here, instead of obtaining and storing files on S3, we use the Orchestra server as storage for ensemble
[19:20] <RoAkSoAx> based on those considerations, we pretty much had to ensure that the interaction between the cobbler API and ensemble provided results the way it's done with ec2
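A rough sketch of what the S3-to-WebDAV swap buys you: plain HTTP PUT/GET against the Orchestra server's apache. The host and the /webdav prefix match the devenv setup; the function names are hypothetical, and the network calls need the VM up, so they are left commented out.

```python
import http.client

# WebDAV file store standing in for S3; host/prefix are devenv defaults.
WEBDAV_HOST = "192.168.123.2"
WEBDAV_PREFIX = "/webdav"

def dav_url(name):
    """Path on the Orchestra server for a named storage object."""
    return f"{WEBDAV_PREFIX}/{name}"

def put_file(name, data):
    """Upload via WebDAV PUT (needs the apache2 dav config running)."""
    conn = http.client.HTTPConnection(WEBDAV_HOST)
    conn.request("PUT", dav_url(name), body=data)
    return conn.getresponse().status

def get_file(name):
    conn = http.client.HTTPConnection(WEBDAV_HOST)
    conn.request("GET", dav_url(name))
    return conn.getresponse().read()

# put_file("formulas/mysql.tar.gz", b"...")  # only with the cobbler VM up
```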
[19:20] <RoAkSoAx> so how can we really test this with the development environment
[19:20] <RoAkSoAx> but before,
[19:21] <RoAkSoAx> m_3: does this answer your question?
[19:21] <RoAkSoAx> alright
[19:21] <RoAkSoAx> I'll move on
[19:22] <RoAkSoAx> With smoser's cobbler devenv we can certainly simulate a physical deployment using ensemble
[19:22] <RoAkSoAx> the good thing is that the devenv will setup everything necessary from the orchestra side of things
[19:22] <RoAkSoAx> but I'll give an overview of what orchestra will do very soon
[19:22] <RoAkSoAx> 1st. We would need to install orchestra-server, which will install cobbler and cobbler-web
[19:23] <RoAkSoAx> with that, we would need to configure the webdav so that we have storage up and running
[19:23] <RoAkSoAx> (remember, this is already done by the cobbler-devenv)
[19:23] <RoAkSoAx> how did we do this:
[19:23] <RoAkSoAx> 1. Enable Webdav
[19:23] <RoAkSoAx> sudo a2enmod dav
[19:23] <RoAkSoAx> sudo a2enmod dav_fs
[19:23] <RoAkSoAx> 2. Write config file (/etc/apache2/conf.d/dav.conf)
[19:23] <RoAkSoAx> Alias /webdav /var/lib/webdav
[19:23] <RoAkSoAx> <Directory /var/lib/webdav>
[19:23] <RoAkSoAx> Order allow,deny
[19:23] <RoAkSoAx> allow from all
[19:23] <RoAkSoAx> Dav On
[19:23] <RoAkSoAx> </Directory>
[19:23] <RoAkSoAx> 3. Create formulas directory:
[19:23] <RoAkSoAx> sudo mkdir -p /var/lib/webdav/formulas
[19:24] <RoAkSoAx> sudo chown www-data:www-data /var/lib/webdav
[19:24] <RoAkSoAx> sudo service apache2 restart
[19:24] <RoAkSoAx> now, we need to pre-populate cobbler with all the available systems and provide it with a power management interface to be able to start a physical machine
[19:25] <RoAkSoAx> as previously explained, cobbler devenv uses virsh to simulate this behaviour
[19:25] <RoAkSoAx> however, in cobbler, we needed to know two things
[19:25] <RoAkSoAx> 1. How do we know when a system is available 2. How do we know when the system has already been used and is no longer available
[19:25] <RoAkSoAx> for this, we had to look into cobbler's management classes concepts
[19:27] <RoAkSoAx> in this case we are using two, foo-available and foo-acquired. As the name says, one will be used to identify when a system is available to be used by ensemble, and the other one when the system has already been acquired by ensemble and might be in the process of bootstrapping or deploying a service, or even installing the OS
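The bookkeeping those two classes enable is simple enough to model in a few lines. This is a pure-Python illustration of the available/acquired dance (the real code goes through the cobbler API, and the system names here are the devenv's):

```python
# Model of the management-class bookkeeping: ensemble claims any system
# tagged foo-available and retags it foo-acquired before powering it on.
AVAILABLE, ACQUIRED = "foo-available", "foo-acquired"

systems = {
    "node01": [AVAILABLE],
    "node02": [AVAILABLE],
    "node03": [ACQUIRED],   # already taken, e.g. by the zookeeper
}

def acquire_system(systems):
    """Claim the first free system, or return None if the pool is empty."""
    for name, classes in systems.items():
        if AVAILABLE in classes:
            systems[name] = [ACQUIRED]
            return name
    return None

first = acquire_system(systems)
print(first, systems[first])  # node01 ['foo-acquired']
```

When acquire_system returns None the pool is exhausted, which is exactly the limitation versus ec2 discussed earlier: you can't conjure more physical machines on demand.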
[19:27] <RoAkSoAx> but, in cobbler terms, how can we add management classes and systems?
[19:27] <RoAkSoAx> simple:
[19:27] <RoAkSoAx> 1. Add management classes
[19:27] <RoAkSoAx> sudo cobbler mgmtclass add --name=foo-available
[19:27] <RoAkSoAx> sudo cobbler mgmtclass add --name=foo-acquired
[19:27] <RoAkSoAx> 2. Add systems
[19:27] <RoAkSoAx> sudo cobbler system add --name=XYZ --profile=XYZ --mgmt-classes=foo-available --mac-address=AA:BB:CC:DD:EE:FF
[19:27] <RoAkSoAx> Basically, a system is a definition for a physical machine using an OS profile, and what management class to use at first
[19:28] <RoAkSoAx> the profile is none other than the OS that will be installed on that machine
[19:28] <RoAkSoAx> and the management class has already been explained
[19:28] <RoAkSoAx> of course you will have to configure the power management interface accordingly
[19:28] <RoAkSoAx> but in the cobbler-devenv this has already been done
[19:29] <RoAkSoAx> so basically, we now have an Orchestra/Cobbler server up and running, and we have configured it with systems, mgmtclasses and the file storage
[19:29] <RoAkSoAx> so it is time for us to install and configure ensemble to use our cobbler server
[19:29] <RoAkSoAx> in this case, we are going to use the cobbler-devenv
[19:29] <RoAkSoAx> however, you will notice that you can simply change it to use physical machines
[19:30] <RoAkSoAx> if you already have an orchestra server up and running and preloaded with systems
[19:30] <RoAkSoAx> so first, we need to obtain the branch of ensemble that has orchestra support
[19:30] <RoAkSoAx> NOTE: This branch contains code that is under development and is still buggy
[19:30] <RoAkSoAx>  1. Obtain the branch:
[19:30] <RoAkSoAx> bzr branch lp:~ensemble/ensemble/bootstrap-cobbler
[19:30] <RoAkSoAx> cd bootstrap-cobbler
[19:31] <RoAkSoAx> now we need to create an environments.yaml file for ensemble
[19:31] <RoAkSoAx> we do this as follows:
[19:31] <RoAkSoAx>  2. Create the environments file (~/.ensemble/environments.yaml)
[19:31] <RoAkSoAx> environments:
[19:31] <RoAkSoAx>    orchestra:
[19:31] <RoAkSoAx>       type: orchestra
[19:31] <RoAkSoAx>       orchestra-server: 192.168.123.2
[19:31] <RoAkSoAx>       orchestra-user: cobbler
[19:32] <RoAkSoAx>       orchestra-pass: cobbler
[19:32] <RoAkSoAx>       admin-secret: foooo
[19:32] <RoAkSoAx>       ensemble-branch: lp:~ensemble/ensemble/bootstrap-cobbler
[19:32] <RoAkSoAx>       acquired-mgmt-class: foo-acquired
[19:32] <RoAkSoAx>       available-mgmt-class: foo-available
[19:32] <RoAkSoAx> note that I'm already using the values for the cobbler-devenv
[19:32] <RoAkSoAx> such as orchestra-server IP address
[19:32] <RoAkSoAx> user/pass for cobbler
[19:32] <RoAkSoAx> the branch we need
[19:32] <RoAkSoAx> and the management classes
[19:33] <RoAkSoAx> so once this is done, and we have setup the cobbler-devenv correctly
[19:33] <RoAkSoAx> we can start bootstrapping the zookeeper and then deploying the machines
[19:34] <RoAkSoAx> so the first step, from the branch we have obtained, we do the following:
[19:34] <RoAkSoAx> PYTHONPATH=`pwd` ./bin/ensemble bootstrap
[19:34] <RoAkSoAx> this will bootstrap the zookeeper
[19:34] <RoAkSoAx> it will take time for it to install and get the zookeeper running
[19:34] <RoAkSoAx> it would probably take several minutes
[19:34] <RoAkSoAx> so I will continue explaining what needs to be done
[19:35] <RoAkSoAx> so, when the zookeeper is up and running and cloud-init has done its thing, we need to work around something, given that we just ran into an error in the code
[19:35] <RoAkSoAx> that is being examined
[19:35] <RoAkSoAx> but it is simple and doesn't actually affect the code
[19:35] <RoAkSoAx> so we need to connect to the zookeeper machine (through ssh, or any other method)
[19:35] <RoAkSoAx> and run the following (in the zookeeper machine)
[19:35] <RoAkSoAx> sudo -i
[19:36] <RoAkSoAx> ssh-keygen -t rsa
[19:36] <RoAkSoAx> this will create public keys that are verified by the zookeeper before deploying machines
[19:36] <RoAkSoAx> however, note that this is a work around and will be fixed soon
[19:36] <RoAkSoAx> I'm just pointing you guys to it in case you want to test it after the session of today
[19:36] <RoAkSoAx> once this is done
[19:36] <RoAkSoAx> we can start deploying machines
[19:36] <RoAkSoAx> and we simply do the following:
[19:37] <RoAkSoAx> PYTHONPATH=`pwd` ./bin/ensemble deploy --repository=examples mysql
[19:37] <RoAkSoAx> this will tell zookeeper to deploy a machine, which will tell cobbler to start a machine via virsh
[19:37] <RoAkSoAx> and once installed it will run late-command and populate cloud-init meta-data
[19:37] <RoAkSoAx> on first boot
[19:37] <RoAkSoAx> cloud-init will do its thing
[19:37] <RoAkSoAx> and baaam
[19:37] <RoAkSoAx> we would have a mysql server working on a physical node
[19:38] <RoAkSoAx> and I believe that's all I have for you today
[19:38] <RoAkSoAx> I think I ran through the session too fast :)
[19:38] <RoAkSoAx> anyone has any questions?
[19:41] <RoAkSoAx> m_3: well that's indeed a limitation we have in comparison to ec2, as in physical environments (and cobbler) we are relying on the power management interface to deploy machines
[19:42] <ClassBot> m_3 asked: does the cobbler instance provide a metadata server for cloud-init?
[19:42] <ClassBot> m_3 asked: reboots... how robust is everthing wrt reboots?  (In EC2-ensemble, we just typically throw instances away)
[19:43] <RoAkSoAx> m_3: now, as far as rebooting machines and keeping things persistent, at the moment we are not handling that
[19:43] <RoAkSoAx> m_3: but the first approach was to preseed all that information and use debconf to populate those values
[19:43] <RoAkSoAx> m_3: and have upstart scripts initialize the services on reboot
[19:44] <RoAkSoAx> m_3: however, we discussed the possibility of actually not doing that through the preseed but rather providing cloud-init with a command to write those persistent values so on reboot they can be used
[19:45] <RoAkSoAx> m_3: you're welcome
[19:45] <RoAkSoAx> anyone any more questions?
[19:47] <RoAkSoAx> alright I guess there's not
[19:47] <RoAkSoAx> thank you all
[19:50] <ClassBot> There are 10 minutes remaining in the current session.
[19:52] <ClassBot> alexm asked: RoAkSoAx: will ensemble/orchestra be in ubuntu-server manual for oneiric? a quick start guide, for instance
[19:52] <RoAkSoAx> alexm: I surely hope so! I guess that will depend how far we can get with this in the development cycle, but I'm confident it would
[19:55] <ClassBot> There are 5 minutes remaining in the current session.
[20:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
[20:01] <nurmi> Hello all, and thank you very much for attending this session!
[20:02] <nurmi> Today, we're going to be discussing some new features of Eucalyptus 3
[20:02] <nurmi> While there are quite a few, two of the most substantial are implementations of high availability and user/group identity management
[20:03] <nurmi> We'll start with a discussion of Eucalyptus HA, and then switch to ID management next
[20:03] <nurmi> Eucalyptus is designed as a collection of services which, when stitched together, form a distributed system that provides infrastructure as a service
[20:04] <nurmi> Roughly, eucalyptus services are organized in a tree hierarchy
[20:04] <nurmi> At the top of the tree, we have components (Cloud Controller, Walrus) that are directly accessed by users
[20:05] <nurmi> In the middle, we have a Cluster Controller and Storage Controller which set up/manage virtual networks and storage (EBS) respectively
[20:05] <nurmi> and at the bottom of the tree, we have Node Controllers which control and manage virtual machines
[20:06] <nurmi> In a nutshell, this collection of services provides users the ability to provision and control virtual infrastructure components that, within eucalyptus, we refer to as 'artifacts'
[20:06] <nurmi> For example, virtual machines, virtual networks, and cloud storage abstractions (EBS volumes and S3 buckets/objects)
[20:07] <nurmi> The design of Eucalyptus HA creates a distinction between the cloud service itself (eucalyptus components), and the artifacts that are created/managed by the service
[20:08] <nurmi> The reason for this distinction is that, while the term 'High Availability' is generally meaningful,
[20:08] <nurmi> the requirements for making something 'Highly Available' vary greatly, depending on what that 'something' is
[20:09] <nurmi> In Eucalyptus 3, we have a new architecture that provides High Availability for the cloud service itself
[20:09] <nurmi> The architecture additionally supports adding High Availability to eucalyptus artifacts, in the future
[20:10] <nurmi> So, the core design of Eucalyptus HA is as follows
[20:10] <nurmi> Each Eucalyptus component can run in 'non-HA' mode, exactly as it does today
[20:11] <nurmi> Then, at runtime, each component service can be made highly available by adding an additional running version of the component, ideally on a separate physical system
[20:12] <nurmi> This results in a basic 'Master/Slave' or 'Primary/Secondary' mode of operation, where the Eucalyptus HA deployment is resilient to (at least) a single point of failure (for example, machine failure)
[20:13] <nurmi> At any point in time, when running in HA mode, a component is either in 'Primary' or 'Secondary' mode
[20:13] <nurmi> any component in 'Secondary' mode is running, but is inactive until it is made Primary
[20:14] <nurmi> Next, each component, and the system as a whole, is designed to keep 'ground truth' about artifacts as close to the artifacts as possible
[20:14] <nurmi> For example, all canonical information about virtual machine instances is stored on the node controller that is managing that VM
[20:15] <nurmi> and all canonical information about virtual networks that are active is stored with the Cluster Controller that is managing that network
[20:15] <nurmi> When a eucalyptus component becomes active, then
[20:16] <nurmi> which happens when the component first arrives, when it is 'restarted' or, when it is promoted from Secondary to Primary
[20:16] <nurmi> the component 'learns' the current state of the system by discovering what it needs from ground truth
[20:16] <nurmi> other services that are 'far' from ground truth, then, learn about ground truth from nearer components
[20:17] <nurmi> I'll use the Cluster Controller to illustrate how this design works as an example
[20:18] <nurmi> When a cluster controller enters into a running eucalyptus deployment, there are typically many artifacts that are currently running
[20:18] <nurmi> the very first operation that a cluster controller performs is to poll both above (Cloud Controller) and below (Node Controllers)
[20:18] <nurmi> in order to learn about the current state of all artifacts
[20:19] <nurmi> It then uses this information to dynamically (re)create all virtual networks that need to be present in order for the currently active artifacts to continue functioning
[20:20] <nurmi> So, whether a cluster controller is by itself (non-HA mode) and just reboots, or if a Primary cluster controller has failed and the secondary is being promoted
[20:20] <nurmi> the operation is the same: learn about ground truth and re-create a functional environment
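The activation rule just described (fresh start, restart, or secondary-to-primary promotion all follow the same path) can be sketched as a toy model. Class and field names here are illustrative, not Eucalyptus's actual code:

```python
# Toy model of the activation rule: on any activation a component
# rebuilds its state by polling ground truth (the node controllers),
# never from a stale copy of its own.
class ClusterController:
    def __init__(self, name):
        self.name = name
        self.mode = "secondary"   # running but inactive until promoted
        self.networks = {}

    def activate(self, node_controllers):
        """Promote to primary and re-learn state from ground truth."""
        self.mode = "primary"
        self.networks = {}
        for nc in node_controllers:              # poll "below"
            for vm in nc["vms"]:
                self.networks.setdefault(vm["network"], []).append(vm["id"])

# Ground truth lives with the node controllers managing the VMs.
nodes = [{"vms": [{"id": "i-1", "network": "net-a"},
                  {"id": "i-2", "network": "net-b"}]},
         {"vms": [{"id": "i-3", "network": "net-a"}]}]

standby = ClusterController("cc-2")
standby.activate(nodes)   # primary failed; the secondary takes over
print(standby.mode, standby.networks)
```

The point of the design is visible in the model: the standby carries no replicated network state, so there is nothing to keep in sync and nothing to go stale; promotion is just "poll and rebuild".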
[20:20] <nurmi> All other HA eucalyptus components operate in a similar fashion, semantically
[20:21] <nurmi> Storage controller uses iSCSI volumes as ground truth
[20:22] <nurmi> Walrus uses a shared filesystem, or a pre-configured DRBD setup, for buckets/objects
[20:22] <nurmi> Finally, while the design of the software permits a simple 'no single point of failure' setup with just additional physical machines
[20:23] <nurmi> (to support Primary/Secondary model)
[20:23] <nurmi> We also support deployments that have redundancy in the network infrastructure
[20:23] <nurmi> This way, 'no single point of failure' can be extended to include network failures, as well, without having to alter the software/software configuration.
[20:25] <nurmi> We've put a lot of effort into the new architecture to provide service high availability first, and hope that others will find the architecture ready to start adding HA for specific artifacts in near future releases
[20:25] <nurmi> Utilizing live migration for VM HA, utilizing HA SAN techniques for in-use EBS volume access HA, etc.
[20:26] <nurmi> This brings us to the end of the first part of our discussion, thank you very much!  I would like to ask if there are any questions about Eucalyptus HA ?
[20:27] <nurmi> Okay ; the second part here will be led by Ye Wen, who will be talking about the new user and group management functionality in Eucalyptus 3
[20:37] <nurmi> Short break until we can get '+v' for Ye
[20:39] <wenye> Hello, everyone. I'm going to continue this topic by discussing another new feature in Eucalyptus 3: the user identity management.
[20:40] <wenye> We have a completely new design for managing user identities in Eucalyptus 3, based on the concept of Amazon AWS IAM (Identity and Access Management).
[20:41] <wenye> In other words, we provide the same API as Amazon AWS IAM. Your existing scripts that work with Amazon should be compatible with your new Eucalyptus 3 cloud.
[20:42] <wenye> At the same time, we augment and extend IAM with some Eucalyptus-specific features, to meet the needs of some customers.
[20:43] <wenye> With IAM, you essentially partition the access to your resources (i.e. the artifacts as Dan said earlier) into "accounts"
[20:44] <wenye> Each account is a separate namespace for user identities.
[20:44] <wenye> An account is also the unit for resource usage accounting.
[20:44] <wenye> Within an account, you can manage a set of users.
[20:45] <wenye> Users can also be organized into groups.
[20:45] <wenye> Note that a group is a concept for assigning access permissions to a set of users. So a user can be in multiple groups.
[20:45] <wenye> But a user can belong to only one account.
[20:46] <wenye> Permissions can be assigned to users and groups to control their access to the system resources.
[20:46] <wenye> As in IAM, you write a policy file to grant permissions.
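A minimal IAM-style policy of the kind described above, granting permission to launch instances, might look like this (standard IAM JSON syntax; the statement is illustrative, not taken from the session):

```json
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": "*"
  }]
}
```

Attaching such a policy to a user or group grants its members the listed actions on the listed resources.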
[20:47] <wenye> We have a few extensions to the IAM concepts. I'll cover some of them here.
[20:48] <wenye> In IAM, you can't specify EC2 resources. For example, you can only say "allow user A to launch instances", but you can't say "allow user A to launch instances using image X".
[20:48] <wenye> We introduce EC2 resources so that you can do such things. One good use is restricting the VM types that certain users can launch instances with.
[20:49] <wenye> Another extension is the introduction of VM expiration or lifetime.
[20:49] <wenye> You can use a Eucalyptus-specific policy condition to specify a VM's lifetime or when it expires.
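The EC2-resource extension could be expressed along these lines. The ARN forms below are illustrative guesses at the Eucalyptus 3 syntax, not verified from its documentation:

```json
{
  "Statement": [{
    "Effect": "Allow",
    "Action": "ec2:RunInstances",
    "Resource": [
      "arn:aws:ec2:::image/emi-12345678",
      "arn:aws:ec2:::vmtype/m1.small"
    ]
  }]
}
```

Read as: the attached user or group may launch instances, but only from that image and only with that VM type.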
[20:50] <ClassBot> There are 10 minutes remaining in the current session.
[20:50] <wenye> The biggest extension probably is the introduction of resource quota.
[20:51] <wenye> We extend the IAM policy syntax to allow the specification of resource quota. We use a special "Effect" to do that.
[20:51] <wenye> So you can say "Effect: Limit" in a policy, which indicates the permission is a quota permission.
[20:52] <wenye> And then you can use the policy "Resource" and "Condition" to specify which resource the quota applies to and how large it is.
[20:53] <wenye> You can assign quotas to accounts and users. And if a user is restricted by multiple quota specs, the smallest one takes effect.
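Putting the quota pieces together, a quota statement might look like this. "Effect": "Limit" is the Eucalyptus extension named in the session; the resource ARN and condition key below are illustrative placeholders, not verified syntax:

```json
{
  "Statement": [{
    "Effect": "Limit",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:::vm-instance",
    "Condition": {
      "NumericLessThanEquals": { "ec2:quota-vminstancenumber": "16" }
    }
  }]
}
```

Here the "Condition" carries the quota size: the account or user it is attached to may run at most 16 instances.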
[20:54] <wenye> We don't have much time left. I'll briefly talk about another Eucalyptus 3 feature that is related to the identity management.
[20:54] <wenye> That is, we enable LDAP/AD sync in Eucalyptus 3.
[20:55] <ClassBot> There are 5 minutes remaining in the current session.
[20:55] <wenye> To do that, you simply write a LIC (LDAP Integration Configuration) and upload it to the system. The identities in the system will then be synced from the specified LDAP/AD service.
[20:56] <wenye> There is the question of how to map the structure of the LDAP tree to the IAM account/group/user model. We leave that for offline discussion. You can send us email at wenye@eucalyptus.com for more information.
[20:56] <wenye> I'll use the remaining 3 minutes for questions.
[20:59] <wenye> Thanks everyone for attending this class!
[21:00] <nurmi> Thank you all, and we look forward to everyone trying out Eucalyptus 3 and letting us know what you think!
[21:00] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html
[22:47] <Guest58609> bbn
[22:48] <Guest58609> t
[23:37] <missgawker> all done