#ubuntu-classroom 2007-07-23
<HarKoT> ol
<[UPG] Pritchard> Hello.
<nalioth> hi
<jrib> hi
<jrib> [UPG] Pritchard: are you familiar with irssi?
<[UPG] Pritchard> Not very much so.  I had thought with irssi I couldn't join multiple channels and work at the same time through the terminal.
<jrib> [UPG] Pritchard: k, do you know if irssi is already installed on your system?
<[UPG] Pritchard> Yes it is ^.^;;
<jrib> good
<jrib> you know how to get to a tty by pressing ctrl-alt-f1?
<[UPG] Pritchard> Yup.  Can I switch workspace tabs using that?
<jrib> well you can get 6 terminals to work with by hitting ctrl-alt-f1, or ctrl-alt-f2, or ..., ctrl-alt-f6
<[UPG] Pritchard> Right.  In that case, let me reboot my system and join #ubuntu again
<[UPG] Pritchard> brb ^.^;;
<jrib> k
<advent> Back.  Apparently it's started the installation and I'm on 7.04
<[UPG] Pritchard> I have a couple backups of the kernel on my booter, I think.  Not sure though.
<jrib> k
<jrib> lets see how apt is doing
<jrib> what does 'sudo apt-get dist-upgrade' do?
<[UPG] Pritchard> E: unmet dependencies.  try -f
<nalioth> jrib: tty? why not use Console or Konsole ?
<[UPG] Pritchard> Tells me xorg has unmet dependencies.
<jrib> nalioth: no X after direct dapper -> feisty
<[UPG] Pritchard> It tells me apt-get -f install could fix the problem.
<jrib> try it
<[UPG] Pritchard> Gah.  Unable to correct dependencies.
<nalioth> jrib: ouch
<jrib> k, you need to pastebin some stuff... one sec
<[UPG] Pritchard> Alright.  I can pastebin through the terminal? :D
<jrib> alright, here's how we will do it: run the commands but append '| tee ~/pastebin1' to them.  So run 'sudo apt-get dist-upgrade | tee ~/pastebin1' for example
<[UPG] Pritchard> Okay
<jrib> k, now dcc send them to me with:  /dcc send jrib file1 file2 ...
<[UPG] Pritchard> I didn't see the files that were output.
<jrib> k, let me explain:  COMMAND | tee ~/pastebin1    saves the output of COMMAND in the file ~/pastebin1
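The tee pipeline jrib describes can be sketched like this (/tmp/tee-demo is a throwaway path used only for illustration; the log uses ~/pastebin1):

```shell
# Run a command, see its output on screen, AND save a copy to a file.
echo "hello from tee" | tee /tmp/tee-demo

# Grouping several commands in parentheses lets one tee capture all of
# their output (the same trick jrib uses later with cat and apt-cache):
(echo "first"; echo "second") | tee -a /tmp/tee-demo   # -a appends

# The file now holds everything that scrolled past on screen:
cat /tmp/tee-demo
```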
<[UPG] Pritchard> Oh :D
<[UPG] Pritchard> Is /dcc an irssi command?
<jrib> yeah, it worked
<[UPG] Pritchard> Didn't give me any feedback.  Jw.  Thanks for the help.
<jrib> now show me 'cat /etc/apt/sources.list && apt-cache policy x11-common'
<jrib> so this seems like it should work:   (cat /etc/apt/sources.list && apt-cache policy x11-common) | tee ~/pastebin2
<[UPG] Pritchard> Alright.  I hope that worked
<jrib> k, I didn't get sources.list
<jrib> just the apt-cache policy part, but I think this is enough
<jrib> lets get rid of the broken X and just install it again
<jrib> do: sudo apt-get remove x11-common
<[UPG] Pritchard> Alright.  Removed it, apparently O.o;;
<jrib> it got rid of a bunch of stuff right?
<[UPG] Pritchard> Not sure.  I can't really tell from the log.
<[UPG] Pritchard> At the end of it, it still complains about dependencies.
<jrib> what does it say
<[UPG] Pritchard> packagename (ex: xserver-xorg) Depends: packagename - but it is not going to be installed
<[UPG] Pritchard> It does that for far more packages than I can read, and it gives me another unmet dependency error at the end.
<jrib> k, show me
<jrib> k, check if x11-common is still installed: 'apt-cache policy x11-common'
<[UPG] Pritchard> installed: 7.0.0-0ubuntu45
<jrib> pfft
<[UPG] Pritchard> installed: 7.0.0-0ubuntu45 0: 100 /var/lib/dpkg/status
<[UPG] Pritchard> Then I have a candidate version, 1:7.2-0ubuntu11
<jrib> yeah, that's the new one we want to get in there
<jrib> easy way is to get apt to just remove X
<nalioth> tried 'sudo apt-get -f dist-upgrade'?
* nalioth adds to the confusion
<[UPG] Pritchard> nalioth, wasn't able to correct dependencies.
<[UPG] Pritchard> alright so sudo apt-get remove X?
<nalioth> [UPG] Pritchard: my command was different from any you've been offered previously
<nalioth> [UPG] Pritchard: it's sudo apt-get remove xserver-xorg <enter>
<[UPG] Pritchard> Is it still common to have it give me dependency errors then?
<[UPG] Pritchard> Removing xserver-xorg is what we tried earlier, right?
<jrib> nope, go ahead and try xserver-xorg, we tried x11-common before
<nalioth> i think honestly, you should start over the install.  basically you're gonna have to break the machine some more, before you can build it back up
<nalioth> and you may break it to the point it doesn't talk to humans at all
<[UPG] Pritchard> ;_;
<jrib> installing will probably be faster to be honest, just backup your /home and it will be the same anyway
<[UPG] Pritchard> Right.
<nalioth> wasn't the warning on the upgrade page about skipping releases?
<[UPG] Pritchard> Yes.  There was indeed a very large warning.
<[UPG] Pritchard> I wasn't aware of that before I tried this, however :P
<[UPG] Pritchard> Alright, will I need to redownload from a CD to install, or can I do this through the command-line?
<[UPG] Pritchard> Heheh.
<nalioth> you'll need a cd
<[UPG] Pritchard> Darnit.
<[UPG] Pritchard> Would the CD remove my current distro for me?
<nalioth> it has the capability of wiping your drive, yes
<[UPG] Pritchard> By drive you mean partition, meaning only the one Ubuntu's on, right?
<[UPG] Pritchard> I'm on a dual-boot XP/Ubuntu.
<nalioth> no, the drive is the whole thing
<nalioth> a partition is a subsection of a drive
<[UPG] Pritchard> That was a joke.  I don't want my XP partition deleted
<[UPG] Pritchard> :D
<nalioth> losing XP is a step in the right direction, imho
<[UPG] Pritchard> I have a looot of stuff on that partition, and I use XP for development atm because I'm too lazy to learn it on Linux.
<nalioth> it's easier on linux
<[UPG] Pritchard> Probably :P
<[UPG] Pritchard> I *wish* I could be on Linux most of the time, but I can't.
<[UPG] Pritchard> And removing XP also means that I can't play all my fun Windows programs :P
<nalioth> fun windows programs?
<[UPG] Pritchard> Yeah.  like BSOD.
<[UPG] Pritchard> And uhm, games that don't have a Linux port :D
<[UPG] Pritchard> (which is a lot of them, sadly)
<[UPG] Pritchard> Oh, and photoshop.
<nalioth> two words
<nalioth> Cedega
<nalioth> GimpShop
<[UPG] Pritchard> Gimp's UI is awful imo.
<[UPG] Pritchard> Haven't tried anything else really though.
<nalioth> [UPG] Pritchard: um, i didn't say "Gimp"
<nalioth> i said "GimpShop"
<[UPG] Pritchard> Is it free?
<nalioth> yes, it is.
<nalioth> i don't use payware
<nalioth> i've not used Windows since the year 2000
<[UPG] Pritchard> Thanks for the info.  Got to go now.  See ya.
<[UPG] Pritchard> Thanks for the help jrib.
<[UPG] Pritchard> Guess I'm just going to have to reinstall.
<nalioth> jrib: did you see a ban on him in #ubuntu ?
<jrib> nalioth: no
<nalioth> me, neither
#ubuntu-classroom 2007-07-24
<dk0r> I need help learning the basics of installing via command line, synaptic and manually w/ tarballs. Anyone willing to hold my hand?
<jrib> k, where is the script?
<jrib> 2. what did you 'cd' to in the script?
<Polyneux> http://paste.ubuntu-nl.org/31141/
<Polyneux> I had cd to / but I took it out in the linked version.
<jrib> on your system I mean, what is the path to the script?
<Polyneux> It's in /ja
<jrib> you don't have permission to write to /
<jrib> try 'mkdir ~/.ja' in a terminal and then in your script, put 'cd ~/.ja'
<jrib> oh, but then you call files later on directly...
<jrib> did you write this?
<jrib> what is /ja?
<Polyneux> Dedicated server for a game.
<jrib> you understand the issue right?  Your user can't write to /ja
<Polyneux> Yes I think I can see that is there a way to change that?
<jrib> well you can either make your user the owner or set up a group
<Polyneux> I thought my user was already the owner...
<jrib> what does 'ls -ld /ja' return?
<Polyneux> d-w-rwxrwx 8 elm elm 4096 2007-07-24 14:53 /ja
<jrib> what does 'touch /ja/.temp' return?
<Polyneux> Permission denied
<Polyneux> Sudo returns a regular prompt.
<jrib> what does 'ls -ld /ja/.temp' return?
<Polyneux> Had to sudo: -rw-r--r-- 1 root root 4335 2007-07-24 14:57 /ja/.temp
<jrib> alright
<jrib> do this: 'sudo chmod 755 /ja && sudo rm /ja/.temp'
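The 755 mode jrib asks for breaks down digit by digit: 7 = rwx for the owner, 5 = r-x for the group, 5 = r-x for everyone else, which is why the user can then enter and read the directory. A throwaway sketch (using /tmp/perm-demo, an illustrative path, instead of /ja):

```shell
# Create a scratch directory and give it the same mode jrib used for /ja.
mkdir -p /tmp/perm-demo
chmod 755 /tmp/perm-demo   # owner: rwx, group: r-x, others: r-x

# stat prints the octal mode; ls -ld would show the same thing as drwxr-xr-x
stat -c '%a' /tmp/perm-demo
```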
<Polyneux> Done
<Polyneux> Now one of the "permission denied" lines is gone when I try to run the .sh :D
<jrib> check for the other files now: ls -ld /ja/japlus_restart.txt /ja/log.txt
<Polyneux> -rw-r--r-- 1 root root 53 2007-07-24 14:22 /ja/japlus_restart.txt
<Polyneux> -rw-r--r-- 1 root root 62 2007-07-24 14:22 /ja/log.txt
<Polyneux> -rw-r--r-- 1 root root 53 2007-07-24 14:22 /ja/japlus_restart.txt
<Polyneux> -rw-r--r-- 1 root root 62 2007-07-24 14:22 /ja/log.txt
<Polyneux> Oops :3
<jrib> delete them: sudo rm /ja/japlus_restart.txt /ja/log.txt
<jrib> if you sudo the script again, it will cause you issues again
<jrib> btw, the script really should do 'cd /ja' at the second line
<Polyneux> It works
<Polyneux> I love you
#ubuntu-classroom 2007-07-26
<jrib> hi
<Keith> hi
<jrib> run this and make sure the package installed correctly: 'apt-cache policy sun-java6-plugin'
<Keith> sun-java6-plugin:
<Keith>   Installed: (none)
<Keith>   Candidate: 6-00-2ubuntu2
<Keith>   Version table:
<Keith>      6-00-2ubuntu2 0
<Keith>         500 http://gb.archive.ubuntu.com feisty/multiverse Packages
<jrib> it's not installed :)
<Keith> hmm
<jrib> do this:  sudo aptitude install sun-java6-plugin
<Keith> it says it did :)
<Keith> ok done
<Keith> Unpacking sun-java6-plugin (from .../sun-java6-plugin_6-00-2ubuntu2_i386.deb) ...
<Keith> Setting up sun-java6-plugin (6-00-2ubuntu2) ...
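The "Installed: (none)" line in Keith's earlier paste is exactly what gave the problem away. A small sketch of pulling that field out of apt-cache policy-style output, fed here from a captured sample so it runs anywhere (the package name is the one from the log):

```shell
# Sample text in the shape that `apt-cache policy <pkg>` prints
policy='sun-java6-plugin:
  Installed: (none)
  Candidate: 6-00-2ubuntu2'

# Extract the Installed: field; "(none)" means apt knows about the
# package but it is not actually installed
echo "$policy" | awk '/Installed:/ {print $2}'
# prints: (none)
```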
<jrib> ok, now close all instances of firefox and start a new one
<Keith> I love you, and want your babies :)
<Keith> tyvm sir
<jrib> np
<Keith> so it just didnt finish installing the last time?
<jrib> yeah, for some reason
<Keith> what does aptitude do?
<jrib> it's a package manager, like synaptic, but for the command line
<jrib> !apt > Keith (see the private message from ubotu)
<Keith> thank you once again :)
#ubuntu-classroom 2007-07-27
<dk0r> !hardware
<ubotu> For lists of supported hardware on Ubuntu see https://wiki.ubuntu.com/HardwareSupport
<dk0r> why isn't m-audio on this list? --> https://wiki.ubuntu.com/HardwareSupportComponentsSoundCards
<dk0r> I know Ive seen it there before.
#ubuntu-classroom 2007-07-28
<dk0r> Any suggestions for a torrent client and burning rom? Im new to linux.
<dk0r> Oh, also a news client
<dk0r> with indexing
<tonyyarusso> I use rtorrent (cli, simple)
<tonyyarusso> !burners
<ubotu> CD/DVD Burning software: K3b (KDE), gnomebaker, serpentine, graveman, Nautilus cd burner (Gnome), gtoaster, xcdroast, cdrecord (terminal-based). Burning .iso files: see https://help.ubuntu.com/community/BurningIsoHowto
<tonyyarusso> !news
<ubotu> Sorry, I don't know anything about news - try searching on http://bots.ubuntulinux.nl/factoids.cgi
<tonyyarusso> blah
<tonyyarusso> Liferea, Thunderbird, Sage are all contenders.
<tonyyarusso> (for my purposes, not necessarily yours)
<dk0r> tonyyarusso, are you an op?
<dk0r> staff*
<tonyyarusso> dk0r: For a number of Ubuntu channels, yes; Freenode, no.
<tonyyarusso> (sorry for the long response - I'm on my way to bed)
<dk0r> tonyyarusso, heh ok.
<dk0r> oh. im in the wrong room
<dk0r> geez.
#ubuntu-classroom 2007-07-29
<dk0r> Im trying to install avant window manager following the tutorial here --> http://ubuntuforums.org/showthread.php?p=2093300
<dk0r> Im doing 'Option 2' and have gotten to the 'Compile step' where u enter "make" but I am receiving an error. Any guidance would be appreciated
<dk0r> dk0r@dk0r:~/avant-window-navigator$ make
<dk0r> make: *** No targets specified and no makefile found.  Stop.
<dk0r> there's a makefile.am and makefile.in in the dir. Are these anything significant?
<dk0r> heh. I said avant window manager. I meant navigator. :)
<dk0r> Im new to linux and just installed avant window navigator and am wondering how I would remove the default taskbar @ the bottom of the screen?
<dk0r> Can someone please tell me how to apply a patch to AWN? (http://pastebin.ubuntu-nl.org/28405/)
<dk0r> Can someone please walk me through implementing this AWN patch (http://pastebin.ubuntu-nl.org/28405/) to get the angled/reflecting dock look? I applied it and tried to edit awn-bar.c, but I cannot find a selection to set the angle, only offset.
<dk0r> sorry
#ubuntu-classroom 2008-07-22
<Lena> Is anyone here?
<Lena> I need some help
<Lena> I want to install linux on Acer Aspire 3000 laptop
<emgent> Lena: plase join #ubuntu and ask.
<Lena> thx
#ubuntu-classroom 2008-07-23
<timmulvihill> hey bob
<bobertdos> Hello
<timmulvihill> thanks for doing this
<timmulvihill> im trying to get my masters, i need this paper to graduate
<timmulvihill> :D
<timmulvihill> so its important
<timmulvihill> and ill get started.
<unafiliated> hi ompaul
<unafiliated> :)
<ompaul> hi
#ubuntu-classroom 2008-07-24
* pleia2 changed the topic of #ubuntu-classroom to: Ubuntu Open Week is over, thanks for participating! | Information and Logs: https://wiki.ubuntu.com/UbuntuOpenWeek | Next Session: Monday 28th July at 14:00 UTC: MOTU School Session: Maintainer Scripts by Cesare Tirabassi
<Hotkey> im in class
<bobertdos> We just need to work with the terminal a bit, Hotkey.
<Hotkey> OK - did you see my last post
<histo> Schools out for summer
<bobertdos> yes
<Hotkey> that shortcut will not stay there after reboot?
<bobertdos> The shortcut will stay there, but if the drive isn't mounted, it will be a broken shortcut.
<Hotkey> aha
<Hotkey> terminal open
<bobertdos> alright, first we need to make sure we know a couple locations of things
<Hotkey> k
<bobertdos> try typing: ls /media/disk (I'm assuming that's where your Windows drive mounted itself).
<Hotkey> ls: cannot access /media/dis: No such file or directory
<bobertdos> disk
<bobertdos> with a k
<Hotkey> lol works fine with the "k" in there
<bobertdos> So that is the Windows drive, right?
<bobertdos> Okay good
<Hotkey> ya - i see lottsa windows line items
<bobertdos> good
<bobertdos> Now, type: mount
<Hotkey> and press enter?
<bobertdos> yeah
<Hotkey> large linux command list
<bobertdos> Find the line with ntfs in it and tell me what it says on the left for a directory
<bobertdos> You're looking for something like: /dev/sd.....
<Hotkey> not seeing it
<bobertdos> er.......We should use fdisk instead.
<bobertdos> type: sudo fdisk -l
<Hotkey> ?
<Hotkey> ok
<bobertdos> (that's the letter l by the way)
<Hotkey> done
<bobertdos> Find the line that has ntfs in it.
<Hotkey> looking for this: /dev/sda2   *           9       22955   184321777+   7  HPFS/NTFS
<bobertdos> yes sir
<Hotkey> :)
<bobertdos> now, I will copy and paste the next command, because it's long. I suggest you copy and paste it too.
<Hotkey> k
<bobertdos> actually, before that
<Hotkey> ya?
<bobertdos> type: gksudo gedit /etc/fstab
<Hotkey> editor is open
<bobertdos> with fstab?
<Hotkey> ya
<bobertdos> k, now somewhere in that file, probably down below your Linux hard drive, sandwich this in there:
<bobertdos> /dev/sda2 UUID=12102C02102CEB83  /media/disk  ntfs-3g  auto,users,uid=1000,gid=100,dmask=027,fmask=137,utf8  0  0
<bobertdos> You could probably put it at the bottom of the file too. That would likely be easier.
<Hotkey> is this the linux hd? # /dev/sda5 UUID=e9786600-b9b5-4ff8-a920-a56ea1d997fd /               ext3    defaults,errors=remount-ro 0       1
<Hotkey> o ok
<bobertdos> yeah, it is
<Hotkey> sandwich or bottom?
<bobertdos> up to you :p
<Hotkey> bottom pasted in
<bobertdos> Now save the file.
<Hotkey> done
<bobertdos> Then, back in the terminal, type: sudo mount -a
<Hotkey> [mntent]: line 11 in /etc/fstab is bad
<Hotkey> :(
<bobertdos> uh oh
<Hotkey> uh oh
<bobertdos> I assume line 11 is what was just added.
<Hotkey> i'll gksudo gedit /etc/fstab again
<Hotkey> ya line 11 it is.
<bobertdos> k, get rid of it
<Hotkey> this is the line /dev/sda2 UUID	=12102C02102CEB83  /media/disk  ntfs-3g  auto,users,uid=1000,gid=100,dmask=027,fmask=137,utf8  0  0
<Hotkey> gone
<Hotkey> save/close?
<bobertdos> Use this one instead: /dev/sda2 /media/disk ntfs-3g defaults,locale=en_US.utf8 0 0
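For reference, the six whitespace-separated fields in that fstab line mean the following (an annotated copy for reading, not something extra to paste in):

```
# device     mount point   filesystem  options                      dump  pass
/dev/sda2    /media/disk   ntfs-3g     defaults,locale=en_US.utf8   0     0
```

The earlier line failed the mntent parser because a stray tab split the device field in two ("UUID	=..."), giving the line the wrong number of fields.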
<Hotkey> ok dun
<Hotkey> saved closed
<bobertdos> okay, try sudo mount -a again
<Hotkey> this again?  sudo mount -a
<Hotkey> k
<Hotkey> no response
<bobertdos> No response = good response :)
<Hotkey> LOL = ok
<bobertdos> Now, the next time you reboot, the drive SHOULD automatically be there and the shortcut SHOULD work.
<Hotkey> shutdown, reboot, cross fingers!
<bobertdos> yes
<Hotkey> thanks so much for helping.  patience is a virtue.
<bobertdos> yes, yes it is
<bobertdos> I will still be here if there are any problems.
<Hotkey> by the way - do you use xchat as irc client or is there something better
<bobertdos> I use Pidgin actually.
<Hotkey> i'll try that next time - i do have it and use it for other msgr. services - so makes sense.  thx again.
<Hotkey> bbl
#ubuntu-classroom 2008-07-25
<norsetto> Hi everybody, welcome to this lecture!
<norsetto> Who do we have here?
<norsetto> Questions anyone?
<norsetto> Furthermore, you should not assume that a conffile being present means that some functionality is available, remember that conffiles are only removed on purge
<norsetto> cmd sed -n '166p' ~/docs/dpkg-install.steps
<norsetto> Maintainer: Ubuntu Core Developers <ubuntu-devel-discuss@lists.ubuntu.com>
<norsetto> /exec cmd sed -n '100p' ~/docs/dpkg-install.steps
<norsetto> We will talk in general about the basics of the debian package management and cover maintainer scripts in some detail
<norsetto> We will not tackle very complex subjects, I tried to make this very informative but at the same time accessible to all newcomers, even those with little debian/ubuntu experience (sorry if this will bore everybody else)
<norsetto_> I guess that was a tad stupid ...
#ubuntu-classroom 2009-07-20
<stas> hi, can somebody help me please
<stas> I'm trying to build a deb
<stas> and ppa builder gives me the following error:
<stas> cp: cannot stat `./usr/share/locale/': No such file or directory
<stas> can you point me to where I should search for the mistake?
<stas> what's more interesting is that in the 64-bit build log the build hangs on a different error
<delcoyote> hi
<TehFlash> hello
<TehFlash> whats this channel used for
<pleia2> ubuntu related classes, the links are in the /topic :)
<TehFlash> i have ubuntu on my laptop, can any one learn through this
<murcherson> my sound is all screwed up and i want to start afresh with all sound config etc removed, any ideas on where i should start with this?
<nalioth> murcherson: #ubuntu ?
<murcherson> lol thanks nalioth
#ubuntu-classroom 2009-07-21
<delcoyote> hi
#ubuntu-classroom 2009-07-22
<wcryer> anyone think they can help me with a wpasupplicant
<jawnsy> wcryer: use Network-Manager
<wcryer> dont have a gui yet
<wcryer> and i dont have a dvd and gnome is too big for a disc
<jawnsy> ethernet?
<wcryer> doesnt see the card
<wcryer> only the wireless card
<wcryer> which it sees as eth1 for some reason
<wcryer> but fastest way to gui is the goal
<wcryer> do you know how i could load gnome from a usb flash drive?
<jawnsy> in theory you could just copy the .deb packages
<jawnsy> and dpkg --install them
<wcryer> well just found out the flash drive i thought was big enough was not
<wcryer> i followed a guide and think i have my network/interfaces right
<wcryer> i just dont think my wpasupplicant is
<wcryer> when i open it, it is just blank
<jawnsy> I dunno how to use wpasupplicant
<jawnsy> I think wpasupplicant provides WPA, but you also need to configure it in /etc/network/interfaces
<jawnsy> that's what network-manager does
<wcryer> ya
<wcryer> ive done the interfaces part
<wcryer> i think i might have found something for the wpasupplicant
<wcryer> but now my problem is this thing that keeps saying *Reloading /etc/samba/smb.conf
<wcryer> every minute
<wcryer> it gets in the way of everything
<jawnsy> can you /etc/init.d/samba stop?
<jawnsy> or whatever the script is, that one I dunno
<wcryer> cool thanks
<wcryer> is there a way to install a version of ubuntu server that already has the gui with it
<jawnsy> why not just install the desktop?
<wcryer> cause i want it to be a file/print server
<nhandler> jawnsy: wcryer: You guys might want to move this discussion to another channel. This channel is for classroom sessions
<wcryer> any suggestions where i can go i never get any responses in #ubuntu
<qwebirc3786> hola
<jawnsy> :-)
<jawnsy> just a few hours until the Perl Packaging Training :-)
<Ryan52> nhandler:
<Ryan52> oops.
<qwebirc61938> date -c
<qwebirc61938> date -u
<qiyan> join #pardus
<qwebirc51067> hi to alll
#ubuntu-classroom 2009-07-23
* gwolf looks around
<gwolf> Oh! an +o!
<jawnsy> very nice
<jawnsy> oh, me too
<nhandler_> gwolf: Just preparing for the session. You guys still have ~10 minutes
<gwolf> ok
<gwolf> you do the signalling.
* jawnsy changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://wiki.ubuntu.com/Packaging/Training || Now: Packaging Perl Modules || Upcoming: July 30 @ 06:00 UTC: Mozilla packaging techniques (extensions, patchsystems, bzr) || Run 'date -u' in a terminal to find out the UTC time
<jawnsy> Hi everyone
<jawnsy> It looks like it's about time for the Packaging Perl Modules session
<jawnsy> Hopefully some of you are here for the talk, or gwolf and I will be talking to ourselves :-)
<gwolf> Yup - 2:00AM according to me, so we are ready to start!
<jawnsy> If you've got a question, feel free to ask whenever. I only ask that you try to keep questions relevant to the thing currently being discussed, just so the logs don't get too confusing
<gwolf> nhandler: So, you are the guy who invited us. Should we expect anything from you?
<gwolf> or should we just address The Audience?
<maddeth> 1am by me :p but dont know how long I will be here
<nhandler> gwolf: You guys are in charge now ;) I'm just here for the show now
<gwolf> good :)
<jawnsy> Okay, so a quick introduction
<gwolf> ok, so lets all just welcome and listen to jawnsy
<ajmitch> most people will probably remain quiet :)
<gwolf> I have this (annoying?) tendency of having a hard time sitting and listening to an audience, I always end up interrupting
<jawnsy> gwolf and myself are members of the Debian pkg-perl team, gwolf is a Debian Developer, and I am not
<gwolf> so I'd rather interrupt jawnsy rather than myself ;-)
<jawnsy> We're going to do this talk under the assumption of no previous knowledge of making any packages, either for Ubuntu or for Debian
<jawnsy> Luckily, packaging Perl modules is pretty gentle. A few tools you will need frequently are: dh-make-perl, lintian
<gwolf> That's basically thanks to the fact that the Perl modules all share a basic infrastructure
<jawnsy> I'm going to go through packaging a small module from CPAN, we'll discuss what commands you can use to do it
<jawnsy> first of all, in order to follow along you'll need to apt-get install the aforementioned tools
<gwolf> ...And that's also the main reason we (a group of ~15 people at most) can keep up with ~1300 packages in a reliable way!
<maddeth> aptitude installing now :)
<jawnsy> Another useful package is devscripts, which contains "debuild" -- a command we'll be looking at later
<jawnsy> While everyone's installing that stuff, is there a particular module anyone would be interested in using as an example?
<gwolf> jawnsy: oh, catalyst could be a good example
<jawnsy> Otherwise I'll just pick one at random
<gwolf> (of something you want to avoid ;-) )
<gwolf> lets start simple, with something not dependency-rich
<Ryan52> Locale-Msgfmt
<Ryan52> no dependencies other than Perl core, and it should "just work".
<jawnsy> okay
<jawnsy> So, make a directory and change into it (the build process might clutter your directory with some files, so it's best to put it somewhere). I usually make a bunch of directories like 'tmp'
<jawnsy> the first command you want to run is: dh-make-perl --cpan Locale::Msgfmt
<gwolf> jawnsy: Let me fill in a minute please :)
<jawnsy> Hopefully you have CPAN configured, otherwise it might ask you to configure it. Usually just hitting enter all the way down will be OK
<gwolf> I think we have a bit different approaches - I think it's a bit better to first introduce what is _about to_ happen ;-)
<gwolf> But anyway - Yes, dh-make-perl is a magic script
<gwolf> ... In preparing Debian packages, you will see lots and lots of dh_* calls - To the debhelper scripts
<gwolf> dh-make-perl is _NOT_ part of debhelper, but extensively uses it
<gwolf> ...Anyway... Ok, I agree with jawnsy's reasoning (which was sent to me out of band)...
<gwolf> ...so I'll let him continue with the demonstration
<gwolf> we can later talk about pedagogy ;-)
<jawnsy> debhelper, as you may or may not know, is a collection of scripts that Debian uses to build and install packages
<jawnsy> prior to debhelper, we were using huge makefiles.. which, as you can imagine, gets pretty hard to maintain and debug
<jawnsy> Once you do dh-make-perl --cpan Locale::Msgfmt, you should see a bunch of text scroll by your screen, showing how it's unpacking a tarball
<jawnsy> Most importantly, at the end, you should get a line that says "--- Done" which means everything went OK
<jawnsy> For those following along with this, please confirm that you can build using dh-make-perl. If not, mention any problems you are having in #ubuntu-classroom-chat
<jawnsy> Okay. So, it did a bunch of magic. Let's look at what it created.
<jawnsy> cd into Locale-Msgfmt-0.14, and list the directory's contents
<jawnsy> Everything in that directory is from the tarball we just downloaded from CPAN
<jawnsy> what is important to note for us is the files in debian/, which basically tell our installer (debhelper) how to do its magic. It also contains metadata files that tell Aptitude how to resolve dependencies, for example
<jawnsy> I'll go through these files (in debian/) file-by-file so we can look at what they are for.
<jawnsy> While dh-make-perl is great at its magic, it's also not perfect. So we have to manually check that everything is OK and makes sense.
<gwolf> Just as a sidemark - often, dh-make-perl's generated files are good enough for straight _local_ use
<gwolf> that means, if you just install them, you can basically trust they will not eat your hard disk, and will even behave
<jawnsy> Oh, yes. If you are building Perl modules for your own consumption, they are often good enough to install right away. You can use: dh-make-perl --install --cpan Locale::Msgfmt for that purpose
<gwolf> of course, if you are interested in contributing Debian or Ubuntu, a human should check it!
<jawnsy> Often doing this is better than using the CPAN shell to do installs, because this way you can remove packages easily
<jawnsy> the debian/compat file just contains the version of debhelper that the module is built for, in this case it should just contain "7" on its own line, which means it's designed for the debhelper 7 series
<gwolf> And IIRC, dh-make-perl will not by default work on modules that are marked as unstable by their authors, just on the highest official release (the CPAN shell, by contrast, can do otherwise)
<maddeth> cat compat
<maddeth> lol
<jawnsy> yep, it's small :-)
<maddeth> oops ;)
<gwolf> maddeth: Yup - debhelper uses lots of small files. You will find several such files with minor indications
<jawnsy> Now, look inside the 'control' file
<jawnsy> I'll briefly discuss some important points, but to get the full idea of what's going on, reading the Debian Policy Manual is the best way
<gwolf> right - we got reminded that you might be looking at some possibly different things
<gwolf> i.e. if you are running the latest stable versions, or even more if you are using the LTS releases, you will -of course- face older results. Especially LTS will generate a _very_ different set of results (although it should be equally usable)
<gwolf> but anyway, please ignore such small inconsistencies
<jawnsy> Mhm. Basically, this d/control file is what tells aptitude/synaptic various things which you might see as familiar. This is where a package description is defined, among other things
<gwolf> debian/control basically lays out all the metainformation - What packages does this one depend on? What is it about? Who created it? ± when? (while it does not contain the dates, as we will go over that soon, it has a rough indication on when it was last updated: its standards-version)
<gwolf> ..and several other, less interesting details
<gwolf> Note that even if you are working with Ubuntu... at the package source level, keep in mind it will always be debian/*
<jawnsy> oh, when I say d/control, I should note that it's a short form (in discussion) for saying debian/control
<jawnsy> we get used to that slang a lot in pkg-perl
<gwolf> For this module (or package, depending on which side of reality you are standing), the information in debian/control is quite minimal - but were you to build a more involved package, it can... grow quite a bit
<gwolf> heh, in Debian in general, and I guess it also happens in Ubuntu :)
<jawnsy> Yes, right now, it only contains two sections (things separated by a newline)
<jawnsy> the top part refers to the "source package", the second part refers to binaries generated from the source package. None of that matters right now, so don't worry about it
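To make those two sections concrete, a minimal debian/control for this kind of package looks roughly like the following (the maintainer name is a placeholder, and exact field values will differ depending on your dh-make-perl version):

```
Source: liblocale-msgfmt-perl
Section: perl
Priority: optional
Build-Depends: debhelper (>= 7)
Maintainer: Jane Doe <jane@example.com>
Standards-Version: 3.8.2

Package: liblocale-msgfmt-perl
Architecture: all
Depends: ${misc:Depends}, ${perl:Depends}
Description: pure-Perl compiler of .po files into .mo files
```

The blank line is what separates the source section (top) from the binary package section (bottom).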
<jawnsy> Okay, onto the next file. You should see a file called: liblocale-msgfmt-perl.docs
<jawnsy> and a similar one with the extension .examples
<jawnsy> cat these and take a look at them. You'll notice it's just a line-separated list of filenames and paths
<gwolf> some of these files might not exist under some circumstances - they depend on what is included in each of the modules you work on
<jawnsy> These are magical files, and they trigger debuild to "install" them. Note that anything listed in these files will be installed on Ubuntu and Debian systems which install your package.
<jawnsy> Basically, .docs is a way of listing some files which are useful for users to read, so they'll be installed on your system.
<gwolf> And, of course, they are autogenerated - so it is possible you will find some examples or files missing. Feel free to just create them with the filenames (relative to the source package's root) for whatever files you want treated as such.
<jawnsy> It should list a single file, README, for this package
<jawnsy> take a look at the contents of README. you'll notice that it doesn't really say anything interesting.
<jawnsy> just the usual stuff like how to install the package manually. this sort of thing isn't tremendously useful to our users, so you can remove the README entry from the .docs file
<jawnsy> now, it's no use keeping an empty .docs file around, so since there's nothing we need to install, you may simply rm the .docs file
<gwolf> In fact, it could be (and was, several releases ago) simpler: debhelper is quite flexible. here, the filenames are prepended by the package name (liblocale-msgfmt-perl.docs i.e.) - If you were to add other binary packages generated by the same source, they would not be included. you could just name them "docs" and "examples", and the results would end up in all generated files.
<gwolf> sorry, all generated binary packages
<jawnsy> Is everyone following with us here?
<maddeth> could you just remove the *.docs file altogether? or is it a requirment?
<jawnsy> You can remove the .docs file if it's empty
<gwolf> You can of course remove it if there are no docs to install
<jawnsy> And you should.
<jawnsy> Okay, the next file to look at is 'rules'
<jawnsy> Some of you in the audience might notice that it looks an awful lot like a makefile.
<jawnsy> Well, it is :-)
<jawnsy> (and is in fact processed by GNU make)
<gwolf> Documentation is usually installed to /usr/share/doc/<pkgname>, and examples /usr/share/doc/<pkgname>/examples.
<gwolf> jawnsy: In fact, there was a discussion some months (years?) ago in Debian, as to whether debian/rules needed to be a Makefile or could be something else
<gwolf> in theory it would work just as well with any other interpreter, as long as it had the proper shebang
<gwolf> in the end it was left as a GNU make file for... consistency
<jawnsy> syntactically, makefiles are pretty easy to work with. there's not too much you need to know to work with a makefile.
<gwolf> heh, at first sight don't try to make _much_ sense out of debian/rules
<jawnsy> depending on the version of dh-make-perl you have installed, you might get files of various lengths.
<gwolf> It basically means: "For every target, call the suitable dh helpers"
<gwolf> oh, right!
<gwolf> Well, for the latest versions, using the short DH7 variant, it basically says:
<jawnsy> Just remember that the rules file you have is what tells debhelper how to build and install your package, and how to clean up after itself
<gwolf> #!/usr/bin/make -f
<gwolf> %:
<gwolf> 	dh $@
<gwolf> But we used to generate a ~50 line debian/rules until recently.
<jawnsy> that's a new shorthand debhelper has, which makes working with and maintaining packages that much easier, especially Perl ones
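The three-line DH7-style rules file quoted above can be written out and dry-run with make itself; `make -n` prints the recipe without executing it, so this works even without debhelper installed:

```shell
# Write the minimal DH7 debian/rules shown above into a demo directory.
# The %: match-anything pattern rule forwards every target to dh.
mkdir -p demo/debian
printf '#!/usr/bin/make -f\n%%:\n\tdh $@\n' > demo/debian/rules
chmod +x demo/debian/rules
# Dry-run the "build" target: make prints the command it would run.
make --no-print-directory -n -f demo/debian/rules build
# → dh build
```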
<jawnsy> the last file we haven't looked at yet is watch
<jawnsy> you don't need to care that much about that one. It's just a way for Ubuntu and Debian to automatically look for new upstream versions
<jawnsy> because everything is on the CPAN, dh-make-perl sets that up for us :-)
<gwolf> it basically consists of a URL and a pattern to look for on that page
<jawnsy> this way, some commands that use it, such as uscan (in package devscripts, which you should've installed by now)
<jawnsy> can make use of the information therein. It doesn't really matter for now :-)
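A watch file along the lines described above might look like this; the URL and version-matching regex here are an illustrative sketch, not the exact output of any particular dh-make-perl version:

```shell
# Illustrative debian/watch for a CPAN module: uscan fetches the page at
# the URL and matches candidate tarball links against the pattern, whose
# parenthesised group captures the upstream version number.
mkdir -p demo/debian
cat > demo/debian/watch <<'EOF'
version=3
https://metacpan.org/release/Locale-Msgfmt   .*/Locale-Msgfmt-v?(\d[\d.]+)\.(?:tar(?:\.gz|\.bz2)?|tgz|zip)$
EOF
cat demo/debian/watch
# Then: uscan --verbose   (needs the devscripts package and network access)
```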
<jawnsy> okay, so, now that we know what all these control files are doing, change back to Locale-Msgfmt-0.14 (the root of your package)
<gwolf> but you can try, just for the sake of it: Run "uscan --verbose"
<jawnsy> ah, good point gwolf :-)
<gwolf> :)
<jawnsy> okay, now, to actually build this package into the binary .deb files we are used to seeing
<gwolf> anyway, jawnsy... you were picking up your magic wand
<gwolf> please perform the trick
<gwolf> note that I have my hands firmly tied behind my back and cannot move.
<jawnsy> Make sure you have the 'dpkg-dev' package installed
<jawnsy> this is a collection of programs you'll need to build .deb binaries
<jawnsy> there are two commands we can use to start the build, either debuild or dpkg-buildpackage -- run one of them now
<jawnsy> it will display a lot of output. if you've ever installed a CPAN module in the past, you'll note that the output looks mostly the same
<jawnsy> this is because the debhelper process is actually running the Perl build system -- that is, running Makefile.PL or Build.PL
<jawnsy> Is anyone having problems with the build?
<jawnsy> if you do a ls of your current working directory, you'll note that there are some intermediate build files. this is because debhelper built the module and hasn't cleaned up after itself
<jawnsy> in order to restore everything back to the "clean" state, run the command: debian/rules clean
<jawnsy> or 'debclean'
<maddeth> sorry jawnsy where do we run dpkg-buildpackage? the debian directory?
<gwolf> you can run it from the base package directory
<jawnsy> you want to run it in the root directory of the module, so in this case, Locale-Msgfmt-0.14
<maddeth> thankyou
<gwolf> (you will notice my hands have been untied and you have some shiny new packages ready!)
<jawnsy> the reason for that is that dpkg-buildpackage looks for files in debian/* under the base directory
<jawnsy> once you have done the build, change to the directory *above* the base directory (this is why I told you to make a temporary directory at the beginning)
<jawnsy> you should notice a bunch of files there. Most importantly to this process is the .deb file, which was magically built for you by dpkg-buildpackage
<gwolf> In fact, we could have gotten to this precise place in an even more automated way to begin with :-) If we had run "dh-make-perl --build --cpan Locale-Msgfmt", the package would have been built and left in our current working directory. I have done that many times.
<jawnsy> if you so desire, you can install it on your system using: sudo dpkg --install liblocale-msgfmt-perl_0.14-1_all.deb
<jawnsy> but, how do we know what it's going to install? how can I be sure it won't overwrite some really important files?
<gwolf> Well, to begin with... The dpkg package management system would refuse!
<dam> gwolf: but you would miss the cleanup of .docs
<jawnsy> quite conveniently, dpkg provides a "--contents" command
<gwolf> dam: Of course. But for local builds, that's often enough :)
<jawnsy> so that we can basically list the files inside our binary, and more importantly see *where* files will be installed
<gwolf> (you'd even miss the warm feeling from having a human look at it)
<jawnsy> ok, so now, please find the .deb file which was built
<jawnsy> and execute the command: dpkg --contents *deb
<jawnsy> you should get a listing of files which looks a lot like some 'ls' output
<jawnsy> this way, you can see exactly where things will install, without actually installing the binary. note that this works with any Debian (or Ubuntu) package :-)
<gwolf> ...or, conveniently, just debc *deb
<jawnsy> the Debian Perl philosophy is very much like the Perl philosophy -- there's more than one way to do it!
<jawnsy> So at this time we'd like to ask the floor for any questions they might have
<jawnsy> by now you've packaged your first Perl module, and I bet you're thinking, wow, that was easy :-)
<gwolf> agree :)
<jawnsy> in the Debian pkg-perl team we have more scripts which help us with this sort of thing, but right now you've got enough know-how to build Perl modules on your local machine
<gwolf> ..and soon afterwards, we will start talking about how to do this in a better organized way, how to contribute to your distribution, and how to be rich and famous! (or at least, how to get to travel quite a bit)
<nhandler> How would I go about getting this module uploaded to Debian and maintained by pkg-perl ?
<nhandler> ;)
<jawnsy> That, is quite a good question.
<jawnsy> OKay let me dispel some myths first of all
<jawnsy> Many people seem to think that in order to contribute to Debian or Ubuntu, you've got to be a Debian Developer, or an Ubuntu Developer
<jawnsy> the pkg-perl team is a great way to get involved in both Debian and Ubuntu
<gwolf> good!
<jawnsy> and best of all, you don't need to be a full-fledged Developer
<gwolf> In fact, it is a great way to check how your packaging is, and have other people do the heavy lifting
<gwolf> ...Our team, as I said earlier, has around 15 active members (from a much larger list).
<jawnsy> Basically, under the pkg-perl team, we maintain all our packages as a group
<jawnsy> It's nice because if you ever get stuck, you have a great support network of people to ask for help
<gwolf> Of course, to be able to become active in it, you should at least know the basics of Perl and Debian packaging, and be interested and willing to work on any potential bugs
<jawnsy> And, more importantly, it's never hard finding a sponsor -- many Ubuntu and Debian developers are part of the group :-)
<gwolf> (hold a second for me, we are being moved to a different laboratory)
<jawnsy> maddeth: how did your build go?
<maddeth> flawlessly thankyou
<maddeth> just running a few things through my head :)
<jawnsy> Please folks, if you have questions, feel free to ask. And if you're packaging something for your own use later and you come across something else, feel free to join our group channel and ask us
<jawnsy> the pkg-perl group sits on IRC at irc.debian.org (OFTC) #debian-perl
<gwolf> ready
<jawnsy> If you're looking for a way to really make your mark on Debian or Ubuntu, please join the group. You don't need any prior experience to join the group
<gwolf> ok... So in order to work on this group, there are a couple of things to add to your skillset
<gwolf> First of all, as I said, we collectively maintain ~1300 packages
<gwolf> The best (only?) way we can do this is to coordinate it via a version control system
<gwolf> that ensures we all share each other's package modifications - And we use SVN (Subversion) for it
<gwolf> Before anybody asks - there has been talk regarding how to move our workflow towards the much more agile and flexible Git
<gwolf> ..We will get to it, sooner or later.
<jawnsy> having things in subversion or any VCS really.. is indispensable
<jawnsy> for example I could ask gwolf to take a look at something for me and help me figure out why it's not building
<gwolf> And that also allows us to do cross-cutting, large-scale updates
<gwolf> i.e. if we were to decide to, say, change some part of our build process, we could run it from one single place, for all of our packages
<gwolf> of course, we would have to do each change separately (i.e. via a script), but it would be grouped into one logical entity - into a single commit.
<gwolf> Anyway - where to look for this information?
<jawnsy> Seriously, if you've got some free time and a passion to learn something new, I would really recommend joining the group
<gwolf> Well, the main set of pointers is at the group's (very basic!) page: http://pkg-perl.alioth.debian.org/
<gwolf> This page just holds pointers to our inside bits
<gwolf> I would read them (and refer to them) in +- the following order:
<jawnsy> The great thing about Open Source is that if you want something, you can just dive in and do it yourself
<jawnsy> I myself have 91 packages in Debian's main repository, and like I said, I'm not a Debian Developer :-)
<gwolf> - How does this group use Subversion? http://pkg-perl.alioth.debian.org/subversion.html   (includes a short guide on a great tool we have, which is indispensable for our work: svn-buildpackage)
<jawnsy> it's a great way to get some experience working on Debian and Ubuntu. It's a great way to enhance your system
<jawnsy> actually being part of the group, I come across a lot of really neat modules while packaging them
<jawnsy> so it's useful in that sense too
<gwolf> - The Perl packaging policy: http://pkg-perl.alioth.debian.org/policy.html
<gwolf> jawnsy: That's really true. And you never know who will be the next person to show you something new
<jawnsy> plus, I've met some really nice people in the pkg-perl group that I am honoured to call my friends
<gwolf> - General tips and tricks: http://pkg-perl.alioth.debian.org/tips.html
<jawnsy> joining the group is a huge plus if you're looking to get some experience coding
<jawnsy> we do a lot of things in the group which are transferable skills
<jawnsy> for example managing the bug system, dealing with other people, discussing new ideas
<maddeth> thankyou for the tutorial, I need my sleep now :)
<gwolf> http://pkg-perl.alioth.debian.org/cgi-bin/pet.cgi - possibly the best reference point we have towards the current status of all of our packages
<gwolf> (looking for quilt's...)
<gwolf> http://pkg-perl.alioth.debian.org/howto/quilt.html
<gwolf> This last one: every so often, we will have to patch (modify) the author's code in order to work around a problem
<gwolf> The best way to do so, keeping everything most traceable, is using the Quilt patch management system.
<gwolf> jawnsy: dam (in #debian-perl) is most right! We forgot one of the most important files to check: debian/copyright
<jawnsy> I thought that'd be a bit too much for a gentle introduction
<gwolf> Debian, and Ubuntu (although to a smaller degree), really base their work on respect for Free Software
<gwolf> in order for a package to be built (and yes, this time it _is_ a requirement), it has to list its copyright information
<gwolf> debian/copyright is a human-readable (and, increasingly, machine-readable - though that's not yet a given) file explaining which licensing scheme this particular package follows
<gwolf> Most often, Perl modules are licensed "under the same terms as Perl itself"
<gwolf> that means, under the GNU GPL version 1 or any later version, or under the Artistic license, at your choice
<gwolf> so, yes, this has to be listed in debian/copyright
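A debian/copyright for the "same terms as Perl" case might look roughly like the following, using the machine-readable DEP-5 layout mentioned above; the upstream name, author, and year here are placeholders, not taken from the real module:

```shell
# Sketch of a machine-readable (DEP-5 / copyright-format 1.0) copyright
# file for a module dual-licensed Artistic or GPL-1+, i.e. "under the
# same terms as Perl itself". Holder details are hypothetical.
mkdir -p demo/debian
cat > demo/debian/copyright <<'EOF'
Format: https://www.debian.org/doc/packaging-manuals/copyright-format/1.0/
Upstream-Name: Locale-Msgfmt

Files: *
Copyright: 2009, Upstream Author <author@example.com>
License: Artistic or GPL-1+
EOF
grep '^License:' demo/debian/copyright
```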
<gwolf> jawnsy: so... where should we go from here? At 03:10, my brain is drying, and I think I'd appreciate some Q&A
<jawnsy> increasingly a lot of companies are working with Debian and Ubuntu
<jawnsy> so joining the group and learning the ins and outs of maintaining packages could very well help you land a sysadmin-type job
<gwolf> of course - it is highly visible, permanent professional information you can talk about
<jawnsy> a lot of companies use locally built packages to distribute and update even internal software on their production servers
<jawnsy> but yes, I'd like to open the floor to any questions
<gwolf> so, people... is there any of you awake+online? :-}
<gwolf> Are there any questions?
<gwolf> There is a lot more we could go on to... but I think we have mostly covered the basics
<nhandler> So if I am not a Debian Developer, how could I get my package uploaded? Would I need to go through mentors.debian.net ?
<jawnsy> If you are reading the log of this talk later on, and you have questions, feel free to come on irc.debian.org #debian-perl and ask :-)
<jawnsy> That is a very good question :-)
<gwolf> nhandler: no, not necessarily
<jawnsy> just by joining the pkg-perl team, you can skip the mentors process if you desire
<gwolf> nhandler: If the package is a Perl one, going through mentors would certainly be among the slowest ways possible
<gwolf> The first thing to do is to get an Alioth account
<jawnsy> personally, like I say I'm not a DD, I've never uploaded a package through a mentor
<jawnsy> the purpose of a mentor is to review your package and help you with building it, fixing up the necessary control files, and uploading it
<gwolf> Alioth (http://alioth.debian.org) is Debian's online collaboration subsystem, it is a GForge-derived site
<jawnsy> thankfully, the DD sponsors in our group serve the same purpose
<gwolf> Alioth is open to DDs and non-DDs, with the sole difference that you will have '-guest' appended to your login in case you are not a DD (yet?)
<jawnsy> they also teach newcomers how to build Debian Perl packages (similar to what we've discussed here today, but also covering some rarer cases)
<gwolf> But yes, it is common to find a newbie in the #debian-perl group asking how to join and what to do
<jawnsy> we are a rather informal group.. more of a collection of friendly people
<gwolf> nhandler: So, to round off your question: you request an Alioth account and svn-inject your package
<gwolf> nhandler: ...We will notice it and build it, usually soon
<jawnsy> working with our SVN workflow is a totally different lecture for a different day, but it's not hard. we have tools, like svn-inject as gwolf mentioned, that help us do it
<gwolf> oh, of course, you have to request to join the pkg-perl group once you have your Alioth account
<jawnsy> anyway, barring any other questions, it looks like we can draw this talk to a close
<nhandler> You mentioned that you maintain ~1300 packages. How do you keep track of which ones need to be updated, and which have bugs?
<jawnsy> We have a tool (yay, another tool) called the Package Entropy Tracker
<gwolf>  http://pkg-perl.alioth.debian.org/cgi-bin/pet.cgi
<jawnsy> ah I was finding the link, beat me to it gwolf :-)
<gwolf> :)
<jawnsy> as you can see from the page, it lets us know what we need to fix -- packages with really important bugs, packages with new upstream versions (that's what the watch file was about)
<jawnsy> and it allows us to collaborate as a group -- so for example I don't do something to a package gwolf is working on
<jawnsy> (otherwise one of us would be doing double work)
<gwolf> The IRC channel is a great place to hang out (and I very seldom join any IRC channels - But this one is actually productive!)
<nhandler> So does that mean that I can work on any package maintained by pkg-perl? Even if someone else packaged it?
<jawnsy> generally, yes it does. If you want to dive in, feel free to do so!
<gwolf> Of course. Every now and then people appear. And some others disappear
<gwolf> And even if no one does - If I find a package that needs love, and have love to give, I will work on it
<jawnsy> it's a really great way to get experience working with Perl modules, but in a less aggressive way than writing ones for CPAN
<gwolf> Usually, whenever you update a package, you will add yourself as an uploader to it (which means, claim responsibility for its maintenance)
<jawnsy> for example we do a lot of Quality Assurance stuff, like fix outstanding bugs
<jawnsy> one great thing about our package system, and as gwolf mentioned earlier, quilt... is that we can ensure good QA
<jawnsy> by fixing modules ourselves if necessary.
<gwolf> ...and more easily push the changes up to the authors
<jawnsy> so let's say there's a serious security issue with a package. well, in Debian we can just patch it ourselves, and push out a new release, while simultaneously working with the upstream author to resolve the issue
<jawnsy> in that way, Debian and Ubuntu users are safe, even if the upstream maintainer doesn't know about the issue or doesn't fix it
<jawnsy> actually a lot of upstream CPAN authors sort of just abandon their stuff, so the pkg-perl team takes care of its long-term maintenance
 * gwolf yawns
<gwolf> people, any questions?
<jawnsy> it's something I think is tremendously useful experience
<jawnsy> just how to debug things, how to work with bug systems
<gwolf> If you don't fire...I'll go to sleep!
<jawnsy> these are things you often won't learn in school, but that can make all the difference in your career
<jawnsy> whether you hit the ground running or whether you'll have to start crawling ;-)
<gwolf> I completely agree with jawnsy
<gwolf> Anyway... jawnsy: I think we should call this as done
<jawnsy> yep. thanks everyone for listening, I hope it's been a good experience for everyone
<gwolf> jawnsy: nhandler says the logs will be available
* jawnsy changed the topic of #ubuntu-classroom to: Ubuntu Classroom || https://wiki.ubuntu.com/Classroom || https://wiki.ubuntu.com/Packaging/Training || Upcoming: July 30 @ 06:00 UTC: Mozilla packaging techniques (extensions, patchsystems, bzr) || Run 'date -u' in a terminal to find out the UTC time
<gwolf> I hope you found this interesting, and hope to have you on board on the team!
<nhandler> Logs will be at https://wiki.ubuntu.com/Packaging/Training/Logs/2009-07-23
<nhandler> Thank you jawnsy and gwolf for leading this session
<porthose> jawnsy gwolf, thx  great session
<gwolf> thank you for inviting!
<nhandler> Remember, if you have any questions, the pkg-perl team is in #debian-perl on irc.debian.org
<jawnsy> and if you're at DebConf you can have a beer with gwolf :-)
<jawnsy> thanks to the Ubuntu community for having us, and for working so closely with Debian to the benefit of both communities
<gwolf> jawnsy: And with dam, and with gregoa, and with diocles!
<gwolf> Thank you all, even if you removed our Op modes ;-)
<jawnsy> I really hope you all take the opportunity to be a bit social.. feel free to drop by just to say hi, even :-)
<jawnsy> the Ubuntu and Debian projects are both highly social, so I hope you all take the chance to participate everywhere you can, when you can
<gwolf> jawnsy: And by all means, specifically you, try to join a DebConf in the future!
<gwolf> Anyway, I'm heading off
<gwolf> nice being here, hope to see you in our little channel!
<duaneb> I missed the chat about Packaging Perl Modules last night, is there a bot keeping a chat history?
<pleia2> duaneb: the logs are here: https://wiki.ubuntu.com/Packaging/Training/Logs/2009-07-23
<duaneb> thank you, very cool
<pleia2> welcome :)
<wizztjh> date -u
#ubuntu-classroom 2009-07-24
<bs4> a
<bs4> just for test
#ubuntu-classroom 2009-07-26
<Keiffer> Hello. Here you can take ubuntu classes?
<apoleo12> ok, what is this channel about? ;)
<joaopinto> for learning  about different subjects, check the topic for the next event
<Keiffer> Hi. How can I tar my whole system, but not the NTFS partition where I will save the tar file?
<nhandler> Keiffer: Try #ubuntu for support
#ubuntu-classroom 2010-07-26
<delcoyote> hi
#ubuntu-classroom 2010-07-27
<rouis> hi
#ubuntu-classroom 2010-07-28
<JoeMaverickSett> there is a "Installing LAMP server" session here, right?
<Ddorda> JoeMaverickSett: it's in few hours
<JoeMaverickSett> Ddorda, okie thanks. i'm gona nap for awhile. coz this part of world will see it at 2am in the morning. =D
<Ddorda> JoeMaverickSett: good night :)
<JoeMaverickSett> Ddorda, good night for now! i'll be back though! =D
<phillw> hi, just trying to tame the bot to announce the source for the slides for this presentation
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Current Session: Installing a LAMP server - Instructor: phillw - Slides: http://people.ubuntu.com/~nhandler/slides/misc/InstallingALAMPServer.pdf
<ClassBot> Slides for Installing a LAMP server: http://people.ubuntu.com/~nhandler/slides/misc/InstallingALAMPServer.pdf
<phillw> Hi everyone, firstly my deepest apology for getting the time wrong. This class is for installing LAMP onto a desktop system, I'll give everyone a moment to get the slides
<phillw> I will be watching #ubuntu-classroom-chat, so if there are any questions, please feel free to ask.
<phillw> [slide 1] As it says, what we are going to cover
<phillw> [slide 2] The server manual is an excellent resource, the link for it is at the end of this presentation
<phillw> [slide 3] Doing a search for installing LAMP onto ubuntu will provide many, many different methods for achieving this. This class has come about as a result of people having problems
<phillw> issuing the command from a terminal will list all the various things that tasksel can do, easily, for you.
<phillw> [slide 4] Navigating is quite easy
<phillw> [slide 5] The use of the space bar and the tab key are all important with tasksel
<ClassBot> zkriesse asked: Does tasksel come pre-installed?
<phillw> tasksel does come pre-installed
<phillw> [slide 6] Just do as you are told, it is very important not to lose your MySQL master password !!!
<phillw> The installation takes about 5 minutes, dependent on the speed of your internet and computer
<phillw> [slide 7] As we're using a desktop system, installing phpmyadmin is easiest done using Synaptic Package Manager. I've seen occurrences where using apt-get has caused problems; so I'd recommend you use Synaptic.
<phillw> [slide 8] The clicking on details is important, otherwise you will not see the questions being asked
<phillw> Automatically closing after the changes have been applied is recommended, I haven't done so as I was taking screen shots
<phillw> [slide 9] Again, you are accepting the defaults.
<phillw> [slide 10] It makes sense to me to keep the same password for phpmyadmin as for the MySQL system, you can choose a different one if you wish.
<ClassBot> zkriesse asked: What all is the phpmyadmin for/it's purpose?
<phillw> phpmyadmin gives a nice GUI method of creating / editing MySQL databases, tables, users and data.
<zkriesse> Ok
<phillw> [slide 11] the login screen for phpmyadmin, the user name created will be root (you can create new ones later)
<phillw> [slide 12] As we only need to confirm that phpmyadmin can see the default databases (on the left), we can now exit it.
<phillw> [slide 13] php.ini stores all the configuration that PHP will read when apache2 starts up, I'm just going to cover the 10.04 one.
<phillw> the file itself is well commented.
<dvinchi> what's up
<dvinchi> what's today's class about?
<pleia2> dvinchi: it's going on right now, please join #ubuntu-classroom-chat to talk :)
<phillw> [Slide 14] As the default installation is for production use, to be able to see any errors and warnings you need to switch to the development set of rules and restart the apache2 server.
<phillw> [Slide 15] My old bug-bear, magic quotes. I'm glad to say that they are now turned off by default in 10.04. They are going to be gone completely from php at some point in the future, and their use is not recommended.
<phillw> [slide 16] prior to 10.04, they were on by default. I'd strongly recommend turning them off, as they are going to vanish from php at some point in the future.
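The development-style settings discussed in the last few slides boil down to a handful of php.ini directives. The directive names below are real PHP directives; the values are the usual development choices, written to a scratch file here rather than the live /etc/php5/apache2/php.ini:

```shell
# Illustrative php.ini fragment for development: show all errors and
# warnings, and keep magic quotes off (they are deprecated and slated
# for removal from PHP).
cat > php-dev.ini <<'EOF'
display_errors = On
error_reporting = E_ALL
magic_quotes_gpc = Off
EOF
cat php-dev.ini
# After editing the real php.ini, restart Apache so it takes effect:
# sudo /etc/init.d/apache2 restart
```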
<phillw> [slide 17] This slide contains various links for further reading and to read up on how the LAMP server works (if you are curious) and also introductions to using phpmyadmin and programming with php, writing web-sites etc.
<phillw> That concludes the presentation. If you have any questions, please post them in #ubuntu-classroom-chat with the prefix QUESTION:
<ClassBot> OolonColluphid asked: What is a good resource for post install setup and configuration?
<phillw> I'd suggest heading over to the server manual in the first instance; the two php.ini versions provided by 10.04 are excellent for switching between production and development
<ClassBot> OolonColluphid asked: like how to set up user public_html directories, where to put images, cgi scripts etc.
<phillw> by default the apache server will point to /var/www; you can make subdirectories in there.
<ClassBot> There are 10 minutes remaining in the current session.
<phillw> If you're not too keen on editing the configuration files to change directories, there is a GUI programme called webmin which can do those things for you. http://www.webmin.com/intro.html
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> OolonColluphid asked: is there a good resource for understanding the Apache config files?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<phillw> for further details on configuring up apache2, head over to http://httpd.apache.org/docs/2.0/howto/
<phillw> webmin is a GUI method of configuring server tasks (adding additional web sites to your lamp system, reading mail and error logs etc.) http://www.webmin.com/intro.html
<phillw> it can be got for ubuntu via http://www.webmin.com/deb.html
<JoeMaverickSett> oh! i missed the class, is there a log that i can look at?
<pleia2> JoeMaverickSett: yep, http://irclogs.ubuntu.com/2010/07/28/%23ubuntu-classroom.html
<JoeMaverickSett> pleia2, thanks.
<kosaidopo_> hello guys
<kosaidopo_> where can i find the schedule fo topics tnx
<nhandler> kosaidopo_: http://is.gd/8rtIi (It is in the /topic)
<kosaidopo_> nhandler: yeh im there too bad for me i missed all the classes
<kosaidopo_> : (
<nhandler> kosaidopo_: More will be coming up. I would suggest subscribing to that calendar.
<kosaidopo_> nhandler: how can i subscribe ?
<kosaidopo_> by clickin that mascot  google
<nhandler> kosaidopo_: If you use Google Calendar, you can hit the button in the bottom right hand corner of that page. Otherwise, you will need to figure out if your calendar application can subscribe to online ical files
<kosaidopo_> nhandler: yeh i got a google account
<kosaidopo_> but i'm not very familiar with it
<kosaidopo_> dude, when the event is coming up soon i'll get a mail, right?
<nhandler> kosaidopo_: If you setup Google Calendar to send you an email reminder, yes.
<kosaidopo_> oh i sud set it up then
<kosaidopo_> ok tnx ill try to find out how
<kosaidopo_> nhandler: what abt those old classes, i cant find em on the web or sumthin?
<nhandler> kosaidopo_: http://wiki.ubuntu.com/Classroom has links
<kosaidopo_> nhandler: ok ttnx a lot
<kosaidopo_> no i mean get those classes i missed out
<kosaidopo_> u got me
<kosaidopo_> yeh i saw grr@me sorry
<kosaidopo_> i need to clean my specs : D
#ubuntu-classroom 2010-07-29
<kayer21> ola
<kayer21> hello
<realopty> mornin.
#ubuntu-classroom 2010-07-31
<abhijit> when is the next user days and dev days?
#ubuntu-classroom 2011-07-25
<coolmariorocks> hello
<coolmariorocks> i have a question
<coolmariorocks> if i may ask in here
<pleia2> coolmariorocks: you probably want to ask in #ubuntu (this channel is for classes)
<coolmariorocks> ok thanks pleia2
<chinnappan> HI
<chinnappan> evolution + exchange 2010 is not showing folder ? please help me ?
<chinnappan> evolution + exchange 2010 is not showing folder ? please help me ?
<chinnappan> do you have any documentation for file server in linux?
<showkat> when will start the cloud session
<rwh> ?
<rwh> help
<rwh> cloud session starts at 16:00 UTC
<Wordpad2> #ubuntu-classroom-cha
<Wordpad2> #ubuntu-classroom-chat
<Wordpad2> Sorry...
<HugoKuo> test
<Hugo> test
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Getting started with Ensemble - Instructors: kim0
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
<kim0> Hello, Good morning, good evening and good afternoon
<kim0> Welcome to Ubuntu Cloud Days!
<kim0> This is the second UCD ever
<kim0> This event will be run for two days (today and tomorrow)
<kim0> You can find more information regarding the event on https://wiki.ubuntu.com/UbuntuCloudDays/
<kim0> It would be great to spread the news and let your friends join in
<kim0> This is a great chance to get introduced to new ubuntu related server and cloud technologies
<kim0> as well as a chance to connect to developers and active community members
<kim0> Alright ..
<kim0> Let's get started then
<kim0> At any time you can "ask a question"
<kim0> this is done by prepending your question with QUESTION: .. example ..  "QUESTION: what is xxx?"
<kim0> a bot will pick up the question, and the instructor will answer it at a suitable time
<kim0> So .. this session is for Ensemble
<kim0> Take a moment to check out:  https://ensemble.ubuntu.com/
<kim0> Ensemble is a cloud orchestration framework
<kim0> Since cloud layers an API over compute resources
<kim0> many compute resources such as servers, are more and more being regarded as disposable
<kim0> people fire up servers, use them for an hour and destroy them
<kim0> this is valid for both public and private clouds
<kim0> as such, it would be pretty good to think at a higher level than a "server"
<kim0> namely to think at the "service" level
<kim0> this is one of the main concepts of Ensemble
<kim0> Let's quickly discuss a few key concepts about Ensemble
<kim0> 1- Ensemble focuses on the higher level concept of "Services" rather than "servers"
<kim0> Examples of a "service" would be
<kim0> - MySQL
<kim0> - Memcached cluster
<kim0> - Munin: as a monitoring service
<kim0> - Bacula: as a backup service
<kim0> and so on
<kim0> 2- The second important concept is that Ensemble completely "encapsulates" those services
<kim0> that is, if you have no idea how to get munin running
<kim0> if you ask ensemble to deploy it, you would have it running in a minute or two
<kim0> and you can connect it (read: relate it) to other services
<kim0> and it would start graphing performance metrics from all around your infrastructure
<kim0> you do not need to know how to control munin, it is encapsulated
<kim0> 3- The third important concept, is that with Ensemble services are "composable"
<kim0> that is, services have well defined interfaces
<kim0> such that you can connect/relate many services together .. to form a large infrastructure
<kim0> you can replace infrastructure components with others .. such as replace mysql with pgsql if you so wish
<kim0> and if both of them implement the same interface!
<kim0> so ..
<kim0> Ensemble enables layering a high level API over "services" and allows composing sophisticated infrastructures from that .. easily, consistently and without worrying about any details!
<kim0> If you have any questions
<kim0> now would be a good time to ask
<kim0> remember to prepend any question with "QUESTION:"
<kim0> I will now prepare the demo environment, that should clear up things a bit
<kim0> For anyone wanting to follow along with the demo
<kim0> Please ssh as user guest to the following machine
<kim0> ssh guest@ec2-50-19-23-213.compute-1.amazonaws.com
<kim0> password: guest
<kim0> you will get a read-only view to a shared screen session
<kim0> I will start the demo
<kim0> I will be pasting commands and output text in this session as well, for archival purposes
<kim0> The very first step we do is:
<kim0> $ ensemble bootstrap
<kim0> 2011-07-25 16:17:22,569 INFO Bootstrapping environment 'sample' (type: ec2)...
<kim0> 2011-07-25 16:17:23,637 INFO 'bootstrap' command finished successfully
<kim0> What "ensemble bootstrap" does is start a "management node", if you will
<kim0> that is used to control our cloud deployment
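For context, the bootstrap command reads its target from an environments.yaml configuration file. A rough, hypothetical sketch of what the 'sample' (type: ec2) environment seen in the output above might look like; every field name here other than "type" is an assumption from memory, not the authoritative Ensemble schema:

```yaml
# hypothetical ~/.ensemble/environments.yaml sketch -- field names other
# than "type" are assumptions, not the authoritative schema
environments:
  sample:
    type: ec2
    control-bucket: ensemble-demo-bucket   # assumed field
    admin-secret: change-me                # assumed field
```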
<kim0> let's check out the files available in the current directory
<kim0> $ ls
<kim0> byobu-classroom  drupal  mysql
<kim0> byobu-classroom: setup scripts for the shared screen session you are seeing .. This is not related to Ensemble
<kim0> drupal: Ensemble drupal formula
<kim0> mysql: Ensemble mysql formula
<kim0> What is a formula you ask ?
<kim0> A formula holds instructions for Ensemble on how to install and manage a service
<kim0> that is .. the drupal formula, tells Ensemble how to install drupal, how to connect it to the database, how to create DB tables, how to configure a drupal website behind a load balancer ...etc
<kim0> It is the experience of devops .. distilled .. into a "formula" that everyone can use
<kim0> This is one of the great reasons "why use Ensemble" ..
<kim0> Your deployment not only becomes FAST and repeatable, but you also get the experience of the Ensemble community
<kim0> all working for you .. without you even knowing about it (if you so choose)
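As a loudly hypothetical illustration of the "formula" idea: a formula is roughly a directory of metadata plus executable hooks. The file names and metadata fields below are assumptions for illustration only; consult the real drupal and mysql formulas for the authoritative layout.

```shell
# Hypothetical formula skeleton -- names and fields are illustrative assumptions.
mkdir -p drupal/hooks

# Metadata describing the service and the interfaces it relates over.
cat > drupal/metadata.yaml <<'EOF'
name: drupal
summary: illustrative drupal formula sketch
requires:
  db:
    interface: mysql
EOF

# Hooks are plain executables run at lifecycle events (any language works).
cat > drupal/hooks/install <<'EOF'
#!/bin/sh
echo "install drupal here"
EOF
chmod +x drupal/hooks/install

find drupal -type f
```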
<kim0> alright ..
<kim0> Let's deploy MySQL
<kim0> jump to the screen session
<kim0> The command to deploy a production mysql database is
<kim0> $ ensemble deploy --repository=. mysql mydb
<kim0> Let's break down this command and understand what it does
<kim0> ensemble deploy → Asking Ensemble to deploy a service
<kim0> --repository=. → Telling Ensemble that the formulas are available in the current directory
<kim0> mysql mydb → Deploy the formula "mysql" as a service called "mydb"
<kim0> let's quickly paste the output of the command
<kim0> $ ensemble deploy --repository=. mysql mydb
<kim0> 2011-07-25 16:22:51,307 INFO Connecting to environment.
<kim0> 2011-07-25 16:22:54,857 INFO Formula deployed as service: 'mydb'
<kim0> 2011-07-25 16:22:54,859 INFO 'deploy' command finished successfully
<kim0> So .. deploy .. finished successfully
<kim0> similarly .. let's deploy the "drupal" formula .. as "mywebsite"
<kim0> $ ensemble deploy --repository=. drupal mywebsite
<kim0> 2011-07-25 16:23:04,117 INFO Connecting to environment.
<kim0> 2011-07-25 16:23:05,167 INFO Formula deployed as service: 'mywebsite'
<kim0> 2011-07-25 16:23:05,168 INFO 'deploy' command finished successfully
<kim0> This should be very familiar
<kim0> Let us check the status of our deployment
<kim0> We use the "ensemble status" command for that
<kim0> Here is the command and its output
<kim0> $ ensemble status
<kim0> 2011-07-25 16:27:37,395 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
<kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
<kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
<kim0> services:
<kim0>   mydb:
<kim0>     formula: local:mysql-98
<kim0>     relations: {}
<kim0>     units:
<kim0>       mydb/0:
<kim0>         machine: 1
<kim0>         relations: {}
<kim0>         state: started
<kim0>   mywebsite:
<kim0>     formula: local:drupal-9
<kim0>     relations: {}
<kim0>     units:
<kim0>       mywebsite/0:
<kim0>         machine: 2
<kim0>         relations: {}
<kim0>         state: started
<kim0> 2011-07-25 16:27:38,635 INFO 'status' command finished successfully
<kim0> Let's try to understand this output
<kim0> In the "machines" section
<kim0> We have 3 machines deployed
<kim0> 0 1 and 2
<kim0> 0 is always the very first "bootstrap" node
<kim0> 1 and 2 are the machines running mysql and drupal ..
<kim0> Looking at the "services" section
<kim0> we understand that we just deployed the service "mydb" .. remember this is the name we chose
<kim0> the mydb service is running on machine "1"
<kim0> and it is "started"
<kim0> that is .. mysql has been installed and it is "ready" to be used
<kim0> the same for drupal .. it is running on machine 2 and is started as well
<kim0> It is interesting to note
<kim0> that "relations: {}"
<kim0> is empty
<kim0> what this really means is
<kim0> that the services deployed "mysql" and "drupal":
<kim0> have not been "coupled" yet ..
<kim0> i.e. mysql does not have the drupal database created yet ..etc
<kim0> the magic of Ensemble, and the very cool part, is when you start connecting infrastructure pieces together
<kim0> watching how all the pieces click together and a bigger system is created
<kim0> let's connect those two components
<kim0> The command to connect them (read: relate them) is
<kim0> $ ensemble add-relation mydb:db mywebsite
<kim0> We are adding a relation between mydb (our instance of mysql) and mywebsite (an instance of drupal)
<kim0> It is extremely interesting what is happening at this instant
<kim0> once this relation is established
<kim0> both services start communicating and collaborating towards creating that bigger infrastructure
<kim0> so .. mysql creates a database for drupal
<kim0> it "sends over" the database details "username, password, DB name ...etc" to the machine running drupal
<kim0> drupal gets this configuration information
<kim0> rewrites its configuration files to use this DB
<kim0> creates its tables and configures the DB
<kim0> the services have now been coupled!
<kim0> Let's check the status
<kim0> $ ensemble status
<kim0> 2011-07-25 16:36:08,453 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
<kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
<kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
<kim0> services:
<kim0>   mydb:
<kim0>     formula: local:mysql-98
<kim0>     relations: {db: mywebsite}
<kim0>     units:
<kim0>       mydb/0:
<kim0>         machine: 1
<kim0>         relations:
<kim0>           db: {state: up}
<kim0>         state: started
<kim0>   mywebsite:
<kim0>     formula: local:drupal-9
<kim0>     relations: {db: mydb}
<kim0>     units:
<kim0>       mywebsite/0:
<kim0>         machine: 2
<kim0>         relations:
<kim0>           db: {state: up}
<kim0>         state: started
<kim0> 2011-07-25 16:36:09,646 INFO 'status' command finished successfully
<kim0> Notice how the "relations:" field now relates each component to the other
<kim0> of course this could be a much larger system
<kim0> i.e. there could be a load balancer front end service, a backup service, a monitoring service ...etc
<kim0> But fundamentally .. it's the same
<kim0> You deploy components .. connect them together and you're good to go!
<kim0> So .. our drupal instance is ready .. why not pay it a visit
<kim0> Since drupal is running on machine 2 .. from the machines section .. this is the machine we need: ec2-50-16-71-235.compute-1.amazonaws.com
<kim0> Go ahead and visit
<kim0> http://ec2-50-16-71-235.compute-1.amazonaws.com/ensemble/
<kim0> Indeed drupal is there waiting for us! (woohoo) that was easy
<kim0> Note how I might have deployed drupal without really knowing anything about how it needs to be deployed
<kim0> and yet .. the deployment is done according to best practices of the Ensemble formula writers community
<kim0> Awesome .. let's create a tiny first post
<kim0> Alright .. we now have some content
<kim0> Just refresh http://ec2-50-16-71-235.compute-1.amazonaws.com/ensemble/
<kim0> Now .. here comes another (OMG this is awesome) moment
<kim0> What if your blog (or whatever service) suddenly becomes popular
<kim0> you're slashdotted
<kim0> You want to scale out
<kim0> surely this has to be complex, right?
<kim0> let's check out how we can get this done
<kim0> This is what we need
<kim0> $ ensemble add-unit mywebsite
<kim0> Yes that's it .. we have scaled out
<kim0> let's quickly understand this command
<kim0> add-unit : Adds a service unit to "mywebsite"
<kim0> remember, mywebsite is the name of our instance of the drupal formula
<kim0> So
<kim0> A new ec2 instance is created
<kim0> It is important to note .. that Ensemble uses plain "vanilla" ubuntu images
<kim0> everything is installed and configured on the fly
<kim0> the new node is configured as type "mywebsite"
<kim0> what is really awesome is
<kim0> since this new node, is of type mywebsite .. it already "knows" how to hook up to the surrounding services!
<kim0> In this case .. only mysql .. but could be much more sophisticated
<kim0> This is the DRY: Don't Repeat Yourself .. concept
<kim0> let's again quickly check out status
<kim0> $ ensemble status
<kim0> 2011-07-25 16:46:17,368 INFO Connecting to environment.
<kim0> machines:
<kim0>   0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
<kim0>   1: {dns-name: ec2-184-72-129-61.compute-1.amazonaws.com, instance-id: i-35de7254}
<kim0>   2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
<kim0>   3: {dns-name: ec2-50-16-175-35.compute-1.amazonaws.com, instance-id: i-73a50912}
<kim0> services:
<kim0>   mydb:
<kim0>     formula: local:mysql-98
<kim0>     relations: {db: mywebsite}
<kim0>     units:
<kim0>       mydb/0:
<kim0>         machine: 1
<kim0>         relations:
<kim0>           db: {state: up}
<kim0>         state: started
<kim0>   mywebsite:
<kim0>     formula: local:drupal-9
<kim0>     relations: {db: mydb}
<kim0>     units:
<kim0>       mywebsite/0:
<kim0>         machine: 2
<kim0>         relations:
<kim0>           db: {state: up}
<kim0>         state: started
<kim0>       mywebsite/1:
<kim0>         machine: 3
<kim0>         relations:
<kim0>           db: {state: up}
<kim0>         state: started
<kim0> 2011-07-25 16:46:18,907 INFO 'status' command finished successfully
<kim0> "mywebsite" now has two service unit instances mywebsite/0 and mywebsite/1
<kim0> the new node is running on machine "3" which is ec2-50-16-175-35.compute-1.amazonaws.com
<kim0> which means .. visiting http://ec2-50-16-175-35.compute-1.amazonaws.com/ensemble/ .. You should see the second drupal instance
<kim0> of course if you'd like to further scale out .. you just keep adding more units .. that's all it takes
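The unit → machine → dns-name lookup we just did by hand (mywebsite/0 is on machine 2, so visit that machine's hostname) can be sketched against a saved status dump. This is only an illustration over a hand-copied excerpt of the output above, not an Ensemble feature:

```shell
# Save an excerpt of "ensemble status" output to a file for the lookup.
cat > status.txt <<'EOF'
machines:
  0: {dns-name: ec2-50-17-158-183.compute-1.amazonaws.com, instance-id: i-8dc16dec}
  2: {dns-name: ec2-50-16-71-235.compute-1.amazonaws.com, instance-id: i-15de7274}
services:
  mywebsite:
    units:
      mywebsite/0:
        machine: 2
EOF

# Which machine number runs mywebsite/0?
machine=$(awk '/mywebsite\/0:/{f=1} f && /machine:/{print $2; exit}' status.txt)

# What is that machine's dns-name?
host=$(awk -v m="$machine" '$1 == m":" {sub(/.*dns-name: /,""); sub(/,.*/,""); print; exit}' status.txt)

echo "http://$host/"
```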
<kim0> The mysql formula supports adding "slave" nodes
<kim0> so you can scale your DB via adding more slave nodes
<ClassBot> There are 10 minutes remaining in the current session.
<kim0> alright .. time flies when you're having fun
<kim0> What is really cool is that formulas can be written in ANY language
<kim0> so bash, php, python .. whatever you fancy!
<kim0> I will open the drupal formula in vim, in the screen session
<kim0> Let me take any questions quickly
<ClassBot> rwh asked: is there already a formula repo, or is this a service that's planned for the future?
<kim0> great question
<kim0> right now .. You can see formulas over at https://code.launchpad.net/principia
<kim0> however a more integrated version is coming very soon ..
<kim0> where you'll be able to search and install formulas just like you do with PPAs
<ClassBot> TeTeT asked: how much effort is it to write these relations? Isn't this more complicated than configuring the services themselves, e.g. how many units do I need to have so the initial investment in Ensemble pays off
<kim0> Great question as well ..
<kim0> It is pretty simple to write those relations
<kim0> I just opened the db-relation-changed script for my drupal formula
<kim0> as you can see it's a pretty simple bash script
<kim0> that gets the database configuration details from ensemble .. then simply uses "sed" to render a template configuration file
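A minimal sketch of what such a hook can look like, following the sed-on-a-template approach just described. The variable values, template, and file names are made up for illustration; a real hook would receive these settings from Ensemble rather than hard-coding them:

```shell
#!/bin/sh
# Hypothetical db-relation-changed hook sketch. In a real formula these values
# would come from Ensemble's relation settings, not hard-coded variables.
DB_HOST=10.0.0.5
DB_USER=drupal
DB_PASS=secret
DB_NAME=drupaldb

# A stand-in settings template (the real drupal formula ships its own).
cat > settings.tmpl <<'EOF'
$db_url = 'mysql://@USER@:@PASS@@@HOST@/@NAME@';
EOF

# Render the template with sed, as the formula does.
sed -e "s/@HOST@/$DB_HOST/" \
    -e "s/@USER@/$DB_USER/" \
    -e "s/@PASS@/$DB_PASS/" \
    -e "s/@NAME@/$DB_NAME/" settings.tmpl > settings.php

cat settings.php
```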
<kim0> I really like the fact that I do not have to wrestle with learning a new DSL configuration language
<ClassBot> There are 5 minutes remaining in the current session.
<kim0> I'll use the remaining minutes to let you know that you can find the Ensemble community at
<kim0> #ubuntu-ensemble
<kim0> all developers, formula writers and community members hang out there
<kim0> our goal is to cover all of free software with Ensemble formulas
<kim0> such that you're able to ensemble deploy whatever you fancy .. just like you apt-get install whatever you want today
<kim0> Please join in .. and start writing and contributing formulas
<kim0> it's very easy .. there is no special language to learn, and the community is extremely helpful
<kim0> you can ask me (or others ) any questions in #ubuntu-ensemble (or #ubuntu-cloud) at any time
<kim0> I hope this was useful and fun .. see you in a next session
<kim0> Next session will be for cloud-init .. an Ubuntu originated cloud technology
<kim0> the two sessions afterwards will be for Orchestra and its integration with Ensemble .. both great technologies being developed this cycle
<kim0> and the final session will be for Eucalyptus v3 .. I hope you will enjoy the first day of UCD
<kim0> Good bye
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Introduction to Cloudinit - Instructors: koolhead17
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
<koolhead17> hello everyone
<koolhead17> cloud-init is the Ubuntu cloud technology that enables a cloud instance to bootstrap and customize itself
<koolhead17> we can have many operations performed on our instance as it boots up
<koolhead17> it's like adding an extra layer with more content
<koolhead17> let's talk about an example
<koolhead17> say you want an instance to have apache installed automatically every time it boots up
<koolhead17> you can simply use
<koolhead17> packages:
<koolhead17>  - apache2
<koolhead17> and if you are using the amazon ec2 web interface you can pass this configuration when launching the instance.
<koolhead17> cloud-init works for openstack as well as eucalyptus
<koolhead17> i will try to show you a demo of this at the end, if possible
<koolhead17> let's say you want your instance to come up with a specific timezone every time it boots
<koolhead17> you can simply define that using
<koolhead17> timezone: US/Eastern
<koolhead17> as a parameter in the file which you will be passing
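Putting the directives mentioned so far into a single user-data file might look like the sketch below; the keys follow the cloud-config examples shipped with cloud-init, but treat this as an illustration rather than a complete configuration:

```yaml
#cloud-config
# illustrative combined user-data sketch
apt_update: true
apt_upgrade: true
timezone: US/Eastern
packages:
 - apache2
runcmd:
 - [ ls, -l, / ]
```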
<koolhead17> now lets move to the example file we have http://bazaar.launchpad.net/~cloud-init-dev/cloud-init/trunk/view/head:/doc/examples/cloud-config.txt
<koolhead17> line 7: apt_update: false
<koolhead17> which means that when the instance launches, no automatic package list update will happen
<koolhead17> you can change it to apt_update: true and pass it while booting the instance to enable it
<koolhead17> similarly to enable/disable we have "apt_upgrade"
<koolhead17> in the next line we can see it mentions adding a repository. you can add your custom repository as well.
<koolhead17> doing this will save some bandwidth in a data-centre-like environment :)
<koolhead17> i will skip some of the examples from there :D
<koolhead17> you can even run commands
<koolhead17> line 205
<koolhead17> bootcmd:
<koolhead17> - echo 192.168.1.130 us.archive.ubuntu.com > /etc/hosts
<koolhead17> you can run commands like :
<koolhead17> runcmd: - [ ls, -l, / ]
<koolhead17> one of the features i find most exciting (and am still fighting with) is debconf_selections: |
<koolhead17> byobu_by_default: system
<koolhead17> enables byobu for all users by default once they log in
<koolhead17> i also like the availability of cloud-init on all the cloud environments i am working with (openstack, eucalyptus, ec2)
<koolhead17> you can find more info and detailed instruction at kim0 blog http://foss-boss.blogspot.com/search/label/cloud-init
<koolhead17> cloud-init comes pre-installed if you are using ec2
<koolhead17> in case of openstack you need to install the package at time of preparing your cloud image
<koolhead17> you can use euca tools in case of eucalyptus and openstack
<koolhead17> on ec2 you can use web interface as well as via command line
<koolhead17> so lets recap what all we have discussed so far
<koolhead17> Some of the things cloud-init configures are:
<koolhead17> setting hostname
<koolhead17> generate ssh private keys
<koolhead17> *which i forgot to cover earlier :(
<koolhead17> adding ssh keys to user's .ssh/authorized_keys so they can log in
<koolhead17> setting up ephemeral mount points
<koolhead17> to execute a command
<koolhead17> runcmd:
<koolhead17> automatic package update and upgrade
<koolhead17> timezone setup
<koolhead17> package installation
<koolhead17> like apache2
<koolhead17> you can also see https://help.ubuntu.com/community/CloudInit
<koolhead17> you people can take a break now
<koolhead17> the next session is about Orchestra
<koolhead17> and it will be presented by 2 members from the server engineering team
<koolhead17> thanks
<koolhead17> it would have been more interesting with the demo, which i am unable to do :(
<koolhead17>  /msg classbot !q
<koolhead17> !y
<ClassBot> Guest32626 asked: is cloud-init available for other linux distros?
<koolhead17> Guest32626: it is available for Amazon's linux.
<koolhead17> which is similar to fedora ..
<koolhead17> it has been adopted by Amazon
<koolhead17> and can easily be ported to other linux distros
<ClassBot> Guest86346 asked: Is that possible to configure route table after retrieving metadata with cloud-init ?
<koolhead17> yes it's very much possible with a script :)
<koolhead17> using runcmd
<koolhead17> one more important thing
<koolhead17> we all are available at #ubuntu-cloud , our official cloud support channel for ubuntu. join us and hangout with ys
<koolhead17> *us
<koolhead17> and 1 more thing the mega session is coming nest
<koolhead17> *next
<koolhead17> about Orchestra and Ensemble .. Two pillar technologies for Ubuntu server in 11.10
<koolhead17> :)
<koolhead17> Good bye .. and that's all :)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Orchestra and Ensemble (part1) - Instructors: smoser
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
<smoser> OK, lets get started
<smoser> Hi, I'm Scott Moser, an Ubuntu Developer working on the Ubuntu Server Team.
<smoser> If you're not familiar with the way classroom works, please see https://wiki.ubuntu.com/Classroom/ClassBot
<smoser> hint: join #ubuntu-classroom-chat
<smoser> Much of the Server Team's focus this cycle has been on deployment.  That deployment really falls into 2 different categories
<smoser>  * ensemble: deploying and managing services on top of existing Ubuntu installs (or new cloud instances)
<smoser>  * orchestra: deploying Ubuntu onto "bare metal".
<smoser> A few weeks ago, it was decided that we wanted to make Orchestra a "provider" for Ensemble.
<smoser> What this means is that we wanted to allow Ensemble to deploy and manage "bare metal" machines the same way that it originally knew how to manage EC2 instances.  Andres [RoAkSoAx] will talk more about that in the next session.
<smoser> Like anybody else, we don't have enough hardware, and even less hardware with remotely controllable power switches and fast networks.
<smoser> In order to get ourselves an environment in which we could develop the "orchestra provider" for ensemble, I put together "cobbler-devenv".
<smoser> That can be found at http://bazaar.launchpad.net/~smoser/+junk/cobbler-devenv/files/head:/cobbler-server ,
<smoser> or via 'bzr branch lp:~smoser/+junk/cobbler-devenv'
<smoser> cobbler-devenv allows you to very easily set up a cobbler development environment using libvirt.  That environment:
<smoser>  * includes an Orchestra server and 3 "nodes"
<smoser>  * includes a dhcp server and dns server
<smoser>  * will not interfere with physical networks *or* other libvirt networks.
<smoser> The code there is currently focused on deploying cobbler and an ensemble provisioning environment, but not much of it is really specific to that purpose.
<smoser> If you've not already done so, go ahead and open the cobbler-server url above or branch it.  The HOWTO explains how to set all this up.  I'll largely walk through that here with more explanation of what is going on than is in that file.
<smoser> anyone have any questions so far?
<smoser> ok then.
<smoser> == Some configuration first ==
<smoser> as prereqs, you'll need to
<smoser> $ apt-get install genisoimage libvirt-bin qemu-kvm
<smoser> In order to interact with libvirt, you have to be in the libvirtd group, and in order to use kvm acceleration you have to be in the kvm group.  So:
<smoser>  $ sudo adduser $USER kvm
<smoser>  $ sudo adduser $USER libvirtd
<smoser> $ sudo apt-get install python-libvirt
<smoser> Also, note that libvirt does not work when images are in a private home directory.  The images must be viewable to the libvirt user.
<smoser> this once cost me a fair amount of time, trying to debug why my VMs were getting "permission denied" when the images were clearly readable (but the path to them was not)
<smoser> the first step in the HOWTO document is to build a cobbler server.  To do that, we utilize build-image like:
<smoser> $ ./build-image -vv --preseed preseed.cfg oneiric amd64 8G
<smoser> # oops, we have somewhat changed sections of my talk here, we're now in
<smoser> == building the Orchestra server VM ==
<smoser> please feel free to pipe-in with questions if you have them
<smoser> Note, the above command won't actually work right now. :-(
<smoser> bug 815962 means that that doesn't currently work, and wont until the next upload of debian-installer.
<smoser> it should be fixed in 24 hours or so, though
<smoser> That command will take quite a while to run, probably heavily based on network speed as it is doing a network install.  Locally, with my local mirror, a natty build just took 12 minutes.
<smoser> You can have it use a mirror by editing preseed.cfg.
<smoser> It wraps all the following:
<smoser>  * grab the current mini-iso for oneiric
<smoser>  * extract the kernel and ramdisk using isoinfo
<smoser>  * repack the ramdisk so it has 'preseed.cfg' inside it, and set up the 'late_command' in installer to do some custom configuration for us (see 'late_command.sh').
<smoser>  * after install is done, boot the system again to do some final config that 'late_command' laid down.
<smoser> It does this via kvm and the kvm user net, so you can build this entirely without libvirt or root access.
<smoser> I'm particularly proud of not needing root for this.
<smoser> or any network access other than to the archive.
<smoser> This basic setup could be used for automated building of virtual machines (as it is here)
<smoser> The result is that you now have a disk image that is ready to boot.  We've built the Orchestra virtual server that will be in charge of provisioning the nodes.
<smoser> $ ls -lh natty-amd64.img
<smoser> -rw-r--r-- 1 libvirt-qemu kvm 1.3G 2011-07-25 12:39 natty-amd64.img
<smoser> Now we just need to set up a libvirt network, and put that image on it.
 * smoser pauses a bit for questions
<smoser> sees that there are some and is looking
<ClassBot> TeTeT asked: do we setup a virtual environment to boot bare metal servers and install them?
<smoser> TeTeT, sorry to be unclear
<smoser> the goal of cobbler-devenv is to have a purely virtual environment that models a typical hardware setup
<smoser> we'll end up with a cobbler server vm, and 3 "node" vms attached to a network where the cobbler server will be able to turn on the nodes and control their PXE boot via tftp
<ClassBot> alexm asked: is it necessary to have 8G for the server? i _just_ have 8G in total in my desktop
<smoser> alexm, I used 8G, though it is a bit large.  as you can see above, the total space *used* will be much less.
<smoser> qcow is a sparse format.  I would guess you can get by with 4G, but with all the installed components on the server, much less is going to be really tight.
<ClassBot> m_3 asked: so './build-image -vv --preseed preseed.cfg natty amd64 8G' should work, but oneiric won't?
<smoser> build-image with 'natty' "should work"
<smoser> i verified the install went fine, but ran into bug https://launchpad.net/bugs/804267
<smoser> that meant I have not 100% tested that path today
<ClassBot> TeTeT asked: so if it's for a virtual environment, this means a non cloud environment, as otherwise installing OS is a non-issue, at least with euca and openstack?
<smoser> TeTeT, right. it is for a virtual environment, and "non-cloud"
<smoser> the initial reason I developed this was to ease the development of the "orchestra provider" for ensemble
<smoser> through that provider, ensemble will be able to install "bare metal" systems.
<smoser> we're just creating a virtual network that is like a physical network and systems you would have access to, but it's easier to work with the virtual.
<smoser> the primary goal of "bare metal provisioning" for ensemble, is actually to provision a cloud
<ClassBot> kim0 asked: What would it take to install real physical boxes out of that dev-env
<smoser> to install real machines off of the cobbler vm, you'd have to set bridging up differently than i have it, and have your dhcp server point next-server to the cobbler system
<smoser> ok...
<smoser> moving on a bit
<smoser> == Setting up Libvirt resources ==
<smoser> please feel free to ask questions. if you say 'smoser' in #ubuntu-classroom-chat i'm more likely to see it.
<smoser> Now, back at the top level directory of cobbler-devenv we have a 'settings.cfg' file [http://bazaar.launchpad.net/~smoser/+junk/cobbler-devenv/view/head:/settings.cfg]
<smoser> The goal is that this file defines all of our network settings.  It has sections for 'network', 'systems' (static systems like the Orchestra Server) and 'nodes'.
<smoser> the only static system we have is 'cobbler', but there could be more described there.
<smoser> We create the libvirt resources by running './setup.py' (which should probably be renamed to something that does not look like it came from python-distutils)
<smoser> that script interacts with libvirt via python bindings
<smoser> $ ./setup.py libvirt-setup
<smoser> That will put some output to the screen indicating that it created a 'cobbler-devnet' network, a 'cobbler' domain, and 3 nodes named 'node01' - 'node03'.
<ClassBot> skrewler asked: Is support for Chef in the roadmap?  Or is it possible to substitute puppet for another CM tool, like cfengine or Chef?
<smoser> skrewler, well, there is no real CM tool involved here.  The initial goal was to get Ensemble up, but it would take very little changes to make the setup able to use chef, cfengine or puppet.
<smoser> Those things would primarily be configured through cobbler kickstart templates (preseed templates).
<smoser> i'm not really interested in that, though; this was really just to get a test environment up for ensemble, but it definitely could be utilized to test out other management and bootstrapping tools.
<smoser> so.... above, we created the cobbler-devnet and 3 nodes
<smoser> The libvirt xml is based on the libvirt-domain.tmpl and libvirt-network.tmpl files, which are parsed as Cheetah template files.
<smoser> The end result is that we have a 'cobbler-devnet' network at 192.168.123.1, which has statically configured dhcp entries for our cobbler server and 3 nodes, so that when they DHCP they'll get fixed IP addresses.
<smoser> the cobbler-devnet network looks something like:
<smoser> http://paste.ubuntu.com/651906/
<smoser> notice how we have MAC addresses in the network setup that will match with our mac addresses in the nodes
<smoser> now our network is setup, so lets put the cobbler server on it
<smoser> We build a qcow "delta" image off of the pristine server image we built above so we can easily start fresh.
<smoser> $ virsh -c qemu:///system net-start cobbler-devnet
<smoser> Network cobbler-devnet started
<smoser> $ qemu-img create -f qcow2 -b cobbler-server/natty-amd64.img  cobbler-disk0.img
<smoser> $ virsh -c qemu:///system start cobbler
<smoser> Domain cobbler started
<smoser> That will take some time to boot, but after a few minutes you should be able to ssh to the cobbler system using its IP address:
<smoser>  $ ssh ubuntu@192.168.123.2
<smoser> (the password is 'ubuntu', obviously you should change that)
<smoser> While you're there, you can verify that 'cobbler' works by running:
<smoser>  $ sudo cobbler list
<smoser> that should show you that there were some images imported for network install of Ubuntu.
<smoser> At this point you can also get to the web_ui of cobbler at: http://192.168.123.2/cobbler_web and poke around there.
<smoser> generally, we've got a fully functional cobbler server just waiting for something to install!
<smoser> Then, back on the host system we populate the cobbler server with the 3 nodes that we've created.
<smoser> $ ./setup.py cobbler-setup
<smoser> That uses the cobbler xmlrpc api to set up our nodes.  Now, a 'cobbler list' will show our nodes.
<smoser> It also configures those nodes to be controllable by the "virsh" power control (that is like ipmi, but for virtual machines).  We've got one more thing to do though before that can happen.
<smoser> On the host system we need to run:
<smoser>  $ socat -d -d \
<smoser>      TCP4-LISTEN:65001,bind=192.168.123.1,range=192.168.123.2/32,fork \
<smoser>      UNIX-CONNECT:/var/run/libvirt/libvirt-sock
<smoser> socat is a useful utility, and the above command tells it to listen for ip connections on port 65001 and forward those to the unix socket that libvirt listens on.
<smoser> basically this makes libvirtd listen on a tcp socket
<smoser> Before you go screaming how horrible that is (it would be)
<smoser> notice that we've limited the host IP to the IP range of the guest network, and told it to only listen on the guest's interface, so it is mildly secure.  Definitely much better than just listening on all interfaces.
<smoser> Once that is in place, you can turn the nodes on and off via the cobbler web_ui.
<smoser> Basically, at this point, we have modeled a lab with IPMI power control of node systems from the cobbler system.
<smoser> nodes can be turned on and off, and their network boot controlled via the cobbler vm system.
<smoser> I should have pointed out above, that our libvirt xml for the Node systems has them network booting.
<smoser> If you configure 'network-boot' for a node, and then start it, it should begin to install itself.
<smoser> You can try that out, and then (from the host system) watch the install with:
<smoser>  $ vncviewer $(virsh vncdisplay node01)
<smoser> It should actually walk through a fully automated install.
<smoser> questions?
<smoser> Well, that's basically all I have.
<smoser> === Summary ===
<smoser> after all of that, what we have is a well configured network with a single cobbler server that is ready to install some nodes.
<smoser> The nodes actually have functional static-dhcp addresses and can communicate with one another via hostnames (node01, cobbler, node02...)
<smoser> In the next session, Andreas will talk about how we can use ensemble to control the cobbler server and provision the nodes.
<smoser> That way, ensemble can control our bare metal servers just like it can request new EC2 nodes.
<smoser> (here, we're just pretending that those VMs are real hardware, but ensemble doesn't actually know the difference)
<smoser> so...
<smoser> kim0, you could have executed examples yesterday...
<smoser> so, yeah, i hope you can tomorrow.
<smoser> if you want to just play with cobbler some, this is a really nice way to see how it all fits together
<smoser> without having 2 or 3 spare systems sitting around.
<smoser> i know that that was a big barrier to entry for me.
<ClassBot> kim0 asked: So Ensemble would request powering on the hardware and installing it, then orchestrating it .. Is that advantageous to having all boxes installed and "waiting" for Ensemble ?
<smoser> we've not shortcut that, but you could.
<smoser> in the real world scenario, though, the provisioning of a node will occur once ensemble is done with it.
<smoser> that ensures that they're "clean".
<smoser> save some of your questions for RoAkSoAx but i'm guessing that end to end on cable modem speed you could have a cobbler vm built, and then a node deployed on it via ensemble in 3 hours or so at this point.
<ClassBot> kim0 asked: Is installing the cobbler server planned as a CD boot option
<smoser> kim0, i'm not sure how it will be exposed, but yeah, the goal is to make that *very* easy.
<smoser> alexm said: smoser: note that cache=unsafe in build-image is unsupported in maverick's qemu, i just changed it to writeback
<smoser> Thanks alexm . 'writeback' is the right value there.
<ClassBot> There are 10 minutes remaining in the current session.
<smoser> in the minutes before this session i tried to see if i could get this all to go inside an ec2 guest
<smoser> it "should work", but something was going wrong.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Orchestra and Ensemble (part2)  - Instructors: RoAkSoAx
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
<RoAkSoAx> howdy
<RoAkSoAx> alright then lets continue with the presentation
<RoAkSoAx> argh
<RoAkSoAx> sorry
<RoAkSoAx> wrong channel
<RoAkSoAx> Hi all. My name is Andres Rodriguez, and I'm an Ubuntu Developer working on the Ubuntu Server Team as well.
<RoAkSoAx> As Scott already mentioned today, we have been working on getting Ensemble to work with Orchestra. We've been using smoser's devenv to achieve this result. Today I'm going to show you how this work can be tested as a proof of concept, as this is still work in progress.
<RoAkSoAx> But first, let's learn a little bit more about the idea behind Orchestra and Ensemble integration
<RoAkSoAx> The main idea behind this was to basically use Ensemble with Orchestra/Cobbler the same way it's been used with ec2. However, on ec2 we can request instances easily and add more and more, but in Orchestra/Cobbler we
<RoAkSoAx>  can't. This is a limitation; however, the approach taken in this case was to simply pre-populate the Orchestra server with "systems" (in Cobbler terms). A system is a physical system that is somewhere in the network and that cobbler can deploy. So, we have a list of available machines ready to be deployed via PXE.
<RoAkSoAx> So we could say that we will have to do two things with ensemble 1. Bootstrap and 2. Deploy, in the same way we would do with ec2.
<RoAkSoAx> Bootstrapping is when we tell ensemble to start creating the whole environment. In this case, bootstrapping means starting a machine to be the zookeeper machine, which will interface between a client machine from where we are issuing commands, and the provider (Orchestra), to deploy machine and create the relations between them.
<RoAkSoAx> The process here was to simply select a "system" provided by Orchestra/Cobbler. This system will then be powered on by interfacing with the power management interface the hardware has configured (IPMI, WoL, virsh, etc.). When this machine boots up, it will find a PXE server on the network (Cobbler) and will start the installation process. Once the machine has finished installing, it will use cloud-init to install ensemble
<RoAkSoAx> In case of the development environment, we use virsh as the power management interface
<RoAkSoAx> As smoser already explained, the cobbler devenv provides machines that are ready to be deployed via PXE
<RoAkSoAx> when we bootstrap with ensemble, it simply tells cobbler to start a machine
<RoAkSoAx> cobbler uses virsh to start it
<RoAkSoAx> and when the machine starts it searches for a PXE server, and installs the OS
<RoAkSoAx> So, as mentioned, the bootstrap process will start a new machine that we are gonna call the zookeeper
<RoAkSoAx> once the zookeeper is up and running, we can start deploying machines
<RoAkSoAx> m_3: will get to that in a min ;)
<RoAkSoAx> So, when deploying, Ensemble will tell the zookeeper to deploy a machine with a specific service. The zookeeper will talk to the orchestra server in the same way it did when bootstrapping and will deploy a machine. It will also use cloud-init to install everything necessary to deploy the service.
<RoAkSoAx> Now, since obviously ec2 is different from Orchestra/Cobbler, we needed to make some changes in the approach taken to make things work (such as providing the meta-data for cloud-init). We needed a few things:
<RoAkSoAx> 1. Provide methods in ensemble to interface with Cobbler using its API
<RoAkSoAx> 2. Provide a custom preseed to be used when deploying machines through ensemble.
<RoAkSoAx> 3. Provide a method to pass cloud-init meta-data, and be populated before first boot so that cloud-init can do its thing.
<RoAkSoAx> So, how did we achieve this
<RoAkSoAx> 1. As already explained, ensemble uses cobbler as a provider communicating with it via the cobbler API.
<RoAkSoAx> 2. Since ec2 instantiates a VM really quickly, it was easy to pass all the necessary values through cloud-init. In our case we needed to do something similar, and the conclusion was to do it via a modified preseed, to deploy whatever was needed the same way
<RoAkSoAx> 3. We figured out a method to pass the cloud-init meta-data through the preseed
<RoAkSoAx> so basically the changes in Cobbler were to provide a custom preseed to deploy the OS
<RoAkSoAx> this preseed contains what we call a late_command
<RoAkSoAx> this late_command will execute a script that will generate the cloud-init meta-data so after first boot, cloud-init will do its thing
<RoAkSoAx> so what we did is generate the cloud-init meta-data with ensemble as was always done, but we had to figure out how to get it into the preseed
<RoAkSoAx> here, we generated text that was later encoded in base64.
<RoAkSoAx> This text was basically a shell script containing the information to populate cloud-init's meta-data
<RoAkSoAx> so the late command in reality was to decode the base64 text and then write the script and execute it
<RoAkSoAx> this decoding and writing was done by the preseed, right after finishing installing the system and before booting
<RoAkSoAx> so when the machine restarted, cloud-init would do its thing
<RoAkSoAx> so that was done by making ensemble interface with cobbler, and once the late command was generated, ensemble told cobbler "This is your late command" and cobbler simply executed it
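The mechanism just described can be sketched as follows: a seed script is base64-encoded, and the late_command handed to cobbler decodes it, writes it out, and runs it. The target path and script contents here are illustrative, not ensemble's actual ones.

```python
# Sketch: build a preseed late_command that decodes and runs a
# base64-encoded cloud-init seed script (illustrative paths).
import base64

def make_late_command(seed_script, target="/var/lib/cloud/seed.sh"):
    """Return a shell one-liner: decode base64 payload, write it, run it."""
    encoded = base64.b64encode(seed_script.encode()).decode()
    return "echo %s | base64 -d > %s && sh %s" % (encoded, target, target)

# hypothetical seed script that would populate cloud-init meta-data
script = "#!/bin/sh\nmkdir -p /var/lib/cloud/seed/nocloud-net\n"
late_command = make_late_command(script)
```

Ensemble then tells cobbler "this is your late command" over the API, and the preseed executes it at the end of the install, before first boot.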
<RoAkSoAx> once the machine finished installing, we had a fully functional zookeeper (or service)
<RoAkSoAx> so basically, we wanted to achieve the same as with ec2, but we just had to figure out how to do it with the preseed
<RoAkSoAx> and now, it works in a very similar way
<RoAkSoAx> so the only things to consider were to 1. start a machine. 2. deploy the machine using the preseed. 3. ensure to pass the late_command
<RoAkSoAx> and this way we simulate the way instances are started and cloud-init data is passed to instances in the cloud
<RoAkSoAx> other than that, ensemble works pretty much exactly the same as it would with ec2
<RoAkSoAx> but using orchestra
<RoAkSoAx> Now, another change that was made: when working on ec2,
<RoAkSoAx> ensemble used S3 to store some information used to identify machines and to hold the formula meta-data
<RoAkSoAx> instead, we use a WebDav service on the apache2 server installed by cobbler
<RoAkSoAx> here, instead of obtaining and storing files on S3, we use the Orchestra server as storage for ensemble
<RoAkSoAx> based on those considerations, we pretty much had to ensure that the interaction between the cobbler API and ensemble provided results the way it's done with ec2
<RoAkSoAx> so how can we really test this with the development environment
<RoAkSoAx> but before,
<RoAkSoAx> m_3: does this answer your question?
<RoAkSoAx> alright
<RoAkSoAx> I'll move on
<RoAkSoAx> With smoser's cobbler devenv we can certainly simulate a physical deployment using ensemble
<RoAkSoAx> the good thing is that the devenv will setup everything necessary from the orchestra side of things
<RoAkSoAx> but, I'll give an overview of what orchestra will do very soon
<RoAkSoAx> 1st. We would need to install orchestra-server, which will install cobbler and cobbler-web
<RoAkSoAx> with that, we would need to configure the webdav so that we have storage up and running
<RoAkSoAx> (remember, this is already done by the cobbler-devenv)
<RoAkSoAx> how did we do this:
<RoAkSoAx> === Setting up file storage ===
<RoAkSoAx> 1. Enable Webdav
<RoAkSoAx> sudo a2enmod dav
<RoAkSoAx> sudo a2enmod dav_fs
<RoAkSoAx> 2. Write config file (/etc/apache2/conf.d/dav.conf)
<RoAkSoAx> Alias /webdav /var/lib/webdav
<RoAkSoAx> <Directory /var/lib/webdav>
<RoAkSoAx> Order allow,deny
<RoAkSoAx> allow from all
<RoAkSoAx> Dav On
<RoAkSoAx> </Directory>
<RoAkSoAx> 3. Create formulas directory:
<RoAkSoAx> sudo mkdir -p /var/lib/webdav/formulas
<RoAkSoAx> sudo chown www-data:www-data /var/lib/webdav
<RoAkSoAx> sudo service apache2 restart
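On the wire, using Orchestra as ensemble's file storage comes down to plain HTTP PUT/GET against the /webdav alias configured above. A minimal sketch, assuming that URL layout (the formulas path and file name are illustrative):

```python
# Sketch: store a file on the Orchestra server over WebDAV (plain HTTP PUT).
import http.client

def webdav_url(base, name):
    """Join the WebDAV alias and a file name into a request path."""
    return "%s/%s" % (base.rstrip("/"), name)

def put_file(host, path, data):
    """PUT bytes to the WebDAV share; returns the HTTP status code."""
    conn = http.client.HTTPConnection(host)
    conn.request("PUT", path, body=data)
    resp = conn.getresponse()
    conn.close()
    return resp.status

# e.g. put_file("192.168.123.2", webdav_url("/webdav/formulas", "mysql.tgz"), blob)
```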
<RoAkSoAx> now, we need to pre-populate cobbler with all the available systems and provide it with a power management interface to be able to start a physical machine
<RoAkSoAx> as previously explained, cobbler devenv uses virsh to simulate this behaviour
<RoAkSoAx> however, in cobbler, we needed to know two things
<RoAkSoAx> 1. How do we know when a system is available?  2. How do we know when the system has already been used and is no longer available?
<RoAkSoAx> for this, we had to look into cobbler's management classes concepts
<RoAkSoAx> in this case we are using two, foo-available and foo-acquired. As the name says, one will be used to identify when a system is available to be used by ensemble, and the other one when the system has already been acquired by ensemble and might be in the process of bootstrapping or deploying a service, or even installing the OS
<RoAkSoAx> but, in cobbler terms, how can we add management classes and systems?
<RoAkSoAx> simple:
<RoAkSoAx> === Setting up cobbler ===
<RoAkSoAx> 1. Add management classes
<RoAkSoAx> sudo cobbler mgmtclass add --name=foo-available
<RoAkSoAx> sudo cobbler mgmtclass add --name=foo-acquired
<RoAkSoAx> 2. Add systems
<RoAkSoAx> sudo cobbler system add --name=XYZ --profile=XYZ --mgmt-classes=foo-available --mac-address=AA:BB:CC:DD:EE:FF
<RoAkSoAx> Basically, a system is a definition for a physical machine: an OS profile plus the management class to use at first
<RoAkSoAx> the profile is none other than the OS that will be installed on that machine
<RoAkSoAx> and the management class has already been explained
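Conceptually, ensemble's "acquire a machine" step can be expressed against this data as: pick any system carrying the available class and move it to the acquired class. The dicts below stand in for what cobbler's system records hold; field names beyond mgmt_classes are illustrative, not ensemble's actual code.

```python
# Sketch: choose a free system and flip its management class,
# mirroring the available -> acquired transition described above.
AVAILABLE = "foo-available"
ACQUIRED = "foo-acquired"

def acquire(systems):
    """Return (name, updated mgmt_classes) for the first free system, else None."""
    for rec in systems:
        classes = list(rec.get("mgmt_classes", []))
        if AVAILABLE in classes:
            classes.remove(AVAILABLE)
            classes.append(ACQUIRED)
            return rec["name"], classes
    return None

systems = [
    {"name": "node01", "mgmt_classes": ["foo-acquired"]},   # already in use
    {"name": "node02", "mgmt_classes": ["foo-available"]},  # free
]
```

The real implementation would then push the updated class list back via modify_system/save_system on the cobbler API.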
<RoAkSoAx> of course you will have to configure the power management interface accordingly
<RoAkSoAx> but in the cobbler-devenv this has already been done
<RoAkSoAx> so basically, we now have an Orchestra/Cobbler server up and running, and we have configured it with systems, mgmtclasses and the file storage
<RoAkSoAx> so it is time for us to install and configure ensemble to use our cobbler server
<RoAkSoAx> in this case, we are going to use the cobbler-devenv
<RoAkSoAx> however, you will notice that you can simply change it to be used with physical machines
<RoAkSoAx> if you already have an orchestra server up and running and preloaded with systems
<RoAkSoAx> so first, we need to obtain the branch of ensemble that has orchestra support
<RoAkSoAx> NOTE: This branch contains code that is under development and is still buggy
<RoAkSoAx>  1. Obtain the branch:
<RoAkSoAx> bzr branch lp:~ensemble/ensemble/bootstrap-cobbler
<RoAkSoAx> cd bootstrap-cobbler
<RoAkSoAx> now we need to create an environments.yaml file for ensemble
<RoAkSoAx> we do this as follows:
<RoAkSoAx>  2. Create the environments file (~/.ensemble/environments.yaml)
<RoAkSoAx> environments:
<RoAkSoAx>    orchestra:
<RoAkSoAx>       type: orchestra
<RoAkSoAx>       orchestra-server: 192.168.123.2
<RoAkSoAx>       orchestra-user: cobbler
<RoAkSoAx>       orchestra-pass: cobbler
<RoAkSoAx>       admin-secret: foooo
<RoAkSoAx>       ensemble-branch: lp:~ensemble/ensemble/bootstrap-cobbler
<RoAkSoAx>       acquired-mgmt-class: foo-acquired
<RoAkSoAx>       acquired-mgmt-class: foo-available
<RoAkSoAx> note that I'm already using the values for the cobbler-devenv
<RoAkSoAx> such as orchestra-server IP address
<RoAkSoAx> user/pass for cobbler
<RoAkSoAx> the branch we need
<RoAkSoAx> and the management classes
<RoAkSoAx> typo in last line
<RoAkSoAx> should be:
<RoAkSoAx>       available-mgmt-class: foo-available
<RoAkSoAx> so once this is done, and we have setup the cobbler-devenv correctly
<RoAkSoAx> we can start bootstrapping the zookeeper and then deploying the machines
<RoAkSoAx> so as the first step, from the branch we have obtained, we do the following:
<RoAkSoAx> PYTHONPATH=`pwd` ./bin/ensemble bootstrap
<RoAkSoAx> this will bootstrap the zookeeper
<RoAkSoAx> it will take time to install the machine and get the zookeeper running
<RoAkSoAx> it will probably take several minutes
<RoAkSoAx> so I will continue explaining what needs to be done
<RoAkSoAx> so, when the zookeeper is up and running and cloud-init has done its thing, we need to work around something, given that we just ran into an error in the code
<RoAkSoAx> that is being examined
<RoAkSoAx> but it is simple and doesn't actually affect the code
<RoAkSoAx> so we need to connect to the zookeeper machine (through ssh, or any other method)
<RoAkSoAx> and run the following (on the zookeeper machine)
<RoAkSoAx> sudo -i
<RoAkSoAx> ssh-keygen -t rsa
<RoAkSoAx> this will create public keys that are verified by the zookeeper before deploying machines
<RoAkSoAx> however, note that this is a work around and will be fixed soon
<RoAkSoAx> I'm just pointing you guys to it in case you want to test it after the session of today
<RoAkSoAx> once this is done
<RoAkSoAx> we can start deploying machines
<RoAkSoAx> and we simply do the following:
<RoAkSoAx> PYTHONPATH=`pwd` ./bin/ensemble deploy --repository=examples mysql
<RoAkSoAx> this will tell the zookeeper to deploy a machine, which will tell cobbler to start a machine via virsh
<RoAkSoAx> and once installed it will run late-command and populate cloud-init meta-data
<RoAkSoAx> on first boot
<RoAkSoAx> cloud-init will do its thing
<RoAkSoAx> and baaam
<RoAkSoAx> we would have a mysql server working on a physical node
<RoAkSoAx> and I believe that's all I have for you today
<RoAkSoAx> I think I ran through the session too fast :)
<RoAkSoAx> anyone has any questions?
<RoAkSoAx> m_3: well, that's indeed a limitation we have compared to ec2, as in physical environments (and cobbler) we are relying on the power management interface to deploy machines
<ClassBot> m_3 asked: does the cobbler instance provide a metadata server for cloud-init?
<ClassBot> m_3 asked: reboots... how robust is everthing wrt reboots?  (In EC2-ensemble, we just typically throw instances away)
<RoAkSoAx> m_3: now, as far as rebooting machines and keeping things persistent, at the moment we are not handling that
<RoAkSoAx> m_3: but the first approach was to preseed all that information and use debconf to populate those values
<RoAkSoAx> m_3: and have upstart scripts initialize the services on reboot
<RoAkSoAx> m_3: however, we discussed the possibility of not doing that through the preseed but rather providing cloud-init with a command to write those persistent values so they can be used on reboot
<RoAkSoAx> m_3: you're welcome
<RoAkSoAx> anyone any more questions?
<RoAkSoAx> alright I guess there's not
<RoAkSoAx> thank you all
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> alexm asked: RoAkSoAx: will ensemble/orchestra be in ubuntu-server manual for oneiric? a quick start guide, for instance
<RoAkSoAx> alexm: I surely hope so! I guess that will depend how far we can get with this in the development cycle, but I'm confident it would
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Eucalyptus 3: cloud HA and ID management - Instructors: nurmi
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html following the conclusion of the session.
<nurmi> Hello all, and thank you very much for attending this session!
<nurmi> Today, we're going to be discussing some new features of Eucalyptus 3
<nurmi> While there are quite a few, two of the most substantial are implementations of high availability and user/group identity management
<nurmi> We'll start with a discussion of Eucalyptus HA, and then switch to ID management next
<nurmi> Eucalyptus is designed as a collection of services which, when stitched together, form a distributed system that provides infrastructure as a service
<nurmi> Roughly, eucalyptus services are organized in a tree hierarchy
<nurmi> At the top of the tree, we have components (Cloud Controller, Walrus) that are directly accessed by users
<nurmi> In the middle, we have a Cluster Controller and Storage Controller which set up/manage virtual networks and storage (EBS) respectively
<nurmi> and at the bottom of the tree, we have Node Controllers which control and manage virtual machines
<nurmi> In a nutshell, this collection of services provides users the ability to provision and control virtual infrastructure components that, within eucalyptus, we refer to as 'artifacts'
<nurmi> For example, virtual machines, virtual networks, and cloud storage abstractions (EBS volumes and S3 buckets/objects)
<nurmi> The design of Eucalyptus HA creates a distinction between the cloud service itself (eucalyptus components), and the artifacts that are created/managed by the service
<nurmi> The reason for this distinction is that, while the term 'High Availability' is generally meaningful,
<nurmi> the requirements of making something 'Highly Available' vary greatly, depending on what that 'something' is
<nurmi> In Eucalyptus 3, we have a new architecture that provides High Availability for the cloud service itself
<nurmi> The architecture additionally supports adding High Availability to eucalyptus artifacts, in the future
<nurmi> So, the core design of Eucalyptus HA is as follows
<nurmi> Each Eucalyptus component can run in 'non-HA' mode, exactly as it does today
<nurmi> Then, at runtime, each component service can be made highly available by adding an additional running version of the component, ideally on a separate physical system
<nurmi> This results in a basic 'Master/Slave' or 'Primary/Secondary' mode of operation, where the Eucalyptus HA deployment is resilient to (at least) a single point of failure (for example, machine failure)
<nurmi> At any point in time, when running in HA mode, a component is either in 'Primary' or 'Secondary' mode
<nurmi> any component in 'Secondary' mode is running, but is inactive until it is made Primary
<nurmi> Next, each component, and the system as a whole, is designed to keep 'ground truth' about artifacts as close to the artifacts as possible
<nurmi> For example, all canonical information about virtual machine instances is stored on the node controller that is managing that VM
<nurmi> and all canonical information about virtual networks that are active is stored with the Cluster Controller that is managing that network
<nurmi> When a eucalyptus component becomes active
<nurmi> (which happens when the component first arrives, when it is 'restarted', or when it is promoted from Secondary to Primary)
<nurmi> the component 'learns' the current state of the system by discovering what it needs from ground truth
<nurmi> other services that are 'far' from ground truth, then, learn about ground truth from nearer components
<nurmi> I'll use the Cluster Controller as an example to illustrate how this design works
<nurmi> When a cluster controller enters into a running eucalyptus deployment, there are typically many artifacts that are currently running
<nurmi> the very first operation that a cluster controller performs is to poll both above (Cloud Controller) and below (Node Controllers)
<nurmi> in order to learn about the current state of all artifacts
<nurmi> It then uses this information to dynamically (re)create all virtual networks that need to be present in order for the currently active artifacts to continue functioning
<nurmi> So, whether a cluster controller is by itself (non-HA mode) and just reboots, or if a Primary cluster controller has failed and the secondary is being promoted
<nurmi> the operation is the same: learn about ground truth and re-create a functional environment
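The Primary/Secondary plus ground-truth design can be sketched as a toy model: a component is either PRIMARY or SECONDARY, and on promotion it re-learns state by polling its neighbours rather than from any saved copy of its own. This is an illustration of the idea, not Eucalyptus code; all names are made up.

```python
# Toy sketch of the HA behaviour described: on promotion, a cluster
# controller polls the node controllers below it and rebuilds the
# virtual networks that active VMs need.
class ClusterController:
    def __init__(self, name):
        self.name = name
        self.mode = "SECONDARY"   # running but inactive
        self.networks = {}

    def promote(self, node_controllers):
        """Called on first start, restart, or Secondary->Primary promotion."""
        self.mode = "PRIMARY"
        self.networks = {}
        # poll below: each NC reports the VMs (ground truth) it manages
        for nc in node_controllers:
            for vm in nc["vms"]:
                self.networks.setdefault(vm["network"], []).append(vm["name"])
        return self.networks

# ground truth lives with the node controllers, not the CC
nodes = [
    {"vms": [{"name": "i-1", "network": "net-a"}]},
    {"vms": [{"name": "i-2", "network": "net-a"},
             {"name": "i-3", "network": "net-b"}]},
]
```

Whether the controller is rebooting in non-HA mode or being promoted after a primary failure, the same promote() path runs.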
<nurmi> All other HA eucalyptus components operate in a similar fashion, semantically
<nurmi> Storage controller uses iSCSI volumes as ground truth
<nurmi> Walrus uses shared filesystem, or a pre-configured DRBD setup for buckets/objects
<nurmi> Finally, while the design of the software permits a simple 'no single point of failure' setup with just additional physical machines
<nurmi> (to support Primary/Secondary model)
<nurmi> We also support deployments that have redundancy in the network infrastructure
<nurmi> This way, 'no single point of failure' can be extended to include network failures, as well, without having to alter the software/software configuration.
<nurmi> We've put a lot of effort into the new architecture to provide service high availability first, and hope that others will find the architecture ready to start adding HA for specific artifacts in near future releases
<nurmi> Utilizing live migration for VM HA, utilizing HA SAN techniques for in-use EBS volume access HA, etc.
<nurmi> This brings us to the end of the first part of our discussion, thank you very much!  I would like to ask if there are any questions about Eucalyptus HA ?
<nurmi> Okay ; the second part here will be led by Ye Wen, who will be talking about the new user and group management functionality in Eucalyptus 3
<nurmi> Short break until we can get '+v' for Ye
<wenye> Hello, everyone. I'm going to continue this topic by discussing another new feature in Eucalyptus 3: the user identity management.
<wenye> We have a completely new design for managing user identities in Eucalyptus 3, based on the concept of Amazon AWS IAM (Identity and Access Management).
<wenye> In other words, we provide the same API as Amazon AWS IAM. Your existing scripts that work with Amazon should be compatible with your new Eucalyptus 3 cloud.
<wenye> At the same time, we augment and extend IAM with some Eucalyptus-specific features, to meet the needs of some customers.
<wenye> With IAM, you essentially partition the access to your resources (i.e. the artifacts as Dan said earlier) into "accounts"
<wenye> Each account is a separate name space for user identities.
<wenye> Account is also the unit for resource usage accounting.
<wenye> Within an account, you can manage a set of users.
<wenye> Users can also be organized into groups.
<wenye> Note that a group is a concept for assigning access permissions to a set of users, so users can be in multiple groups.
<wenye> But a user can only be in one account.
<wenye> Permissions can be assigned to users and groups to control their access to the system resources.
<wenye> As in IAM, you write a policy file to grant permissions.
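As a concrete illustration of such a policy file, here is a minimal IAM-style document granting permission to launch instances. The JSON shape follows AWS IAM policy syntax, which per the above Eucalyptus 3 accepts; the specific statement is an example, not from the session.

```python
# A minimal IAM-style policy: allow launching EC2 instances.
import json

policy = {
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:RunInstances"],
        "Resource": "*",
    }]
}
doc = json.dumps(policy, indent=2)
```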
<wenye> We have several extensions to the IAM concepts. I'll talk about a few here.
<wenye> In IAM, you can't specify EC2 resources. For example, you can only say "allow user A to launch instances", but you can't say "allow user A to launch instances using image X".
<wenye> We introduce EC2 resources, so that you can do such things. One good use is to restrict the VM types some users can launch instances with.
<wenye> Another extension is the introduction of VM expiration or lifetime.
<wenye> You can use a Eucalyptus-specific policy condition to specify a VM's lifetime or when it expires.
<ClassBot> There are 10 minutes remaining in the current session.
<wenye> The biggest extension probably is the introduction of resource quota.
<wenye> We extend the IAM policy syntax to allow the specification of resource quota. We use a special "Effect" to do that.
<wenye> So you can say "Effect: Limit" in a policy, which indicates the permission is a quota permission.
<wenye> And then you can use the policy's "Resource" and "Condition" to specify which resource and how large the quota is.
<wenye> You can assign quota to accounts and users. And if a user is restricted by multiple quota spec, the smallest is taken into effect.
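The quota extension and the smallest-wins rule can be sketched as follows. The "Effect": "Limit" marker is the Eucalyptus extension just described; the "Max" field is an illustrative stand-in for the real Resource/Condition syntax.

```python
# Sketch of quota resolution: several Limit statements may apply to a
# user (e.g. account-level and user-level); the smallest takes effect.
limit_policies = [
    {"Effect": "Limit", "Resource": "vm", "Max": 16},  # account-level quota
    {"Effect": "Limit", "Resource": "vm", "Max": 4},   # user-level quota
]

def effective_quota(policies, resource):
    """Smallest applicable Limit for a resource, or None if unrestricted."""
    limits = [p["Max"] for p in policies
              if p["Effect"] == "Limit" and p["Resource"] == resource]
    return min(limits) if limits else None
```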
<wenye> We don't have much time left. I'll briefly talk about another Eucalyptus 3 feature that is related to the identity management.
<wenye> That is, we enable LDAP/AD sync in Eucalyptus 3.
<ClassBot> There are 5 minutes remaining in the current session.
<wenye> To do that, you can simply write a LIC (LDAP Integration Configuration) and upload it to the system. The identities in the system will then be synced from the specified LDAP/AD service.
<wenye> There is the question of how to map the structure of an LDAP tree to the IAM account/group/user model. We leave that for offline discussion; you can email us at wenye@eucalyptus.com for more information.
<wenye> I'll use the remaining 3 minutes for questions.
<wenye> Thanks everyone for attending this class!
<nurmi> Thank you all, and we look forward to everyone trying out Eucalyptus 3 and letting us know what you think!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/25/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<Guest58609> bbn
<Guest58609> t
<missgawker> all done
#ubuntu-classroom 2011-07-26
<SuperMarioRocks> why won't channel #ubuntu let me send messages
<SuperMarioRocks> what is this room for
<StathisV> Hello! Three weeks ago I installed an ATI Radeon HD 6670 in my computer under Ubuntu 10.04. I had a lot of problems with the ati drivers, and on boot my system usually showed "checking battery state" and then died. The problems were solved when I upgraded to 11.04 with the "radeon" vga driver. But yesterday the same message suddenly reappeared. What should I do to fix
<StathisV> it? I searched the internet, but found nothing useful.
<head_victim> StathisV: this is not a general support channel, try #ubuntu
<head_victim> !classroom
<ubot2> The Ubuntu Classroom is a project which aims to tutor users about Ubuntu, Kubuntu and Xubuntu through biweekly sessions in #ubuntu-classroom - For more information visit https://wiki.ubuntu.com/Classroom
<StathisV> head_victim ok :) thnx :)
<CloudAche84_droi> Test
<sorrell> hi
 * soren taps the microphone
<soren> Hello, everyone. Thanks for stopping by.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Getting started with OpenStack Compute - Instructors: soren
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/26/%23ubuntu-classroom.html following the conclusion of the session.
<soren> I should mention I'm currently at OSCON, on the conference wifi, so I *might* disappear.
<soren> ...but so far the wifi has been ok, so here's hoping.
<soren> Anyways.
<soren> Usually, I start with a run-down of the history of OpenStack and such, but seeing as there's a more general OpenStack session at 1900 UTC, I'll skip that this time.
<soren> Ok, so "Getting started with Openstack Compute".
<soren> I'm probably going to call it "Nova" at times, but it's the same thing.
<soren> Hopefully at the end of this session, you'll have a basic understanding of the components involved in an OpenStack Compute cloud, have a cloud running on your laptop and you'll be able to start and stop instances.
<soren> Simple stuff, since we only have an hour :)
<soren> Let's start by downloading an image. It'll probably take a while, so we'll save a bunch of time later on if you start downloading it now.
<soren> If you're running 64 bit Ubuntu, grab this:
<soren> http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-amd64.tar.gz
<soren> I.e. just fire up a terminal and run:
<soren> wget http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-amd64.tar.gz
<soren> If you're running 32 bit Ubuntu, grab this instead:
<soren> http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-i386.tar.gz
<soren> If you hit any problems along the way, let me know; maybe we can solve them straight away, or at least keep track of them so that we can find solutions later.
<soren> So,  I'll just assume you're all downloading this now and move on.
<soren> OpenStack Compute consists of a number of "internal" and "external" components.
<soren> "External" components (a term I just came up with, so you probably won't find it in any docs or anything) are things that aren't part of Nova itself, but we need anyway. For our needs today this is just rabbitmq.
<soren> RabbitMQ is a message queueing system. It's a core component of a Nova deployment.
<soren> It passes messages between the different components.
<soren> It's the only control communication mechanism we have.
<soren> Nova itself has 5 components that we'll work with today.
<soren> nova-api
<soren> nova-compute
<soren> nova-scheduler
<soren> nova-network
<soren> nova-objectstore
<soren> nova-api is the frontend server.
<soren> Whenever you -- as an enduser -- want to interact with Nova, this is the component you talk to.
<soren> It exposes the native OpenStack API (which is derived from the Rackspace Cloud Servers API) as well as the EC2 API.
<soren> Well, a subset of the EC2 API.
<soren> It receives your request over HTTP, authenticates and validates the request.
<soren> ...and turns it into a message and puts it on the message queue for someone else to consume.
<soren> You can have any number of these servers.
<soren> They all act the same: verify the request and send it on to some other component for processing.
<soren> No failover or anything is involved, all components are "hot".
<soren> nova-compute is the component that actually runs your virtual machines.
<soren> It receives a request from the scheduler, which I apparently should have talked about first, but here we are :)
<soren> Ok, so this request has information about which image to run, how much memory and disk and whatnot to assign to the virtual machine.
<soren> nova-compute makes this happen by creating virtual disk images, shoving the AMI image into it, setting up networking and running the virtual machine.
<soren> So, on the bulk of your servers in your cloud, this is the thing you run.
<ClassBot> kim0 asked: Does rabbitmq choose a node to deliver the message to? if that node fails, does it reroute to some other node ?
<soren> It depends on the type of message.
<soren> Hm... Let me explain a bit more first, and get back to this.
<soren> The scheduler.
<soren> As you might have guessed, it schedules stuff.
<soren> When you ask the API server to run a virtual machine it sends the request to one of the schedulers.
<soren> The scheduler is (meant to be) smart.
<soren> It makes the decision about which compute node is meant to run the virtual machine.
<soren> When it has decided, it sends the message on to the chosen compute node.
<soren> Ok, so rabbit.
<soren> The API server sort of broadcasts the request to all the schedulers.
<soren> Here, rabbitmq makes the decision about which scheduler gets the request.
<soren> ...and applies its usual logic about sending the request somewhere else if it doesn't get acked.
<soren> for the scheduler->compute message, it's different, of course.
<soren> ...since it's being sent directly to a specific node.
<soren> I'm not sure what the failure modes are there.
<soren> The scheduler doesn't just schedule VMs, though.
<soren> It's supposed to make decisions about which storage node is supposed to host your EBS-like volumes and so on.
<soren> nova-network is... the network component.
<soren> (surprise)
<soren> It hands out IP addresses and in certain network modes it acts as the gateway.
<soren> It also does NATing for elastic IPs and a few other things.
<soren> It doesn't do firewalling. The compute nodes do that (in order to decentralise the firewall and spread the filtering load).
<soren> Finally, there's the nova-objectstore.
<soren> It's a simple implementation of the S3 API.
<soren> You can use OpenStack without using it, but in Ubuntu we still use it because that's how EC2 and Eucalyptus work, so we can reuse a bunch of documentation and tools and whatnot this way.
<ClassBot> kim0 asked: Is there any way to specify locality, like launch this VM besides that VM as close as possible to that EBS volume ?
<soren> I'm not completely sure.
<soren> There's certainly been talk about supporting it, but I don't remember seeing code to do it.
<soren> That said, I've been on holiday and at conferences for a while, so maybe it landed recently.
<soren> It's definitely on the roadmap.
<ClassBot> kim0 asked: Why does nova-objecstore exist? Why not just swift or mini-swift ?
<soren> If I didn't answer this already, can you rephrase this?
<ClassBot> kim0 asked: Has openstack dropped the centralized DBs ? wasn't mysql used before?
<soren> We have not.
<soren> Yet.
<soren> So, if you're setting up a multi-machine cloud, you'll need a shared database. E.g. MySQL, PostgreSQL or whatever floats your boat.
<soren> Whatever sqlalchemy supports should be fine.
<soren> "should" being the operative word.
<soren> Data access works fine for all the different backends, but schema changes have only been tested with mysql, postgresql and sqlite.
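For concreteness, a shared-database setup boils down to pointing every node at the same connection string. A hypothetical sketch of the relevant nova.conf line, in the gflags-style format Nova used at the time (host, credentials and database name are made up):

```
# Hypothetical fragment of /etc/nova/nova.conf on every node, pointing
# Nova's sqlalchemy layer at one shared MySQL database:
--sql_connection=mysql://nova:novapass@192.168.0.10/nova
```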
<soren> This is a common theme, by the way.
<soren> We can use a bunch of different DB backends.
<soren> We can also use a bunch of different hypervisors (kvm, Xen, LXC, UML, Xen Server, VMWare, Hyper-V).
<soren> Different storage backends (HP SANs, iSCSI, AOE)
<soren> Adding more drivers is meant to be reasonably easy, so if you want to use something else, that should be possible.
<soren> Let us know if that's the case.
<ClassBot> kim0 asked: Is there code to accelerate certain operations via SAN (like clone volume, snapshotting ..etc)
<soren> I forget which operations are exposed in the abstraction layer, to be honest. I do believe the backend call to create a snapshot is just "create a snapshot", so if a particular backend has nifty shortcuts to do that, it should totally be possible.
<soren> Whether the drivers actually *do*... I don't know.
<soren> Alright then.
<soren> 16:32 < Guest95592> QUESTION : is there any docs talk about openstack with Hyper-V ? Or real use case ?
<soren> I'm not actually sure of the state of the Hyper-V support. We don't really have the means to test it.
<soren> Guest95592: I'll make a note to dig up the docs and send it to you. I'm not sure where they are.
<soren> Ok, so let's get practical.
<soren> I assume you've all downloaded the image.
<soren> For those who just joined:
<soren> 16:03 <+soren> Let's start by downloading an image. It'll probably take a while, so we'll save a bunch of time later on if you start downloading it now.
<soren> 16:03 <+soren> If you're running 64 bit Ubuntu, grab this:
<soren> 16:03 <+soren> http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-amd64.tar.gz
<soren> 16:04 <+soren> I.e. just fire up a terminal and run:
<soren> 16:04 <+soren> wget http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-amd64.tar.gz
<soren> 16:04 <+soren> If you're running 32 bit Ubuntu, grab this instead:
<soren> 16:04 <+soren> http://cloud-images.ubuntu.com/releases/11.04/release/ubuntu-11.04-server-uec-i386.tar.gz
<soren> I'm assuming you're running Natty.
<soren> Is anyone not running Natty?
<ClassBot> kim0 asked: Would you explain how a large scale (1000?) server deployment may look like. Guidelines would be great. Is it also an inverted tree of a bunch of clusters like eucalyptus?
<soren> kim0: A lot of it will be dictated by the network structure you want.
<soren> Hmm..
<soren> This is a good question, really. I just write the software, I don't actually run it at scale.
<soren> It's not a strict hierarchy.
<soren> At all.
<soren> You have the message queue, which everything shares.
<soren> ...the database that everything shares.
<soren> "a number" of API servers. :)
<soren> I don't really know how many I'd set up.
<soren> Ideally, they'd run as instances on your cloud so that you could scale them as appropriate.
<soren> I'd be surprised if one or two of them wouldn't be sufficient, but if you find out that you need a bunch of them, just add more and point them at the same message queue and db and you should be golden.
<soren> Number of storage servers depends on how much storage you want to expose, really.
<soren> ...and how much you expect your users to use them.
<soren> Nova doesn't really add any overhead there.
<soren> If you expect to have a lot of I/O-intensive stuff going on on your cloud, you'd probably have a lot of servers with a lot of bandwidth.
<soren> If you don't expect a lot of I/O-intensive stuff you might just spring for fewer servers with more disks in each.
<soren> I don't think there really are any good, general answers.
<soren> The architecture is really flexible, though, so if you discover that you need more storage or bandwidth, you just add more.
<soren> Since the architecture is so flat.
<soren> Ok, so back to our demo.
<soren> Everyone runs natty. Great.
<soren> First:
<soren> sudo apt-get install rabbitmq-server unzip cloud-utils
<soren> This installs rabbitmq and a few utilities that we'll need in a few minutes.
<soren> When that is done, do this:
<soren> sudo apt-get install nova-api nova-compute nova-scheduler nova-network nova-objectstore
<soren> I'm not completely sure this still needs to be two separate steps, but just to be safe, we do it this way.
<soren> Ok, so next up, we create a user.
<soren> If your name is not "soren" you can specify something else where it says "soren" :)
<soren> sudo nova-manage user admin soren
<soren> This creates an admin user called "soren"
<soren> sudo nova-manage project create soren soren
<soren> This creates a "project" called "soren" with the user "soren" as the admin.
<soren> Next, we need to create a network.
<soren> If you can't use 10.0.0.0/8 (because you already use it), make something else up.
<soren> sudo nova-manage network create 10.0.0.0/8 2 24
<soren> This creates two networks of 24 IPs each in the 10.0.0.0/8 subnet.
<soren> sudo nova-manage project zipfile soren soren
<soren> This fetches a zipfile with credentials in it for the user soren for the project soren.
<soren> Unpack it:
<soren> unzip nova.zip
<soren> Source the novarc file:
<soren> . novarc
<soren> Now we upload the image we downloaded earlier.
<soren> uec-publish-tarball ubuntu-11.04-server-uec-amd64.tar.gz ubuntu
<soren> ubuntu-11.04-server-uec-amd64.tar.gz is the filename, "ubuntu" is the name of the S3 bucket you want to use.
<soren> Next, we create a key:
<soren> euca-add-keypair mykey > mykey.priv
<soren> chmod 600 mykey.priv
<soren> uec-publish-tarball may take a while.
<soren> The last thing it outputs might look like:
<soren> emi="ami-3a0c3765"; eri="none"; eki="aki-279dfe6a";
<soren> We need the first ID there (ami-3a0c3765).
<soren> Your id will be different.
<soren> Well, probably.
<soren> there's a 1 in 2^32 chance it'll be the same :)
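To avoid retyping the ID, you can let the shell parse it out of that last line of output. A sketch using the sample IDs from above; the line is already valid shell assignments, so eval-ing it works:

```shell
# Last line of uec-publish-tarball output (sample values from the session;
# yours will differ):
line='emi="ami-3a0c3765"; eri="none"; eki="aki-279dfe6a";'
# The line is valid shell, so eval-ing it sets emi, eri and eki:
eval "$line"
echo "$emi"
```

After this, `$emi` can be passed straight to euca-run-instances.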
<soren> Ok, next:
<soren> echo '#!/bin/sh' >> myuserdata
<ClassBot> There are 10 minutes remaining in the current session.
<soren> echo "wget http://f.linux2go.dk:8080/$HOSTNAME/"  >> myuserdata
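Put together, the user-data file is a two-line shell script. Note that $HOSTNAME expands *locally* when you build the file, so the URL carries your machine's hostname (the first echo uses '>' here instead of '>>' so re-running starts fresh):

```shell
# Build the user-data script the instance will run at first boot.
echo '#!/bin/sh' > myuserdata
echo "wget http://f.linux2go.dk:8080/$HOSTNAME/" >> myuserdata
cat myuserdata
```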
<soren> And finally:
<soren> euca-run-instances -k mykey -t m1.tiny -f myuserdata "the ami ID we found a bit further up"
<soren> If this works, we'll be able to see your hostname on  http://f.linux2go.dk:8080/
<soren> ...and you'll also be able to ssh into the instance.
<soren> Is this working for everyone?
<ClassBot> There are 5 minutes remaining in the current session.
<soren> I don't really have anything more; I didn't know how long this demo thing would take.
<soren> Alright, thanks everyone for stopping by.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: UEC on Ubuntu 10.04 LTS - Instructors: TeTeT
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/26/%23ubuntu-classroom.html following the conclusion of the session.
<TeTeT> hello there, glad to have you aboard for a session on UEC on Ubuntu 10.04 LTS
<TeTeT> thanks to soren for providing us with info on openstack, a very interesting and recent development in cloud space
<TeTeT> in the next hour I will write about UEC, Ubuntu Enterprise Cloud, as we find it in Ubuntu 10.04 LTS
<TeTeT> most of you will already know that the LTS (Long Term Support) releases are quite important to the Ubuntu universe
<TeTeT> with 5 years of support for servers, it is a good choice for deploying it in any datacenter
<TeTeT> drawback is of course, that you don't get the latest and greatest software on it, you pretty much have to live with what was there in April 2010 for Ubuntu 10.04 LTS
<TeTeT> so with regards to UEC, we find that the base is eucalyptus 1.6.2
<TeTeT> some of you might have attended nurmi's talk yesterday on euca 3
<TeTeT> there's of course quite a lot of features missing in 1.6.2, but if you are in an LTS environment, or plan to set one up, you still can get started with UEC
<TeTeT> so the most useful document for getting started with it, is on the wiki:
<TeTeT> https://help.ubuntu.com/community/UEC
<TeTeT> it contains various info that, to the best of my knowledge, mostly still applies to 10.04 LTS
<TeTeT> however, due to some problems with service discovery, one might need to manually set up a cloud when doing a packaged install
<TeTeT> I've detailed the needed steps here, as I need them often when teaching the UEC class:
<TeTeT> http://people.canonical.com/~tspindler/UEC-Jan/02-cloudPackagedInstall
<TeTeT> typically it takes 10-20 minutes to set up a two node cloud with UEC
<TeTeT> one node acting as the front-end, the other as the node controller
<TeTeT> while this is hardly sufficient for any serious deployment, it's good to get started and you can add more node controllers later on
<TeTeT> once the cloud is up and running, you need to download credentials for accessing it, either from the web UI or via the euca_conf command on the front-end
<TeTeT> I prefer the latter, so a $ sudo euca_conf --get-credentials admin.zip
<TeTeT> will store the credentials for the admin user in the admin.zip file
<TeTeT> next I save these credentials on a client system into ~/.euca-admin/
<TeTeT> note that the wiki recommends saving it in ~/.euca, but if you have multiple users on one client system in one account, it's better to have several directories
<TeTeT> e.g. I then usually create a user 'spindler' that has non-admin privileges and store the credentials in ~/.euca-spindler
<TeTeT> I think using multiple users for multi-tenancy in the cloud is a very nice feature, so I like to use it
<TeTeT> once the credentials are available, I source the accompanying 'eucarc' file and we're ready to go
<TeTeT> with euca-describe-availability-zones verbose I do a quick check if the cloud is operational
<TeTeT>  euca-describe-availability-zones verbose
<TeTeT> AVAILABILITYZONE	torstenCluster	172.24.55.253
<TeTeT> AVAILABILITYZONE	|- vm types	free / max   cpu   ram  disk
<TeTeT> AVAILABILITYZONE	|- m1.small	0023 / 0024   1    192     2
<TeTeT> AVAILABILITYZONE	|- c1.medium	0022 / 0022   1    256     5
<TeTeT> AVAILABILITYZONE	|- m1.large	0011 / 0011   2    512    10
<TeTeT> AVAILABILITYZONE	|- m1.xlarge	0005 / 0005   2   1024    20
<TeTeT> AVAILABILITYZONE	|- c1.xlarge	0002 / 0002   4   2048    20
<TeTeT> hope I don't get kicked for flooding
<TeTeT> so here you see a basic UEC in operation. The name of the cluster is 'torstenCluster', you see the private IP of the cluster controller
<TeTeT> the lines below the first show how many instances of each instance type you can run
<TeTeT> the instance types will look familiar to those having AWS background
<TeTeT> basically the command tells me that there's a cluster controller operational and a node controller has been found as well
<TeTeT> at least one node controller
<TeTeT> if you only see 0 for free and max, registering the node controller didn't work and you probably need to repeat that step
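That free/max check can be done mechanically too. A sketch against the sample m1.small line above (field positions assumed from that output format):

```shell
# Sample m1.small line from euca-describe-availability-zones verbose:
line='AVAILABILITYZONE |- m1.small 0023 / 0024   1    192     2'
# Fields: 1=AVAILABILITYZONE 2=|- 3=type 4=free 5=/ 6=max ...
free=$(printf '%s\n' "$line" | awk '{print $4 + 0}')
max=$(printf '%s\n' "$line" | awk '{print $6 + 0}')
echo "m1.small: $free free of $max"
# A max of 0 would mean no node controller got registered.
[ "$max" -gt 0 ] || echo "no node controller registered?"
```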
<TeTeT> in theory a new node controller in the same subnet as my cluster controller would get picked up automatically
<TeTeT> in practice it always worked for me, but I've read in bug reports that at times it didn't for other users
<TeTeT> talking about the auto registration - there is one caveat
<TeTeT> don't set up more than one UEC cloud in the same LAN
<TeTeT> otherwise the auto registration will bear strange fruit
<TeTeT> you can always turn off the auto registration agents in their upstart / init jobs, though, in case you need to
<TeTeT> e.g. if you want to set up a classroom with multiple clouds on one LAN
<TeTeT> back to our cloud, the next thing to do is getting an image and uploading it to the cloud
<TeTeT> the easiest way to do so is using the web interface
<TeTeT> however, almost as easy is downloading a tarball from uec-images.ubuntu.com and provisioning that
<TeTeT> thanks to the developers there are scripts that make this really easy
<TeTeT> uec-publish-tarball is the one for publishing a tarball from uec-images in the cloud
<TeTeT> once this is done, the image resides in /var/lib/eucalyptus/bukkits/<bucket name>/ on the front-end
<TeTeT> cloud tech speaking we have stored a machine image in a bucket on S3
<TeTeT> we need to store it in a way the node controller can later download and execute the image file
<TeTeT> and S3 is one such storage method in UEC
<TeTeT> next step is to create a ssh keypair usually
<TeTeT> this can be done with euca-add-keypair <keyname>
<TeTeT> best is to first check with euca-describe-keypairs which ones already exist
<TeTeT> while the command nowadays blocks creation of another key with the same name as an existing one, this was not always the case in the past ...
<TeTeT> once we have an image and a keypair, we can start an instance
<TeTeT> when doing all of this on the command line, rather than using hybridfox or landscape or any other mgmt tool
<TeTeT> I find it useful to save the identifiers in variables
<TeTeT> for example
<TeTeT> IMAGE	emi-ACC617F6	maverick-2011-01-26/maverick-server-uec-amd64.img.manifest.xml	admin	available	public		x86_64	machine	eki-14A41D03
<TeTeT> this is a maverick server image for 64bit from back in January
<TeTeT> I store the emi identifier in a variable emi
<TeTeT> emi=emi-ACC617F6
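If you don't want to copy the ID by hand, the second column of the euca-describe-images output can be grabbed directly. A sketch using the sample line above (whitespace-separated for illustration):

```shell
# Sample IMAGE line from euca-describe-images (sample IDs from the session):
sample='IMAGE emi-ACC617F6 maverick-2011-01-26/maverick-server-uec-amd64.img.manifest.xml admin available public x86_64 machine eki-14A41D03'
# Pick the emi identifier out of the second field:
emi=$(printf '%s\n' "$sample" | awk '/^IMAGE/ {print $2}')
echo "$emi"
```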
<TeTeT> so when I want to run an instance of that type, I state
<TeTeT> euca-run-instances $emi -k <key name> -t <instance type>
<TeTeT> I usually omit the instance type and make do with m1.small, but at times bigger instances are worth it
<TeTeT> the euca-run-instances command returns another identifier, this time for the instance
<TeTeT> I save that in the variable 'inst' or 'server-<name>' or whatever else I like
<TeTeT> so I can quickly check the status of the instance with $ euca-describe-instances $inst
<TeTeT> if you're out on your own with a single user and a small cloud, it might not make much sense to restrict the output of euca-describe-instances
<TeTeT> but if you have a class full of people, everybody starting and stopping instances, it is useful to only see info on the wanted instance
<TeTeT> once the instance is in state running, one can ssh to it
<TeTeT> ssh -i <identity file> ubuntu@<public or private ip>
<TeTeT> the -i option is needed to use the right private key file, the one that was returned by euca-add-keypair earlier
<TeTeT> whether we need the public or private ip depends on where our client sits
<TeTeT> if we use the front-end as client, which a lot of people will initially do, the private ip is as good as any
<TeTeT> if the client is on your laptop or any other system not hooked up to the cloud's infrastructure, you need to use the public ip
<TeTeT> when you have done all the work needed by the instance, you can terminate it with 'euca-terminate-instances'
<TeTeT> so that covered the basic operation of a UEC cloud
<TeTeT> first install the systems, either by CD or by package
<TeTeT> second get the credentials
<TeTeT> third check if the cloud is operational
<TeTeT> next make an image available
<TeTeT> start an instance of that image
<TeTeT> test via ssh or whatever other service the instance may provide
<TeTeT> but you don't need to stop there, there's plenty to still explore
<TeTeT> like attaching persistent storage devices to the instance
<TeTeT> allocating public ip addresses to the instance
<TeTeT> before we cover the persistent storage, let's see why we have the need for that
<TeTeT> what happens when an instance is started?
<TeTeT> a node controller sooner or later gets tasked by a cluster controller to run an instance
<TeTeT> that instance is of a specific emi, eucalyptus machine image
<TeTeT> the node controller checks a local cache if the emi is there, and if not, the emi is transferred to the node controller via S3
<TeTeT> the node controller then copies the image to an instance directory
<TeTeT> inside the instance directory the image is configured according to the start parameters of euca-run-instances
<TeTeT> e.g. an ssh key is injected
<TeTeT> e.g. a cloud init file is added to the boot sequence of the instance
<TeTeT> some more magic happens and the once Xen-like image is now a KVM image
<TeTeT> and kvm is used to boot the instance
<TeTeT> so in a way eucalyptus and UEC are like kvm on steroids
<TeTeT> when you want to get rid of an instance, you stop it with euca-terminate-instances
<TeTeT> and what happens on the node controller is that the kvm process stops _and_ the runtime directory containing the image is deleted
<TeTeT> this means that any state in the instance is gone forever
<TeTeT> of course most services require to save some state somewhere. This could be an external database server, or a file server.
<TeTeT> or, as we look at now, EBS, the elastic block storage service
<TeTeT> in EBS a device appears on an instance that is mapped to a file on a storage server, the EBS server
<TeTeT> there are two methods by which EBS is transported, ATA over Ethernet (AoE) or iSCSI. In Ubuntu 10.04 LTS we only have AoE
<TeTeT> this implies a restriction when using 10.04 LTS, as the EBS server has to reside on the same LAN as the node controllers it serves
<TeTeT> as AoE does not route
<TeTeT> with euca-create-volume we can create a new volume for storage on the EBS storage server
<TeTeT> once this has finished, it can be assigned to an instance with euca-attach-volume
<TeTeT> euca-attach-volume <vol id> -i $inst -d <device>
<TeTeT> the device would be sdb or sdc or anything
<TeTeT> but in my experience the instance will just pick the next device name anyhow
<TeTeT> with the device attached to the instance, one can create a partition table, filesystem and whatever on the device
<TeTeT> just as with a regular device
<TeTeT> the nice thing is, that you can snapshot these devices
<TeTeT> e.g. you attach the device to an instance, put the needed data there, detach it and snapshot it
<TeTeT> based on that snapshot you can create new volumes that are clones of the snapshot
<TeTeT> might be nice if you have read only data that you need on all instances of your application
<TeTeT> that's it pretty much for EBS
<TeTeT> lastly I want to cover elastic IPs
<TeTeT> with help of euca-describe-addresses you see which IPs are public in your UEC
<TeTeT> you can either randomly get one or pick one with euca-allocate-address
<TeTeT> then this ip can be assigned to a specific instance with euca-associate-address
<TeTeT> in this way you can make sure that a certain service is always reachable by the same IP, minus the time it takes to do the associate dance
<TeTeT> well, that was all I wanted to cover. Thanks for following. If you have any questions, either use the classbot or you can reach me as TeTeT on #ubuntu-cloud usually
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<m_3> can I type in here yet?
<nigelb> yes :)
<m_3> ok, I'll go ahead and get started
<m_3> Hi, I'm Mark M Mims (hence the m_3)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Node.js/Mongo with Ensemble - Instructors: m_3
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/26/%23ubuntu-classroom.html following the conclusion of the session.
<m_3> I'm on the Ensemble team, working to build out the base set of formulas we have to work with in ensemble
<m_3> I'd like to go into a little detail today about the different kinds of formulas ensemble can work with
<m_3> at this point I hope that you've seen the earlier presentation on 'getting started with ensemble'
<m_3> I'll be demoing stuff in byobu-classroom, where you can watch along
<m_3> you can either open http://ec2-67-202-23-199.compute-1.amazonaws.com/ up in a browser
<m_3> or ssh there directly as "guest" with password "guest"
<m_3> So, how do you use ensemble to deploy a node.js app?
<m_3> the short version (we'll repeat it in more detail)
<m_3> is:
<m_3> $ ensemble bootstrap
<m_3> $ ensemble deploy --repository .. mynodeapp
<m_3> $ ensemble deploy --repository .. mongodb
<m_3> $ ensemble deploy --repository .. haproxy
<m_3> to deploy all the services as we expect
<m_3> here, we have a simple (exposed) mongodb service
<m_3> a node.js app that sits above the mongodb
<m_3> and haproxy that faces the outside world
<m_3> we deploy the services, then relate them:
<m_3> ensemble add-relation mongodb mynodeapp
<m_3> ensemble add-relation mynodeapp haproxy
<m_3> then wait for the relation hooks to complete
<m_3> the outcome of that is shown in the ssh session
<m_3> The point of this talk is that there are two different kinds of services involved here:
<m_3> 1.) "canned" services... like haproxy and mongodb
<m_3> these are services you typically just configure and use
<m_3> they're tailored to your infrastructure through configuration parameters
<m_3> 2.) what I like to call "framework" formulas
<m_3> these are formulas for services that you write
<m_3> i.e., mynodeapp is an application that I keep under my own revision control system
<m_3> a framework formula is a formula that helps you deploy such a service alongside other canned services
<m_3> let's look at an example
<m_3> mynodeapp is a _super_ simplistic node.js app
<m_3> there's a basic http server listening on 0.0.0.0:8000
<m_3> that tracks hits to the page
<m_3> and sticks them in a mongo database
<m_3> easy peasy
<m_3> it reads some information from a config file...
<m_3> that's just simple json
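For concreteness, such a config file might look like this (the key names here are made up for illustration; the real app's keys may differ):

```shell
# Write a minimal JSON config of the kind described (hypothetical keys):
cat > config.js <<'EOF'
{
  "mongo_host": "localhost",
  "mongo_port": 27017,
  "listen_port": 8000
}
EOF
cat config.js
```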
<m_3> I've written a basic ensemble formula to wrap the installation and deployment of that node.js application
<m_3> the metadata.yaml shows where it fits within my overall infrastructure
<m_3> this consumes a db, and provides a http interface
<m_3> I can of course attach monitoring services, remote logging services, etc
<m_3> there's a little cruft in here about formula config
<m_3> that's a new feature that recently landed
<m_3> so all of that should be replaced with 'config-get <param>'
<m_3> the installation of node itself and npm are pretty straightforward
<m_3> try to use packages when you can
<m_3> when you're on HEAD or living closer to the edge, you can pull and install from source if you want
<m_3> that's worth mentioning again...
<m_3> packages are great for stability
<m_3> and often what the ops guys want
<m_3> developers want the bleeding edge all the time
<m_3> ensemble formulas support either
<m_3> so it becomes a policy decision within your group
<m_3> but anyway... I'm pulling my node.js from github
<m_3> you can use any number of other options
<m_3> the key is to install the packages first for the VCS you use
<m_3> note there's a need for some idempotency guards
<m_3> this is most important for long lifecycle services
<m_3> let me pause for a sec for questions...
<m_3> ok, so to summarize the first part of what we're looking at...
<m_3> this is a formula to deploy my node.js app right alongside all the 'canned' service formulas
<m_3> this is the 'install' hook that gets called upon service unit deployment
<m_3> this install hook:
<m_3> loads some configuration
<m_3> then goes about installing
<m_3> node
<m_3> npm
<m_3> and then my application (pulled from github)
<m_3> and then delays any further startup/config until we have the right parameters to fill in from the mongodb service
<m_3> so that's simple enough
<m_3> now, the real magic of ensemble
<m_3> are relations
<m_3> this sets ensemble apart
<m_3> the lines of demarcation between what's specific to a service
<m_3> and what's really specific to the relations between the services
<m_3> here, we see the mongodb-relation-changed hook
<m_3> this isn't called until the mongodb service is related to our mynodeapp service
<m_3> start at the bottom...
<m_3> note that we don't actually start the "mynodeapp" service
<m_3> until we've set the relation parameters
<m_3> this makes sense
<m_3> if we look at the config.js again for the app
<m_3> the default parameters the node.js app is using expect mongodb to be local
<m_3> ok, if you noticed during the install hook, I _do_ have it installed locally
<m_3> but that was just for testing
<m_3> in general, the mongodb service is external
<m_3> so the node.js app would barf if we started it with this info
<m_3> so, the code before starting the service is just to get the right connection information from the db
<m_3> when a relation is created
<m_3> ensemble opens a two-way comms channel between the services through the use of 'relation-get' 'relation-set'
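A mongodb-relation-changed hook built on those tools might look roughly like this. It's a sketch, not the session's actual hook: relation-get is stubbed out so it runs standalone, and the relation key "hostname" and the config keys are assumptions:

```shell
# Stand-in for ensemble's relation-get tool, so the sketch runs standalone;
# a real hook would call relation-get itself:
relation_get() { echo "ip-10-1-2-3.ec2.internal"; }

host=$(relation_get hostname)
# In a real hook, exit 0 if the data isn't set yet; ensemble re-invokes
# the hook when the remote side calls relation-set.
[ -n "$host" ] || exit 0
# Cram the mongodb address into the app's config (hypothetical keys);
# the real hook would then (re)start the service.
printf '{ "mongo_host": "%s", "listen_port": 8000 }\n' "$host" > config.js
cat config.js
```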
<m_3> the particular mongodb service we're using is pretty simplistic
<m_3> let's look at the other end of this relation
<m_3> that's it!
<m_3> when a service connects to the mongodb service, mongodb just sends its connection point
<m_3> a more mature version of this would have access control, ports specified, etc
<m_3> but that's enough info for a first pass at understanding ensemble relations
<m_3> the mynodeapp service just grabs the relation information (here just the host)
<m_3> and crams it in the config file
<m_3> we can go out to the live service and see what the config file looks like
<m_3> first notice the relation status between the mongodb and the mynodeapp services
<m_3> I can ssh directly to one unit
<m_3> and see that the relation hook filled in the ec2 internal address for the mongodb service
<m_3> and the port's just default
<m_3> cool... questions so far?
<m_3> ha, ok... I continue
<m_3> next, let's look at what happens when we fire up additional units
<m_3> ec2's been quite slow today, so we'll wait a bit
<m_3> notice as this comes up that the services 'mynodeapp' and 'mongodb' are already related
<m_3> all we did with ensemble add-unit was to add an additional service unit for the mynodeapp service
<m_3> (horiz scaling)
<m_3> so ensemble spins up a new instance
<m_3> runs the install hook we saw before
<m_3> let's look again for a sec...
<m_3> notice there's nothing in here that would depend on another service unit
<m_3> we pulled some config from the formula
<m_3> we installed stuff that's going to be in common with all instances of our node.js app
<m_3> the relation is the only real information that the node.js app "nodes" or units would share
<m_3> ensemble just calls the relation hooks after installing the new service unit
<m_3> so then this new service unit should get the same relation-specific configuration that the first one had
<m_3> (and cram it in config.js for the app)
<m_3> ugh... I thought for sure I rambled enough for ec2 to catch up
<m_3> well anyway, while we're waiting, we can test a couple of things
<m_3> note that you can hit the first mynodeapp service unit
<m_3> "mynodeapp/0"
<m_3> on machine 2
<m_3> or ec2-72-44-35-213.compute-1.amazonaws.com
<m_3> and see the node.js app in action
<m_3> (remember we're on port 8000)
<m_3> you can try http://ec2-72-44-35-213.compute-1.amazonaws.com:8000/
<m_3> as well as http://ec2-72-44-35-213.compute-1.amazonaws.com:8000/hits
<m_3> should give us some additional traffic in the db
<m_3> this is open in this case
<m_3> it wouldn't be in real life
<m_3> ensemble has features to control ec2 security groups
<m_3> so we'd have this webservice exposed on port 8000 only on the internal ec2 interface
<m_3> you'd have to go through haproxy for public access
<m_3> hey, ok, our new mynodeapp service unit is up
<m_3> mynodeapp/1 on machine 4
<m_3> note that you can hit that one independently too
<m_3> http://ec2-184-72-92-144.compute-1.amazonaws.com:8000/
<ClassBot> There are 10 minutes remaining in the current session.
<m_3> SpamapS asks: "how do we get to the port if I haven't exposed that port?"
<m_3> answer is it's all open at the moment for the class
<m_3> ok, so let's check out the config file in the new node and make sure that the relation added the right info
<m_3> boom
<m_3> so when we hit either node, they're writing the hits into the common external mongodb datastore
<m_3> sorry, I keep using bash aliases...
<m_3> es='ensemble status'
<m_3> ok, so what's missing here?
<m_3> notice that we have a haproxy service up
<m_3> but no relations
<m_3> adding the relation will call hooks to relate the webservice http interface provided by each mynodeapp to the haproxy balancer
<m_3> really it just tells haproxy its hostname and port (8000 here)
<m_3> the haproxy service is written so that when a webservice joins, it adds it to the roundrobin queue
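The join-then-rotate behaviour just described can be sketched in a few lines. This is an illustrative model of round-robin balancing, not haproxy's actual implementation:

```python
import itertools

class RoundRobin:
    """Toy round-robin rotation over registered backends."""

    def __init__(self):
        self.backends = []
        self._cycle = None

    def join(self, host, port=8000):
        # Called when a webservice unit joins the relation:
        # remember its host:port and rebuild the rotation.
        self.backends.append((host, port))
        self._cycle = itertools.cycle(self.backends)

    def pick(self):
        # Each incoming request goes to the next backend in rotation.
        return next(self._cycle)

lb = RoundRobin()
lb.join("mynodeapp-0.internal")
lb.join("mynodeapp-1.internal")
print([host for host, port in (lb.pick() for _ in range(4))])
# → ['mynodeapp-0.internal', 'mynodeapp-1.internal', 'mynodeapp-0.internal', 'mynodeapp-1.internal']
```

When a third mynodeapp unit joins, `join` is called again and the rotation simply grows, which is the point of relating each new unit to the balancer.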
<ClassBot> There are 5 minutes remaining in the current session.
<m_3> ok, almost done
<m_3> now, we can hit ec2-174-129-107-151.compute-1.amazonaws.com on port 80
<m_3> and it's balancing between the node.js nodes
<m_3> our nodes
<m_3> ok, so to wrap up
<m_3> two types of formulas
<m_3> 1. canned formulas
<m_3> 2. "framework" formulas
<m_3> mongodb and haproxy are examples of canned ones
<m_3> mynodeapp is a framework one
<m_3> the intended use is to fork an example and tailor to your needs
<m_3> thanks all...
<m_3> questions?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: OpenStack: An open cloud infrastructure project - Instructors: ttx
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/26/%23ubuntu-classroom.html following the conclusion of the session.
<ttx> hey!
<ttx> Hi everyone
<ttx> My name is Thierry Carrez, I'm the release manager for the OpenStack project
<ttx> Raise your hand in #ubuntu-classroom-chat if you're here for the OpenStack Project intro session !
<ttx> I'm also an Ubuntu core developer, working on Ubuntu Server, time permitting
<ttx> In this session I'd like to introduce what OpenStack is and how you can participate
<ttx> You can ask questions in #ubuntu-classroom-chat, like this:
<ttx> QUESTION: what is OpenStack ?
<ttx> I'll review them periodically.
<ttx> So, what is OpenStack ?
<ttx> It's an open source project that creates software to run a cloud infrastructure.
<ttx> It allows you to be an IaaS (infrastructure as a service) provider
<ttx> In short, it's software you can run to create a competitor to Amazon Web Services or the Rackspace Cloud
<ttx> But it also scales down so that you can use it to manage your private SMB computing resources in a cloudish way.
<ttx> The project is built around 4 "opens":
<ttx> Open source: The software is Apache-licensed, and the whole project is open source
<ttx> So no "enterprise edition" that keeps the tasty features
<ttx> Open design: We have open design summits, much like UDS, every 6 months
<ttx> (The next one is in Boston, October 3-5 !)
<ttx> Also everyone can propose a feature, we use Launchpad for blueprints... but be ready to implement it if you do :)
<ttx> Open development: We use DVCS (bzr and git) with merge proposals and public comments, so you know why a particular piece of code is accepted or not
<ttx> The code is written in Python, which makes it easier for everyone to read the code, understand it, investigate why it fails and propose patches
<ttx> Open community: We have community-elected project leaders and seats on the project policy board, we do meet weekly on IRC
<ttx> (next meeting in 2 hours in #openstack-meeting !)
<ttx> We have about 100 different developers that have committed code, from more than 20 different companies
<ttx> Questions so far ?
<ttx> No questions, so everyone knows OpenStack already, good good
<ttx> OpenStack is made of several subprojects
<ttx> We have "core projects", that follow all our rules and are part of the official OpenStack release
<ttx> We also have "Incubated" projects, that learn the process and should become core projects one day
<ttx> Finally we have "Related" projects, that are created in the OpenStack community but should not become core projects for one reason or another
<ttx> We used to have a 3-month release cycle, but at the last design summit we decided to switch to a 6-month release schedule, with monthly milestones
<ttx> Milestones can be used to evaluate new features as they land in the projects.
<ttx> (For example we'll release the diablo-3 milestone Thursday !)
<ttx> The 6-month release is aligned with the Ubuntu release cycle, so that you can get the latest OpenStack release in the latest Ubuntu release.
<ttx> Therefore OpenStack 2011.3 (also known as "Diablo") should be released on September 22, and included in 11.10.
<ttx> Questions ?
<ClassBot> rwh asked: does Canonical have plans to launch an AWS competitor based on this?
<ttx> I'm not part of Canonical (anymore) but I don't think so.
<ttx> If there are no other questions on the project itself, let's go into more details in the 3 current core subprojects
<ttx> The first one (and most mature) is Swift (OpenStack Object storage)
<ttx> You can use it to run a raw cloud storage provider, very much like Amazon S3.
<ttx> It allows your users to PUT and GET small or very large binary files, and get them properly replicated and available.
<ttx> It's very mature. It's actually the code that runs Rackspace "Cloud files", but also Internap or Nephoscale "Cloud storage".
<ttx> It scales horizontally using a share-nothing architecture, so you can add hosts to cope with the load
<ttx> You deploy it using commodity hardware, and Swift replication takes care of the redundancy of your data
<ttx> So you don't need expensive SANs or RAID setups.
<ttx> Questions ?
<ClassBot> koolhead17 asked: any duration for support cycle? like ubuntu?
<ttx> OpenStack is being distributed through downstream distributions, like Ubuntu
<ttx> They all have their own support cycles, we don't have a particular one
<ttx> At this point we don't backport anything but critical issues and security issues into past releases
<ttx> for for long term support I advise that you opt for one of those distributions of OpenStack.
<ttx> If there are no more questions, we'll switch to Nova
<ttx> Nova (Cloud Compute) is the biggest of the three current core projects.
<ttx> A few hours ago soren gave you the keys to it, I'll repeat the essentials now
<ttx> You can use it to run a cloud compute provider, like Amazon EC2 or Rackspace Cloud Servers
<ttx> It allows your users to request a raw virtual machine (based on a disk image from a catalog)
<ttx> So for example users can request a VM with Ubuntu Server 11.04 on it, then access it using SSH, customize and run things on it.
<ttx> The architecture is a bit complex. We have several types of nodes, and you can run any number of each of them to cope with load
<ttx> We have API nodes that receive requests, Scheduler nodes that assign workers...
<ttx> Network nodes that handle networking needs, Compute and Storage workers that handle VMs and block storage.
<ttx> Everything communicates using a RabbitMQ message queue
<ttx> It's *very* modular, so you have to choose how you want to deploy it.
<ttx> For example it supports 8 different virtualization technologies right now:
<ttx> QEMU, KVM, UML, LXC, Xen, Citrix XenServer, M$ HyperV and VMWare vSphere
<ttx> Ubuntu by default should concentrate on two of those: KVM and LXC
<ttx> Nova is still fast-moving, but it's used in production at NASA Nebula cloud
<ttx> and it will be deployed this year to replace current Rackspace Cloud Servers software
<ttx> Questions ?
<ClassBot> koolhead17 asked: how feasible it is to use openstack in production environment due to the nature of rapid change in architecture/feature every new release?
<ttx> Depends on the component.
<ttx> Swift is very mature and slow moving now, you can certainly easily deploy it in production
<ttx> Nova is still changing a lot, though that will calm down after Diablo. At this point running it in production requires a good effort to keep up
<ttx> Some deployment options are more tested, and therefore more stable than others
<ttx> So it is feasible... and will become more feasible as time passes
<ttx> Any other question on Nova before we switch to Glance ?
<ttx> ok then
<ttx> Glance (OpenStack Image service) is the latest addition to the core projects family
<ttx> It's a relatively-simple project that handles VM image registration and delivery
<ttx> So your users can use it to upload new disk images for use within Nova
<ttx> It supports multiple disk formats and multiple storage backends
<ttx> So disk images end up being stored in Swift or S3
<ttx> (or locally if you can't afford that)
<ttx> As far as stability is concerned, I'd say it's on par with Nova, and maturing fast.
<ttx> Questions on Glance ?
<ttx> hah! crystal clear, I see
<ttx> So, what can *you*, Ubuntu user, do with it ?
<ttx> You can try it. We provide several distribution channels...
<ttx> Starting with 11.04, the latest OpenStack version is released in Ubuntu universe
<ttx> (And the "Diablo" release should be in main for 11.10)
<ttx> If you need something fresher (or running on LTS), you can use our PPAs:
<ttx> We have release PPAs (with 2011.2 "Cactus") for 10.04 LTS, 10.10, 11.04 and Oneiric
<ttx> We also have "last milestone" PPAs and "trunk" PPAs for the same Ubuntu releases
<ttx> (The "trunk" PPA contains packages built from the latest commit in the trunk branch !)
<ttx> See http://wiki.openstack.org/PPAs for more details
<ttx> You don't need a lot of hardware to test it
<ttx> You can actually deploy Nova + Glance on a single machine
<ttx> It's easier to use a real physical machine to be able to test virtualization correctly
<ttx> Swift would prefer a minimum of 5 servers (to handle its redundancy)...
<ttx> but you can fake it to use the same machine, or you can (ab)use virtual machines to test it
<ttx> That's about all I had in mind for this presentation, so we can switch to general Q&A
<ttx> Feel free to join our community: test, report bugs, propose branches fixing known issues
<ttx> The best way to ensure it's working for *your* use case is actually to try it and report bugs if it doesn't :)
<ClassBot> _et asked: I am confused by the flow of control in the architecture diagram here http://ur1.ca/4sni7. can you shed more light on how each component interacts with each other and how comm to and from outside flows?
 * ttx looks
<ttx> aw
<ttx> ok, let's take an example
<ttx> A call to create a server
<ttx> run_instance in EC2-talk
<ttx> You use a client library or CLI that will talk to the EC2 or the OpenStack API
<ttx> your request will be received by a nova-api node
<ttx> You can run any number of those, in order to cope with your load
<ttx> nova-api will place your request in the queue
<ttx> nova-scheduler picks up requests from the queue
<ttx> again, you can run any number of them in order to cope with your load
<ttx> scheduler looks at your request and determines where it should be handled
<ttx> here, it determines which nova-compute host will actually run your VM
<ttx> scheduler places back on the queue a message that says "compute node X should run this query"
<ttx> nova-compute node X picks the message up from the queue
<ttx> it retrieves disk image from Glance, starts your VM
<ttx> updates the DB so that if you run describe_instances you can see it's running, etc.
<ttx> all the other arrows are for other types of requests
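The run_instance flow walked through above can be modelled with a plain in-process queue. This is a toy sketch only: the queue stands in for RabbitMQ, the functions stand in for the real nova-api, nova-scheduler and nova-compute daemons, and the host-selection step is deliberately trivial:

```python
import queue

bus = queue.Queue()                  # stands in for the RabbitMQ queue
compute_hosts = ["compute-1", "compute-2"]

def api_receive(request):
    # nova-api: accept the call and drop it on the queue
    bus.put({"op": "run_instance", "request": request})

def scheduler_step():
    # nova-scheduler: pick up the request and decide which host runs it
    msg = bus.get()
    host = compute_hosts[0]          # trivial choice; real schedulers are pluggable
    bus.put({"op": "run_on", "host": host, "request": msg["request"]})

def compute_step():
    # nova-compute: fetch the image (from Glance), boot the VM, update the DB
    msg = bus.get()
    return f"{msg['host']} boots a VM for request {msg['request']!r}"

api_receive("webserver")
scheduler_step()
print(compute_step())
# → compute-1 boots a VM for request 'webserver'
```

Because every hop is a message on the queue, you can run any number of each node type and they all just compete for messages, which is why the architecture scales by adding nodes.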
<ClassBot> _et asked: scheduler decides based on what the load balancer says?
<ttx> no, scheduler decides based on the scheduler algorithm you use
<ttx> there are multiple scheduler algorithms available, and you can easily plug your own
<ttx> the default ChanceScheduler is just a round robin
<ClassBot> BuZZ-T asked: are there any possibilities to automatically scale instances (e.g. by load)
<ttx> That would be the role of tools above the IaaS layer
<ttx> ScalR, Rightscale, etc all provide tools that monitor your IaaS and react accordingly
<ttx> but then you tread into PaaS territory
<ttx> and OpenStack is already very busy trying to cover the IaaS space
<ttx> http://wiki.openstack.org/ is a good starting point for all things
<ttx> We also hang out in #openstack (support) and #openstack-dev (development) on Freenode !
<ttx> Any other question ? Or let me know if any of my answers was incomplete or confusing
<ClassBot> _et asked: how are IPs assigned? does compute do it thru the scheduler? or is it the scheduler all by itself? (if you've time)
<ttx> You define networks with scores of available IP addresses, and the network node just picks one that is available
<ttx> There is an ambitious project to separate the network components into a set of projects
<ttx> to achieve complete and complex network virtualization
<ttx> Check out Quantum, Melange and Donabe in Launchpad
<ttx> If there are no other questions, I'll close this online event, since this is the last session
<ttx> kim0 is not around, but asked me to thank all attendees
<ttx> For further questions, feel free to hit #ubuntu-cloud
<ttx> where most of the people that presented hang out anyway :)
<ttx> Thank you all for making Ubuntu the best platform for the cloud, and on the cloud
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/07/26/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2011-07-27
<EULUISM> Hi
<EULUISM> I have a problem with 11.04
<EULUISM> failed to get i915 symbols, graphics turbo disabled error on boot
<EULUISM> This have a fix?
<sagaci> EULUISM, might be better to ask in #ubuntu
<EULUISM> thanks sagaci
<midasjohn> Hello
<midasjohn> hello
<highwater> can somebody help me with a vey basic ssh question
<mhall119> probably
<mhall119> if you'd stuck around
<erkan^> asfyxia, !
<erkan^> :p
#ubuntu-classroom 2011-07-29
<jsjgruber> quit
<jsjgruber> quit
<linux> hi
#ubuntu-classroom 2011-07-30
<yender_> hola
#ubuntu-classroom 2011-07-31
<rad0mnic> hmmm too quiet to be a classroom
#ubuntu-classroom 2012-07-23
<kayke> hello?
<kayke> ma111
<JoseeAntonioR> Hello, kayke, anything we can help you with?
<sd-praktikanten> hi there
#ubuntu-classroom 2012-07-26
<Guest60693> hi
<Guest60693> I am a Java and Android Developer how can I contribute to Ubuntu Development
<Guest60693> ?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Current Session: Introduction to being an IRC operator - Instructors: Myrtti
<AlanBell> Hello everyone
<AlanBell> today we have Myrtti with an introductory session on being an operator
<IdleOne> we can't send to channel
<AlanBell> covering all sorts of interesting commands
<Myrtti> so
<Myrtti> hiya everyone, if everyone who is participating could give me a hands-up in the form of a highlight in the chat
<Myrtti> that is, if you're not already joined in #ubuntu-classroom-chat, join and mention my nickname
<Myrtti> I'm horribly late and unprepared this time, so I don't have channel access set up for our playground channel yet
<Myrtti> so bear with me
<Myrtti> !moderate
<Myrtti> alright, that doesn't work then
<Myrtti> nevermind then
<Myrtti> so
<Myrtti> I hope you've already registered your nickname
<Myrtti> if you haven't, people in #freenode are happy to help
<Myrtti> So let's start with dirty basics. I'm using irssi myself but this short primer is intended to be usable no matter what client you are using.
<Myrtti> If you're planning to use some other client, like XChat, connect to freenode with the GUI tools, or wait a few minutes and I'll show how you can connect with the commands. I'll come back to how you can autoidentify to NickServ later on. I know many people like to use pidgin or other clients for IRC, but things are considerably easier if you use either irssi or XChat. Bitlbee can help you in the conversion from Pidgin/Empathy to a proper IRC client, and you can ask about it later on; I can help with it a bit and I'm sure there's plenty of others who can help with it better than I can.
<Myrtti> we're on a strict schedule so I hope you can hold direct questions to me until the end of the session
<Myrtti> Also, if you notice a mistake in what I'm teaching to others or you have a suggestion on how to do things better, feel free to ask!
<Myrtti> for reference, this session is an abbreviated version of the earlier session
<Myrtti> http://irclogs.ubuntu.com/2012/03/12/%23ubuntu-classroom.html#t19:59
<Myrtti> so if you want more hints on how to use irssi, or how to set it up, please refer to that
<Myrtti> I should hope you know how to connect to the network using your preferred client
<Myrtti> since you're here, you should know how to join channels
<Myrtti> and how to message others in private
<Myrtti> let's go to op basics. Before we start the actual business, some philosophical reminders about the job.
<Myrtti> Escalation and catalysing are important things to remember. A person who breaks the guidelines should be warned first, and from there the actions should slowly be escalated if no reasonable result is achieved. Usually the route goes *warning* -> *mute* -> *remove* -> *removeban*.
<Myrtti> Freenode philosophy has an excellent essay about catalyzing, I wholeheartedly suggest everyone should read it. http://freenode.net/catalysts.shtml
<Myrtti> The main point to be remembered is that mute/remove/ban isn't supposed to be considered a punishment, but rather a way of preventing more harm done. It should be possible for every single ban to be negotiated and resolved, if not by an op, then by the escalation process.
<Myrtti> mind you - this doesn't mean that you need to entertain trolls.
<Myrtti> if you are unsure how to act on people who appear to be acting like a troll, negotiate with other ops
<Myrtti> or if none are available, with a freenode staff member - possibly in private
<Myrtti> (we can be contacted in pm at any time without asking permission - although we do sleep and work occasionally)
<Myrtti> Trolls don't need to be fed, so don't feed them. The cases where interacting with habitual trolls has benefited anyone are as rare as hens' teeth
<Myrtti> whatever you do... Remember: Everything that applies for normal user applies for supporter https://wiki.ubuntu.com/IrcGuidelines. Everything that applies for supporter of the channels applies for you https://wiki.ubuntu.com/IRC/SupportersGuide.
<Myrtti> And most of all, both the normal and Leadership Code of Conduct applies to you as well. http://www.ubuntu.com/project/about-ubuntu/conduct and http://www.ubuntu.com/project/about-ubuntu/leadership-conduct. Behave accordingly. This session will not go into depth with these documents, so we'll move on to the actual training part.
<Myrtti> if you have any questions, you can actually ask them now if you wish in the -chat, I'll answer them in FIFO order at the end
<Myrtti> *IF* I can.
<Myrtti> ;-)
<Myrtti> unlike last time where we practiced the commands while I'm lecturing, this time we'll save that fun until the end
<Myrtti> so we can clear out this channel for the next users in time
<Myrtti> As per freenode recommendations, the IRC team has advised ops to op up (and stay opped) only when needed. This encourages users not to ask for support specifically from the ops (in PM or publicly) and, at least in theory, will facilitate non-op users catalysing situations equally to the actual ops, creating a more equal atmosphere.
<Myrtti> At the end of the lesson I will give a few aliases on how to automate opping up/down for certain commands. There are also scripts that will do the same functionality. About those later on. Now for the actual commands.
<Myrtti> most of the op duties require interaction with ChanServ, because Chanserv manages the channel accesslists and what you can and can't do
<Myrtti> if you wish to participate to the sandpit part at the end, please raise your hand in the -chat now, and I'll set up your access for the playpit
<Myrtti> because the default status for an op on any Ubuntu IRC channel is to be unopped unless needed, we need to first op up
<Myrtti> /msg chanserv op #ubuntu-sandpit nickname
<Myrtti> Let's follow the escalation route and start with mute. Mute is a way to prevent users from sending to channel and/or others from seeing the text they send. Mute can be set with
<Myrtti> /mode +q nickname*!*@*
<Myrtti> once you've opped up
<Myrtti> This prevents the user from sending messages to the channel. If the channelmodes include +z, then people opped can see what the muted say. We will return on what all the bits are on the command we used later on when we discuss bans more.
<Myrtti> Some of you may be familiar with /kick - in Ubuntu IRC channels we usually use the remove command. Some IRC clients, like irssi, don't include this command at all and don't pass commands they don't recognize to the server directly, so you have to give it as a direct command to the server with /quote. Thus, instead of /kick we use /quote remove.
<Myrtti> Remove is used for a few reasons, the most important one being that IRC clients usually do not include so-called "autorejoin on kick" functionality. As the old proverb of the jungle says: "Kick is not an invite" - if we've decided to remove you from the channel for one reason or another, it is unlikely that things change in the split second it takes you to rejoin.
<Myrtti> Combined with a kick or remove, ban is the next up tool. Bans are set up just like quiets are, but as ban is available in all networks, users know how to bypass it by changing their nicknames and rejoining the channel. This is known as banevasion.
<Myrtti> If a banevader is cloaked and is persistent enough in evasion, freenode may revoke their cloak and/or not issue a cloak, as it would be possible to use the cloak as a tool of banevasion. Your client should know how to set bans based only on nicknames by using the /ban command.
<Myrtti> /ban nickname
<Myrtti> If you are unfamiliar with what a cloak - also known as a vhost on other networks - is, please see http://freenode.net/faq.shtml#cloaks . A cloak can be used either as a way to just hide your hostname part, or to show your affiliation to a project, and through that it often grants rights to certain actions on channels.
<Myrtti> Bans are more generally set against a hostmask, that is: nickname!username@hostmask, or more rarely against a username or a realname. We will touch only the most common usecase of the aforementioned kind; if you need to ban someone by username or realname, ask for help when the need arises. When using a hostmask for banning people, please use appropriate * or ? wildcards.
<Myrtti> For example, nickname?!?username@hostmask, nickname?!*@hostmask, *!*@hostmask, or different ip ranges. Cloaks can be wildcarded the same way IP addresses and hostmasks can, for example *!*@dsl-hkibrasgw*.dhcp.inet.fi would ban everyone connecting to IRC from the Helsinki gateway of Sonera, one of the biggest ISP's in Finland, while *!*@*.staffs.ac.uk bans everyone from Staffordshire University.
<Myrtti> When using wildcards, try to include the smallest possible range that you think will prevent the user from evading their ban. If you do end up setting a ban on a large range of addresses then make it a banforward to #ubuntu-ops so anyone accidentally caught by it can be helped around it (more on banforwards coming up)
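Hostmask wildcards behave like shell globs, so Python's fnmatch can illustrate which users a given banmask would catch. This is only an illustrative sketch of the matching rules (the server does the real matching, and fnmatch's [seq] syntax has no IRC equivalent):

```python
from fnmatch import fnmatch

# Broad mask: catches everyone from one university's network.
ban = "*!*@*.staffs.ac.uk"
print(fnmatch("alice!~ident@pc12.staffs.ac.uk", ban))   # → True
print(fnmatch("bob!~ident@example.com", ban))           # → False

# Narrower mask: one nick/ident pair on any host.
narrow = "alice!~ident@*"
print(fnmatch("alice!~ident@pc12.staffs.ac.uk", narrow))  # → True
```

Playing with masks like this before setting them is a cheap way to check you've picked the smallest range that still catches the evader.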
<Myrtti> One thing worth noting is the username field, the *!username@* part. If it does not start with a tilde ~, the user connects from a computer or server running identd. identd operates by looking up specific TCP/IP connections and returning the user name of the process owning the connection.
<Myrtti> when a user is connecting from a host running identd, the user can't change the username on their clients and it is safe to assume that while they are connecting from that machine, their username stays the same.
<Myrtti> Banforwards are used to guide users to other channels. Their main uses in Ubuntu IRC channels are to guide people to the #ubuntu-ops channel, or if a user has a client that is misbehaving disruptively (disconnecting and rejoining channel on quick succession in a way which is disturbing the discussion or channel itself) to ##fix_your_connection.
<Myrtti> if you banforward someone to ##fix_your_connection, join the channel yourself to monitor when their connection stabilises
<Myrtti> Banforwards are set just like normal bans, but at the end of the banmask the channel you want the user to be forwarded to is appended by $<channelname>, for example $#ubuntu-ops. Since some clients use autorejoin on kick, this is one of the usecases where using kick instead of remove is valid. Just remember to set the banforward *before* kicking. The banforwards are set with
<Myrtti> /mode +b nick!username@hostmask$##fix_your_connection
<Myrtti> Run-by-trolling/spamming attack participants can be unbanned after a few hours without discussing it with them, as the nicknames, usernames and often IPs are throwaway ones or drone machines and are unlikely to be used again, and freenode usually responds to big-scale multiple-channel attacks with K-lines (network bans).
<Myrtti> It is important to try to keep the ban list as short as possible. Otherwise, unbanning should be done only after discussing with the banned person and telling them why they were removed and banned from the channels, either in PM or #ubuntu-ops. Unbanning can be done with
<Myrtti> /mode -b nickname!username@hostmask
<Myrtti> Unmuting is -q.
<Myrtti> There is a bit more to being an op, but usually those situations are such that more than one operator are present and can help you. Please do ask for help.
<Myrtti> You can unop yourself with two methods, one by doing it directly yourself and the other by asking chanserv to do it.
<Myrtti> /mode -o nickname
<Myrtti> /msg chanserv op #ubuntu-sandpit -nickname
<Myrtti> are we still hanging on?
<Myrtti> During the years of being an op in Ubuntu IRC channels, I've used both scripts and aliases to interface with nickserv and chanserv and to perform op tasks. For years I've used only aliases instead of scripts, as I can better monitor and fix what they do. These aliases are for irssi, but you can modify them for XChat quite easily. There are some instructions on how at https://toxin.jottit.com/xchat_user_commands; irssi $C is %c in XChat, irssi $N is %n in XChat, $0 is %1 in XChat and so on.
<Myrtti> I will post these aliases on my website and I'll give the link after I've explained them here first and finished the tutorial.
<Myrtti> It is perfectly ok to use scripts instead of these aliases too. Other participants can later give you links to their scripts, tips and thoughts.
<Myrtti> sadly inputting aliases isn't as simple or textbased in XChat as it is in irssi
<Myrtti> /alias NS /^msg nickserv
<Myrtti> How do aliases work? An alias sets a shortcode for combining or renaming commands. For example this one creates an alias called NS, which is /^msg nickserv - /^msg directs the command in irssi to the status window (window 1) and doesn't log the messages if you've set it to log all your private messages. Aliases can use parameters that you give while using the alias, or which irssi/XChat takes from the environment: $N is your nick, $C is the channel you're on, and $0 and $1 in irssi ($1 and $2 in XChat) are the next parameters.
<Myrtti> /ns and /nickserv are also available as server-side commands, so clients that pass unknown commands to the server (which is most of them, not including irssi) will work with them by default.
<Myrtti> other nickserv aliases are
<Myrtti> /alias NSHELP /^msg nickserv help
<Myrtti> /alias NSGHOST /^msg nickserv ghost
<Myrtti> /alias NSIDENTIFY /^msg nickserv identify
<Myrtti> /alias NSINFO /^msg nickserv info
<Myrtti> /alias NSRELEASE /^msg nickserv release
<Myrtti> these should be quite self-explanatory
<Myrtti> /alias BANS /mode +b;/mode +q
<Myrtti> this command lists all the bans and quiets on the channel you are on, if you're allowed to see them by the channel flags.
<Myrtti> /alias BANSEARCH /msg ubottu @bansearch
<Myrtti> /alias BANLOGIN /^msg ubottu @login;/^msg ubottu @btlogin
<Myrtti> if you've got access to ubottu's ban database, you might find these aliases useful. The first one searches the database, but isn't too reliable. The latter combines both login commands: the one you need for the bot to recognise you, and the second which gives you a link to the bantracker.
<Myrtti> then we'll move to ChanServ aliases
<Myrtti> /alias CS /^msg chanserv
<Myrtti> /alias CSHELP /^msg chanserv help
<Myrtti> /alias CSACCESS /^msg chanserv access $C list
<Myrtti> this command gives you the access list of the channel you are on
<Myrtti> /alias CSINFO /^msg chanserv info
<Myrtti> /alias CSOP /^msg chanserv op $C $0
<Myrtti> this command is short for the command we used at the beginning of the session to op up, or, if given a nickname at the end, to op someone else. Usage:
<Myrtti> /csop nickname
<Myrtti> /alias CSDEOP /^msg chanserv op $C -$0
<Myrtti> and this is the deop command
<Myrtti> /alias CSMODE /^msg chanserv op $C $N;/wait 2000;/mode $0;/^msg chanserv op $C -$N
<Myrtti> this command will help you set channelmodes
<Myrtti> /alias CSINVITE /^msg chanserv op $C $N;/wait 2000;/invite $0;/^msg chanserv op $C -$N
<Myrtti> will invite someone to a channel you're on, if you're an op
<Myrtti> /alias CSTOPIC /^msg chanserv op $C $N;/wait 2000;/topic $0-;/^msg chanserv op $C -$N
<Myrtti> and this will help you change the topic. Please see /cshelp TOPICAPPEND and TOPICPREPEND for other ways of getting ChanServ help you with setting the topic.
<ClassBot> There are 10 minutes remaining in the current session.
<Myrtti> then to the actual opping stuff
<Myrtti> /alias CSMUTE /^msg chanserv op $C $N;/wait 2000;/mode +zq $0;/^msg chanserv op $C -$N
<Myrtti> this alias helps you mute
<Myrtti> or rather, with channel management on a bigger scale
<Myrtti> /alias CSREMOVE /^msg chanserv op $C $N;/wait 2000;/quote remove $C $0 :$1-;/^msg chanserv op $C -$N
<Myrtti> /alias CSREMOVEBAN /^msg chanserv op $C $N;/wait 2000;/quote remove $C $0 :$1-;/ban $0;/^msg chanserv op $C -$N
<Myrtti> Remove and Removeban aliases:
<Myrtti> /csremove nickname reason
<Myrtti> /csremoveban nickname reason
<Myrtti> and for canned, quickly usable reasons
<Myrtti> /alias CSR /^msg chanserv op $C $N;/wait 2000;/quote remove $C $0 :Please see https://wiki.ubuntu.com/IrcTeam/AppealProcess if you feel mistreated;/^msg chanserv op $C -$N
<Myrtti> /alias CSRB /^msg chanserv op $C $N;/wait 2000;/quote remove $C $0 :Please see https://wiki.ubuntu.com/IrcTeam/AppealProcess if you feel mistreated;/ban $0;/^msg chanserv op $C -$N
<Myrtti> of course you can modify the canned responses to whatever you like, or make new ones.
<Myrtti> Note that there are no quickaliases for unbanning, unmuting or doing banforwards as I find it better to do those manually without aliases.
<Myrtti> irssi aliases: http://myrtti.fi/irssialiases
<Myrtti> xchat aliases that you can't sadly input as easily as you would in irssi http://myrtti.fi/xchatalias
<Myrtti> this is the end of the scripted show, those of you who wish to practice how to use the commands by kicking, banning and fooling around can join #ubuntu-sandpit, which I shall set up for our use
<Myrtti> I will also invite you to continue whatever discussion we are in the middle of as the hour turns, there
<ClassBot> There are 5 minutes remaining in the current session.
<Myrtti> IdleOne asked earlier in the other channel what to do with persistent trolls that waste your hard drive space by filling it with logs
<Myrtti> my personal favourite is ignoring them in one way or another
<Myrtti> irssi even has a script that enables you to ignore someone while still directing their input into a logfile
<Myrtti> (if you prefer to still keep them filling your harddrive in log files)
<Myrtti> that is, if they are trolling you in pm
<Myrtti> just remember, not to feed the trolls
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2012/07/26/%23ubuntu-classroom.html
<Myrtti> le finis!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<Myrtti> thank you everyone, and if you have questions I'm more than happy to answer them in the channels mentioned in the session, or in #ubuntu-irc
<x1k> thanks!
<Myrtti> huh. I actually thought that there'd be another session by someone else right after
<Myrtti> oh well.
<AlanBell> thanks Myrtti, excellent class and I know the logs will be referred to for many years to come
<AlanBell> Myrtti: that ended up in a separate channel #ubuntu-on-air
<Myrtti> ah right
#ubuntu-classroom 2012-07-28
<jumpy> hi all
<jumpy> am I interrupting?
<jumpy> I have a small question about starting my desktop
<JoseeAntonioR> jumpy: Please, join #ubuntu for support.
<jumpy> thanks!
