#ubuntu-classroom 2007-03-19
<frenzy> jrib, hello
#ubuntu-classroom 2007-03-20
<Paddy_EIRE> could someone help me create a .deb out of this package http://qflash.sourceforge.net/webpage/
#ubuntu-classroom 2007-03-22
* Starting logfile irclogs/ubuntu-classroom.log
<soundray> Hey thepars, shall we invite ardchoille?
<thepars> ok
<thepars> if ardchoille won't, it's fine
<nalioth> ruh roh, it's ardchoille
<thepars> lol
<ardchoille> :)
<soundray> duck and take cover!
<ardchoille> Who's explaining sudo?
<soundray> thepars, most unixes have a root user who's allowed to do everything.
<soundray> So does ubuntu, but the root account is locked by default.
* nalioth keeps an eye out
<thepars> ah i see
<nalioth> just like OSX
<soundray> With sudo, you can temporarily assume the privileges of the root user, without actually logging in as such.
<soundray> This has major advantages for security.
<nalioth> yes, first among them, there is no 'root' user account to start brute forcing
<ardchoille> I have one major advantage I'd like to explain later.
<soundray> ardchoille: why not now?
<ardchoille> Well, nevermind, nalioth just did it
<thepars> lol
<ardchoille> Also, you can't brute force the user accounts if you don't know the usernames.
<soundray> What's more, the attacker will exhaust him- or herself... are there girl crackers anyway?
<thepars> wouldn't be surprised :P
<nalioth> girls usually have better things to do than brute force a system
<ardchoille> thepars: The items in your home folder are editable by you at any time. Most of the rest of the system requires root access for editing, and in some cases for viewing. This is where sudo comes in.
<ardchoille> "sudo appname" runs the app "appname" as if it were launched by the root user.
<nalioth> sudo = Super User DO
<soundray> thepars: does this clear things up so far?
<ardchoille> In Ubuntu, it's best to launch gui apps, if root access is needed, with gksudo rather than sudo
<thepars> yeah i'm understanding it so far
<nalioth> ardchoille: use the factoids (that's what they're for)
<nalioth> !gksudo
<ubotu> If you need to run graphical applications as root, use  gksudo , as it will set up the environment more appropriately. Avoid ever using  sudo <GUI-application> 
<thepars> i'm just really new to linux and have been having it handed to me on a plate with windows
<nalioth> !kdesu
<ubotu> In KDE, use  kdesu  to run graphical applications with root privileges when you have to. Do *not* use  sudo <GUI application> ; you can muck up your permissions/config files. For what to use in GNOME, see !gksudo
<ardchoille> nalioth: Ah, good point
<ardchoille> thepars: It is best to use sudo and not log in as root at all. Enabling the root account makes the system less secure, as nalioth pointed out earlier.
<nalioth> thepars: Ubuntu was designed to use the sudo model.  please don't enable the root account
<soundray> thepars: try 'sudo -i', it gives you a shell with root privileges (dangerous, for example if you encounter a file called '-rf /' and attempt to remove it)
<thepars> 2 secs i use ubuntu on my upstairs pc :P
<soundray> thepars: when you have time, doesn't have to be now
<thepars> o ok i'll write it all down and do it all later
<nalioth> you'll rarely need 'sudo -i' as 'sudo commandname' works fine most all the time
<ardchoille> If anyone can see how the RootSudo wiki page can be enhanced, I'd be glad to go in and make the changes.
<thepars> all i'll be doing is allowing myself to unzip files into a folder
<soundray> thepars: unzip them into your /home/steskel/ folder, then you won't need sudo
<ardchoille> thepars: That should only require cd'ing to the folder and running "sudo tar <options> file"
<ardchoille> thepars: soundray has a good point.. you can put gimp brushes in ~/.gimp-2.2/brushes and they should work.
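The unpack-into-your-home-directory approach the channel recommends can be sketched as a small script. The archive name and contents are fabricated for illustration, and a throwaway directory stands in for $HOME so the example is self-contained:

```shell
#!/bin/sh
# Sketch: unpack an archive into a per-user GIMP brushes folder.
# No sudo needed because everything lives under the home directory
# (simulated here with a throwaway directory).
set -e

HOME_DIR=$(mktemp -d)                  # stand-in for $HOME
BRUSHES="$HOME_DIR/.gimp-2.2/brushes"  # per-user brush folder

# Fabricate a small archive to play the role of a downloaded brush pack.
WORK=$(mktemp -d)
echo "fake brush data" > "$WORK/stars.gbr"
tar -czf "$WORK/brushes.tar.gz" -C "$WORK" stars.gbr

# The actual technique: create the folder and extract into it.
mkdir -p "$BRUSHES"
tar -xzf "$WORK/brushes.tar.gz" -C "$BRUSHES"

ls "$BRUSHES"
```

The same extract command pointed at /usr/share/gimp/2.0/brushes/ would need sudo, which is exactly what keeping files in your home directory avoids.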
<nalioth> thepars: rule of thumb: don't use sudo at all (unless you are sure you need to)
<nalioth> thepars: if you are doing stuff outside your home directory, DON'T.
<nalioth> thepars: EVERY program on your system creates a hidden folder in your home directory with YOUR settings in it
<thepars> ah i see
<nalioth> there is no reason at all to modify system files/folders
<thepars> sorry for being stupid
<nalioth> ignorance is not stupidity
<ardchoille> You're not being stupid, you're just learning.. which is what we all had to do at some point.
<nalioth> ignorance is being erased with every second you are here
<ardchoille> :)
<soundray> thepars: unzipping brushes into the general gimp folder (/usr/share/gimp/2.0/brushes/) may be useful if you have lots of other users who also want to use the same brushes in gimp.
<nalioth> stupidity is (unfortunately) mostly incurable
<nalioth> soundray: how many users here on irc run multi user systems?
<thepars> true true hopefully i can be as competent at linux as i am with windows soon-ish :P
<soundray> nalioth: three or four? ;)
<ardchoille> thepars: Well, you're in the right community for it. The Ubuntu community is one of the best I've ever seen.
<thepars> definitely the best i've visited...i just need to get over the embarassment of asking
<nalioth> thepars: the nice thing about *nix is that once you learn something, the knowledge is good forever (unlike microsoft, which changes 'how stuff works' with almost every revision)
<ardchoille> Asking questions and reading material is the only way to learn
<soundray> thepars: do come back and help us help other newbies when you're past that initial stage.
<thepars> i definitely will
<ardchoille> thepars: Indeed, the things you learn can help others later.
<thepars> hopefully my computer science degree will come in handy when i goto uni in a year :P
<soundray> Bet it won't ;)
<thepars> oh well i'm going to go and try it all now. I will be back another time no doubt with some other stupid query....and soundray you're probably right :P
<soundray> Computer science is all about proving whether some problem can be solved with a computer. But on paper.
<soundray> thepars: good luck
<thepars> i know but if i have a better understanding than i do now i should be able to put some of it into practice...hopefully
<thepars> well thanks anyway...great job you guys do!
<thepars> talk to you some other time...bye! :)
<nalioth> tag team information pushers?
<ardchoille> hehe
<ardchoille> I knew soundray would be able to explain it but I wanted to be here to offer any info I felt necessary
<ardchoille> Ubuntu is the first time I ever used sudo, I had my doubts about it but Ubuntu has shown me that there are better ways to do things.
<nalioth> i learned sudo from OSX
<nalioth> when Ubuntu came along, i was already trained.
<soundray> See you guys
#ubuntu-classroom 2007-03-23
<unterfranke> hello
<unterfranke> kubuntu 6.10, when I try to activate hdb in system administration mode - the system reported: mount:/dev/hdb2 can't read superblock
<unterfranke> same thing with hdb1
<jrib> unterfranke: #ubuntu-de may be more helpful
<unterfranke> thx
<RoyB72> anyone here got time for a newbie? trying to compile beryl, and like it says in the install file trying to run ./configure but I get a lot of error messages then it ends
<nalioth> RoyB72: got a pastebinned error log?
<RoyB72> what? been using Kubuntu 2 days...
<nalioth> !paste
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (be sure to give the URL of your paste - see also the #ubuntu channel topic)
<RoyB72> http://paste.ubuntu-nl.org/11739/
<RoyB72> been reading and searching most of the day... no help found
<nalioth> RoyB72: do you have universe and multiverse repos enabled?
<RoyB72> how do I check that?
<nalioth> !multiverse
<ubotu> The packages in Ubuntu are divided into several sections. More information at https://help.ubuntu.com/community/Repositories and http://www.ubuntu.com/ubuntu/components - See also !EasySource
<RoyB72> found it.. and no... they were commented.. lemme fix that..
<RoyB72> k.. uncommented them.. got both universe and multiverse on now..
<RoyB72> what now?
<nalioth> did you update your apt?
<RoyB72> u mean adept?
<nalioth> did you update the package manager? apt-get / adept / synaptic
<RoyB72> :( no.. how do I do that?
<nalioth> there should be a button on adept to "update" or "reload"
<RoyB72> ohh I did do a Fetch updates in adept
<nalioth> ok
<nalioth>  search now for 'xrender'
<RoyB72> found 3 files
<RoyB72> libxrender1   + dbg,dev
<RoyB72> libxrender1 is already installed
<nalioth> install the one ending in -dev
<nalioth> for every error your ./configure throws, it wants the -dev version of the package it whines about
<RoyB72> ahh.. thats nice to know.. thx.. going to write THAT down.. :)
<nalioth> packages ending in -dev are "developmental packages" or "packages required to build other software based on them"
<RoyB72> ohh I thought it was packages used for others to develop on.. hehe
<RoyB72> omg.. now it whines about 9 more packages... :(
<nalioth> find them, look for the -dev ones, install, run ./configure
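nalioth's rule of thumb can be sketched mechanically. The error text below is a fabricated stand-in for real `./configure` output, and the derived package name is only a candidate you would then confirm in your package manager:

```shell
#!/bin/sh
# Sketch of nalioth's rule: a ./configure complaint about a missing
# package usually means "install the matching lib...-dev package".
# The error line here is fabricated for illustration.
set -e

ERROR="checking for xrender... configure: error: Package xrender not found"

# Pull out the library name the configure script is whining about...
LIB=$(echo "$ERROR" | sed -n 's/.*Package \([a-z0-9]*\) not found.*/\1/p')

# ...and suggest the development package to look for (a guess, not a
# guarantee -- verify the exact name in adept/synaptic/apt-cache).
echo "try installing: lib${LIB}-dev"
```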
<nalioth> i'll be around if you have any questions
<RoyB72> thx.. I'll start working then.. u were a big help there
<RoyB72> :(   Requested 'xcomposite >= 0.3' but version of Xcomposite is 0.2.2.2
<nalioth> are you following a beryl how to?
<RoyB72> yea
<nalioth> RoyB72: are there any special repos they suggest?
<RoyB72> repos?
<nalioth> universe and multiverse are software "repositories"
<nalioth> or "repos"
<nalioth> they are where software comes from, both source code and binaries
<RoyB72> all it says is what it needs (to run, not to compile): its plugins and a win-decorator prog like emerald, g-w-d or yawd
<RoyB72> under compiling all there is, is what I've been trying to do... no explanations as to what is NEEDED to compile it
<nalioth> are you in #ubuntu-effects ?
<RoyB72> no.. just here and #kubuntu
<nalioth> join me there please, and we'll see if we can get some answers
<RoyB72> I'm there
<nalioth> RoyB72: did you see the /topic in #ubuntu-effects ?
<RoyB72> yea... but it says for aiglx.. and xgl...
<RoyB72> I need KDE, right?
<jrib> why compile beryl?
<LjL> why *use* beryl? :P
<RoyB72> that's all I got from their site...
* jrib spins cube
* LjL uses mousewheel to move between console tabs fast
<nalioth> jrib: if you have suggestions, i'm sure RoyB72 would like to hear them
<nalioth> i don't run any of that stuff, so i have no clue
<jrib> RoyB72: do you have a reason for compiling beryl?
<RoyB72> want to run it? the cube is one of the reasons, yes.. ;)
<LjL> alright but there's precompiled packages you know
<jrib> RoyB72: ah, but there is no need to compile it.  YOu just use the proper repositories
<LjL> checked out the topic in #ubuntu-effects?
<LjL> crammed with howtos
<RoyB72> hmmm... brb then... (more reading) lol
<RoyB72> right.. (omg with the info) I run Kubuntu 6.06.1 64bit... what am I supposed to install and what do they mean... : XGL and AIGLX?
<nalioth> RoyB72: if they are busy in here, #ubuntu-effects can answer those questions
<nalioth> RoyB72: the more you learn, the more you learn that you need to learn
<nalioth> RoyB72: i'd not upgrade unless you see a very good reason to do so
<nalioth> this is not windows.
<RoyB72> nalioth: well, I'm just playing around for now so nothing will be lost... and I would really like to try beryl
<nalioth> RoyB72: did you follow the upgrade link ubotu sent you?
<RoyB72> yea... but nothing seemed to have happened when I tried the command: gksu "update-manager -c"
<nalioth> open a terminal please (or konsole)
<RoyB72> have 1 open
<nalioth> type "kdesu update-manager -c" <enter>
<RoyB72> get some lines... 1: X Error: BadDevice, invalid or uninitialized input device 166
<RoyB72> Failed to open device
<nalioth> ignore them
<nalioth> any time you open a GUI program from the terminal, you'll get all that cruft
<RoyB72> sudo: update-manager: command not found
<nalioth> right...
<RoyB72> ohh ok.. :)
<nalioth> type "kdesu kate /etc/apt/sources.list"
<RoyB72> kate open...
<nalioth> anything in it? or is it blank?
<RoyB72> looks fine to me...
<nalioth> did it open the sources.list or did it open blank?
<RoyB72> the deb lines I edited are still there
<nalioth> ok, let's do a "find and replace"
<RoyB72> yep.. it's the source.list
<nalioth> find "dapper" and replace them with "edgy"
<RoyB72> all?
#ubuntu-classroom 2007-03-24
<RoyB72> what if it's _Dapper Drake_  do I replace it with _edgy_  or just edgy
<nalioth> i don't think that will be the case
<RoyB72> done
<RoyB72> close it?
<nalioth> save and close kate, yes
<nalioth> then in konsole, type "sudo apt-get update"
<RoyB72> done
<nalioth> now here is the upgrade part
<jrib> cross your fingers
<RoyB72> hehe
<nalioth> when you type "sudo apt-get dist-upgrade" <enter>  you'll be on your way
<jrib> you check for *-desktop?
<RoyB72> well.. it's working...
<RoyB72> it will still be Kubuntu, right?
<nalioth> yup
<RoyB72> not Ubuntu
<RoyB72> k
<nalioth> it will only upgrade what you have now
<RoyB72> goodie
<RoyB72> hmm.. that was a word a teenager would use...
<RoyB72> hehe
<RoyB72> ohh great.. it's midnight.....
<jrib> go to sleep and wake up to 6.10
<RoyB72> well.. I'm unemployed... so I don't have to get up..
<jrib> note you need to dist-upgrade one more time to get upstart to install
<RoyB72> dist? and that is? remember I had linux for a little over 2 days
<RoyB72> all my other comps got win
<nalioth> you only use dist-upgrade when you are upgrading to the next distribution
<jrib> RoyB72: by "dist-upgrade" I just meant the command you just ran
<nalioth> otw, you use just "apt-get upgrade"
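The dapper-to-edgy swap done in kate above can also be scripted with sed. This sketch performs the same find-and-replace on a throwaway copy of a made-up sources.list (GNU sed assumed for `-i`), and leaves the apt steps as comments so nothing real is upgraded:

```shell
#!/bin/sh
# Sketch: the release-name swap from the session, done with sed on a
# throwaway copy of a fabricated sources.list instead of the real file.
set -e

LIST=$(mktemp)
cat > "$LIST" <<'EOF'
deb http://archive.ubuntu.com/ubuntu dapper main restricted
deb http://archive.ubuntu.com/ubuntu dapper universe multiverse
EOF

# Find "dapper" and replace with "edgy", as done in kate above.
sed -i 's/dapper/edgy/g' "$LIST"
cat "$LIST"

# On a real system you would then run (NOT executed here):
#   sudo apt-get update
#   sudo apt-get dist-upgrade
```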
<RoyB72> a question while I upgrade here...
<RoyB72> what if I uninstall something and it uninstalls more files I didn't know about?
<nalioth> it will always tell you what it's doing.
<RoyB72> ok.. this install I'm doing... is it only updating what I had, or making a standard 6.10 install?
<nalioth> it will bring all your installed programs up to what they are in Kubuntu 6.10
<RoyB72> hmm :( any way to make an installer just install everything clean? like "standard install" ?
<nalioth> RoyB72: wipe and install?
<RoyB72> something like that.. just don't want to do a format and install all over again..
<RoyB72> just make it setup like it was after original install
<jrib> what is the difference?
<RoyB72> get rid of the things I installed and uninstalled
<jrib> RoyB72: so you want to keep the packages you have installed but you want to kind of "reset" them all?
<RoyB72> no.. I want to get rid of all I did after I installed kubuntu 6.06 but keep 6.10
<RoyB72> remove all I installed after, I mean
<RoyB72> but make it install all I removed from the original install
<jrib> the easiest way to do that is to just reinstall ubuntu using a 6.10 disk as nalioth suggested
<RoyB72> hmm.. was hoping it wouldn't be necessary.. :( ohh well...
<jrib> RoyB72: why not?
<RoyB72> 1st I'd have to download the dvd... have only 6.06
<nalioth> RoyB72: no dvd necessary
<nalioth> RoyB72: there are CD images
<jrib> RoyB72: nah, just download the cd iso.  That's essentially what you are doing right now as we speak
<RoyB72> 2nd I had a lot of problems installing this in the first place...
<RoyB72> especially the bloody grup
<RoyB72> *grub
<RoyB72> referring to the wrong hd's
<RoyB72> had to use Super GRUB disk
<RoyB72> find out which hd is the correct one... bla bla bla
<RoyB72> reason I want to make it clean (default packages) is, I think I might have screwed up somewhere...
<RoyB72> and since I'm noob, I have no way of finding that out
<RoyB72> done
<RoyB72> done with the update...
<RoyB72> *upgrade
<nalioth> RoyB72: restart and come back  :)
<nalioth> or restart twice and come back
<RoyB72> right... brb then.. if not.. something went wrong... hehe
<Ahorner> hi
<nalioth> !msg the bot
<ubotu> Please investigate with me only in /msg or in #ubuntu-bots (see also !Bot). Abusing the channel bots will only result in angry ops...
<nalioth> !xcfg
<ubotu> The X Window System is the part of your system that's responsible for graphical output. To restart your X, type  sudo /etc/init.d/?dm restart  in a console - To fix screen resolution or other X problems: http://help.ubuntu.com/community/FixVideoResolutionHowto
<Ahorner> huh
<Ahorner> im brand new lol
<nalioth> Ahorner: your menu doesn't allow you to change resolution?
<Ahorner> ive done that before
<Ahorner> well it's stuck on 800x600 or 600x480
<Ahorner> ive already installed envy
<Ahorner> and enabled universe
<Ahorner> any ideas
<coldfish> ahorner problem is resolution?
<nalioth> enabling repos has nothing to do with your resolution
<Ahorner> yes
<Ahorner> wait
<Ahorner> i just downloaded the driver from ati
<Ahorner> its a .run how do i use it
<Ahorner> brb
<Ahorner> .
<coldfish> hm ok ubuntu right?
<coldfish> gnome?
<Ahorner> im on ubuntu
<coldfish> ok
<coldfish> wait a minute
<coldfish> sudo nano /etc/X11/xorg.conf
<coldfish> find section screen, and subsection modes
<coldfish> add "1280x800" "1024x768" in front of them ("800x600" "640x480")
<coldfish> subsection "display" sorry:)
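Spelled out, coldfish's instruction targets the Display subsection inside Section "Screen" of /etc/X11/xorg.conf: the new, higher resolutions go in front of the existing Modes entries. A sketch of the resulting fragment (the identifier, depth, and exact mode list are examples, not taken from Ahorner's file):

```
Section "Screen"
    Identifier  "Default Screen"
    SubSection "Display"
        Depth   24
        Modes   "1280x800" "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
```

X tries the modes in order, so listing the preferred resolution first makes it the default after a restart of the X server.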
<jrib> fasm_erx_CC2_: yeah, because there is too much traffic in #ubuntu
<fasm_erx_CC2_> I see Jrib
<jrib> so when I need to actually talk back and forth, I usually come here...
<fasm_erx_CC2_> I see
<fasm_erx_CC2_> Not sure whats wrong with this
<jrib> how did you do the checkout?
<fasm_erx_CC2_> svn co svn+ssh://<username>@www......./trunk/webroot/
<fasm_erx_CC2_> something like that
<fasm_erx_CC2_> <username> is my username
<fasm_erx_CC2_> then ask me for the password, I'm in
<fasm_erx_CC2_> i'm in then ask me for the password I mean
<fasm_erx_CC2_> then permission denied
<fasm_erx_CC2_> Then ask me for the password again
<Ahorner> i think im back
<fasm_erx_CC2_> Could it be the configuration server side possible?
<fasm_erx_CC2_> The guy who set it up knew I was developing under windows.
<jrib> fasm_erx_CC2_: well that's what I was thinking which is why I wanted you to try a different server, but since it works for you in windows it should be working here
<fasm_erx_CC2_> I see
<Ahorner> ok so im in xorg.cfg
<Ahorner> conf*
<jrib> fasm_erx_CC2_: try 'rapidsvn', it's a gui client so it may nudge us in the right direction
<fasm_erx_CC2_> I tried every GUI client. They did not work. I'll try again with this version..
<Ahorner> im back
<Ahorner> something worked and myresolution is back yay
<fasm_erx_CC2_> Jrib: still there?
<jrib> fasm_erx_CC2_: yeah
<fasm_erx_CC2_> I tried to run rapidsvn from terminal as I don't know where to locate it. It's asking me for the password in the terminal
<jrib> fasm_erx_CC2_: how are you running it?
<fasm_erx_CC2_> Is there a way to separate rapidsvn from the terminal when I execute rapidsvn from the terminal?
<fasm_erx_CC2_> rapidsvn shows when I type rapidsvn in the terminal
<jrib> fasm_erx_CC2_: it should show up in applications > programming
<fasm_erx_CC2_> it doesnt for me
<jrib> fasm_erx_CC2_: try 'killall gnome-panel' to refresh the menu
<fasm_erx_CC2_> jrib: okay
<fasm_erx_CC2_> Still nothing, I only have meld there
<jrib> fasm_erx_CC2_: hmm  ok, well just run it as:  rapidsvn &
<fasm_erx_CC2_> jrib: ok
<fasm_erx_CC2_> It's still asking me for the password in the terminal
<fasm_erx_CC2_> jrib: I'll go for now. It's a bit late here and I'm sleepy already. I'll be back tomorrow. Thanks for the help btw.
<jrib> fasm_erx_CC2_: try #svn too, hope you figure it out
<fasm_erx_CC2_> jrib: Thanks, I'll be back tomorrow.
<sacater> erm
<sacater> err...
<sacater> teach me..
<sacater> :P
<RoyB72> nalioth: hello.. remember me from yesterday? I really did muck it up.. but I got the 6.10 dvd and installed it now... with all updates... kinda scared to do anything... hehe
#ubuntu-classroom 2007-03-25
<PriceChild> Hey
<slylyias> hi
<slylyias> You're the first glimmer of hope I've had since the start, PriceChild
<slylyias> So what do I do?
<PriceChild> slylyias, I'm not 100% sure yet as I don't have all the info... but I think your problem is that you are running the default "vesa" driver.
<PriceChild> could you "cat /etc/apt/sources.list | grep Driver" please
<slylyias> I think I'm running the 'nv' driver
<PriceChild> slylyias, ok.... well lets find out for sure ^ :)
<nalioth> PriceChild: ?
<PriceChild> nalioth, ?
<nalioth> sources.list has nothing to do with graphics
<slylyias> Okay, did that, no output
<PriceChild> arrrghhhh
<PriceChild> I always do that
<PriceChild> cat /etc/X11/xorg.conf | grep Driver
* nalioth plugs PriceChild into a light socket
<PriceChild> those are the two files people always use and I just don't think sometimes :P
<slylyias> kbd, mouse, wacom (three times), nv
<PriceChild> ok so it is nv
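The check PriceChild ran can be reproduced on any xorg.conf (grep can read the file directly, no cat needed). Here it is sketched against a fabricated minimal config so the expected output shape is visible:

```shell
#!/bin/sh
# Sketch: find which drivers xorg.conf selects, as done above, but
# against a fabricated minimal config so the output is predictable.
set -e

CONF=$(mktemp)
cat > "$CONF" <<'EOF'
Section "InputDevice"
    Driver  "kbd"
EndSection
Section "Device"
    Identifier  "NVIDIA card"
    Driver      "nv"
EndSection
EOF

# Each "Driver" line names a driver; the one in Section "Device"
# is the video driver (here "nv", matching slylyias's result).
grep Driver "$CONF"
```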
<nalioth> ahh envy
<slylyias> so now what?
<PriceChild> slylyias, nv is the opensource community written nvidia driver. It is "free".
<slylyias> free is good
<PriceChild> "nvidia" (the driver) is the binary driver by nvidia (the company) availiable at nvidia.com
<PriceChild> It is free as in price....
<PriceChild> However it is closed source.
<slylyias> I'm a college student, and I even ran out of money on my meal-plan, so right now if I even find change in the couch it's going towards food...
<slylyias> ah, I see.
<slylyias> the official driver - does it support my card fully?
<PriceChild> slylyias, should do yes... although I find the 7800 to be a problem card in my experience
<slylyias> or is it outdated because a big company doesn't really care about the linux community?
<nalioth> slylyias: you can try it
<nalioth> if all else fails, you can go back to 'nv'
<nalioth> or even 'vesa'
<PriceChild> slylyias, I'm basically explaining this to you as some people have problems with installing binary closed source drivers/software.
<tonyyarusso> What is vesa exactly?
* tonyyarusso will probably have to use that - current ati is being awful
<PriceChild> tonyyarusso, that's the default, bare bones driver that runs basically everything.
<nalioth> tonyyarusso: VESA is a standard.
<tonyyarusso> PriceChild, nalioth: Any downsides to it if I don't care about compiz?
<PriceChild> tonyyarusso, slooowww....
<PriceChild> tonyyarusso, ati/radeon ftw
<nalioth> tonyyarusso: none at all
<tonyyarusso> PriceChild: Well, it will be faster than the 85% CPU usage I have right now
<nalioth> VESA is guaranteed to work on all displays/graphics cards
<PriceChild> Hehe :) by slow, I mean it refreshes on screen slowly
<PriceChild> nalioth, btw I'm sure I've heard of vesa not working for a card or two
<nalioth> PriceChild: i think that you heard about operator error
<PriceChild> hehe maybe :)
<PriceChild> *probably
<tonyyarusso> nalioth: well, it runs Gnome but not XFCE - I think b/c I have some transparency stuff still set in XFCE
<slylyias> sorry apparently I got disconnected, not sure why
<slylyias> PriceChild: you still here?
<PriceChild> Hey slylyias
<slylyias> did you see me apologize for asking so many rapid questions?
<PriceChild> Hehe no but don't worry about it.
<slylyias> not sure what the last you saw from me was
<slylyias> Okay, well I was saying, what do I need to do now?
<PriceChild> <slylyias> or is it outdated because a big company doesn't really care about the linux community?
<slylyias> to run 'nvidia'
<PriceChild> nvidia are pretty good about keeping their drivers up to date with cards.
<PriceChild> But anyway...
<PriceChild> !nvidia9
<ubotu> For Ubuntu 6.10 (Edgy Eft), you can obtain the (unsupported!) 9746 version of the binary NVidia drivers by using this repository: deb http://nvidia.limitless.lupine.me.uk/ubuntu edgy stable
<PriceChild> slylyias, You are on Edgy right?
<slylyias> yes
<slylyias> What does unsupported mean?
<PriceChild> slylyias, unsupported means that the ubuntu community will not help you if something goes wrong.
<slylyias> well that's not good. :(
<slylyias> guess I have no choice though.
<slylyias> Oh, can we reinstall nv first to try and 'fix' this?
<slylyias> And does X use openGL, and if so can we check that too?
<PriceChild> reinstall nv? What's broken about it?
<slylyias> the X environment is running really slowly if I enable special effects
<PriceChild> "special effects"?
<slylyias> translucency, etc
<PriceChild> Are you on ubuntu, kubuntu or xubuntu? :s
<slylyias> alt+f3 > configure window behavior > translucency
<slylyias> I had to turn all that off to make the windows appear quickly
<PriceChild> Either way I'm pretty sure this is normal...
<slylyias> otherwise it was sloooowwww
<slylyias> kubuntu
<slylyias> huh?
<PriceChild> Yeah I'm pretty sure this is normal. The nv driver doesn't provide the best performance or any 3d acceleration :)
<slylyias> no 3d accel?!
<slylyias> Then what's the point?
<slylyias> Okay, how do I install nvidia?
<RoyB72> take a look here, talks about how to install the gfx drivers.. u can ignore the beryl stuff... https://help.ubuntu.com/community/BerylOnEdgy
<PriceChild> slylyias, could you pull up a terminal?
<RoyB72> go to Driver Install / NVIDIA
<slylyias> already open
<PriceChild> then type in "kdesu kate /etc/apt/sources.list"
<PriceChild> (these are the same instructions as RoyB72 gave, just with more detail)
<PriceChild> slylyias, that should make a text editor appear...
<slylyias> RoyB72:  I'm really new to all this, so a little handholding is appreciated, RTFM is good, but I'm several hours into this already. :)
<Elr0d> howdy
<RoyB72> I had kubuntu for 3 days.. so...
<PriceChild> slylyias, got it?
<slylyias> can I do emacs /etc/apt/source.list?
<Elr0d> /who evilpig
<slylyias> I'm used to that editor
<PriceChild> slylyias, haha yeah sure
<PriceChild> slylyias, you need to use root privileges though... so prefix it with sudo
<slylyias> emacs is open
<slylyias> kk
<PriceChild> add the following line to the bottom of the file and save
<PriceChild> deb http://nvidia.limitless.lupine.me.uk/ubuntu edgy stable
<slylyias> btw, I know I can !$ for the path, so I can sudo emacs !$
<slylyias> but is there a way to include the whole last command, not just the last argument?
<slylyias> so I can do sudo _____ without the 'emacs'?
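slylyias's follow-up question does have an answer in bash: `!!` expands to the whole previous command (so `sudo !!` reruns it as root), while `!$` is just its last argument. Both are history expansions and only work in interactive shells; the runnable demo below instead uses `$_`, the last argument of the previous command, which works in scripts too:

```bash
#!/bin/bash
# Interactively:
#   sudo !!        reruns the ENTIRE previous command with sudo in front
#   sudo emacs !$  reuses just the last argument, as slylyias noted
# History expansion is interactive-only, so this script demonstrates
# the related "$_" parameter (last argument of the previous command):
echo one two three
echo "last argument was: $_"
```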
<slylyias> Oh, I already added that before when you mentioned it, PriceChild
<PriceChild> ah ok
<slylyias> Added the depository to apt-get.
<nalioth> slylyias: did port 8001 not work for you?
<PriceChild> I'm not sure what you meant a minute ago...
<PriceChild> *repository
<PriceChild> "sudo apt-get update" - will update your apt database
<slylyias> nalioth: I tried it in konversation, and couldn't connect.
<slylyias> updated
<RoyB72> slylyias: I followed the instructions in the link I gave u, and all worked fine.. now I just use the Envy prog to update my nVidia drivers...
<RoyB72> and I'm as noob as u are
<RoyB72> probably worse
<PriceChild> sudo apt-get install linux-headers-generic build-essential gcc xserver-xorg-dev pkg-config
<PriceChild> whoops not that.
<slylyias> oh, okay
<PriceChild> sudo apt-get install linux-generic nvidia-glx
<slylias_8001> Ah ha! Bye Bye tor!
<PriceChild> lol
<slylyias> sudo apt-get install linux-generic nvidia-glx
<slylyias> After unpacking 27.0MB of additional disk space will be used.
<slylyias> Do you want to continue [Y/n] ? y
<slylyias> WARNING: The following packages cannot be authenticated!
<slylyias>   linux-restricted-modules-2.6.17-10-generic nvidia-glx
<slylyias> Install these packages without verification [y/N] ?
<slylyias> E: Some packages could not be authenticated
<slylyias> 
<slylyias> I don't know if that means they were installed or not
<PriceChild> They weren't
* PriceChild finds the key
* slylyias didn't know there was a lock.,
<PriceChild>  wget -O- --quiet http://nvidia.limitless.lupine.me.uk/ubuntu/root@lupine.me.uk.gpg | sudo apt-key add -
<slylyias> OK
<PriceChild> basically your machine checks the signature to make sure the download has come from where it says it's from
<slylyias> done
<PriceChild> try again :)
<slylyias> PGP kinda key?
<PriceChild> Yeah
<slylyias> ah
<slylyias> installing btw
<slylias_8001> this is like the biggest package I've ever installed
<slylias_8001> or a reallllly slow server, lol
<slylias_8001> while this updates, can I talk about something else with you PriceChild ?
<PriceChild> yeah sure
<slylias_8001> I got 3 issues keeping me from using linux full-time instead of XP
<slylias_8001> this is #1
<slylias_8001> I have an external NTFS hard drive I need to be able to use (USB 250gig drive)
<slylias_8001> And I need my TV tuner to work properly
<slylias_8001> can those be done?
<PriceChild> "yes"
<RoyB72> I got a similar problem.. can I write to NTFS drives?
<PriceChild> !ntfs-3g
<ubotu> ntfs-3g is a Linux driver which allows read/write access to NTFS partitions. It has been extensively tested but please remember to keep backups of critical data. Installation instructions at http://lunapark6.com/?p=1710 (Dapper) and http://ubuntuforums.org/showthread.php?t=217009/ (Edgy)
<RoyB72> thx
<PriceChild> be careful with it ;)
<PriceChild> back up all data
<nalioth> RoyB72: be very careful
<RoyB72> is it really that unstable?
<PriceChild> if your tv tuner works out of the box like mine then good... if not then I don't think I'll be much help
<nalioth> RoyB72: the best way to write to an NTFS drive is by using Windows(tm)
<RoyB72> duhhh... hehe
<PriceChild> RoyB72, it's at "Release Candidate" stage afaik. Extensive tests show it's good however....
<slylyias> Can I convert an NTFS drive to FAT and just save all these headaches?
<RoyB72> but I'd like to write some small files to my larger drives
<PriceChild> It is not based on MS's specs and is basically all guesswork
<PriceChild> so some may be wrong ;)
<PriceChild> slylyias, no can do sorry
<slylyias> Can I banish microsoft to the farthest corners of the earth for having proprietary file systems?
<RoyB72> hmm... guess I can wait with that then
<PriceChild> You can try :)
<RoyB72> lol
<slylyias> I just wanna get READ access to this drive! it's mounted but owned by root
<slylyias> so when I try to sudo chmod on it I get an error because it's read only!
<PriceChild> slylyias, did you mount it manually or was it automounted?
<slylyias> manually
<slylyias> I think
<PriceChild> using /etc/fstab ?
<slylyias> I tried to fix it myself, so who knows what I did.
<slylyias> lol
<slylyias> yes
<RoyB72> u lost me there guys...
<nalioth> slylyias: DO NOT chmod things
<slylyias> btw, aptitude is up to 60%
<nalioth> 1: windows has no idea about linux permissions
<slylyias> why's that?
<nalioth> 2: done incorrectly, chmod will totally wreck your system
<slylyias> Okay, so what do I do now? (the drive still works under windows)
<PriceChild> pastebin your /etc/fstab
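For context on why chmod fails here: ownership and permissions on an NTFS mount are fixed at mount time by options in /etc/fstab, not changed afterwards with chmod, since NTFS has no notion of Linux permission bits. A hypothetical fstab entry (the device name, mount point, and uid/gid are assumptions for illustration; the stock in-kernel ntfs driver of that era was read-only, matching the error slylyias saw):

```
# /etc/fstab -- hypothetical entry: read-only NTFS mount owned by uid 1000
# <device>   <mount point>     <type>  <options>                       <dump> <pass>
/dev/sda1    /media/external   ntfs    ro,uid=1000,gid=1000,umask=022  0      0
```

Changing who "owns" the files means changing the uid/gid/umask options and remounting, not running chmod on the mounted tree.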
<slylyias> probably because chmod failed
<slylyias> (how sad is it that I'm a comp sci major?)
<slylyias> what's the hotkey for a new terminal window?
<slylyias> I'm tired of reaching for the mouse, it's all the way like, three inches away.
<RoyB72> I get this msg on youtube site: Hello, you either have JavaScript turned off or an old version of Macromedia's Flash Player. Get the latest flash player.
<slylias_8001> I'm still here, btw
<RoyB72> doesn't work going to the link
<slylias_8001> just closed the tor copy
<PriceChild> !flash9 > RoyB72 (see the pm from ubotu)
<PriceChild> ok
<PriceChild> slylias_8001, how's install going?
<slylias_8001> 97%
<slylyias> done!
<slylyias> now what should I do?
<PriceChild> installed also?
<slylyias> huh?
<PriceChild> nevermind... you have the prompt back?
<slylyias> yes
<PriceChild> ok
<PriceChild> I want you to write the following command down...
<slylyias> okay...
<PriceChild> If things fail, you will need to type it into the terminal to reset things
<PriceChild> sudo dpkg-reconfigure xserver-xorg -phigh
<slylyias> Okay, I'm making an alias for it in bash_profile so I can't forget (And don't have to type later)
<PriceChild> write it down anyway just incase ;)
<PriceChild> better safe than sorry
<PriceChild> If X does not restart, you will have to CTRL+ALT+F1, log in, type that command.
<slylyias> Okay
<slylyias> Written down
<PriceChild> then restart X with "sudo /etc/init.d/gdm restart"
<PriceChild> you need them both written down really...
<slylyias> Done
<slylyias> Btw, what is Cntrl + alt + f1 do?
<slylyias> does, even
<PriceChild> try it.... ctrl+alt+f7 to get back to gui
<slylyias> Ah, it drops me back to the console
<slylyias> I like it!
<PriceChild> "sudo nvidia-xconfig" should edit your /etc/X11/xorg.conf to the correct setup. CTRL+ALT+BACKSPACE will then restart your X server... losing all open windows.
<slylyias> Okay, all the 'if stuff dies" is done, :)
<slylyias> so I should close all running programs first, right?
<PriceChild> Might as well
<slylyias> brb
<slylyias> Okay, I'm on my laptop, this as you can guess, is a BAD thing.
<PriceChild> what happened? :(
<slylyias> I'm at the terminal screen, and there is no -reconfigure option for dpkg!
<nalioth> slylyias: sure there is
<nalioth> sudo dpkg-reconfigure blah lah
<PriceChild> dpkg-reconfigure is its own command... not dpkg -reconfigure
<slylyias> oh, no space!
<slylyias> lol
<PriceChild> hehe :)
* PriceChild has to run soon :(
<slylyias> Also, is there a way for me to try to restart the x server without reconfiguring it?
<slylyias> just wanna give it a second chance
<PriceChild> but hopefully I can get you rescued till then
<nalioth> slylyias: ctrl-alt-backspace
<PriceChild> "sudo /etc/init.d/kdm restart" or the above
<PriceChild> (i said gdm earlier sorry, should be kdm as you are on kubuntu)
<slylyias> Okay, I got the dos-like screen
<slylyias> should I choose VESA? (I hope not)
<PriceChild> choose nv
<slylyias> oh, I chose nvidia
<slylyias> okay, hold on, will start over
<PriceChild> nvidia is the driver you just installed which you said broke...
<slylyias> restarting kdm
<slylyias> with nv
<slylyias> okay I'm in
<PriceChild> wooo :)
<PriceChild> now reeeeally quickly...
<slylyias> yes?
<PriceChild> pastebin the output of "cat /var/log/Xorg.0.log.old"
<PriceChild> so we can see what went wrong :)
<PriceChild> but I really have to go soon... :P
<PriceChild> BTW I did tell you earlier that the 7800 causes troubles for people ;)
<slylyias2> can you repeat what to pastebin?
<PriceChild> pastebin the output of "cat /var/log/Xorg.0.log.old"
<PriceChild> !paste
<ubotu> pastebin is a service to post large texts so you don't flood the channel. The Ubuntu pastebin is at http://paste.ubuntu-nl.org (be sure to give the URL of your paste - see also the #ubuntu channel topic)
<slylyias2> http://paste.ubuntu-nl.org/11933/
<PriceChild> I've not seen that before... and I'm sorry I "really" have to go.
<slylyias2> anyone else you know that would be willing to help me?
<slylyias2> thank you though
<PriceChild> Well as I said earlier... those drivers are "unsupported"
<slylyias> Okay, well, with getting my fstab done right so I can read off the NTFS drive?
<PriceChild> You've got to find someone nice :) I don't know who's online atm though sorry
<slylyias> Okay, thank you
<PriceChild> !mountwindows
<ubotu> To view your Windows/Mac partitions see https://help.ubuntu.com/community/AutomaticallyMountPartitions - See also !fuse
<PriceChild> sorry
<slylyias> !fuse
<ubotu> Though it's still very unsafe, you can read about Ubuntu NTFS writing using fuse here: https://wiki.ubuntu.com/Lkraider/NtfsFuse
<RoyB72> got a quick question for those still awake...
<RoyB72> why do I get the error: Requested audio codec family [mp3]  (afm=mp3lib) not available. Enable it at compilation.
<RoyB72> but I can still listen to mp3.. just get this error every time it starts on one
<RoyB72> and a second question.. how can I make the sound go to S/PDIF?
<RoyB72> tried all the links... think I installed all that got anything to do with mp3... ;)
<RoyB72> goodnight then..
<ubotu> Announcement from my owner (Seveas): ubotu will be offline for maintenance
<Ubugtu> Announcement from my owner (Seveas): ubugtu will be taken offline and integrated with ubotu - expect some downtime
* mode/#ubuntu-classroom [+o LjL]  by ChanServ
* mode/#ubuntu-classroom [-b *!*@82-42-56-84.cable.ubr06.knor.blueyonder.co.uk]  by LjL
* mode/#ubuntu-classroom [-o LjL]  by LjL
#ubuntu-classroom 2008-03-17
<PDET> Hi!
 * mypapit going to shutdown, bye bye people!!
#ubuntu-classroom 2008-03-18
<ryanakca> thanks pleia2 / popey / rihanha / whoever gets to moderation requests before I do :)
<Heartsbane> ryanakca: for what?
<individual_eleve> is there a dvd player for linux?
<ryanakca> Heartsbane: the mailing list
<Heartsbane> oh
<Heartsbane> Good morning
<pleia2> FYI - 7 subs to the mailing list since that UWN :)
<Zelut> cool
#ubuntu-classroom 2008-03-19
<kraut> hi jrib
<jrib>  hey
<kraut> gimme a moment
<kraut> is there a flag for perl to see, where the script fails?
<kraut> like sh -x?
<kraut> root@kaya:~# /usr/share/debconf/frontend /var/lib/dpkg/info/tzdata.config configure 2007k-0ubuntu0.7.10.1
<kraut> root@kaya:~# echo $?
<kraut> 1
<kraut> i want to see, where this fails
<kraut> jrib? *highlight*
<jrib> don't know about perl, why are you concentrating there?  It seemed like .postinst was the issue
<jrib> kraut: try changing "#!/bin/sh" to "#!/bin/sh -v" in your .postinst
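jrib's tracing suggestion generalizes to any shell script; here is a minimal sketch of what `-x` output looks like (the script below is invented for illustration, not the tzdata .postinst):

```shell
# sh -x prints each command to stderr (prefixed with '+') before running it,
# so you can see exactly where a maintainer script stops.
cat > /tmp/trace-demo.sh <<'EOF'
#!/bin/sh
greeting="hello"
echo "$greeting"
EOF
sh -x /tmp/trace-demo.sh 2> /tmp/trace-demo.log
grep -c '^+' /tmp/trace-demo.log
```

The same effect comes from changing the shebang to `#!/bin/sh -x`, as jrib suggests for the .postinst.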
<kraut> http://pastebin.com/m64984742
<kraut> jrib? (forgot a highlight) ;)
<jrib> I see it, but don't know what to make of it yet :/
<kraut> jrib: you aren't alone ;)
<jrib> kraut: you tried -x too instead of -v?
<kraut> yep, mom
<jrib> same output?
<kraut> http://pastebin.com/m52d0138d
<kraut> + /usr/share/debconf/frontend /var/lib/dpkg/info/tzdata.postinst configure 2007f-3ubuntu1 <- that's the tricky part i think
<kraut> that's why i said that:
<kraut> <kraut> root@kaya:~# /usr/share/debconf/frontend /var/lib/dpkg/info/tzdata.config configure 2007k-0ubuntu0.7.10.1
<kraut> <kraut> root@kaya:~# echo $?
<kraut> <kraut> 1
<kraut> brb
<jrib> kraut: getting rid of -x and instead using: DEBCONF_DEBUG=developer dpkg --configure tzdata       gives no more information?
<kraut> http://pastebin.com/m6bbe6c16
<kraut> nothing useful, jrib.
<jrib> kraut: https://bugs.edge.launchpad.net/ubuntu/+source/tzdata/+bug/116193 people there had luck by changing /etc/timezone to "Europe/Berlin" and then trying to configure.  Don't know why, but try it and see
<kraut> Europe/Berlin is my default zone
<kraut> root@kaya:~# cat /etc/timezone
<kraut> Europe/Berlin
<jrib> kraut: I don't know what else to suggest.  Try the main channel again later.  If you can't resolve or do and find it was a bug, file it.  Try the mailing list and forums too
<kraut> hrmpf, ok.
<jrib> good luck
<visualdeception> j #ubuntu-us-in
<visualdeception> j #ubuntu-us-in/
<visualdeception> oops
 * pleia2 chuckles and gives visualdeception a cookie
<visualdeception> lol yea my brilliance was on full display there
#ubuntu-classroom 2008-03-20
<Traveler4> anyone knows an app like fspot for pdf documents (tagging) ??
<Kirrus> Traveler4, this room is fairly quiet when theres no scheduled session
<Kirrus> You'll find your questions answered quicker in #ubuntu
 * mypapit out!!!
<navinem> ok here it goes.... Suppose I want to permit a particular device, like a USB drive, to one particular user only, and dynamically. Can anyone help me here? I'd like to know specifically.
<Zelut> navinem: search for
<Zelut> navinem: erg, "udev rules" on google. that'll be the best route.
<navinem> ok
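For reference, a udev rule of the kind Zelut points at might look like the following; the vendor/product IDs, username, and file name are placeholder assumptions, not a tested rule:

```
# /etc/udev/rules.d/99-usb-owner.rules (illustrative sketch)
# Match one USB device by vendor/product ID and hand it to a single user.
SUBSYSTEM=="usb", ATTR{idVendor}=="1234", ATTR{idProduct}=="5678", OWNER="alice", MODE="0600"
```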
<clown8> works for me
<IRCcrackbaby> I guess this works too.... w00t or something
#ubuntu-classroom 2008-03-21
<Palin_linux> Simple question for someone: I am trying to script an install into the logged-in user's home dir. What is the command to print the user dir? In Windows it's %user%, but in Linux I am unsure
<Palin_linux> Nevermind
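For the record, the Linux counterpart Palin_linux was after is the `HOME` environment variable, which the shell expands for the logged-in user:

```shell
# $HOME (or just ~) is the logged-in user's home directory,
# roughly what the Windows environment variable gives you.
echo "$HOME"
```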
#ubuntu-classroom 2008-03-22
<mypapit> anti khairy !!
<pleia2> Zelut: hmm, right, tomorrow is Easter
<Zelut> pleia2: ohh, right. that too.
<Zelut> pleia2: maybe you should reply with a 'maybe next week'?
<pleia2> I'll be here for the meeting, but I'm not holding my breath for a huge turnout
#ubuntu-classroom 2008-03-23
<Traveler9> hello
#ubuntu-classroom 2009-03-18
<massiveoni> date -u
<massiveoni> heya everyone
<massiveoni> heya, does anyone know how long the class tomorrow goes for?
#ubuntu-classroom 2009-03-19
<WastePotato> Wee.
<bodhi_zazen> FYI If you would like I am going to run a shared ssh session to demo apparmor
<bodhi_zazen> Please do not try to connect to the server as it is not set up yet and you will be blacklisted if you make too many attempts
<bodhi_zazen> http://paste.ubuntu.com/133993/
<bodhi_zazen> I have asked the beginners team to assist with ssh in a private message; if you are having problems connecting please ask
<WastePotato> Apparmor? What's that? >_>
<bodhi_zazen> I will give the passphrase soon
<bodhi_zazen> he he he ... watch ;)
<Halow> Do we need anything in particular? I know  completely nothing about SSH.
<Snova> No. Those instructions will do.
<bodhi_zazen> Halow: you on windows or linux ?
<bodhi_zazen> If on linux, go ahead and get the key
<Halow> Currently testing Jaunty, actually. :)
<Snova> If you can run 'ssh', it'll work. :)
<bodhi_zazen> you run ssh in a terminal
<Halow> ...key?
<bodhi_zazen> Halow: http://paste.ubuntu.com/133993/
<Halow> Oh, OK.
<jimi_hendrix> bah people talk slower...too many channels with people talking!
<bodhi_zazen> This will be primarily a Q & A session, so I will ask for questions at the beginning and try to answer as many as I can
<jimi_hendrix> bodhi_zazen, can i start asking now?
<bodhi_zazen> jimi_hendrix: not yet, lol
<bodhi_zazen> OK the shared session is up
<bodhi_zazen> you can ssh into it with
<bodhi_zazen> ssh bodhizazen.net -i ~/.ssh/ufbt-guest
<bodhi_zazen> passphrase = padawan
<bodhi_zazen> this is voyeurism, so you can watch but not enter commands :p
<bodhi_zazen> If anyone needs help connecting , this is a good time to ask
<bodhi_zazen> he he he ...
<jimi_hendrix> is there a way to talk with people through ssh?
<bodhi_zazen> I will disable that flash now
<bodhi_zazen> yes jimi
<bodhi_zazen> control-a wall "message"
<bodhi_zazen> C-A == Control +a
<bodhi_zazen> C-a:wall "message"
<Halow> OK, I'm apparently having trouble connecting.
<Halow> Should it prompt me for the passphrase?
<WastePotato> I'm in. Haxxx.
<jimi_hendrix> bodhi_zazen, zomg
<jimi_hendrix> but i cant use that cause i am a guest right
<Snova> Nope.
<jimi_hendrix> Halow, its padawan
<Snova> Halow: How far did you get? Did you download the key?
<Snova> Halow: Oh, and yes.
<Halow> And I was blacklisted? Irk... Only made one attempt.
<bodhi_zazen> yes jimi_hendrix you can use that
<bodhi_zazen> check it out
<Halow> I got the key.
<bodhi_zazen> C-a:wall "message"
<Nano_ext3> how far we into it?
<WastePotato> I hear lots of beeping, and it's not from me. >_<
<jimi_hendrix> bodhi_zazen, it yells at me saying i am a guest
<jimi_hendrix> i hit ctrl + a
<jimi_hendrix> then i hit w
<jimi_hendrix> and it yells at me
<bodhi_zazen> he he heh WastePotato
<Rocket2DMn> i didnt realize the session was in here, sorry
<bodhi_zazen> yea , I changed the bell to beep
<bodhi_zazen> people are trying to hack the system
<bodhi_zazen> C-a:wall "message"
<Halow> I don't think I'm doing this right.
<Rocket2DMn> are we able to login to bodhizazen.net yet?
<bodhi_zazen> syntax is very specific
<bodhi_zazen> yes Rocket
<Rocket2DMn> passphrase?
<Snova> padawan
<jimi_hendrix> bodhi_zazen, it says colen permision denied
<Rocket2DMn> thanks
<Halow> bodhi_zazen: I'm getting the error "Permission denied (publickey)."
<bodhi_zazen> Halow: you need to download the key
<bodhi_zazen> http://paste.ubuntu.com/133993/
<Halow> I did.
<Nano_ext3> Halow: http://paste.ubuntu.com/133993/
<bodhi_zazen> what command did you use to ssh in ?
<Nano_ext3> follow exactly
<bodhi_zazen> ssh guest@bodhizazen.net -i ~/.ssh/ufbt-guest
<Halow> Aha. I didn't have the guest@ in front of the domain. *blush* Sorry.
<bodhi_zazen> 5 minutes
<bodhi_zazen> passphrase = padawan
<Halow> Alright. I have it now. Thank you.
<bodhi_zazen> Is wall working now ?
<bodhi_zazen> can someone test it ?
<rraj_be> i am in Nano_ext3
<Nano_ext3> ? didnt come in right away @.@
<bodhi_zazen> Hit the control-a
<bodhi_zazen> then ;
<bodhi_zazen> then wall "message"
<rraj_be> i have just joined irc asusuall Nano_ext3
<bodhi_zazen> you need the message in "quotes"
<bodhi_zazen> w00t :()
<Nano_ext3> how do i write?
<bodhi_zazen> Rocket2DMn: message in "quotes"
<Nano_ext3> permission denied for guest :( lol
<bodhi_zazen> c-a:wall "message"
<Rocket2DMn> is it supposed to record it somewhere?
<bodhi_zazen> lol
<Snova> Is it just me, or can we interfere with each other's :wall's?
<bodhi_zazen> Rocket everyone should see your message, I did
<Rocket2DMn> i see the whole phrase
<Rocket2DMn> for everybody, including the :wall part
<Nano_ext3> i can ttype when others are?
<Nano_ext3> or no
<bodhi_zazen> only when they type Rocket2DMn
<bodhi_zazen> then we see just the message
<Rocket2DMn> i dont see the message except when they are typing it out...
#ubuntu-classroom 2009-03-20
<bodhi_zazen> Probably one at a time for guests
<Rocket2DMn> ack im fighting with someone
<Nano_ext3> we are all fighting lolz
<Nano_ext3> can I type something everyone?
<Nano_ext3> :)
<bodhi_zazen> I can see everyone has hit the wall :)
<Rocket2DMn> i should customize my terminal like bodhi_zazen has
<Rocket2DMn> is that a bash thing?
<jimi_hendrix> bodhi_zazen, what programs are those
<bodhi_zazen> OK, lets get this show on the wall
<bodhi_zazen> :)
<Nano_ext3> haha
<WastePotato> \o/
<bodhi_zazen> First , thank you everyone for coming to this session
<rraj_be> bodhi_zazen: sorry for intrupting,   when i tried it , its giving like "Enter passphrase for key '/home/raj/.ssh/ufbt-guest':"
<Snova> rraj_be: "padawan"
<jimi_hendrix> bodhi_zazen, whats tha shell
<bodhi_zazen> Let me assure you , the beginners team put me up to this
<rraj_be> k Snova
<jimi_hendrix> ive heard zsh but not jailzsh
<Snova> jimi_hendrix: A jailed Zsh. :)
<jimi_hendrix> which is?
<bodhi_zazen> it is a shell I make for apparmor jimi_hendrix
<bodhi_zazen> it is zsh
<Snova> Zsh, in a restricted environment.
<jimi_hendrix> ahh
<jimi_hendrix> did you edit it or something
<jimi_hendrix> edit the source*
<WastePotato> :(
<Snova> No, that's what AppArmor is for.
<bodhi_zazen> The intention is to raise awareness of security and so here we are :)
<jimi_hendrix> ok
 * jimi_hendrix raises hand
<bodhi_zazen> What do people want me to cover, what questions do you have ?
 * jimi_hendrix raises hand
<rraj_be> Snova:  Enter passphrase for key '/home/raj/.ssh/ufbt-guest':
<bodhi_zazen> go jimi_hendrix :)
<rraj_be> Permission denied (publickey).
<Nano_ext3> show how to implement profiles
<bodhi_zazen> rraj_be: padawan
<Nano_ext3> http://paste.ubuntu.com/133993/
<jimi_hendrix> bodhi_zazen, i dual boot windows and ubuntu
<jimi_hendrix> do i need an antivirus on ubuntu
<rraj_be> ok bodhi_zazen
<Nano_ext3> jimi_hendrix: hahah no
<Nano_ext3> this is for user control
<Nano_ext3> security on a server if you may
<bodhi_zazen> someone help rraj_be in a private window or on ##beginners-help
<bodhi_zazen> OK, antivirus first then :)
<bodhi_zazen> you will get varied opinions
 * jimi_hendrix uses AVG on windows
<bodhi_zazen> IMO antivirus is best used on your windows boxes
<Nano_ext3> Agreed
<bodhi_zazen> IMO Linux antivirus is best on file or mail servers
<Nano_ext3> things that need the security
<bodhi_zazen> IMO scanning your Linux desktop with antivirus will yield lots of false positives
<jimi_hendrix> what about a webserver
<Nano_ext3> for desktop , not an issue really
 * jimi_hendrix is thinking of setting up a webserver
<Nano_ext3> yes on a webserver I would say
<Rocket2DMn> bodhi_zazen, if you need a place to start the discussion, why dont you briefly explain some of the tools you use to enhance security in linux (apparmor, iptables, ossec, snort, etc).  e.g. in one sentence each, what do they do?
<Nano_ext3> anything that deals with heavy user traffic
<bodhi_zazen> good idea Rocket2DMn :)
<Nano_ext3> yea
<bodhi_zazen> The linux tools are a bit different
<bodhi_zazen> and linux is modular ...
<bodhi_zazen> The first line of defense is, of course, permissions
<bodhi_zazen> sudo vs su ?
<Nano_ext3> yea
<jimi_hendrix> sudo runs one command su changes your user
<bodhi_zazen> su gives all or none root access
<Rocket2DMn> (or other user access)
<bodhi_zazen> sudo allows finer control
<bodhi_zazen> sudo -i for a root shell
<bodhi_zazen> Next a firewall
<bodhi_zazen> firewall are also full of opinions
<bodhi_zazen> In general, you should use a router as a router has a firewall built in
<Nano_ext3> thats how I do it
<bodhi_zazen> a default install of ubuntu has no servers listening, so the default settings behind a router are just fine
<Nano_ext3> Not versed in linux firewalls yet
<bodhi_zazen> If you wish to use a firewall, to set up your own router (NAT), or to limit connections, the firewall is iptables
<jimi_hendrix> what about firestarter?
<bodhi_zazen> iptables can be configured with commands, a script, ufw, or a gui tool such as GUFW, Guarddog, firestarter, shorewall, etc
<bodhi_zazen> Guarddog has very nice built-in help
<bodhi_zazen> the gui tools are not the firewall, only config tools
<bodhi_zazen> Open them, config iptables, close them
<Nano_ext3> think router access list , but on the OS itself via iptables
<bodhi_zazen> I advise you NOT use Firestarter to monitor your network traffic
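The stance described above (no servers listening, drop what you did not ask for) can be sketched in iptables-restore format; this is a generic illustration with assumed defaults, not bodhi_zazen's actual rules:

```
# Keep replies and loopback working, drop unsolicited inbound traffic.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

Loaded with `sudo iptables-restore < rules` and inspected with `sudo iptables -L -n -v`.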
<bodhi_zazen> Next , everyone know the terms HIDS / NIDS ?
<Nano_ext3> no
<bodhi_zazen> http://en.wikipedia.org/wiki/Intrusion-detection_system
<bodhi_zazen> http://en.wikipedia.org/wiki/Host-based_intrusion_detection_system
<bodhi_zazen> http://en.wikipedia.org/wiki/Network_intrusion_detection_system
<bodhi_zazen> OK, HIDS, most new users are familiar with say Windows antivirus scanners
<bodhi_zazen> This is a HIDS
<Nano_ext3> k
<bodhi_zazen> so is rkhunter and chkrootkit
<bodhi_zazen> as is OSSEC, tripwire, etc
<bodhi_zazen> use these tools to monitor your system for unauthorized changes
<bodhi_zazen> rkhunter and chkrootkit have a bunch of false positives, learn what they are
<duanedesign> do you recommend running chkrootkit from a usb device
<bodhi_zazen> and what a "normal" system is
<bodhi_zazen> duanedesign: I do not think it matters really
<bodhi_zazen> The point is, you can not monitor your system for changes if you do not know what normal is
<bodhi_zazen> You will get alerts when you say install new software as well, or change a config file
<bodhi_zazen> Next NIDS
<bodhi_zazen> NIDS is sophisticated and even the geekiest will find this hard
<bodhi_zazen> You need to understand basic networking protocols, tcp, udp, ping, etc
<bodhi_zazen> Tools include snort and wireshark
 * jimi_hendrix tried wireshark once to sniff some packets i was sending
<Nano_ext3> ive taken the Cisco CCNA, and Id still have enormous trouble with that
<Nano_ext3> wireshark I have used
 * jimi_hendrix 's head blew up
<bodhi_zazen> these tools are "packet sniffers" and will monitor your network traffic
<Nano_ext3> I recommend wireshark
<bodhi_zazen> snort will use a set of rules to identify potentially problematic activity, although with lots of false positives
<bodhi_zazen> wireshark will monitor the raw packets
<bodhi_zazen> in a nutshell
<bodhi_zazen> Next line of defense - SELinux / Apparmor
<Nano_ext3> :)
<jimi_hendrix> SELinux != distro right
<Snova> No, it's a security framework built into the kernel.
<Nano_ext3> no
<Nano_ext3> to jimi
<Nano_ext3> security monitor
<bodhi_zazen> These are very powerful tools and these are the first tools that can protect you against unknown exploits and Zero day exploits
<bodhi_zazen> These tools can limit even root
<Nano_ext3> zero day?
<Snova> Security exploits, on the day they are found, before they are patched.
<bodhi_zazen> http://en.wikipedia.org/wiki/Zero-Day_Attack
<bodhi_zazen> Ubuntu uses Apparmor, but it needs to be configured
<bodhi_zazen> Most people find apparmor easy to understand
<bodhi_zazen> The point, IMO, of apparmor is to "confine" any network applications
<bodhi_zazen> such as firefox, thunderbird, etc
<bodhi_zazen> you limit what they can do on your os
<bodhi_zazen> you can also limit a users shell, as I will show you on the shared ssh session
<Nano_ext3> cool
<lovinglinux> can be used with torrent applications?
<Snova> Anything.
<bodhi_zazen> IMO SELinux and Apparmor are mischaracterized as "overkill"
<bodhi_zazen> lovinglinux: yes
<bodhi_zazen> I am collecting apparmor profiles here : http://bodhizazen.net/aa-profiles/
<lovinglinux> So if someone exploits a vulnerability in my torrent client, can Apparmor prevent it from achieving success?
<bodhi_zazen> I have a profile for rtorrent
<Snova> lovinglinux: AppArmor can prevent it from accomplishing anything by restricting access to the filesystem, which is mostly the same thing.
<bodhi_zazen> If anyone is willing to contribute, send me your profiles ( bodhi.zazen @ ubuntu.com)
<bodhi_zazen> and I will post them as well
<Nano_ext3> i will have time this weeked to learn it bodhi
<lovinglinux> do you know a good tutorial for apparmor?
<Nano_ext3> bodhi link him your thread
<Nano_ext3> :)
<bodhi_zazen>  /end long winded security drive by
 * jimi_hendrix puts away machine gun
<bodhi_zazen> Links are here : http://paste.ubuntu.com/133993/
<lovinglinux> thanks
<Snova> AppArmor introduction: http://ubuntuforums.org/showthread.php?t=1008906
<bodhi_zazen> OK , with that background, questions please ?
<Snova> Oh, didn't notice the links at the bottom of that..
<bodhi_zazen> Or do you want to see what the shared session can do ?
<bodhi_zazen> ie live demo ?
 * jimi_hendrix raises hand
<bodhi_zazen> go jimi_hendrix :)
<jimi_hendrix> if i am running a webserver (linux of course...well maybe a *BSD)...and its just pages with html, what am i at risk for
<bodhi_zazen> apache attacks, php attacks, and DOS are the major ones
<bodhi_zazen> The damage depends on the attack
<bodhi_zazen> I have seen php code that takes your cookies for example (think passwords for web sites)
<bodhi_zazen> If a crack allows "arbitrary code", think: an intruder then has root access
<lovinglinux> Do I need to create apparmor profiles for all applications that connect to network or just for those that listen to ports?
<bodhi_zazen> many attacks then use your box to attack others, send spam, spoof ip, what have you
<bodhi_zazen> IMO lovinglinux all apps that access the internet
<jimi_hendrix> bodhi_zazen, i said just html, no php
<bodhi_zazen> although as you can see I do not yet have profiles for all apps
<bodhi_zazen> jimi_hendrix: LAMP == Linux, Apache, MySQL, and PHP, so I included it in the broader discussion
<jimi_hendrix> ok
<bodhi_zazen> Want to see a demo ?
<jimi_hendrix> yes
<bodhi_zazen> On the ssh session ?
<Nano_ext3> yeps
<bodhi_zazen> OK
<bodhi_zazen> anyone need assistance connecting via ssh ?
<bodhi_zazen> ok, the guru account has root access
<bodhi_zazen> as you can see
<bodhi_zazen> the guru account can install applications
<Traveler15164> yeah, i keep getting the Permission denied (publickey) error
<bodhi_zazen> :)
<bodhi_zazen> someone help Traveler15164 please :)
<lovinglinux> sorry, I know how to use ssh, but don't know which server I'm supposed to connect to
<bodhi_zazen> I will wait and answer questions
<bodhi_zazen> you need the key
<bodhi_zazen> then ssh guest@bodhizazen.net -i ~/.ssh/ufbt-guest
<bodhi_zazen> pw = padawan
<Nano_ext3> http://paste.ubuntu.com/133993/
<Nano_ext3> follow exactly
<Nano_ext3> verbatim
<bodhi_zazen> http://paste.ubuntu.com/133993/
<Nano_ext3> via terminal
<bodhi_zazen> for keys
<Nano_ext3> beat you to it :)
<bodhi_zazen> any other questions while we are waiting
<bodhi_zazen> ?
<bodhi_zazen> chickens, all questions are welcome :)
<bodhi_zazen> you in Traveler15164 ?
<bodhi_zazen> lovinglinux: ?
<Traveler15164> nope
<bodhi_zazen> Traveler15164: what do you need help with ?
<bodhi_zazen> do you have the key ?
<lovinglinux> just a second
<Traveler15164> yes
<bodhi_zazen> do you know how to use it ?
<Traveler15164> i got it and placed it in a new empty file?
<Traveler15164> named ufbt-guest and chmod 400 on that
<Snova> Stick it in ~/.ssh
<Traveler15164> it is
<bodhi_zazen> ok
<Snova> ssh guest@bodhizazen.net -i ~/.ssh/ufbt-guest
<Nano_ext3> you have to place that text in ~/.ssh/ufbt-guest
<Nano_ext3> and then chmod 400 on that file
<Nano_ext3> its all in the paste link
<Nano_ext3> http://paste.ubuntu.com/133993/
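The `chmod 400` step matters because ssh refuses a private key that other users can read; a minimal demonstration of the permission change on a scratch file (not the real key):

```shell
# Create a stand-in key file, lock it to owner-read-only, and confirm.
keyfile=$(mktemp)
echo "dummy key material" > "$keyfile"
chmod 400 "$keyfile"
stat -c '%a' "$keyfile"   # prints 400
rm -f "$keyfile"
```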
<lovinglinux> The authenticity of host xxxxxxxxxxx can't be established.
<Traveler15164> i'll redo it all to make sure
<bodhi_zazen> lol lovinglinux
<Snova> lovinglinux: That's normal, just confirm it.
<bodhi_zazen> say yes :)
<bodhi_zazen> Traveler15164: cd .ssh
<lovinglinux> lol, stupid me
<bodhi_zazen> rm ufbt-guest
<bodhi_zazen> wget http://bodhizazen.net/beginners/ufbt-guest
<bodhi_zazen> chmod 400 ufbt-guest
<Rocket2DMn> you may have to "ssh bodhizazen.net" first and accept the fingerprint
<bodhi_zazen> ssh guest@bodhizazen.net -i ./ufbt-guest
<Rocket2DMn> then just ctrl-c without doing any authentication
<Rocket2DMn> then do the ssh command above to use the key
<lovinglinux> Connection closed by xxxxxxxxx
<Rocket2DMn> i found if you use the key without having the fingerprint cached, it doesnt give you the option to store it and it aborts
<bodhi_zazen> thanks Rocket2DMn
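Rocket2DMn's fingerprint workaround aside, the key and user can also be set once in `~/.ssh/config`; this is a sketch using standard OpenSSH client options, not something shown in the session:

```
# ~/.ssh/config
Host bodhizazen.net
    User guest
    IdentityFile ~/.ssh/ufbt-guest
```

After that, plain `ssh bodhizazen.net` suffices. On OpenSSH 7.6 and later, adding `StrictHostKeyChecking accept-new` stores an unseen host key instead of aborting.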
<bodhi_zazen> Traveler15164: you in ?
<Traveler15164> redoing it worked
<bodhi_zazen> lovinglinux: ?
<Traveler15164> strange
<bodhi_zazen> OK, so ...
<bodhi_zazen> as you can see we are root :)
<lovinglinux> OK, I am in
<Nano_ext3> yay!
<bodhi_zazen> as you can see, we started a new shell
 * Nano_ext3 runs around in circles with streamers
<bodhi_zazen> guru was jailzsh
<bodhi_zazen> root is bash
<bodhi_zazen> but the apparmor confinement follows us
<bodhi_zazen> so ...
<bodhi_zazen> First I am limiting root with iptables ...
<bodhi_zazen> sorry for the typo :(
<bodhi_zazen> as you can see, root can ping google , but not my lan
<jimi_hendrix> back
<bodhi_zazen> so lets stop iptables :)
<bodhi_zazen> OH NO
<bodhi_zazen> Permission denied
<jimi_hendrix> sudo it!
<Halow> He's root....
<jimi_hendrix> (i know)
<Rocket2DMn> tab complete fail
<bodhi_zazen> ok ..
<bodhi_zazen> lets mess with the settings a little
<bodhi_zazen> foiled again :)
<bodhi_zazen> Lets try this ::)
<bodhi_zazen> :)
<Halow> :O
<Snova> Ok, so the AppArmor restrictions followed you from jailzsh to root's Bash?
<bodhi_zazen> so you can see, although root can install apps, access to critical system files is restricted
<jimi_hendrix> r00t has uber fail?
<bodhi_zazen> yes Snova
<bodhi_zazen> We can start a new shell if we wish
<Rocket2DMn> My head just exploded.
<Nano_ext3> ugh gotta run, sorry guys
<bodhi_zazen> so ..
<Nano_ext3> have to head home for work tommorow :(
<Rocket2DMn> now bodhi_zazen , do these restrictions apply only when using sudo to access root?  What if you had a true root login, like "su -" ?
<Snova> Bye Nano_ext3.
<Nano_ext3> laters :(
<bodhi_zazen> any process you start is confined by apparmor
<bodhi_zazen> the restrictions follow you
<Nano_ext3> ill read more on aa this weekend
<Nano_ext3> def
<Nano_ext3> laters
<bodhi_zazen> no Rocket, watch
<bodhi_zazen> see, we are now guru again ?
<bodhi_zazen> guru is given jailzsh as a default shell
<bodhi_zazen> jailzsh is an apparmor profile and I think I can show it to you
<bodhi_zazen> There it is ...
<lovinglinux> That's it? Looks simple.
<bodhi_zazen> that was jail bash
<bodhi_zazen> jailbash is from jdong
<bodhi_zazen> posted here :
<bodhi_zazen> http://bodhizazen.net/aa-profiles/jdong/ubuntu-8.04/usr.local.bin.jailbash
<bodhi_zazen> and yes, it is simple
<lovinglinux> I'm gonna try this
<bodhi_zazen> I am restricting access to jailzsh as it is a fair amount more permissive than jailbash
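A jailbash-style profile really is short; a minimal sketch (the paths and rules below are illustrative assumptions, not a copy of jdong's linked profile, and `deny` rules need AppArmor 2.4 or later):

```
# Confine a shell: execute common binaries, write only under /home.
/usr/local/bin/jailbash {
  #include <abstractions/base>
  /usr/local/bin/jailbash rmix,
  /bin/** rmix,
  /home/** rw,
  deny /etc/shadow r,
}
```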
<bodhi_zazen> anything else you want to see in the shared session ?
<bodhi_zazen> please, other security questions ?
<jimi_hendrix> bodhi_zazen, is it possible to secure a windows server?
<bodhi_zazen> yes, of course
<Rocket2DMn> ahh hardened windows servers :)
<lovinglinux> I have one stupid question at http://ubuntuforums.org/showthread.php?t=1100778
<bodhi_zazen> Again, I am collecting aa profiles here : http://bodhizazen.net/aa-profiles/
<bodhi_zazen> download them, try them out, and if you wish send me your modifications and I will post them for others
<bodhi_zazen> lovinglinux: in a nutshell, no, your router is not ipv6
<bodhi_zazen> most people disable ipv6
<jimi_hendrix> Rocket2DMn, is it possible then?
<bodhi_zazen> ip providers hate ipv6 because ipv6 makes them obsolete as an ip provider
<bodhi_zazen> they would need to provide the physical layer however
<Rocket2DMn> yes jimi_hendrix you can lock down windows servers
<lovinglinux> bodhi_zazen:  so just leave ipv6 alone right? No need for iptables rules?
<bodhi_zazen> yes, or you can disable it if you wish
<lovinglinux> bodhi_zazen:  thanks
<bodhi_zazen> some people think their box runs faster if they disable it
<bodhi_zazen> np
<bodhi_zazen> please, I have been ranting, questions, questions :)
<jimi_hendrix> what is the average airspeed of a swallow
<lovinglinux> is there an alternative for intrusion detection without using MySQL?
<bodhi_zazen> yes lovinglinux
<bodhi_zazen> you can use snort + barnyard
<lovinglinux> I will look into that. Thanks
<bodhi_zazen> lovinglinux: http://searchenterpriselinux.techtarget.com/tip/0,289483,sid39_gci1255683_tax307468,00.html
<bodhi_zazen> although that may use mysql, and if so, my mistake
<ds305> quit Thanks bodhi
<jgoguen> lol :)
<lovinglinux> I have another question. Please wait because I have a inflamed finger, so I need time to type.
<bodhi_zazen> go lovinglinux
<bodhi_zazen> Well, we are close to the hour
<bodhi_zazen> Watch, if I close the screen session you all are disconnected :)
<bodhi_zazen> >:)
<Snova> Oh, like that? ;)
<bodhi_zazen> Just like that
<lovinglinux> I have an iptables rule to accept established connection. If I have a client listening to a port, but no other ports opened, is it possible for someone already connected to my client to establish connections on other ports?
<bodhi_zazen> The guest account can not connect without a session running
<bodhi_zazen> if you try you will be blacklisted after a few attempts
<bodhi_zazen> hard to follow lovinglinux
<lovinglinux> bodhi_zazen: maybe is just my paranoia
<bodhi_zazen> If your client is cracked and you are dropping new connections I do not think the client could normally establish a new connection on a new port
<bodhi_zazen> I guess they could use the established connection and leverage additional exploits
<lovinglinux> bodhi_zazen: through the same port?
<bodhi_zazen> Well, thank you everyone, it is 7 so we are "officially" over, although I will be available for say 10-15 minutes
<bodhi_zazen> then I have to go to my family
<duanedesign> aawesome!!! thank you
<bodhi_zazen> in theory lovinglinux
<Halow> Yes, thank you!
<bodhi_zazen> since the connection is established ...
<lovinglinux> Thank you very much. Really nice experience, specially the shared ssh session.
<bodhi_zazen> you are most welcome everyone
<duanedesign> applause
<bodhi_zazen> the beginners team is going to run additional sessions
<bodhi_zazen> and the shared ssh session is available to anyone willing to teach
<bodhi_zazen> I have found the shared ssh session is a very effective demo for apparmor and iptables , lol
<bodhi_zazen> wb k0001 :)
<lovinglinux> bodhi_zazen:  what do you think about UPnP?
<bodhi_zazen> Not a lot
<bodhi_zazen> Again, we all like convenience
<k0001> bodhi_zazen: hwllo
<bodhi_zazen> but we all hate it when we are cracked, lol
<lovinglinux> lol
<bodhi_zazen> so it is nice (of UPnP) for our flash drives to auto mount
<bodhi_zazen> but not so nice when malignant code uses this to automatically start its evil work ;)
<bodhi_zazen> security and convenience == yin and yang and we must bring balance to the force
<bodhi_zazen> it is just that the balance point is dependent on sphincter tone, :p
<lovinglinux> lol
<bodhi_zazen> If anyone is interested in topics or teaching sessions, please let me know
<lovinglinux> do I need to keep your key for further sessions?
<bodhi_zazen> I shall try to run a session every other week at this time with varied topics
<bodhi_zazen> I am sorry to have such limited times, I wish I could vary it more, but I have a family so this works best
<duanedesign> that is much appreciated
<bodhi_zazen> yes lovinglinux
<duanedesign> :)
<lovinglinux> what time is it there right now, and what time does it start?
<bodhi_zazen> I hope that the sessions are logged and posted in classroom
<bodhi_zazen> It is just past 7 PM local time for me
<bodhi_zazen> Sessions will start at 6 pm local time
<lovinglinux> Ok, great
<bodhi_zazen> and if anyone has a topic, add it to the list
<bodhi_zazen> I think we do another security session in 2 weeks
<bodhi_zazen> and after that I have been asked to cover permissions
<lovinglinux> permissions will be nice
<linuxwarrior> will the session on the 26th be the same as this one?
<bodhi_zazen> Add your topic here : https://wiki.ubuntu.com/BeginnersTeam/FocusGroups/Education/Proposals
<bodhi_zazen> put my name in as the instructor
<bodhi_zazen> and I will add them here : https://wiki.ubuntu.com/BeginnersTeam/FocusGroups/Education/Events
<bodhi_zazen> linuxwarrior: same topic
<bodhi_zazen> Hopefully different questions :)
<bodhi_zazen> I hope people will try iptables, apparmor, etc and bring questions
<Snova> Hmm... I could probably help with a few of those.
<linuxwarrior> ok ;)
<bodhi_zazen> http://bodhizazen.net/Tutorials/iptables/
<bodhi_zazen> I posted a number of links here : http://paste.ubuntu.com/133993/
<Traveler15164> what i don't get is i can genprof firefox and play around with it, then do the scan and it doesn't really add that much to the profile
<bodhi_zazen> no Traveler15164
<bodhi_zazen> That is the problem with apparmor, you will need to emulate a profile or make your own
<bodhi_zazen> firefox is not the best to start because it is large
<bodhi_zazen> Start with say xchat
<bodhi_zazen> or your irc client
<bodhi_zazen> and then go to firefox
<bodhi_zazen> sudo aa-enforce xchat
<bodhi_zazen> then
<lovinglinux> Is there a requirement for classes to be related with system configuration or can they be about how to use a specific kind of program, like multimedia for example?
<bodhi_zazen> tail -F /var/log/messages
<bodhi_zazen> open xchat and watch and resolve errors
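The enforce-and-watch loop above fits into the standard apparmor profiling workflow. A sketch, assuming xchat as the target and the apparmor-utils tools installed (profile names and log locations may differ by release; all of this needs root):

```shell
# AppArmor profiling loop, as described above (a sketch, not a recipe):
sudo aa-genprof xchat               # create a skeleton profile interactively
sudo aa-enforce xchat               # switch the profile to enforce mode
tail -F /var/log/messages           # watch for apparmor DENIED entries
# ...open xchat, exercise its features, note what gets denied...
sudo aa-logprof                     # turn logged denials into profile rules
sudo /etc/init.d/apparmor restart   # reload after editing a profile by hand
```

As noted later in the session, after editing a profile you restart apparmor, and restarting the confined app (and sometimes clearing its cache) does not hurt.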
<bodhi_zazen> lovinglinux: topics are open
<bodhi_zazen> we (the beginners team) are here to educate and we really want to grow this service and cover topics of interest to the community
<bodhi_zazen> We hope to add things like Moodle
<bodhi_zazen> http://fmc.isgreat.org/Ubuntu_Classroom/index.html
<bodhi_zazen> so we can develop more formal content
<bodhi_zazen> but ...
<Traveler15164> if you put just enough in the firefox profile to allow firefox to start up, then it lets you view or change anything in that session, but the settings or cache aren't saved, correct?
<bodhi_zazen> we are in the beginning phases
<Traveler15164> sorta like a sandboxing app
<bodhi_zazen> yes, I think Traveler15164
<lovinglinux> So maybe I could help with some stuff, like how to organize image collections using IPTC, EXIF and so on. I will think about it.
<bodhi_zazen> If you change (edit) the profile, you need to restart both apparmor and firefox for the effects to take place
<Traveler15164> ok
<bodhi_zazen> not always firefox, but it does not hurt
<bodhi_zazen> Sometimes you also need to clear your cache on firefox as well
<bodhi_zazen> lovinglinux: any help you can offer would be awesome
<bodhi_zazen> some team members help with content
<bodhi_zazen> others teach
<bodhi_zazen> some do nothing
<bodhi_zazen> :)
<lovinglinux> lol
<bodhi_zazen> it is a team effort and we are all volunteers
<bodhi_zazen> the main limiting factor , of course, is my time
<bodhi_zazen> I rely on the focus groups to help
<bodhi_zazen> OK, I gotta go
<bodhi_zazen> really, thank you all for coming
<bodhi_zazen> and lets see if we can continue and extend these sessions
<Halow> Thanks again. :)
<bodhi_zazen> we need both helpers and an audience :)
<lovinglinux> bodhi_zazen:  thanks again
<bodhi_zazen> PM me on the forms or come on by #ubuntuforums-beginners :)
<lovinglinux> cya
<bodhi_zazen> you are all most welcome Halow lovinglinux and everyone really
<bodhi_zazen> it was fun, I hope I did not rant on too long
<bodhi_zazen> c ya
<Traveler15164> thank you, bye
<linuxwarrior> thx bye
#ubuntu-classroom 2009-03-21
<G__81> can this channel be used to take sessions for people who are using ubuntu and who wish to do some software development on Ubuntu
<G__81> ?
<G__81> example: Let's say a person wants to know how to set up virtualization (qemu, UML) in ubuntu and build a virtual router environment
<G__81> can those kind of sessions be taken ?
<G__81> and some sessions on the Linux Kernel ?
#ubuntu-classroom 2009-03-22
<RomeoAva> Wow! So many people here! Where have you all been until now? I didn't know about you!
<RomeoAva> 56 people... that's something
#ubuntu-classroom 2010-03-23
 * elky hugs pleia2
 * pleia2 hugs elky 
<pleia2> :)
<MichelleQ> am I late?
<pleia2> nope :)
<pleia2> starting in 6 minutes
<MichelleQ> oh, good
<MichelleQ> just dashed in from another meeting
<mythos> who planned this schedule... it is one o'clock in the morning ._.
<pleia2> mythos: doodle poll
<pleia2> and my schedule
<mythos> pleia2, i was joking ;0)
<pleia2> :)
#ubuntu-classroom 2010-03-24
 * pleia2 nudges ClassBot 
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Current Session: Being an Op in Ubuntu Women channels - Instructor: pleia2 || Questions in #ubuntu-classroom-chat
<pleia2> ok, let's get started
<pleia2> Hi everyone! Welcome to a short class on being an operator (+o) in #ubuntu-women and #ubuntu-women-project
<pleia2> We'll have a pretty informal class here, so please feel free to jump in at any time and ask questions :)
<pleia2> who all is here for it?
<MichelleQ> o/
 * charlie-tca waves
<arand> \o
<pleia2> OK, we'll start off with some basics about opping in our channels
<pleia2> Operator status in the channels is given to responsible volunteers who have a history with the project and we (current ops) feel we can trust. Addition is pretty informal so if you've been with us for a while you're welcome to ask me.
<pleia2> this is not consistent across the ubuntu namespace
<pleia2> Now, this class will cover a lot of the basics of being an op so will be applicable to opping in other channels, but the etiquette section of this class will be targeted at handling the types of trolls our channels frequently encounter
<pleia2> and for the other ops in the audience - if you have some information I missed (or get wrong!) please feel free to jump in and add/correct as needed
<pleia2> = Class Outline =
<pleia2> 1. Etiquette
<pleia2> 2. Technical op basics: freenode and channel options, opping yourself, removing a user, banning a user
<pleia2> 3. Writing good bans
<pleia2> starting with the fun one :)
<pleia2> = Etiquette =
<pleia2> First and foremost, as an op you have a responsibility to the community to uphold the rules, tactfully handle abusive users and set an example for other members of the channel
<pleia2> As an Ubuntu channel we are upheld to the standards of ops throughout the Ubuntu namespace: https://wiki.ubuntu.com/IRC/IrcTeam/OperatorGuidelines
<pleia2> and of course to the Code of Conduct http://www.ubuntu.com/community/conduct
<pleia2> Additionally, as our channel presents certain challenges, we have our own guidelines which can be found here: http://wiki.ubuntu-women.org/IrcGuidelines and a page with further Op guidelines is here: http://wiki.ubuntu-women.org/IRCOpGuidelines
<pleia2> As we frequently get trolled for being a feminist project, it's important to anticipate this kind of trolling and be informed about it; an interesting document on the subject can be found here: http://rkcsi.indiana.edu/archive/CSI/WP/WP02-03B.html
<pleia2> I actually re-read this document last night while doing my final prep on this class, given the recent discussions about our channels, logging policies and safe spaces, it was even more enlightening and useful than the first time I read it :)
<pleia2> so I'd strongly encourage everyone to have a look, even if they just skim it
<pleia2> So, in our channel there is a delicate balance that we must seek to maintain to be welcoming to all and handling questions about the project from those who honestly wish to learn, while also protecting our members from users who wish to do harm
<pleia2> this isn't easy, I still make mistakes sometimes - either by letting someone troll too long or removing too quickly
<pleia2> to help us with this, if you ever have questions about whether a user should be removed or how to handle a situation, please don't hesitate to private message another op and ask their opinion
<pleia2> we also have a number of current and former irc council members who spend time in the channel, and they're all friendly if you want an outside opinion :)
<pleia2> Now, the escalation process for handling disruptive users:
<pleia2> (again, this is not uniform across ubuntu namespace, this is the procedure we've developed in #ubuntu-women and #ubuntu-women-project)
<pleia2> 1. If someone appears to be disruptive, try to get a handle on the core of their purpose. Maybe they are not intentionally being troublesome.
<pleia2> so perhaps you ask why they joined, what interest they have in the project
<pleia2> 2. Issue a warning
<pleia2> this is often done privately if their behavior is borderline, as an attempt to nudge them back into respecting the intent of the channel
<pleia2> but a public warning may be issued if the infraction is more overt - remember, part of our job is making sure our members feel safe, and we want them to know that ops are paying attention
<pleia2> 3. Ask them to leave
<pleia2> warnings may not be enough, so a simple "you're not welcome here" notice does actually get rid of some folks who are causing trouble
<pleia2> 4. Remove them from the channel
<pleia2> I typically just remove users rather than setting a ban right away, but it depends on the situation
<pleia2> if they keep trolling...
<pleia2> 5. Ban
<pleia2> Once the user has been taken care of, please deop yourself
<pleia2> this process can frequently be stressful (yep, even for seasoned ops!) so it's important that you stay friendly throughout
<pleia2> ..even if they refuse to treat you with respect, even private message you after hurling insults, do your best to deal with this professionally too
<pleia2> frequently trolls are looking to get a reaction; acting reasonably and maturely to remove a problem counters this
<pleia2> Any questions? Comments?
<pleia2> ok, we'll get to the technical aspects we all came for :)
<pleia2> = Technical op basics =
<pleia2> First, we'll start off with the network itself. Ubuntu IRC channels reside on the freenode irc network - this is where you end up when you log on to irc.ubuntu.com
<pleia2> The freenode network uses software called "ircd seven" and freenode has an excellent page to get you started learning how to use the network and the available channel and user modes here: http://freenode.net/using_the_network.shtml
<pleia2> To quickly cover some important things, there are "user modes" on the network which determine attributes of a user, and "channel modes" which set attributes of channels on the network. Knowing what these are is pretty useful, so keep this document handy :)
<Pici> May I add that its by no means necessary to memorize what all the different modes mean.
 * pleia2 memorizes very little, has memory capacity of a sieve
<pleia2> thanks Pici
<pleia2> Now, to cover some basics you'll need as an op, I have documented much of this over on a page I wrote a few years ago: http://wiki.ubuntu-women.org/Courses/IRCOp
<pleia2> First, if you're an op in the channel, how do you op up?
<pleia2> /msg chanserv op #ubuntu-women
<pleia2> For more about what you can do with chanserv, /msg chanserv help
<pleia2> How do you remove your op status?
<pleia2> /msg chanserv deop #ubuntu-women
<pleia2> How do you remove a user?
<pleia2> /remove #ubuntu-women nickname
<pleia2> ..but in some clients, like irssi, you will need to: /quote remove #ubuntu-women nickname
<pleia2> just play around and see how your client works (we'll have a playing session over in ##pleia2 after this session)
<pleia2> We like to "remove" rather than "kick" because this makes the client think it was a /part and so doesn't trigger auto-rejoin
<pleia2> (and remove tends to be the "preferred freenode way")
<pleia2> So, how do you ban a user?
<pleia2> A simple ban can usually be added with:
<pleia2> /ban nickname
<pleia2> But often the automatic ban a client sets isn't good enough, so now we'll talk about writing good bans
<pleia2> any questions before we get to that?
<pleia2> so over in #ubuntu-classroom-chat someone asked about why /quote may need to be used
<pleia2> ;)
<pleia2> this is because /remove is not a command that irssi understands, and it won't automatically pass things it doesn't understand to the server
<pleia2> so for commands like that in irssi you need to use /quote
<pleia2> alright, let's move on
<pleia2> (oh, and you don't need to ask questions in -chat, feel free to speak freely here :))
<pleia2> = Writing good bans =
<pleia2> Writing bans is an intimidating topic at first, but it turns out that it's actually not that hard!
<pleia2> When a user joins a channel, you'll typically see something like:
<pleia2> MeanTroll (~verybad@1.2.3.4-example.com) has joined #ubuntu-women
<pleia2> (in some clients you may need to enable viewing hostnames)
<pleia2> There are 3 important pieces of this which can be targeted in a ban: nickname (MeanTroll), ident (verybad) and hostname (1.2.3.4-example.com)
<pleia2> In a ban stanza, it would look like: nickname!ident@hostname
<pleia2> notice that a ! separates nickname and ident, and the @ symbol is put before hostname
<pleia2> there are more options with this, which nhandler will cover in a bit, but this is a basic ban
<pleia2> So in this example, MeanTroll!verybad@1.2.3.4-example.com would be a ban exactly matching their nickname, ident and hostname
<pleia2> Any questions so far?
<pleia2> Ok, well, this is not a great ban, since the offending user could just change their nickname and get back in to the channel - we need to write a better ban
<pleia2> It's easy to change a nickname (just "/nick SuperMeanTroll" and they get past our ban!) so here is how to ban based on ident and hostname:
<pleia2> /mode +b *!verybad@1.2.3.4-example.com
<pleia2> As you can see we put a * where the nickname used to be, this is a wildcard
<pleia2> What if they decide to change their ident too?
<pleia2> /mode +b *!*@1.2.3.4-example.com
<pleia2> again, a * is put where the ident used to be
<pleia2> the asterisk can be used in conjunction with other numbers and letters too, so if a person always has "bad" in their ident, you can do something like  /mode +b *!*bad*@1.2.3.4-example.com
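IRC servers match ban masks with simple glob wildcards, and a shell `case` statement uses the same `*`-style patterns, so you can sanity-check a mask locally before setting it. A sketch using the example masks from this session (the `matches` helper is hypothetical, not an IRC command):

```shell
#!/bin/sh
# Check an IRC-style ban mask against a nick!ident@host string using
# shell glob matching, which has the same * wildcard semantics.
matches() {
  case "$2" in
    $1) echo "banned: $2" ;;
    *)  echo "allowed: $2" ;;
  esac
}

# The *!*bad*@host mask catches the troll even after a /nick change...
matches '*!*bad*@1.2.3.4-example.com' 'SuperMeanTroll!verybad@1.2.3.4-example.com'
# ...but leaves an unrelated user on a different host alone.
matches '*!*bad*@1.2.3.4-example.com' 'NiceUser!good@5.6.7.8-example.com'
```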
<ClassBot> mhall119 asked: is * the only wildcard, or can do you some regex-style matching too?
<erUSUL> what if they change nick ident and renew ip with his/her isp? back to square one ;)
<pleia2> regarding mhall119's question - there are some options here, nhandler will cover them :)
<pleia2> erUSUL: that's next!
<pleia2> So, what if they get really desperate and reset their modem to get a new hostname?
<pleia2> You can try: /mode +b *!*@*-example.com
<pleia2> But be careful - this will ban everyone coming from the example.com service provider
<pleia2> and honestly this ban should only be put in place in emergencies and should be accompanied by joining #freenode and reporting that there is a user who is evading bans
<pleia2> there are also users who evade by using "proxies" which make writing bans essentially useless, this too is against freenode policy and should be reported
<ClassBot> mhall119 asked: is using proxies in general against freenode policy, or just to avoid bans?
<pleia2> "proxy" is a broad term
<pleia2> technically anyone who runs irssi from a server elsewhere is using a "proxy" since they aren't connecting from their originating connection
<pleia2> and of course this is ok :)
<pleia2> same with bouncers you run, etc
<pleia2> I don't know the precise policy (maybe a freenode staffer can help me out here) but in general abusing proxies which frequently aren't yours to cause trouble is where freenode starts putting in rules
<nhandler> freenode also bans open proxies
<pleia2> there we go, thanks nhandler :)
<nhandler> http://freenode.net/policy.shtml#proxies
<pleia2> any more questions about basic bans before I hand things off to nhandler to explain extbans?
<pleia2> alright, well, freenode also has "Extbans" which are discussed more at http://freenode.net/using_the_network.shtml which are used less frequently but you will see them
<pleia2> nhandler: all yours!
<nhandler> Thanks pleia2
 * pleia2 didn't know about these, and didn't have time to get up to speed :)
<nhandler> Extbans are a new feature that came with ircd-seven. To quote http://freenode.net/using_the_network.shtml:
<nhandler> "They take the form of: +b $type or +b $type:data
<nhandler> 'type' is a single character (case insensitive) indicating the type of match, optionally preceded by a tilde (~) to negate the comparison."
<nhandler> One of the most common uses of this type of ban is to replace the old +R channel mode which prevented unidentified users from talking in the channel.
<nhandler> This can be done with: /mode #channel +q $~a
<nhandler> Type 'a' matches the account name, and the ~ negates it. This command has the same result as setting channel mode +R used to have.
<nhandler> Extbans can also be used to match the realname or gecos (type r, which is similar to the old +d channel mode) or to match against the full nick!username@host#gecos (type x)
<ClassBot> There are 10 minutes remaining in the current session.
<nhandler> mhall119 wanted me to explain $~a a bit more
<nhandler> Basically, $a is an extban that can be used to match people with a certain account name. For instance, You could set a ban on $a:nhandler* to ban all people with an account name beginning with nhandler
<nhandler> The ~ negates that. So $~a:nhandler* could be used to ban everyone whose account name does not begin with nhandler
<nhandler> $a with nothing after it is the same as if I had used a wildcard
<nhandler> $~a has the effect of matching all people not identified
<nhandler> Examples of how to set these bans can be found on the freenode website: http://freenode.net/using_the_network.shtml
<nhandler> That site also has a bit more information on what everything means
<nhandler> Sometimes, you want a user to get forwarded to another channel when they attempt to join your channel.
<nhandler> A common example of this is a user whose IRC client keeps quitting/joining, spamming the channel.
<nhandler> You can append $#channel to the end of a ban to cause the user to get forwarded to #channel when they try and join your channel.
<nhandler> You will need to either be an OP in #channel or the channel will need to have mode +F set in order to create this type of ban forward.
<ClassBot> There are 5 minutes remaining in the current session.
<nhandler> An example of how to set this ban would be: /mode #mychannel +b $r:Foo*$#forwardtome
<nhandler> This would prevent any user whose gecos begins with "Foo" (note the extban) from joining #mychannel.
<nhandler> If they attempt to join #mychannel, they will get forwarded to #forwardtome
<nhandler> Any questions?
<erUSUL> why you use $r here ? can you explain that "type" ?
<nhandler> erUSUL: $r matches on the realname field. So any users who had a real name that began with Foo in my example would be affected
<mhall119> what is a "realname" in IRC?
<mhall119> the ident?
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<nhandler> mhall119: Do a /whois on me. My real name is set to Nathan Handler
<mhall119> ok, irssi is calling it "ircname"
<pleia2> it calls it "real_name" in the settings
<nhandler> Yep
<pleia2> just to be confusing *pats irssi*
<pleia2> for those of you who have your head spinning from nhandler's part - don't worry, this is advanced ban stuff that we covered to be thorough, so you know they exist when you see them; I guarantee you that most of the ops out there don't know how to use them :)
<pleia2> (like me, until now!)
<stooj> Thank goodness :)
<mhall119> that makes me feel better
<nhandler> Yep. The only extban you will probably actually see/use would be $~a (to quiet unidentified users during a bot attack)
<pleia2> any other questions?
<pleia2> ok, well if anyone wants to test their new found skills, please feel free to nudge me and we'll hop over to ##pleia2 to play around
<pleia2> we can kick and ban some stuff for sport! ;)
<nigelb> haha, thanks for the class you folks :)
<mhall119> yes, thanks pleia2 and nhandler
<pleia2> thanks for coming, everyone
<pleia2> we'll post the logs soon :)
<stooj> Thanks pleia2 & nhandler
<MichelleQ> bravo, pleia2 & nhandler
<nhandler> :)
<charlie-tca> Thanks very much
 * jcastro taps on the microphone
<jcastro> who's here for adopt-an-upstream in 10 minutes?
<cjohnston> o/ mayb
<cjohnston> e
<jcastro> 1 more minute!
<jcastro> anyone else around?
<cjohnston> I guess it may be just you and me
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Current Session: Adopt an Upstream - Instructor: jcastro || Questions in #ubuntu-classroom-chat
<cjohnston> you're opped.. you don't get to see all the coolness that is ClassBot
<jcastro> oh, should I unop?
<cjohnston> doesnt matter.. ClassBot will voice you if you are not opped or voiced
<cjohnston> but since you're already opped and it already ran, prolly better to stay the way you are
<jcastro> ah
<jcastro> let's give it one more minute
<jcastro> for the stragglers
<cjohnston> want me to leave and come back in? :-P
<jcastro> ok let's get started!
<jcastro> anyone else here or just cjohnston supporting the cause?
<jcastro> cjohnston: hmm, should we reschedule again?
<cjohnston> jcastro: I don't know if you will get much of a better response without being in a UDW/ODW type setting
<cjohnston> except maybe by doing some advertising
<jcastro> yeah, I should have blogged about it
<jcastro> what do you think about next week, but I can blog about it now?
<jcastro> and ask some other people to blog it too?
<cjohnston> sure
<cjohnston> day/time?
<jcastro> at least you were here. *hug*
<jcastro> wednesday same time I think?
<cjohnston> sure
<cjohnston> rescheduling now
<cjohnston> done
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - http://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi
<jcastro> bah I wouldn't be in this mess if I did it right the first time!
<cjohnston> or the second time? :-P
<jcastro> heh
<cjohnston> maybe you should teach me something else then.. ;-)
<jcastro> Well, first the earth cooled, and then the dinosaurs came ...
<cjohnston> lol
#ubuntu-classroom 2010-03-27
<diablo_> help
<diablo_> i just installed a turtlebeach riviera in order to produce 5.1 sound and cant get any sound from the spdif
<nhandler> diablo_: You might have better luck in #ubuntu, our support channel
<diablo_> that's what ppl keep telling me but they don't help either
<diablo_> but ill keep trying
#ubuntu-classroom 2010-03-28
<Trinidad> does anyone know how to install a modem in ubuntu.  i can get it to work in windows but not ubuntu
#ubuntu-classroom 2011-03-21
<JackyAlcine> o/
<Guest65480> Is a class in progress?
<Pendulum> Guest65480: no
#ubuntu-classroom 2011-03-22
<pbear> hi, i have an ubuntu on vmware but i can't log in remotely unless i'm already logged in on the vmware's ubuntu. where is the setting that can let me log in even if the vm isn't logged in?
<nhandler> pbear: Try #ubuntu for support
<jmarsden> pbear: Use #ubuntu for support, this is a classroom and there is no class now :)
<pbear> ok thx
<fernandofat> does anyone know if the class log will be published or available?
<pleia2> !logs
<ClassBot> Logs for all classroom sessions that take place in #ubuntu-classroom can be found on http://irclogs.ubuntu.com/
<fernandofat> i'm interested in "cloud computing 101" but I probably will be stuck at work so the logs would help
<fernandofat> !logs
<ClassBot> Logs for all classroom sessions that take place in #ubuntu-classroom can be found on http://irclogs.ubuntu.com/
<fernandofat> nice :D
<duanedesign> lernid test
#ubuntu-classroom 2011-03-23
<hungtran> hello
<sadeedtech> on
<Siri_> hello
<sadeedtech> hi
<Siri_> can you tell me what's going on here
<Siri_> i don't find kim0
<sadeedtech> I can't find him either, maybe the timing
<daiver> as you see he is not online yet
<Siri_> ohh...this was supposed to start at 4 right
<sadeedtech> I think the event is three hours from now
<sadeedtech> info
<sadeedtech> help
<anebi> hi guys
<JoeyI> hi
<ChrisRut> Hi
<natea> is there a session on GlusterFS happening right now?
<natea> i didn't get the timezone conversion right, so i'm coming late to the party :/
<EvilPhoenix> natea:  what timezone are you?
<natea> EvilPhoenix: EST
<EvilPhoenix> if you read the schedule, it starts at 4PM
<EvilPhoenix> oh wait
<EvilPhoenix> that's GMT
 * EvilPhoenix does the math
<EvilPhoenix> UTC... -0400...
<EvilPhoenix> oh
<EvilPhoenix> 12PMish
<natea> oh, it looks like it's not until 13:00
<natea> according to http://www.timezoneconverter.com/
<natea> "17.00 UTC Scaling shared-storage web apps in the cloud with Ubuntu & GlusterFS - semiosis"
<natea> 17.00 UTC -> 13.00 EST
<EvilPhoenix> grah my system isnt displaying times right
 * EvilPhoenix shoots his system
<EvilPhoenix> 13:00 is about 1PM
 * EvilPhoenix shall return after destroying his system
<kim0> Howdy
<kim0> Hello everyone, welcome to the very first Ubuntu Cloud Days
<ttx> yay!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Cloud Computing 101, Ask your questions - Instructors: kim0
 * ttx attends two conferences at once.
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html following the conclusion of the session.
<kim0> So again, good morning, good afternoon and good evening wherever you are
<kim0> Please be sure you're joined to this channel plus
<kim0> #ubuntu-classroom-chat : For Questions
<kim0> In case you would like to ask a question
<kim0> please start it with "QUESTION: <question goes here>"
<kim0> and write it in the #ubuntu-classroom-chat channel
<kim0> This session is mostly about taking questions and making sure everyone is well seated :)
<kim0> Seems like I have a question already
<ClassBot> EvilPhoenix asked: I think this could be the start of it.  Could you give a brief explanation of what "Cloud Computing" is defined as?
<kim0> Hi EvilPhoenix .. Good question indeed
<kim0> Trying to answer your question .. I will begin by saying
<kim0> Cloud has so many different definitions already :)
<kim0> Almost all companies have bent it to mean whatever product they're selling
<kim0> the term has really been abused
<kim0> There are also various definitions by institutions like NIST and others
<kim0> since there is no one single true definition .. I'll lay down some properties
<kim0> that almost everyone agrees should be present in a "cloud"
<kim0> 1- Pay per use .. Clouds are online resources that can be characterized by "pay per use"
<kim0> you only pay for the resources that you need .. the storage you consume
<kim0> the CPU/Memory compute capacity that you are using ..etc
<kim0> You never really (or should never) pay in advance .. (just in case you need that resource)
<kim0> 2- Instant scalability: Cloud solutions should be instantly scalable
<kim0> that is .. with one api call (that's one command, or a click of a button for non programmers)
<kim0> you should be able to allocate more resources
<kim0> Clouds convey the feeling of infinite scale .. of course in reality it's not truly infinite .. but it's large enough
<kim0> 3- API programmability .. Most cloud solutions are going to have an API .. an API is a programmatic way to control your resources
<kim0> Taking a prime example .. The largest commercial compute and storage cloud today is Amazon's AWS cloud
<kim0> With Amazon's cloud, with an api call (or running a command)
<kim0> you can instantly allocate "servers"
<kim0> so it's got an API interface
<kim0> it's scalable .. since you can always add more servers (or S3 storage) should you want to
<kim0> and you only pay for the consumed CPU hours .. or gigabytes of storage
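The "one API call allocates a server" idea above can be sketched with a command-line EC2 API client of that era, such as euca2ools. The AMI and instance IDs, keypair name, and instance type below are placeholders, and real use requires configured API credentials:

```shell
# Sketch of pay-per-use server allocation against an EC2-style API
# (all IDs are placeholders, credentials assumed to be configured):
euca-run-instances ami-XXXXXXXX --instance-type m1.small --key mykey
euca-describe-instances               # poll until the instance is "running"
euca-terminate-instances i-XXXXXXXX   # pay-per-use: stop paying when done
```

Because UEC/Eucalyptus and OpenStack expose the same API, as kim0 notes below, the same client commands can target a private cloud as well as Amazon's.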
<kim0> Clouds are usually split up by their type as well
<kim0> IaaS , PaaS and SaaS
<kim0> let me quickly comment on those types
<kim0> IaaS : Infrastructure as a Service
<kim0> This basically means you get "infrastructure" components (that is, servers, storage space, networking, etc.) as a service ..
<kim0> You use those to build your own cloud or application
<kim0> PaaS : Moves a little up the value stack
<kim0> It provides a complete development environment as a service
<kim0> so you basically upload some code .. and without needing to worry about servers or networks/switches or storage ..etc
<kim0> your application just runs on the "cloud" .. is scalable, is redundant
<kim0> someone else (the PaaS provider) did that work for you
<kim0> Examples of PaaS would be Google's AppEngine .. salesforce.com or others
<kim0> The last type is SaaS : Software as a Service
<kim0> This basically means providing a full complete application, that you are directly using in the cloud
<kim0> examples of that would be facebook, gmail, twitter ..etc
<kim0> Those are "applications" if you come to think of it .. more so than the notion of webpages
 * kim0 checks if he has more questions 
<ClassBot> BluesKaj asked: ok then what Ubuntu Cloud about ?
<kim0> Hi BluesKaj
<kim0> Very good question as well
<kim0> So Amazon's cloud is a very popular IaaS cloud. However, some people are not totally happy with the fact that they'd upload their data to amazon's datacenters
<kim0> some enterprises or ISPs .. would like to utilize the improved economics of the cloud model
<kim0> however still keeping their data and servers in-house (whatever that means to them)
<kim0> In order to build a cloud that competes with Amazon's cloud
<kim0> you need various software components
<kim0> Ubuntu packages, integrates and makes available the best of breed open-source software
<kim0> that enables you to build and operate your own cloud should you want to
<kim0> In the upcoming 11.04 natty release
<kim0> Ubuntu packages two open-source complete cloud stacks
<kim0> those would be
<kim0> - Ubuntu Enterprise Cloud : An Ubuntu integrated and polished cloud stack based on the popular Eucalyptus stack
<kim0> - OpenStack : A new opensource cloud stack that's gaining a lot of popularity
<kim0> Actually we have dedicated sessions for each of those cloud stacks!
<kim0> An interesting fact .. is that UEC and OpenStack both allow you to expose an API that is the equivalent of Amazon's API
<kim0> that means you can use the same management tools to control both the public (Amazon's ) cloud and your own private one!
<kim0> This is also great for providers wanting to run their own clouds
<kim0> so that was an overview of the cloud stacks available to enable
<kim0> you to build your own cloud environment
<kim0> Other than that .. and to fully answer the question of "What is ubuntu cloud" .. I need to add a few more points
<kim0> Ubuntu makes available official Ubuntu images that run on the Amazon cloud as well
<kim0> You can check them out (as they're regularly updated) on http://cloud.ubuntu.com/ami/
<kim0> you basically search for what you want, like (maverick 64 us-east) pick the ami-id
<kim0> and launch that
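As a sketch of that search-and-launch step (the ami-id and instance type below are placeholders, not a real image; look the real id up on the locator):

```shell
# Hypothetical launch of an AMI found via http://cloud.ubuntu.com/ami/
# ami-xxxxxxxx is a placeholder id, not a real image.
AMI="ami-xxxxxxxx"
CMD="ec2-run-instances $AMI --instance-type m1.small --region us-east-1"
echo "$CMD"   # printed as a dry run; run the command itself against a real AWS account
```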
<kim0> Also Canonical makes available Landscape a cloud management tool .. you can check it out at https://landscape.canonical.com/
<kim0> Also, Ubuntu is soon unleashing a cloud management and orchestration tool called "ensemble"
<kim0> that is going to revolutionize cloud deployments and management .. it's still in early tech-preview stage
<kim0> however we're having an ensemble session and demo today
<kim0> I think that mostly covers a broad definition of Ubuntu and the cloud
<ClassBot> Kruptein asked: so dropbox isn't cloud related? as you don't have to pay for it (basic)
<kim0> Hi Kruptein
<kim0> Well .. dropbox is cloud storage indeed
<kim0> I meant that with cloud .. when you want to grow you pay for what you used/need
<kim0> as opposed to buying a 1TB disk that sits on your desk so that when you need the capacity it'll be available for you
<kim0> with dropbox you pay for what you use .. although I believe they only allow payment in coarse packages
<kim0> as opposed to Amazon's S3 which charges you per GB of storage per month
<kim0> which is a more fine grained model
<ClassBot> BluesKaj asked: ok then what is Ubuntu Cloud about ?
<kim0> So I believe we covered that
<kim0> To quickly recap
<kim0> - Building your own private cloud : UEC/Eucalyptus or OpenStack
<kim0> - Running over the Public Amazon Cloud : Official Ubuntu Server images http://cloud.ubuntu.com/ami/
<kim0> - Systems Management tools : https://landscape.canonical.com/
<kim0> - Infrastructure automation : Ensemble (tech-preview)
<kim0> Again all of those tools and technologies (except for landscape) are having their own sessions that you'll enjoy :)
<kim0> Let me not forget as well about "Ubuntu ONE"
<kim0> a personal storage cloud (very similar to dropbox)
<kim0> Check it out at https://one.ubuntu.com/
<ClassBot> popey asked: Should your average end-user care about Ubuntu cloud? If so, why? If not, what do we say to end users when they see all this promotion of Ubuntu cloud stuff?
<kim0> Hi popey
<kim0> Great question
<kim0> It really depends on your point of view
<kim0> The usual-suspects to care about "cloud" stuff are going to be sys-admins, devops, IT professionals .. people who care about server environments and such .. However!
<kim0> If you ask me, yes non IT pros should care as well
<kim0> because the computing model is quickly shifting to a cloud model
<kim0> that is .. instead of you buying a pc, loading it with your personal applications and settings
<kim0> and being a sysadmin for yourself .. handling backups .. troubleshooting, software upgrades ..etc
<kim0> the world is shifting into an ipad/iphone/thin-client/mobile devices world
<kim0> where your data lives on a cloud
<kim0> is accessible by a wide variety of tools
<kim0> and all tools sync up together
<kim0> obviously the point of interest is going to be different, however it remains that the cloud touches all of us
<ClassBot> cdbs asked: The Clous world is buzzing about OpenStack. Natty will include support for OpenStack along with Eucalyptus. Once OpenStack Nova becomes stable enough (should happen soon, by May) then will Ubuntu begin recommending OpenStack for its cloud offerings?
<kim0> Hi cdbs
<kim0> Seems you're on top of things hehe I can't really claim to foresee the future. Ubuntu is, and has always been, aimed at providing the best-of-class open-source cloud technologies and software
<kim0> As it stands, the UEC product is based on Eucalyptus because it is a mature product
<kim0> however since openstack is rapidly maturing, it has been packaged and made available as well
<kim0> I am confident Ubuntu will continue to make available all mature choices of best of breed software
<ClassBot> Yuvi_ asked: you can differentiate between public cloud and private cloud?
<kim0> Hi Yuvi_
<kim0> Well, yeah I guess
<kim0> Public clouds are clouds operated by an entity you don't control
<kim0> and that provide services to multiple other tenants
<kim0> examples would be Amazon cloud, rackspace, go-grid, terremark ...etc
<kim0> A private cloud, is a cloud that probably runs behind your firewall on your own servers
<kim0> and that you can control, i.e. is operated by IT people you have direct influence upon
<ClassBot> at141am asked: Is the demo open to all for ensemble, if so when and where?
<kim0> Hi at141am
<kim0> Yes absolutely!
<kim0> The Ensemble session is today in less than a couple of hours
<kim0> right here in this same channel
<kim0> The session leader is probably going to be copy/pasting text so that you can follow the demo
<kim0> I'm not really sure how it would go .. but I'm sure it's gonna be loads of fun
<ClassBot> marenostrum asked: What does "Ubuntu One" have to do something with "cloud" concept?
<kim0> Hi marenostrum
<kim0> Ubuntu ONE is a personal cloud service
<kim0> It is designed for end-users .. that is non IT pros
<kim0> It provides services to sync your files and folders to the cloud
<kim0> sharing them to other people
<kim0> not only that .. but also
<kim0> syncs your "notes" across multiple machines
<kim0> your music
<kim0> Bookmarks
<kim0> I think soon it might sync application settings and the apps installed
<kim0> so that when you get a new Ubuntu machine .. it installs all your applications, applies all settings, syncs your data/notes/bookmarks ..etc
<kim0> that would be lovely indeed .. I'm not sure if it can do all that just yet though
<ClassBot> sveiss asked: do the official Ubuntu EC2 images receive updates? Specifically kernel updates, which are a bit of a pain to deal with via apt-get on boot.
 * kim0 trying to answer questions quickly :)
<kim0> Hi sveiss
<kim0> The answer is absolutely YES
<kim0> they do receive regular updates
<kim0> of course you can always apt-get upgrade them any way
<kim0> the one potential pain point .. is the one you have mentioned "kernel upgrades"
<kim0> for that .. I've some good news
<kim0> Newer AMIs are designed to use pv-grub
<kim0> which is a method exposed by Amazon to load the kernel from inside the image
<kim0> which means .. you can now apt-get upgrade your kernel .. and very simply reboot into it
<ClassBot> There are 10 minutes remaining in the current session.
<kim0> if you need to know which exact version switched to pvgrub .. check in at #ubuntu-cloud
<ClassBot> IdleOne asked: Repost for AndrewMC :What would be the benifits of using the "cloud" instead of, say  a dedicated server?
<kim0> Hi IdleOne
<kim0> the main benefits are really
<kim0> - Pay per use .. I might need ten servers today .. but only one tomorrow .. cloud allows that .. dedicated servers don't (you'd have to buy 10 servers all the time)
<kim0> - flexibility .. If my web application gets slashdotted .. and the load is too high .. within a few seconds .. I can spin up 20 extra cloud servers to handle the load
<kim0> - Also .. since almost all clouds provide an extensive API
<kim0> it really helps with IT automation .. spin up servers, assign them IPs, attach storage to them, mount a load balancer on top
<kim0> all by running a script .. not by running around connecting cables :)
<ClassBot> Yuvi_ asked: What is hybrid cloud? Under which scenario we can use that
<kim0> A hybrid cloud is a mix of public + private
<kim0> a typical use case would be
<kim0> you prefer running everything on a private cloud that you own and operate
<kim0> *however* should the incoming load be too high
<kim0> like your application was slashdotted
<kim0> you would dynamically "expand" to using a public cloud like amazon/rackspace
<kim0> to take some heat for you .. to lessen the load on your servers
<ClassBot> There are 5 minutes remaining in the current session.
<kim0> You can pull off something like that today with UEC and some smart scripts
<ClassBot> chadadavis asked: what advantage does a private cloud provide, vs a traditional server cluster, assuming that then the sysadmin work is not outsourced?
<kim0> running out of time ..
<kim0> trying to quickly answer
<kim0> well basically it's the same concept of public cloud
<kim0> Benefits would be
<kim0> - Complete infrastructure automation
<kim0> - Enabling "teams" to handle their own needs .. a team would spin up/down servers according to their needs
<kim0> lessening the load on IT staff
<kim0> also .. "pooling" of IT servers into one private cloud
<kim0> means providing a better service to everyone
<kim0> since everyone can use some of the resources when they need it
<kim0> so in short .. pooling, self service, low overhead, spin up/down
<kim0> Great
<kim0> Seems like I did manage to bust all questions :)
<kim0> If anyone would like to get a hold of me afterwards
<kim0> I am always hanging out in #ubuntu-cloud
<kim0> you can ping me any time and I will get back to you once I can
<kim0> The next session is by semiosis
<kim0> o/
<kim0> Using gluster to scale .. very interesting stuff!
<kim0> I love scalable file systems :)
<semiosis> Thanks kim0
<semiosis> Hello everyone
<semiosis> This Ubuntu Cloud Days session is about scaling legacy web applications with shared-storage requirements in the cloud.
<semiosis> I should mention up front that I'm neither an official nor an expert, I don't work for Amazon/AWS, Canonical, Gluster, Puppet Labs, or any other software company.
<semiosis> I'm just a linux sysadmin who appreciates their work and wanted to give back to the community.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Scaling shared-storage web apps in the cloud with Ubuntu & GlusterFS - Instructors: semiosis
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html following the conclusion of the session.
<semiosis> My interest is in rapidly developing a custom application hosting platform in the cloud.  I'd like to avoid issues of application design by assuming that one is already running and can't be overhauled to take advantage of web storage services.
<semiosis> I'll follow the example of migrating a web site powered by several web servers and a common NFS server from a dedicated hosting environment to the cloud.  In fact this is something I've been working on lately, as I think others are as well.
<semiosis> I invite you to ask questions throughout the session.  I had a lot of questions when I began working on this problem, but finding answers was very time-consuming and sometimes impossible.
<semiosis> My background is in Linux system administration in dedicated servers & network appliances, and I just started using EC2 six months ago.  I'll try to keep my introduction at a high level, and assume some familiarity with standard Linux command line tools and basic shell scripting & networking concepts, and the AWS Console.
<semiosis> Some of the advanced operations will also require euca2ools or AWS command line tools (or the API) because they're not available in the AWS Console.
<semiosis> Cloud infrastructure and configuration automation are powerful tools, and recent developments have brought them within reach of a much wider audience.  It is easier than ever for Linux admins who are not software developers to get started running applications in the cloud.
<semiosis> I've standardized my platform on Ubuntu 10.10 in Amazon EC2, using GlusterFS to replace a dedicated NFS server, and CloudInit & Puppet to automate system provisioning and maintenance.
<semiosis> GlusterFS has been around for a few years, and its major recent development (released in 3.1) is the Elastic Volume Manager, a command-line management console for the storage cluster.  This utility controls the entire storage cluster, taking care of server setup and volume configuration management on servers & clients.
<semiosis> Before the EVM, a sysadmin needed to tightly manage the inner details of configuration files on all nodes; now that burden has been lifted, enabling management of large clusters without requiring complex configuration management tools.  Another noteworthy recent development in GlusterFS is the ability to add storage capacity and performance (independently if necessary) while the cluster is online and in use.
<semiosis> I'll spend the rest of the session talking about providing reliable shared-storage service on EC2 with GlusterFS, and identifying key issues that I've encountered so far.  I'd also be happy to take questions generally about using Ubuntu, CloudInit, and Puppet in EC2.  Let's begin.
<semiosis> There are two types of storage in EC2, ephemeral (instance-store) and EBS.  There are many benefits to EBS: durability, portability (within an AZ), easy snapshot & restore, and 1TB volumes; the drawback of EBS is occasionally high latency.
<semiosis> Ephemeral storage doesn't have those features, but it does provide more consistent latency, so it's better suited to certain workloads.
<semiosis> I use EBS for archival and instance-store for temporary file storage.  And I can't recommend enough the importance of high-level application performance testing to determine which is best suited for your application.
<semiosis> GlusterFS is an open source scale-out filesystem.  It's developed primarily by Gluster and has a large and diverse user community.  I use GlusterFS on Ubuntu in EC2 to power a web service.
<semiosis> What I want to talk about today is my experience setting up and maintaining GlusterFS in this context.
<semiosis> First I'll introduce glusterfs architecture and terminology.  Second we'll go through some typical cloud deployments, using instance-store and EBS for backend storage, and considering performance and reliability characteristics along the way.
<semiosis> I'll end the discussion then with some details about performance and reliability testing and take your questions.
<semiosis> I think some platform details are in order before we begin.
<semiosis> I use the Ubuntu 10.10 EC2 AMIs for both 32-bit and 64-bit EC2 instances that were released in January 2011.  You can find these AMIs at the Ubuntu Cloud Portal AMI locator, http://cloud.ubuntu.com/ami/.
<semiosis> I configure my instances by providing user-data that cloud-init uses to bootstrap puppet, which handles the rest of the installation.  Puppet configures my whole software stack on every system except for the glusterfs server daemon, which I manage with the Elastic Volume Manager (gluster command.)
<semiosis> I've deployed and tested several iterations of my platform using this two-stage process and would be happy to take questions on any of these technologies.
<semiosis> Unfortunately the latest version of glusterfs, 3.1.3, is not available in the Ubuntu repositories.  There is a 3.0 series package but I would recommend against using it.
<semiosis> I use a custom package from my PPA which is derived from the Debian Sid source package, with some metadata changes that enable the new features in 3.1, my Launchpad PPA's location is ppa:semiosis/ppa.
<semiosis> Gluster also provides a binary deb package for Ubuntu, which has been more rigorously tested than mine.  You can find the official downloads here: http://download.gluster.com/pub/gluster/glusterfs/LATEST/
<semiosis> You can also download and compile the latest source code yourself from Github here:  https://github.com/gluster/glusterfs
<semiosis> Now I'd like to begin with a quick introduction to GlusterFS 3.1 architecture and terminology.
<ClassBot> EvilPhoenix asked: repost for marktma: any consideration for using Chef instead of Puppet?
<semiosis> i chose puppet because it seemed to be best integrated with cloud-init, it's mature, and has a large user community
<ClassBot> kim0 asked: Could you please mention a little intro about cloud-init
<semiosis> CloudInit bootstraps and can also configure cloud instances.  This enables a sysadmin to use the standard AMI for different purposes, without having to build a custom AMI or rebundle to make changes.
<semiosis> CloudInit takes care of setting the system hostname, installing the master SSH key and evaluating the userdata from EC2 metadata.  That last part, evaluating the userdata, is the most interesting.
<semiosis> It allows the sysadmin to supply a brief configuration file (called cloud-config), shell script, upstart job, python code, or a set of files or URLs containing those, which will be evaluated on first boot to customize the system.
<semiosis> CloudInit even has built-in support for bootstrapping Puppet agents, which as I mentioned was a major deciding factor for me
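A small sketch of that user-data mechanism (the hostname, package, paths, and AMI id below are made up for illustration; the `#cloud-config` keys `hostname`, `packages`, and `runcmd` are standard cloud-init ones):

```shell
# Write a minimal cloud-config user-data file, then show the launch command
# (dry run). All field values and the ami-id are examples only.
cat > user-data.txt <<'EOF'
#cloud-config
hostname: web01
packages:
  - glusterfs-client
runcmd:
  - [ mkdir, -p, /mnt/myvol ]
EOF
echo "ec2-run-instances ami-xxxxxxxx --user-data-file user-data.txt"
```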
<semiosis> Now getting back to glusterfs terminology and architecture...
<semiosis> Of course there are servers and there are clients.  With version 3.1 there came the option to use NFS clients to connect to glusterfs servers in addition to the native glusterfs client based on FUSE.
<semiosis> Most of this discussion will be about using native glusterfs clients, but we'll revisit NFS clients briefly at the end if there's time.  I haven't used the NFS capability myself because I think that the FUSE client's "client-side" replication is better suited to my application
<semiosis> Servers are set up in glusterfs 3.1 using the Elastic Volume Manager, or gluster command.  It offers an interactive shell as well as a single-executable command line interface.
<semiosis>  In glusterfs, servers are called peers, and peers are joined into (trusted storage) pools.  Peers have bricks, which are just directories local to the server.  Ideally each brick is its own dedicated filesystem, usually mounted under /bricks.
<ClassBot> natea asked: Given the occasional high latency of EBS, do you recommend it for storing database files, for instance PostgreSQL?
<semiosis> my focus is hosting files for web, not database backend storage.  people do use glusterfs for both, but I haven't evaluated it in the context of database-type workloads, YMMV.
<semiosis> as for performance, I'll try to get to that in the examples coming up
<ClassBot> natea asked: Can you briefly explain the differences between GlusterFS and NFS and why I would choose one over the other?
<semiosis> simply put, NFS is limited to single-server capacity, performance and reliability, while glusterfs is a scale-out filesystem able to exceed the performance and/or capacity of a single server (independently) and also provides server-level redundancy
<semiosis> there are some advanced features NFS has that glusterfs does not yet support (UID mapping, quotas, etc.) so please consider that when evaluating your options
<semiosis> Glusterfs uses a modular architecture, in which "translators" are stacked in the server to export bricks over the network, and in clients to connect the mount point to bricks over the network.  These translators are automatically stacked and configured by the Elastic Volume Manager when creating volumes (under /etc/glusterd/vols).
<semiosis> A client translator stack is also created and distributed to the peers which clients retrieve at mount-time.   These translator stacks, called Volume Files (volfile) are replicated between all peers in the pool.
<semiosis> A client can retrieve any volume file from any peer, which it then uses to connect directly to that volume's bricks.  Every peer can manage its own and every other peer's volumes, it doesn't even need to export any bricks.
<semiosis> There are two translators of primary importance: Distribute and Replicate.  These are used to create distributed or replicated, or distributed-replicated volumes.
<semiosis> In the glusterfs 3.1 native architecture, servers export bricks to clients, and clients handle all file replication and distribution across the bricks.
<semiosis> All volumes can be considered distributed, even those with only one brick, because the distribution factor can be increased at any time without interrupting access (through the add-brick command).
<semiosis> The replication factor however can not be changed (data needs to be copied into a new volume).
<semiosis> In general, glusterfs volumes can be visualized as a table of bricks, with replication between columns, and distribution over rows.
<semiosis> So a volume with replication factor N would have N columns, and bricks must be added in sets (rows) of N at a time.
<semiosis> For example, when a file is written, the client first figures out which replication set the file should be distributed to (using the Elastic Hash Algorithm) then writes the file to all bricks in that set.
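As a toy illustration of that placement step (this is NOT the real Elastic Hash Algorithm, just a stand-in `cksum` hash to show the idea of mapping a file name onto one replica set):

```shell
# Hash the file name and take it modulo the number of replica sets.
file="photo.jpg"   # example file name
sets=3             # example volume with 3 replica sets (3 rows of the brick table)
hash=$(printf '%s' "$file" | cksum | cut -d' ' -f1)
idx=$(( hash % sets ))
echo "file '$file' maps to replica set $idx"
```

The real algorithm also handles renames and brick additions, which this sketch ignores entirely.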
<semiosis> Some final introductory notes... First as a rule nothing should ever touch the bricks directly, all access should go through the client mount point.
<semiosis> Second, all bricks should be the same size, which is easy with using dedicated instance-store or EBS bricks.
<semiosis> Third, files are stored whole on a brick, so not only can't volumes store files larger than a brick, but bricks should be orders of magnitude larger than files in order to get good distribution.
<semiosis> Now I'd like to talk for a minute about compiling glusterfs from source on Ubuntu.  This is necessary if one wants to use glusterfs on a 32-bit system, since Gluster only provides official packages for 64-bit.
<semiosis> (as a side note, the packages in my PPA are built for 32-bit, but they are largely untested; I only began testing the 32-bit builds myself yesterday, and although it's going well so far, YMMV)
<semiosis> Compiling glusterfs is made very easy by the use of standard tools.
<semiosis>  First, some required packages need to be installed, these are: gnulib, flex, byacc, gawk, libattr1-dev, libreadline-dev, libfuse-dev, and libibverbs-dev.
<semiosis> After installing these packages you can untar the source tarball and run the usual "./configure; make; make install" sequence to build & install the program.
<semiosis> By default, this will install most of the files under /usr/local, with the notable exceptions of the initscript placed in /etc/init.d/glusterd, the client mount script placed in /sbin/mount.glusterfs, and the glusterd configuration file /etc/glusterfs/glusterd.vol.
<semiosis> (thats a static config file which you'll never need to edit, btw)
<semiosis> If you wish to install to another location (using for example ./configure --prefix=/opt/glusterfs) make sure those three files are in their required locations.
<semiosis> Once installed, either from source or from a binary package, the server can be started with "service glusterd start".  This starts the glusterd management daemon, which is controlled by the gluster command.
<semiosis>  The glusterd management daemon takes care of associating servers, generating volume configurations (for servers & clients,) and managing the brick export daemon (glusterfsd) processes.  Clients that only want to mount glusterfs volumes do not need the glusterd service running.
<semiosis> Another packaging note... the official deb package from Gluster is a single binary package that installs the full client & server, but the packages in my PPA are derived from the Debian Sid packages, which provide separate binary pkgs for server, client, libs, devel, etc allowing for a client-only installation
<semiosis> Now, getting back to glusterfs architecture, and setting up a trusted storage pool...
<semiosis> Setting up a trusted storage pool is also very straightforward.  I recommend using hostnames or FQDNs, rather than IP addresses, to identify the servers.
<semiosis> FQDNs are probably the best choice, since they can be updated in one place (the zone authority) and DNS takes care of distributing the update to all servers & clients in the cluster, whereas with hostnames, /etc/hosts would need to be updated on all machines
<semiosis> Servers are added to pools using the 'gluster peer probe <hostname>' command.  A server can only be a member of one pool, so attempting to probe a server that is already in a pool will result in an error.
<semiosis> To add a server to a pool the probe must be sent from an existing server to the new server, not the other way.  When initially creating a trusted storage pool, it's easiest to use one server to send out probes to all of the others.
<ClassBot> remib asked: Would you recommend using separate glusterfs servers or use the webservers both as glusterfs server/client?
<semiosis> excellent question!  there are benefits to both approaches.  Without going into too much detail, read-only can be done locally but there are some reasons to do writes from separate clients if those clients are going to be writing to the same file (or locking on the same file)
<semiosis> there's a slight chance for coherency problems if the client-servers lose connectivity to each other, and writes go to the same file on both... that file will probably not be automatically repaired, but that's an edge case that may never happen in your application.  testing is very important
<semiosis> thats called a split-brain in glusterfs terminology
<semiosis> writes can go to different files under that partition condition just fine, it's only an issue if the two server-clients update the same file and they're not synchronized
<semiosis> and I don't even know if network partitions are likely in EC2, it's just a theoretical concern for me at this point, so go forth and experiment!
<semiosis> When initially creating a trusted storage pool, it's easiest to use one server to send out probes to all of the others.
<semiosis> As each additional server joins the pool its hostname (and other information) is propagated to all of the previously existing servers.
<semiosis> One cautionary note, when sending out the initial probes, the recipients of the probes will only know the sender by its IP address.
<semiosis> To correct this, send a probe from just one of the additional servers back to the initial server; this will not change the structure of the pool but it will propagate an IP-address-to-hostname update to all of the peers.
<semiosis> From that point on any new peers added to the pool will get the full hostname of every existing peer, including the peer sending the probe.
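To sketch that probe sequence (hostnames are hypothetical, and the gluster commands are only emitted here rather than run against a live cluster):

```shell
# Emit the probes fileserver1 would send to build a 3-node pool, plus the
# single back-probe that teaches the pool fileserver1's hostname.
pool_setup_cmds() {
  for peer in fileserver2 fileserver3; do
    printf 'fileserver1$ gluster peer probe %s\n' "$peer"
  done
  printf 'fileserver2$ gluster peer probe fileserver1\n'   # the back-probe
}
pool_setup_cmds
```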
<ClassBot> kim0 asked: What's your overall impression of glusterfs robustness and ability to recover from split-brains or node failures
<semiosis> it depends heavily on your application's workload, for my application it's great, but Your Mileage May Vary.  this is the biggest concern with database-type workloads, where you would have multiple DB servers wanting to lock on a single file
<semiosis> but for regular file storage i've found it to be great
<semiosis> and of course it depends also a great deal on the cloud-provider's network, not just glusterfs...
<semiosis> resolving a split-brain issue is relatively painless... just determine which replica has the "correct" version of the file, and delete the "bad" version from the other replica(s); glusterfs will replace the deleted bad copies with the good copy and all future access will be synchronized, so it's usually not a big deal
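A sketch of that repair procedure (the volume name and path are invented for illustration, and the commands are printed rather than executed, since touching a brick directly is only legitimate in exactly this situation):

```shell
BAD_BRICK=/bricks/myvol1      # hypothetical brick holding the "bad" replica
FILE=docs/conflicted.txt      # hypothetical split-brained file
REPAIR=$(printf 'rm %s/%s\nstat /mnt/myvol/%s\n' "$BAD_BRICK" "$FILE" "$FILE")
echo "$REPAIR"   # step 1 runs on the bad replica's server, step 2 on any client
```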
<ClassBot> natea asked: Is the performance of GlusterFS storage comparable to a local storage? What are the downsides?
<semiosis> that sounds like a low-level component performance question, and I recommend concentrating on high-level aggregate application throughput.
<semiosis> i'll get to that shortly talking about the different types of volumes
<semiosis> Once peers have been added to the pool volumes can be created.  But before creating the volumes it's important to have set up the backend filesystems that will be used for bricks.
<semiosis> In EC2 (and compatible) cloud environments this is done by attaching a block device to the instance, then formatting and mounting the block device filesystem.
<semiosis> Block devices can be added at instance creation time using the EC2 command ec2-run-instances with the -b option.
<semiosis> EBS volumes are specified for example with -b /dev/sdd=:20 where /dev/sdd is the device name to use, and :20 is the size (in GB) of the volume to create.
<semiosis> Gluster recommends using ext4 filesystems for bricks, since ext4 has good performance and is well tested.
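Putting those brick-preparation steps together as a dry run (the device name, size, and mount point are examples; the echo wrappers keep this from actually formatting anything):

```shell
DEV=/dev/sdd          # device requested at launch via -b /dev/sdd=:20
MNT=/bricks/volume1   # conventional brick mount point
for cmd in "mkfs.ext4 $DEV" "mkdir -p $MNT" "mount $DEV $MNT"; do
  echo "$cmd"         # drop the echo to really prepare the brick
done
```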
<semiosis> As I mentioned earlier, the two translators of primary importance are Distribute and Replicate.  All volumes are Distributed, and optionally also Replicated.
<semiosis> Since volumes can have many bricks, and servers can have bricks in different volumes, a common convention is to mount brick filesystems at /bricks/volumeN.  I'll follow that convention in a few common volume configurations to follow.
<semiosis> The first and most basic volume type is a distributed volume on one server.  This is essentially unifying the brick filesystems to make a larger filesystem.
<semiosis> Remember though that files are stored whole on bricks, so no file can exceed the size of a brick.  Also please remember that it is a best-practice to use bricks of equal size.  So, let's consider creating a volume of 3TB called "bigstorage".
<semiosis> We could just as easily use 3 EBS bricks of 1TB each, 6 EBS bricks of 500GB each, or 10 EBS bricks of 300GB each.  Which layout to use depends on the specifics of your application, but in general spreading files out over more bricks will achieve better aggregate throughput.
<semiosis> so even though the performance of a single brick is not as good as a local filesystem, spreading over several bricks can achieve comparable aggregate throughput
<semiosis> Assuming the server's hostname is 'fileserver', the volume creation command for this would be simply "gluster volume create bigstorage fileserver:/bricks/bigstorage1 fileserver:/bricks/bigstorage2 ... fileserver:/bricks/bigstorageN".
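That command can be generated mechanically; for instance, for three 1TB bricks following the naming convention above (echoed only, nothing is created):

```shell
VOL=bigstorage
SRV=fileserver
CMD="gluster volume create $VOL"
for i in 1 2 3; do
  CMD="$CMD $SRV:/bricks/$VOL$i"   # bricks bigstorage1..bigstorage3
done
echo "$CMD"
```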
<semiosis> This trivial volume, which just unifies bricks on a single server, has limited performance scalability.  In EC2 the network interface is usually the limiting factor, and although in theory a larger instance should get a larger slice of network bandwidth, in practice that isn't what I've seen.
<semiosis> What I've found is that larger instances do not get much more bandwidth to EBS or to other instances (beyond the Large instance size, anyway; I'm sure smaller instances could do worse, but I haven't really evaluated them).
<semiosis> Glusterfs is known as a scale-out filesystem, and this means that performance and capacity can be scaled by adding more nodes to the cluster, rather than increasing the size of individual nodes.
<ClassBot> neti asked: Is GLusterFS using local caching in memory?
<semiosis> yes it does do read-caching and write-behind caching, but I leave their configuration at the default, please check out the docs at gluster.org for details, specifically http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Setting_Volume_Options
<semiosis> So the next example volume after 'bigstorage' should be 'faststorage'.  With this volume we'll combine EBS bricks in the same way but using two servers.
<semiosis> First of course a trusted storage pool must be created by probing from one server (fileserver1) to the other (fileserver2) by running the command 'gluster peer probe fileserver2' on fileserver1, then updating the IP address of fileserver1 to its hostname by running 'gluster peer probe fileserver1' on fileserver2.
<semiosis> After that, the volume creation command can be run: 'gluster volume create faststorage fileserver1:/bricks/faststorage1 fileserver2:/bricks/faststorage2 fileserver1:/bricks/faststorage3 fileserver2:/bricks/faststorage4 ...' where fileserver1 gets the odd numbered bricks and fileserver2 gets the even numbered bricks.
<semiosis> In this example there can be an arbitrary number of bricks.  Because files are distributed evenly across bricks, this has the advantage of combining the network performance of the two servers.
<semiosis> (interleaving the brick names is just my convention, it's not required and you're free to use any convention you'd like)
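The interleaving convention described above is easy to script. Here is a minimal sketch that just prints the volume-create command rather than running it; the hostnames and brick paths follow the hypothetical examples from the session:

```shell
#!/bin/sh
# Build the 'gluster volume create' command for a two-server distributed
# volume, interleaving odd-numbered bricks on fileserver1 and even-numbered
# bricks on fileserver2.
interleaved_create() {
  vol=$1       # volume name, e.g. faststorage
  nbricks=$2   # total number of bricks
  args=""
  i=1
  while [ "$i" -le "$nbricks" ]; do
    if [ $((i % 2)) -eq 1 ]; then host=fileserver1; else host=fileserver2; fi
    args="$args $host:/bricks/$vol$i"
    i=$((i + 1))
  done
  echo "gluster volume create $vol$args"
}

interleaved_create faststorage 4
```

Piping the printed command through a review step before executing it is a reasonable habit when generating commands like this.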
<ClassBot> kim0 asked: Since you have redundancy through replication, why not use instance-store instead of EBS?
<semiosis> ah I was just about to get into replication, great timing.  in short, you can, and I do!  instance-store has consistent latency going for it, but EBS volumes can be larger, can be snapshotted & restored, and can be moved between instances (within an availability zone) so that makes managing your data much easier
<semiosis> Now I'd like to shift gears and talk about reliability.
<semiosis>  In glusterfs clients connect directly to bricks, so if one brick goes away its files become inaccessible, but the rest of the bricks should still be available.  Similarly if one whole server goes down, only the files on the bricks it exports will be unavailable.
<semiosis> This is in contrast to RAID striping, where if one device goes down, the whole array becomes unavailable.  This brings us to the next type of volume, distributed-replicated.  In a distributed-replicated volume, as I mentioned earlier, files are distributed over replica sets.
<semiosis> Since EBS volumes are already replicated in the EC2 infrastructure it should not be necessary to replicate bricks on the same server.
<semiosis> In EC2, replication is best suited to guard against instance failure, so it's best to replicate bricks between servers.
<semiosis> The most straightforward replicated volume would be one with two bricks on two servers.
<semiosis> By convention these bricks should be named the same, so for a volume called safestorage the volume create command would look like this: 'gluster volume create safestorage replica 2 fileserver1:/bricks/safestorage1 fileserver2:/bricks/safestorage1 fileserver1:/bricks/safestorage2 fileserver2:/bricks/safestorage2 ...'
<semiosis> Bricks must be added in sets of size equal to the replica count, so for replica 2, bricks must be added in pairs.
<semiosis> Scaling performance on a distributed-replicated volume is similarly straightforward, and similar to adding bricks, servers should also be added in sets of size equal to the replica count.
<semiosis> So, to add performance capacity to a replica 2 volume, two more servers should be added to the pool, and the volume creation command would look like this: 'gluster volume create safestorage replica 2 fileserver1:/bricks/safestorage1 fileserver2:/bricks/safestorage1 fileserver3:/bricks/safestorage2 fileserver4:/bricks/safestorage2 fileserver1:/bricks/safestorage3 fileserver2:/bricks/safestorage3 fileserver3:/bricks/safestorage4 fileserver4:/bricks/safestorage4 ...'
<semiosis> Up to this point all of the examples involve creating a volume, but volumes can also be expanded while online.  This is done with the add-brick command, which takes parameters just like the volume create command.
<semiosis> Bricks still need to be added in sets of size equal to the replica count though.
<semiosis> also note, the "add-brick" operation requires a "rebalance" to spread existing files out over the new bricks; this is a very costly operation in terms of CPU & network bandwidth, so you should try to avoid it.
<semiosis> A similar but less costly operation is "replace-brick" which can be used to move an existing brick to a new server, for example to add performance with the addition of new servers without adding capacity
<ClassBot> There are 10 minutes remaining in the current session.
<semiosis> another scaling option is to use EBS bricks smaller than 1TB, and restore from snapshots to 1TB bricks.  this is an advanced technique requiring the EC2 commands ec2-create-volume & ec2-attach-volume
<semiosis> Well looks like my time is running out, so I'll try to wrap things up.  please ask any questions you've been holding back!
<semiosis> Getting started with glusterfs is very easy, and with a bit of experimentation & performance testing you can have a large, high throughput file storage service running in the cloud.  Best of all in my opinion is the ability to snapshot EBS bricks with the ec2-create-image API call/command, which is also available in the AWS console
<ClassBot> kim0 asked: Did you evaluate ceph as well
<semiosis> I am keeping an eye on ceph, but it seemed to me that glusterfs is already well tested & used widely in production, even if not yet used widely in the cloud... it sure will be soon
<ClassBot> neti asked: Is GlusterFS Supporting File Locking?
<semiosis> yes glusterfs supports full POSIX semantics including file locking
<semiosis> one last note about snapshotting EBS bricks... since bricks are regular ext4 filesystems, they can be restored from snapshot & read just like any other EBS volume, no hassling with mdadm or lvm to reassemble volumes like with RAID
<ClassBot> remib asked: Does GlusterFS support quotas?
<ClassBot> There are 5 minutes remaining in the current session.
<semiosis> no quota support in 3.1
<semiosis> Thank you all so much for the great questions.  I hope you have fun experimenting with glusterfs, I think it's a very exciting technology.  One final note for those of you who may be interested in commercial support...
<semiosis> Gluster Inc. has recently released paid AMIs for Amazon EC2 and Vmware that are fully supported by the company.  I've not used these, but they are there for your consideration.
<semiosis> The glusterfs community is large and active.  I usually hang out in #gluster which is where I've learned the most about glusterfs.  There's a lot of friendly and knowledgeable people there, as well as on the mailing list, who enjoy helping out beginners
<semiosis> thanks again!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: What is Ensemble? - Presentation and Demo - Instructors: SpamapS
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html following the conclusion of the session.
<SpamapS> So, I have prepared a short set of slides to try and explain what Ensemble is here: http://spamaps.org/files/Ensemble%20Presentation.pdf
<SpamapS> I will elaborate here in channel.
<SpamapS> Ensemble is an implementation of Service Management
<SpamapS> up until now this has also been called "Orchestration", and the term is not all that inaccurate, though I feel that Service Management is more appropriate
<SpamapS> "What is Service Management?"
<SpamapS> Service Management is focused on the things that servers do that end users consume
<SpamapS> Users connect to websites, dns servers, or (at a lower level) databases, cache services, etc
<SpamapS> Ensemble models how services relate to one another.
<SpamapS> Web applications need to connect to a number of remote resources. Load balancers need to connect to web application servers.. monitoring services need to connect to services and test that they're working.
<SpamapS> Ensemble models all of these in what we call "formulas" (more on this later)
<SpamapS> If this starts to sound like Configuration Management, you won't be the first to make that mistake.
<SpamapS> However, this sits at a higher level than configuration management.
<SpamapS> "Contrast With Configuration Management"
<SpamapS> Configuration management grew from the time when we had a few servers that were expensive to buy/lease/provision, and lived a long time.
<SpamapS> Because of this, system administrators modeled system configuration very deeply. Puppet, chef, etc., first and foremost, model how to configure *a server*
<SpamapS> As the networks grew and became more dependent on one another, the config management systems have grown the ability to share data about servers.
<SpamapS> However the model is still focused on "how do I get my server configured"
<SpamapS> Ensemble seeks to configure the service.
<SpamapS> With the cloud, we have the ability to rapidly provision and de-provision servers. So service management is tightly coupled with provisioning.
<SpamapS> Chef, in particular, from the config management world, has done a good job of adding this in with their management tools.
<SpamapS> However, where we start to see a lot of duplication of work in config management is in the sharing of knowledge about service configuration.
<SpamapS> Puppet and Chef both have the ability to share their "recipes" or "cookbooks"
<SpamapS> However, most of these are filled with comments and variables saying "change this for your site"
<SpamapS> The knowledge of how and when and why is hard to encode in these systems.
<SpamapS> Ensemble doesn't compete directly with them on this level. Ensemble can actually utilize configuration management to do service management.
<SpamapS> The comparison is similar to what we all used to do 15+ years ago with open source software
<SpamapS> download tarball, extract, cd, ./configure --with-dep1=/blah && make && sudo make install
<SpamapS> This would be an iterative process where we would figure out how to make the software work for our particular server every time.
<SpamapS> Then distributions came along and created packaging, and repositories, to relieve us from the burden of doing this for *most* low level dependencies.
<SpamapS> So ensemble seeks to give us, in the cloud, what we have on the single server.. something like 'apt-get install'
<SpamapS> "Terms"
<SpamapS> "service unit" - for the most part this means "a server", but it really just means one member of a service deployment. If you have 3 identical web app servers, these are 3 service units, in one web app service deployment.
<SpamapS> "formula" - this is the sharable, ".deb" for the cloud. It encodes the relationships and runtime environment required to configure a service
<SpamapS> "environment" - in ensemble, this defines the machine provider and settings for deploying services together. Right now this means your ec2 credentials and what instance type. But it could mean a whole host of things.
<SpamapS> "bootstrap" - ensemble's first job in any deployment is to "bootstrap" itself. You run the CLI tool to bootstrap it
<SpamapS> that means it starts a machine that runs the top level ensemble agent that you will communicate with going forward
<SpamapS> "Basic Workflow"
<SpamapS> This is how we see people using ensemble, though we have to imagine the details of this will change as ensemble grows, since it hasn't even been "released" yet.
<SpamapS> (though, as a side note, it is working perfectly well, and available for lucid at https://launchpad.net/~ensemble/+archive/ppa)
<SpamapS> 0. (left this out of the slide) - configure your environment. This means establishing AWS credentials and recording them in .ensemble/environment.yaml
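For step 0, a minimal sketch of what that credentials file might contain. This is a hypothetical example only: the key names and layout are assumptions, not taken from the session, so check the ensemble documentation for the actual schema.

```yaml
# .ensemble/environment.yaml -- hypothetical sketch, key names assumed
environments:
  sample:
    type: ec2
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
```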
<SpamapS> 1. Bootstrap (ensemble bootstrap) - this connects to your machine provider (EC2 right now) and spawns an instance, and seeds it using cloud-init to install ensemble and its dependencies
<SpamapS> 2. Deploy Services (ensemble deploy mysql wiki-db; ensemble deploy mediawiki demo-wiki)
<SpamapS> This actually spawns nodes with the machine provider, and runs the ensemble agent on them, telling them what service they're a part of and running the service "install" hooks to get them ready to participate in the service
<SpamapS> 3. Relate Services (ensemble add-relation demo-wiki:db wiki-db:db)
<SpamapS> This part won't always be necessary. Automatic relationship resolution is being worked on right now. But sometimes you will want to be explicit, or do a relation that is optional.
<SpamapS> In the example above, this tells demo-wiki and wiki-db about each other. I will pastebin a formula example to clear this up.
<SpamapS> http://paste.ubuntu.com/584424/
<SpamapS> This is the metadata portion of the mediawiki formula, which I created recently as part of the "Principia" project, which is a collection of formulas for ensemble: https://launchpad.net/principia
<SpamapS> If you look there, you see that it 'requires:' a relationship called 'db'
<SpamapS> the interface for that relationship is "mysql"
<SpamapS> These interface names are used to ensure you don't relate two things which have different interfaces
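Since the pastebin links above may no longer resolve, here is a hypothetical reconstruction of the two metadata fragments being described. The field names are assumptions based only on the discussion: a 'requires' side and a 'provides' side that share the interface name 'mysql'.

```yaml
# mediawiki formula metadata (hypothetical reconstruction)
name: mediawiki
requires:
  db:
    interface: mysql
---
# mysql formula metadata (hypothetical reconstruction)
name: mysql
provides:
  db:
    interface: mysql
```

Because both endpoints name the same interface, a relation between them can be checked for compatibility before any hooks run.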
<SpamapS> (almost done will take questions shortly)
<SpamapS> http://paste.ubuntu.com/584425/
<SpamapS> This is the corresponding metadata for mysql..
<SpamapS> as you see, it provides a relationship called 'db' as well, which uses the interface 'mysql'
<SpamapS> What this means is that the 'requires' side of the formula can expect certain data to be passed to it when it joins this relationship
<SpamapS> and likewise, the provides side knows that its consumers will need certain bits
<SpamapS> When this relationship is added, "hooks" are fired
<SpamapS> These are just scripts that are run at certain events in the relationship lifecycle
<SpamapS> These scripts use helper programs from ensemble to read and write data over the two-way communication channel.
<SpamapS> In the case of mysql, whenever a service unit joins a relationship, it creates a database for the service if it doesn't exist, and then creates a username/password/etc. and sends that to the consumer
<SpamapS> and the mediawiki hook for the relationship will configure mediawiki to use that database
<SpamapS> The code for all of this is in lp:principia if you are curious.
<SpamapS> the final slide is just an overview of ensemble's architecture under the hood.
<SpamapS> I will take questions now...
<SpamapS> marktma: GREAT question. Definitely. One of the goals is to make it easy to write new "machine providers". By doing EC2 first though, we should have a reasonable chance at working with UEC/Eucalyptus and maybe even OpenStack out of the box.
<ClassBot> marktma asked: is there any chance ensemble will be used for private clouds as well?
<SpamapS> hah, ok, see answer ^^
<ClassBot> kim0 asked: What does the interface: mysql .. actually mean
<SpamapS> I think I may have answered that already in the ensuing description..
<SpamapS> but essentially it's a loose contract between providers/requirers/peers on what will be passed through the communication channel
<ClassBot> EvilPhoenix asked: (for kim0): that contract .. is it defined somewhere
<SpamapS> It is only defined via the formulas. It is intentionally kept as a loose coupling to make formulas flexible. I could see it being strengthened a bit in the future.
<SpamapS> Now, I wanted to stream my desktop to demo ensemble in action..
<SpamapS> but that has proven difficult given the 20 minutes I had to attempt to set that up.
<SpamapS> So I will paste bin the terminal output of an ensemble run...
<SpamapS> I have setup a lucid instance for this, and the only commands not seen here are: sudo add-apt-repository ppa:ensemble/ppa ; apt-get update ; apt-get install ensemble ; bzr branch lp:principia ; cat > aws.sh
<SpamapS> the last bit is to store my aws credentials
<SpamapS> http://paste.ubuntu.com/584430/
<SpamapS> this is the bootstrap phase
<SpamapS> I now need to wait for EC2 to start an instance
<SpamapS> ubuntu@ip-10-203-81-87:~$ ensemble status
<SpamapS> 2011-03-23 18:42:02,263 INFO Connecting to environment.
<SpamapS> 2011-03-23 18:42:18,586 INFO Environment still initializing.  Will wait.
<SpamapS> And now it has spawned my bootstrap
<SpamapS> there will be live DNS names here, so hopefully my security groups will keep your prying eyes out..
<SpamapS> machines: 0: {dns-name: ec2-50-17-142-155.compute-1.amazonaws.com, instance-id: i-10f63f7f}
<SpamapS> services: {}
<SpamapS> 2011-03-23 18:42:50,216 INFO 'status' command finished successfully
<ClassBot> TeTeT asked: what would a system administrators task with ensemble be - write formulas or just deploy them or a mix?
<SpamapS> I'd imagine sysadmins would write the formulas for an organization's own services which consume existing services.
<SpamapS> The common scenario is a LAMP application which takes advantage of memcached, mysql, and has a load balancer
<SpamapS> The lamp app needs to have its config files written with the db, cache servers, etc., so the sysadmin would write the relation hooks for mysql and memcached. OR a developer could write these. The devops paradigm kind of suggests that they work together on this.
<SpamapS> Ok now I'll run my "demo.sh" script which builds a full mediawiki stack
<SpamapS> While this is going, I will stress that this is *unreleased* alpha software, though the dev team has been very diligent and the code is of a very high quality (written in python with twisted, and available at lp:ensemble)
<SpamapS> http://paste.ubuntu.com/584433/
<SpamapS> Now we'll need to wait a few minutes while all of those nodes spawn
<SpamapS> Now, I'm using t1.micro, so these provision *fast* .. we can watch their hooks run w/ debug-log...
<SpamapS> However they may already be done..
<SpamapS> Ideally, we'll have a wiki accessible at the address of 'wiki-balancer' .. lets see
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> TeTeT asked: is the deployment through ensemble itself or via cloud-init or puppet or other config tools?
<SpamapS> http://paste.ubuntu.com/584437/
<SpamapS> While you guys try to decipher that I'll answer TeTeT
<SpamapS> TeTeT: the nodes are configured via cloud-init to run ensemble's agent. After that, ensemble is in control running hooks. The formulas are pushed into S3, and then downloaded by the agent once it starts.
<SpamapS> So unfortunately, our load balancer has failed.. it is "machine 4" http://ec2-50-17-47-115.compute-1.amazonaws.com/ ... but the individual mediawiki nodes *are* working..
<SpamapS> http://ec2-204-236-202-35.compute-1.amazonaws.com/mediawiki/index.php/Main_Page
<SpamapS> Ahh, there was a bug in my demo.sh :)
<SpamapS> $ENSEMBLE add-relation wiki-balancer demo-wiki:reverseproxy
<SpamapS> mediawiki has no relation named reverseproxy
<SpamapS> 2011-03-23 18:46:37,900 INFO Connecting to environment.
<SpamapS> No matching endpoints
<SpamapS> 2011-03-23 18:46:38,473 ERROR No matching endpoints
<SpamapS> 2011-03-23 18:46:38,865 INFO Connecting to environment.
<SpamapS> We actually had that error but missed it. ;)
<SpamapS> lets relate the load balancer now
<ClassBot> There are 5 minutes remaining in the current session.
<SpamapS> ubuntu@ip-10-203-81-87:~$ ensemble add-relation wiki-balancer:reverseproxy demo-wiki:website
<SpamapS> 2011-03-23 18:56:28,059 INFO Connecting to environment.
<SpamapS> 2011-03-23 18:56:28,691 INFO Added http relation to all service units.
<ClassBot> kim0 asked: Can't a cache and a wiki service-units share the same ec2 instance
<SpamapS> 2011-03-23 18:56:28,691 INFO 'add_relation' command finished successfully
<SpamapS> kim0: the idea is that in that case, it's simpler to use something like LXC containers to make it easier to write formulas. However, in the case of purely non-conflicting formulas, there should be a way in the future to do that, yes
<SpamapS> http://ec2-50-17-47-115.compute-1.amazonaws.com/mediawiki/index.php/Main_Page
<SpamapS> And there you have a working mediawiki
<ClassBot> TeTeT asked: will ensemble also provide service monitoring, or is that better left to munin/nagios and alike
<SpamapS> TeTeT: The latter. nagios/munin/etc are just services in themselves. And they speak the same protocols as consuming services. If a formula wants to explicitly expose *more* over a monitoring interface they certainly can
<SpamapS> I think thats about all the time we have
<SpamapS> Thanks so much for taking the time to listen. https://launchpad.net/ensemble has more information!
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Using Linux Containers in Natty - Instructors: hallyn
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html following the conclusion of the session.
<hallyn> Ok, hey all
<hallyn> I'm going to talk about containers on natty.
<hallyn> In the past, that is, up through lucid, there were some constraints which made containers more painful to administer -
<hallyn> i.e.  you couldn't safely upgrade udev
<hallyn> that's now gone!
<hallyn> but, let me start at the start
<hallyn> containers, for anyone really new, are a way to run what appear to be different VMs, but without the overhead of an OS for each VM, and without any hardware emulation
<hallyn> so you can fit a lot of containers on old hardware with little overhead
<hallyn> they are similar to openvz and vserver - they're not competition, though.
<hallyn> rather, they're the ongoing work to upstream the functionality from vserver and openvz
<hallyn> Containers are a userspace fiction built on top of some nifty kernel functionality.
<hallyn> There are two popular implementations right now:
<hallyn> the libvirt lxc driver, and liblxc (or just 'lxc') from lxc.sf.net
<hallyn> Here, I'm talking about lxc.sf.net
<hallyn> All right, in order to demo some lxc functionality, I set up a stock natty VM on amazon.  You can get to it as:
<hallyn> ssh ec2-50-17-73-23.compute-1.amazonaws.com -l guest
<hallyn> password is 'none'
<hallyn> that should get you into a read-only screen session.  To get out, hit '~.' to kill ssh.
<hallyn> One of the kernel pieces used by containers is the namespaces.
<hallyn> You can use just the namespaces (for fun) using 'lxc-unshare'
<hallyn> it's not a very user-friendly command, though.
<hallyn> because it's rarely used...
<hallyn> what I just did there on the demo is to unshare my mount, pid, and utsname (hostname) namespaces
<hallyn> using "lxc-unshare -s 'MOUNT|PID|UTSNAME' /bin/bash"
<hallyn> lxc-unshare doesn't remount /proc for you, so I had to do that.  Once I've done that, ps only shows tasks in my pid namespace
<hallyn> also, I can change my hostname without changing the hostname on the rest of the system
<hallyn> When I exited the namespace, I was brought back to a shell with the old hostname
<hallyn> all right, another thing used by containers is bind mounts.  Not much to say about them, let me just do a quick demo of playing with them:
<ClassBot> ToyKeeper asked: Will there be a log available for this screen session?
<hallyn> yes,
<hallyn> oh, no. sorry
<hallyn> didn't think to set that up
<hallyn> hm,
<hallyn> ok, i'm logging it as of now.  I'll decide where to put it later.  thanks.
<hallyn> nothing fancy, just bind-mounting filesystems
<hallyn> which is a way of saving a lot of space, if you share /usr and /lib amongst a lot of containers
<hallyn> anyway, moving on to actual usage
<hallyn> Typically there are 3 ways that I might set up networking for a container
<hallyn> Often, if I'm lazy or already have it set up, I'll re-use the libvirt bridge, virbr0, to bind container NICs to
<hallyn> well, at least apt-get worked :)
<hallyn> If I'm on a laptop using wireless, I'll usually go that route, because you can't directly bridge a wireless NIC.
<hallyn> And otherwise I'd have to set up my own iptables rules to do the forwarding from containers bridge to the host NIC
<hallyn> If I'm on a 'real' host, I'll bridge the host's NIC and use that for containers.
<hallyn> that's what lxc-veth.conf does
<hallyn> So first you have to set up /etc/network/interfaces to have br0 be a bridge,
<hallyn> have eth0 not have an address, and make eth0 a bridge-port on br0
<hallyn> as seen on the demo
<hallyn> Since that's set up, I can create a bridged container just using:
<hallyn> 'lxc-create -f /etc/lxc-veth.conf -n nattya -t natty'
<hallyn> nattya is the name of the container,
<hallyn> natty is the template I'm using
<hallyn> and /etc/lxc-veth.conf is the config file to specify how to network
<hallyn> ruh roh
<hallyn> so lxc-create is how you create a new container
<hallyn> The rootfs and config files for each container are in /var/lib/lxc
<hallyn> you see there are three containers there - natty1, which I created before this session, and nattya and nattyz which I just created
<hallyn> The config file under /var/lib/lxc/natty1 shows some extra information,
<hallyn> including how many tty's to set up,
<hallyn> and which devices to allow access to
<hallyn> the first devices line, 'lxc.cgroup.devices.deny = a' means 'by default, don't allow any access to any device.'
<hallyn> from there any other entries are whitelist entries
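Put together, the device section of such a config file might look like the following hypothetical excerpt (the exact whitelist varies by template; the device numbers shown are the standard Linux ones for these nodes):

```
# /var/lib/lxc/natty1/config (hypothetical excerpt)
lxc.tty = 4                            # number of ttys to set up
lxc.cgroup.devices.deny = a            # by default, deny access to all devices
lxc.cgroup.devices.allow = c 1:3 rwm   # /dev/null
lxc.cgroup.devices.allow = c 1:5 rwm   # /dev/zero
lxc.cgroup.devices.allow = c 5:1 rwm   # /dev/console
```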
<ClassBot> kim0 asked: Can I run a completely different system like centos under lxc on ubuntu ?
<hallyn> yes, you can, and many people do.
<hallyn> The main problem, usually, is in actually first setting up a container with that distro which works
<hallyn> You can't 100% use a stock iso install and have it boot as a container
<hallyn> It used to be there was a lot of work you had to do to make that work,
<hallyn> but now we're down to very few things.  In fact, for ubuntu natty, we have a package called 'lxcguest'
<hallyn> if you take a stock ubuntu natty image,
<hallyn> and install 'lxcguest', then it will allow that image to boot as a container
<hallyn> It actually only does two things now:
<hallyn> 1. it detects that it is in a container (based on a boot argument provided by lxc-start),
<hallyn> uh, that wasn't supposed to be 1 :),
<hallyn> and based on that, if it is in a container, it
<hallyn> 1. starts a console on /dev/console, so that 'lxc-start' itself gets a console (like you see when i start a container)
<hallyn> 2. it changes /lib/init/fstab to one with fewer filesystems,
<hallyn> because there are some which you cannot or should not mount in a container.
<hallyn> now, lxc ships with some 'templates'.
<hallyn> these are under /usr/lib/lxc/templates
<hallyn> some of those templates, however, don't quite work right.  So a next work item we want to tackle is to make those all work better, and add more
<hallyn> let's take a look at the lxc-natty one:
<hallyn> it takes a MIRROR option, which I always use at home to point it at an apt-cacher-ng instance
<hallyn> it starts by doing a debootstrap of a stock natty image into /var/cache/lxc/natty/
<hallyn> so then, every time you create another container with natty template, it will rsync that image into place
<hallyn> then it configures it, setting hostname, setting up interfaces,
<hallyn> shuts up udev,
<hallyn> since the template by default creates 4 tty's, we get rid of /etc/init/tty5 and 6
<hallyn> since we're not installing lxcguest, we just empty out /lib/init/fstab,
<hallyn> actually, that may be a problem
<hallyn> upstart upgrades may overwrite that
<hallyn> so we should instead have the lxc-natty template always install the lxcguest package
<hallyn> (note to self)
<hallyn> and finally, it installs the lxc configuration, which is that config file we looked at before with device access etc
<hallyn> ok, I've been rambling, let me look for and address any/all questions
<ClassBot> kapil asked: What's the status of using lxc via libvirt?
<hallyn> good question, zul has actually been working on that.
<hallyn> libvirt-lxc in natty is fixed so that when you log out from console, you don't kill the container any more :
<hallyn> secondly, you can use the same lxcguest package I mentioned before in libvirt-lxc,
<hallyn> so you can pretty easily debootstrap an image, chroot to it to install lxcguest, and then use it in libvirt
<hallyn> we still may end up writing a new libvirt lxc driver, as an alternative to the current one, which just calls out to liblxc, so that libvirt and liblxc can be used to manipulate the same containers
<hallyn> but still haven't gotten to that
<ClassBot> kim0 asked: can I live migrate a lxc container
<hallyn> nope
<hallyn> for that, we'll first need checkpoint/restart.
<hallyn> I have a ppa with some kernel and userspace pieces - basically packaging the current upstream efforts.  But nothing upstream, nothing in natty, not very promising short-term
<ClassBot> ToyKeeper asked: Why would you want regular ttys in a container?  Can't the host get a guest shell similar to openvz's "vzctl enter $guestID" ?
<hallyn> nope,
<hallyn> if the container is set up right, then you can of course ssh into it;
<hallyn> or you can run lxc-start in a screen session so you can get back to it like that,
<hallyn> what the regular 'lxc.tty = 4' gives you is the ability to do 'lxc-console' to log in
<hallyn> as follows:
<hallyn> I start the container with '-d' to not give me a console on my current tty
<hallyn> then lxc-console -n natty1 connects me to the tty...
<hallyn> ctrl-a q exits it
<hallyn> now, the other way you might *want* to enter a container, which i think the vzctl enter does,
<hallyn> is to actually move your current task into the container
<hallyn> That currently is not possible
<hallyn> there is a kernel patch, being driven now by dlezcano, to make that possible, and a patch to lxc to use it using the 'lxc-attach' command.
<hallyn> but the kernel patch is not yet accepted upstream
<hallyn> so you cannot 'enter' a container
<ClassBot> rye asked: Are there any specific settings for cgroup mount for the host?
<hallyn> Currently I just mount all cgroups.
<hallyn> Using fstab on the demo machine, or just 'mount -t cgroup cgroup /cgroup'
<hallyn> the ns cgroup is going away soon,
<hallyn> so when you don't have the ns cgroup mounted, then you'll need cgroup.clone_children to be 1
<hallyn> however, you don't need that in natty.  in n+1 you probably will.
<ClassBot> kim0 asked: How safe is it to let random strangers ssh into containers as root ? how safe is it to run random software inside containers .. can they break out
<hallyn> not safe at all
<hallyn> If you google for 'lxc lsm' you can find some suggestions for using selinux or smack to clamp down
<hallyn> and, over the next year or two, I'm hoping to keep working on, and finally complete, the 'user namespaces'
<hallyn> with user namespaces, you, as user 'kim0' and without privilege, would create a container.  root in that container would have full privilege over things which you yourself own
<hallyn> So any files owned by kim0 on the host;  anything private to your namespaces, like your own hostname;
<hallyn> BUT,
<hallyn> even when that is done, there is another consideration:  nothing is sitting between your users and the kernel
<hallyn> so any syscalls which have vulnerabilities - and there are always some - can be exploited
<hallyn> now,
<hallyn> the fact is of course that similar concerns should keep you vigilant about other virtualization - kvm/vmware/etc - as well.  The video driver, for instance, may allow the guest user to break out.
<ClassBot> kim0 asked: Can one enforce cpu/memory/network limits (cgroups?) on containers
<hallyn> you can lock a container into one or several cpus,
<hallyn> you can limit its memory,
<hallyn> you can, it appears (this is new to me) throttle block io (which has been in the works for years :)
<hallyn> the net_cls.classid has to do with some filtering based on packet labels.  I've looked at it in the past, but never seen evidence of anyone using it
<hallyn> for documentation on cgroups, I would look at Documentation/cgroups in the kernel source
<hallyn> oh yes, and of course you can access devices
<hallyn> you remove device access by writing to /cgroup/<name>/devices.deny, an entry of the form
<hallyn> major:minor rwm
<hallyn> where r=read,w=write,m=mknod
<hallyn> oh, i lied,
<hallyn> first is 'a' for any, 'c' for char, or 'b' for block,
<hallyn> then major:minor, then rwm
<hallyn> you can see the current settings for a cgroup in /cgroup/<name>/devices.list
<hallyn> and allow access by writing to devices.allow
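Putting the entry format together, a small sketch (assuming cgroups mounted at /cgroup and a container named natty1, as in the session; /dev/null is char device 1:3):

```shell
container=natty1
croot=/cgroup/$container

# Entry format: <type> <major>:<minor> <access>
#   type:   a=any, c=char, b=block
#   access: r=read, w=write, m=mknod
if [ -d "$croot" ]; then
    echo "c 1:3 rwm" > "$croot/devices.deny"   # deny /dev/null (char 1:3) entirely
    echo "c 1:3 rw"  > "$croot/devices.allow"  # re-allow read/write, but not mknod
    cat "$croot/devices.list"                  # inspect the resulting whitelist
fi
```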
<ClassBot> sveiss asked: is there any resource control support integrated with containers? Limiting CPU, memory/swap, etc... I'm thinking along the lines of the features provided by Solaris, if you're familiar with those
<hallyn> you can pin a container to a cpu, and you can track its usage, but you cannot (last I knew) limit % cpu
<hallyn> oh, there is one more cgroup i've not mentioned, 'freezer', which as the name suggests lets you freeze a task.
<hallyn> so i can start up the natty1 guest and then freeze it like so
<hallyn> lxc-freeze just does 'echo "FROZEN" > /cgroup/$container/freezer.state' for me
<hallyn> lxc-thaw thaws it
<hallyn> make that lxc-unfreeze :)
<hallyn> can't get a console when it's frozen :)
<ClassBot> There are 10 minutes remaining in the current session.
<hallyn> there are a few other lxc-* commands to help administration
<hallyn> lxc-ls lists the available containers in the first line,
<hallyn> and the active ones in the second
<hallyn> lxc-info just shows its state
<hallyn> lxc-ps shows tasks in the container, but you have to treat it just right
<hallyn> lxc-ps just does 'ps' and shows you if any tasks in your bash session are in a container :)
<hallyn> lxc-ps --name natty1 shows me the processes in container natty1
<hallyn> and lxc-ps -ef shows me all tasks, prepended by the container any task is in
<hallyn> lxc-ps --name natty1 --forest is the prettiest :)
<hallyn> now, i didn't get a chance to try this in advance so i will probably fail, but
<hallyn> hm
<ClassBot> There are 5 minutes remaining in the current session.
<hallyn> there is the /lib/init/fstab which the lxcguest package will use
<hallyn> ok, what i did there,
<hallyn> was i had debootstrapped a stock image into 'demo1',  i just installed lxcguest,
<hallyn> and fired it up as a container
<hallyn> only problem is i don't know the password :)
<ClassBot> kim0 asked: Any way to update the base natty template that gets rsync'ed to create new guests
<hallyn> sure, chroot to /var/cache/lxc/natty1 and apt-get update :)
<hallyn> ok, thanks everyone
<kim0> Thanks a lot .. It's been a great deep dive session
<kim0> Next Up is OpenStack Intro session
<soren> o/
<soren> kim0: How does it work? Do you copy questions from somewhere else or do I need to do that myself?
<soren> Or do people just ask here?
<kim0> soren: you "/msg ClassBot !q" then !y on every question
<kim0> soren: please join #ubuntu-classroom-chat as well
<soren> This is complicated :)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud  Days - Current Session: Open-Stack Introduction - Instructors: soren
<soren> Hello, everyone!
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html following the conclusion of the session.
<soren> I'm Soren, I'm one of the core openstack developers.
<soren> OpenStack consists of two major components and a couple of smaller ones.
<soren> The major ones are OpenStack Compute, codenamed Nova.
<soren> ...and OpenStack Storage, codenamed Swift.
<soren> Swift is what drives Rackspace Cloud Files, which is a service very much like Amazon S3.
<soren> It's *massively* scalable, and is used to store petabytes of data today.
<soren> I work on Nova, though, so that's what I'll spend most time talking about today.
<soren> Nova is a project that started at NASA.
<soren> Apart from sending stuff into space, NASA also does a bunch of other research things for the US government.
<soren> Among them: "Look into this cloud computing thing"
<soren> This is what turned into the NASA Nebula project.
<soren> If you google it (I forgot to do so in advance), you'll find images of big containers that say Nebula on the side.
<soren> They're building blocks for NASA's cloud.
<soren> Anyways, they started out running this on Eucalyptus.
<soren> The same stuff that drives UEC.
<soren> This got.. uh... "old" eventually, and they decided to throw it out and write their own thing.
<soren> ..so they did, and they open sourced it.
<soren> Rackspace had plans for open sourcing their cloud platform, too, so they called NASA and said "wanna play?" (paraphrasing a little bit), and they were up for it.
<soren> So Rackspace had Swift, NASA had Nova. We put it together and called it OpenStack.
<soren> If you go to look at them, and they don't look like two pieces of the same puzzle, this is why. They share no ancestry, really.
<soren> They now work happily together, though.
 * soren attempts to work that questions thing
<ClassBot> EvilPhoenix asked: What exactly IS Open-Stack?
<soren> I guess that one is answered..
<ClassBot> medberry asked: Can you briefly differentiate openstack from eucalyptus
<soren> Yes. Yes, I can.
<soren> So, Eucalyptus corresponds to Nova.
<soren> They both focus on the compute side of things, while providing a *very* simple object store. Neither tries to do any sort of large-scale stuff.
<soren> Err..
<soren> For storage, I mean.
<soren> For the compute part, the architectures are *very* dissimilar.
<soren> So, last I looked (admittedly 1½ years ago, but I'm told this is still true), Eucalyptus is strictly hierarchical.
<soren> There's one "cloud controller" at the top.
<soren> There's a number of cluster controllers beneath this one cloud controller.
<soren> ...and there's a number of "node controllers" beneath the cluster controllers.
<soren> Eucalyptus is written in Java, and uses XML and web services for all its communication.
<soren> It polls from the top down.
<soren> Never the other way around.
<soren> Nova uses message queues.
<soren> Nova is written in Python.
<soren> We have no specific structure that must be followed.
<soren> There are a number of components: compute, network, scheduler, objectstore, api, and volume.
<soren> There can be any number of each of them.
<soren> So Nova itself has no single points of failure.
<soren> Oh, Eucalyptus's cluster and node controllers are written in C, by the way. I forgot.
<soren> All of Nova is Python.
<soren> AFAIK, Eucalyptus supports KVM and Xen.
<soren> We support KVM, Xen, Hyper-V, user-mode-linux, LXC (if not now, then *very* soon), VMware vSphere..
<soren> Eerr..
<soren> Yeah, I think that's all.
<soren> We also support a number of different storage backends (for EBS-like stuff): iSCSI, sheepdog, Ceph, AoE..
<soren> And one more, whose name I forget.
<soren> We're very, very modular in this way.
<soren> Last I checked, Eucalyptus supported AoE. They may or may not support more now. I'm not sure.
<ClassBot> kim0 asked: I understand openstack focuses on large scale deployments .. How suitable is it for openstack to be deployed in a small setting (5 servers?)
<soren> I'm glad you asked.
<soren> The Ubuntu packages I made of Nova work out-of-the-box on a single machine.
<soren> Scaling it out to 5 servers shouldn't be much work. There are some networking things that need to be set up; you need to point it at a shared database (for now; we're working towards a completely distributed data store) and a rabbitmq server.
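A rough sketch of that single-machine starting point; the package names follow the natty-era nova packaging and are an assumption, so check your release's packaging before relying on them:

```shell
# Guarded so it only runs on natty; package names are natty-era assumptions.
if [ "$(lsb_release -cs 2>/dev/null)" = "natty" ]; then
    sudo apt-get install rabbitmq-server
    sudo apt-get install nova-api nova-compute nova-network \
        nova-scheduler nova-objectstore
fi
```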
<soren> We're suffering a bit from our flexibility, really.
<soren> We can make very few assumptions about people's setup, so there might be a number of things that need to be set up correctly (e.g. which ip to use to reach an api server (or a load balancer in front of them), which server to use for this, which server to use for that).
<soren> It's pretty obvious pretty quickly, though, if something isn't pointed the right way.
<soren> We're "blessed" with a team of people in Europe and in most US timezones, so if you run into trouble #openstack (irc channel) is open almost 24/7 :)
<ClassBot> kim0 asked: Is nova deployed at rackspace in production yet ? did you guys go with xen or kvm, and why ?
<soren> Nova is not in production at Rackspace yet, no.
<soren> Rackspace has an existing platform with which we've not completely hit feature parity.
<soren> ...and apparently, it's not ok to make Rackspace's customers suffer because we want to run a different platform :)
<soren> Rackspace will be using XenServer.
<soren> Oh, I forgot to list that as a supported hypervisor. It is.
<soren> That's what they're used to, and that's what they can get support for for running Windows and stuff.
<ClassBot> markoma asked: Gluster was mentioned in a previous discussion. Is swift the right way to go, or Gluster?
<soren> They do very different things.
<soren> Gluster aims to provide a POSIX compliant filesystem.
<soren> Swift is an object store.
<soren> You address full objects. You cannot seek back and forth, replace parts of objects, etc.
<soren> Very much like Amazon S3.
<soren> Gluster recently announced they want to contribute to Swift. I don't know exactly how, but something's afoot :)
<ClassBot> jrisch asked: I think it's still unclear from the documentation, but it mentions something about a cloudpipe vm, but doesn't clarify it's role nor it's usage. Can you elaborate on that?
<soren> Ah, yes.
<soren> Cloudpipe is something NASA uses.
<soren> I don't think anyone else does, and perhaps no one ever will.
<soren> Each project has its own private subnet assigned.
<soren> Typically in the 10.0.0.0/8 range.
<soren> It's not reachable from the internet.
<soren> Cloudpipe images are images with an openvpn server in them.
<soren> Each project has such an instance running. They can connect to it using openvpn, and they can then reach their instances.
<soren> It's not required at all.
<soren> I've never used it.
<ClassBot> topper1 asked: is rabbitmq a SPOF since its clustering doesn't replicate queues?
<soren> In a sense.
<soren> From Nova's point of view, it's a bit of a black box.
<soren> We speak to something that speaks AMQP. We expect it to behave.
<soren> Just like we use an SQL database of some sort and expect it to behave.
<soren> RabbitMQ is way more stable than what we could have hacked up in the time it took to run "apt-get install rabbitmq-server".
<soren> *way* more stable.
<soren> There's work in progress to build a queueing service for OpenStack, but in general, we try to use existing components.
<ClassBot> n1md4 asked: There seems to be install guides for CentOS, RHEL, and Ubuntu, is there nothing specifically for Debian?
<soren> Not right now, I don't think.
<soren> I'd be *thrilled* if a DD stepped up and put OpenStack into Debian.
<soren> ...and sorted out all the dependencies.
<soren> It's silly not to, really.
<soren> It's just that no one has done it yet.
<ClassBot> markoma asked: do you, would you, use Ensemble to manage services for OpenStack?
<soren> I've no clue about what Ensemble does at the moment, so I can't really answer that.
<ClassBot> jrisch asked: If cloudpipe isn't required, how do you set up access to the VM's, IP mappings and stuff. Do the physical node act as a pipe/NAT device?
<soren> I tend to use floating ip's.
<soren> They're public IP's that you can dynamically assign to instances.
<soren> Alternatively, you can just use one of the other network managers and use a subnet that's routable.
<ClassBot> jrisch asked: So if you speak AMQP to the message queue, could one use ActiveMQ instead? (it supports clustering as far as I know).
<soren> AFAIK, we don't do anything that requires RabbitMQ.
<soren> So I guess ActiveMQ would work, if it speaks AMQP.
<ClassBot> topper1 asked: Is there work afoot to create API documentation (rest api) for swift... right now it requires 'you read the python'
<soren> Uh, there's plenty of docs.
<soren> Hang on.
<soren> http://www.rackspace.com/cloud/cloud_hosting_products/files/api/
<soren> Same thing.
<soren> I don't know where the ones labeled "openstack" are, but it's the same thing.
<soren> Ah, question queue is empty..
<soren> Where was I? :)
 * soren scrolls up
<soren> Nowhere, apparently.
<soren> Ok, process..
<soren> We do time based releases.
<soren> Just like Ubuntu.
<soren> Except we have 3-month cycles, rather than 6-month ones.
<soren> We align with Ubuntu so that every other OpenStack release should almost coincide with an Ubuntu release.
<soren> We have feature freezes, beta freezes, RC freezes and final freezes just like Ubuntu.
<soren> This is no coincidence :)
<soren> Ubuntu is our reference platform.
<soren> I'm a core dev of Ubuntu, too, so if we have a problem with a component outside Nova, we can fix it and get it into our reference platform quite easily.
<soren> This holistic view of the distribution has served us very well, I think.
<soren> Nova can be way cool, but if there are bugs in libvirt, we're going to suffer, too, for instance.
<soren> Ok, so say you wanted to work on something in Nova (or other parts of Openstack).
<soren> You can branch the code from launchpad (which we use for everything: blueprints, bugs, code, answers) using "bzr branch lp:nova"
<soren> Hack on it, upload a branch to launchpad, and click the "propose for merge" button.
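The steps above, as commands; the branch name and commit message are made up, and LP_USER stands in for your Launchpad id:

```shell
LP_USER=${LP_USER:-}   # e.g. LP_USER=yourname; left empty, the sketch is skipped

if command -v bzr >/dev/null && [ -n "$LP_USER" ]; then
    bzr branch lp:nova
    cd nova
    # ... hack on the code, then commit and push your work:
    bzr commit -m "Fix frobnication in the scheduler"
    bzr push "lp:~$LP_USER/nova/fix-frobnication"
    # finally, open the branch page on Launchpad and click "Propose for merge"
fi
```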
<soren> Within a couple of days someone should have looked at it and reviewed it.
<soren> If it's good, it gets approved. If it's less good, we (try to) give constructive feedback so that you can fix it.
<soren> Once it's good, it's approved.
<soren> Once approved, a component called Tarmac takes over.
<soren> Tarmac is run from our Jenkins instance: http://jenkins.openstack.org/
<soren> It looks for approved branches on Launchpad, merges them, and runs our test suite.
<soren> We have around 75% code coverage, I think.
<soren> Far from ideal, but it catches quite a few things.
<soren> If the tests pass, your branch is merged.
<soren> And that's it.
<soren> If the tests fail, your branch gets set back to "needs review" and you can go and fix it again.
<soren> This is fine. It happens all the time. Don't sweat it.
<soren> We're also doing some integration tests.
<soren> Oh, one other thing:
<soren> When a patch gets merged, it triggers a package build.
<soren> This means that if Launchpad doesn't have a huge backlog, less than 20 minutes after your branch has been reviewed, you can "apt-get upgrade" and get a fresh version of Nova with your patch in it.
<soren> So we continuously test that our packages build.
<soren> I have a Jenkins instance that checks the PPA for updates.
<soren> If there are updates, it installs the updates and runs a bunch of integration tests.
<soren> So within... I dunno, 35 minutes or so, probably, your patch has gone through unit tests, packages builds, and integration tests.
<soren> I think that's pretty cool.
<soren> We're working on expanding these tests.
<soren> So that we test more combinations of stuff.
<soren> I currently test KVM with the EC2 API using iSCSI volumes on Lucid, Maverick, and Natty.
<soren> We provide backported versions of stuff that is needed to run OpenStack on Lucid, which we do support.
<ClassBot> There are 10 minutes remaining in the current session.
<soren> ...as well as Maverick and Natty.
<soren> Well, there's nothing backported for Natty, because we put that directly into Ubuntu.
<ClassBot> kim0 asked: Can you talk a bit about nova's roadmap
<soren> Sort of.
<soren> There are some things on the road map already.
<soren> ...but we have a design summit coming up, where we'll be talking much more about the roadmap.
<soren> It's an open event in Santa Clara in about a month, if anyone wants to come.
<soren> Should be fun.
<soren> Things that I do know on the road map already:
 * soren looks desperately for the list.
<soren> https://blueprints.launchpad.net/nova
<soren> Well, this is the list of everything.
<soren> Cactus is the release we're working on now.
<soren> Bexar is the previous one.
<soren> Diablo is the next one.
<soren> Lots of different companies work on OpenStack. They have their own priorities.
<soren> Whatever they want to work on, they can.
<ClassBot> There are 5 minutes remaining in the current session.
<soren> So in that respect, it's hard to say what's going to land at any given time. It depends on what people feel like working on.
<soren> We're going to split out some stuff from nova (volume and network services), though.
<soren> That seems pretty certain right now.
<soren> And add support for the EC2 SOAP API.
<soren> People keep telling me no-one uses it, but... meh. I want to add it.
<soren> Man, I can't really remember more stuff right now :(
<ClassBot> jrisch asked: I know that Swift is in production several places (other than Rackspace) - do you know of any companies that are using NOVA (besides NASA)...?
<soren> Not at the moment, no.
<soren> This current dev cycle has been one focused on stability and deployability.
<soren> The goal has been to get Nova to a point where people could actually use it in production.
<soren> I've blogged a bit about some of the stuff I've done on that.
<soren> ..but lots of others have worked on it, too.
<soren> I guess that's it?
<soren> I hope it's been useful.
<kim0> Thanks soren
<kim0> This has been great
<kim0> Thanks everyone ..
<kim0> Hope you enjoyed the sessions
<kim0> See you tomorrow for the second day
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
 * DigitalFlux Missed today's Cloud day :(
<Meths> http://irclogs.ubuntu.com/2011/03/23/%23ubuntu-classroom.html
<DigitalFlux> Meths: Cool Thanks, may be tomorrow i can catch up
#ubuntu-classroom 2011-03-24
<nevrax> #ubuntu-classroom-chat
<budiw> hello all
<MarkAt2od> #ubuntu-classroom-chat
 * koolhead17 kicks nigelb 
<kim0> Hi folks .. Ubuntu Cloud Days starting in around 5mins here in #ubuntu-classroom .. You can discuss and ask questions in #ubuntu-classroom-chat .. Please feel free to tweet and share this info with your friends
<kim0> Those unfamiliar with IRC can simply use this web page http://webchat.freenode.net/?channels=ubuntu-classroom%2Cubuntu-classroom-chat
<smoser> hi all
<smoser> ok...
<smoser> so shall i start, mr kim0 ?
<kim0> You might indeed
<smoser> Hi, my name is Scott Moser.  I'm a member of the Ubuntu Server team.
<smoser> For the past 18 months or so, I've been tasked with preparing and managing the "Official Ubuntu Images" that can be used on Ubuntu Enterprise Cloud (UEC) or on Amazon EC2.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Rebundling/re-using Ubuntu's UEC images - Instructors: smoser
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html following the conclusion of the session.
<smoser> Some links, for reference:
<smoser> - https://help.ubuntu.com/community/UEC/Images
<smoser> - http://uec-images.ubuntu.com/releases/
<smoser> The first gives some general information about the images, the second gives access to download the images for use in UEC or Eucalyptus and AMI ids on Amazon.
<smoser> The subject of my discussion here is "rebundling/re-using Ubuntu's UEC images".
<smoser> "Rebundling" is the term used for taking an existing image (or instance), making some modifications to it, and creating a new image from it.
<smoser> With the Ubuntu images on UEC or EC2, There are generally 3 ways to rebundle an image.
<smoser>  * use a "bundle-vol" command from either euca2ools (euca-bundle-vol) or ec2-ami-tools (ec2-bundle-vol)
<smoser>  * use the EC2 'CreateImage' interface.
<smoser>  * make modifications to a pristine (unbooted) image via loop-back mounts
<smoser> I'll talk a little bit about each one of these, and then open the floor to questions.
<smoser> == General Re-Bundling Discussion ==
<smoser> The "Official Ubuntu Images" are generally stock Ubuntu server installs.  As with any default install, they're not much use out of the box.
<smoser> There are 2 basic ways to use Ubuntu images on EC2.
<smoser>  * "rebundle" one and have your own private (or public) AMI. (as we're discussing here)
<smoser>  * use the stock images, and customize the instance on first boot via scripted or manual methods.
<smoser> We've gone to a fair amount of trouble to make them generally malleable so that you can use them without the need to rebundle.
<smoser> Cloud-init (https://help.ubuntu.com/community/CloudInit) was developed to reduce the need for users to need to maintain their own AMIs.  Instead, the images are easily customizable on first boot.
<smoser> kim0 has put together several blog posts about how to get cloud-init to do your bidding so it's ready to use once you attach to it.
<smoser> There is some work in maintaining a rebundled image, and you can remove that effort by using stock images.
<smoser> http://foss-boss.blogspot.com/ is kim0's blog.
<smoser> you should all have that bookmarked and available in your RSS reader of choice
<smoser> :)
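As a concrete illustration of first-boot customization: a minimal cloud-config passed as user-data. The AMI id is a placeholder, and the package/runcmd entries are just examples:

```shell
# Write a minimal cloud-config file; the keys shown are standard cloud-init ones.
cat > user-data.txt <<'EOF'
#cloud-config
packages:
  - apache2
runcmd:
  - echo 'configured by cloud-init' > /etc/motd.tail
EOF

# Launch a stock image with it (ami-xxxxxxxx is a placeholder):
if command -v euca-run-instances >/dev/null; then
    euca-run-instances ami-xxxxxxxx --user-data-file user-data.txt
fi
```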
<smoser> So, while there are some costs involved in rebundling, there are reasons to rebundle.  If you have a large stack that you're installing on top of the stock image, it may take some time for you to do so.
<smoser> Rebundling generally allows you to have a more "ready-to-go", "custom" image.
<smoser> By rebundling an image, you can add your stack and then reduce the bringup time of your custom AMI.
<smoser> any questions ?
<smoser> == Bundle Volume ==
<smoser> Using 'ec2-bundle-vol' was the first way that I'm aware of to rebundle.  The euca2ools also provide a work-alike command 'euca-bundle-vol'.
<smoser> The way most people use this tool is to
<smoser>  * boot an instance that they want to start from
<smoser>  * add some packages to it, make some changes ...
<smoser>  * issue the rebundle command
<smoser> [sorry, issue the 'ec2-bundle-vol' or 'euca-bundle-vol' command]
<smoser> what this does is basically copy the contents of the root filesystem into a disk image
<smoser> and then package that disk image up for uploading
<smoser> as you can imagine, simply doing something like "cp -a / /mnt" (ignoring the recursive copy of /mnt) is not the most "clean" thing in the world.
<smoser> the euca-bundle-vol command and ec2-bundle-vol command both include some OS specific hacks, so they don't copy certain files.
<smoser> and, inside the images themselves, we've made some changes to make this "just work".
<smoser> once you've set up the euca2ools, you might rebundle with something like:
<smoser>   sudo euca-bundle-vol --config /mnt/cred/credentials --size 2048 --destination /mnt/target-directory
<smoser> !question
<smoser> hm..
<smoser> there was a question as to whether this applied to openstack
<smoser> largely, openstack's EC2 compatibility should make this work.
<smoser> i've not tried rebundling an image in openstack exactly, but i do know that the euca2ools interact with openstack fine. and copying a filesystem is not really cloud specific
<ClassBot> koolhead17 asked: Is this class limited to eucalyptus image bundling or openstack as well ?
<smoser> right. so my previous comments attempted to address that question
<ClassBot> kim0 asked: is euca-bundle-vol only for running instances, can't I poweroff an instance and bundle its disk while powered off
<smoser> euca-bundle-vol only runs in instances.
<smoser> euca-bundle-image (and ec2-bundle-image) take a filesystem image as input.
<smoser> after using euca-bundle-image (or ec2-bundle-image) you then have to upload and register the output
<smoser> i generally would suggest using 'uec-publish-image' instead, which is a wrapper that does those three things.  The most recent version of this tool in natty allows you to use either the ec2- tools or euca2ools under the covers.
<ClassBot> koolhead17 asked: but open-stack does not use the ramdisk part of the image
<smoser> i might be missing something.
<smoser> it is my understanding that the issue with the ubuntu images and openstack was that openstack was hard coded to expect a ramdisk
<smoser> whereas the 10.04 and beyond images from Ubuntu do not use a ramdisk, so none was available in the tarball that you download.
<smoser> i believe that a.) that bug is fixed
<smoser> b.) there *is* in openstack a way to boot an instance with an internal kernel, ie not having a separate kernel/ramdisk at all, but relying on the bootloader installed in the disk image
<smoser> boy... i'm getting loads of questions, and i'd like to kind of get back to my over all plan, and then i can take questions.
<smoser> rather than sitting in interrupt mode for the whole hour
<smoser> we will *definitely* have time to take questions, so please queue them up in #ubuntu-classroom-chat
<smoser> now where was i...
<smoser> so, after bundling, then you have to use euca-upload-bundle or ec2-upload-bundle and <prefix>-register to register your image.
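The full instance-store sequence then looks roughly like this; the credentials path, size and destination mirror the command shown earlier, while the bucket and manifest names are illustrative:

```shell
# Run inside the instance being rebundled; names are illustrative.
if command -v euca-bundle-vol >/dev/null; then
    sudo euca-bundle-vol --config /mnt/cred/credentials \
        --size 2048 --destination /mnt/target-directory
    euca-upload-bundle --bucket my-bucket \
        --manifest /mnt/target-directory/image.manifest.xml
    euca-register my-bucket/image.manifest.xml
fi
```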
<smoser> I should have noted above, that this "bundle-vol" really is only for instance-store images.
<smoser> Eucalyptus (in 2.0.X) only supports instance store images.
<smoser> i believe that they plan to have EBS root images in the future.
<smoser> So, that brings us to the second type of bundling
<smoser> == CreateImage ==
<smoser> When amazon began offering EBS root instances, they added an API call called 'CreateImage'
<smoser> CreateImage is an AWS API call that basically does the following:
<smoser>  * stop the instance if it is not stopped
<smoser>  * snapshot its root volume
<smoser>  * register an AMI based on that snapshot
<smoser>  * start the instance back up.
<smoser> The CreateImage api is exposed via a command line tool (http://docs.amazonwebservices.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-CreateImage.html) and also via the EC2 Web Console.
<smoser> This feature makes it dramatically easier for anyone to create a custom AMI.  There is literally one button that you push in the EC2 Console, and then type in a name and description.
<smoser> I would generally recommend using CreateImage if you're using an EBS based image.  It is an extremely useful wrapper, and will get you a consistent filesystem snapshot.
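For reference, the one-call version via the EC2 API tools; the instance id, name and description below are placeholders:

```shell
# i-12345678 is a placeholder; -n/-d set the AMI name and description.
if command -v ec2-create-image >/dev/null; then
    ec2-create-image i-12345678 \
        -n "lucid-plus-my-stack" \
        -d "stock lucid with my stack preinstalled"
fi
```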
<smoser> Once you have a snapshot id of a filesystem, you could actually fairly easily upload an instance-store image based on that snapshot.
<smoser> this is left as an exercise to the reader.
<smoser> So, the final way of rebundling an image
<smoser> == modify pristine download images ==
<smoser> Few other image producers on EC2 make their images available as filesystem images for download.
<smoser> Ubuntu does this so you can easily grab the image, make some changes to it, and then upload and register your modified image
<smoser> This might be the most involved way of creating an image, but it is also the one that lends itself best to automation
<smoser> For a simple example, say I wanted to add a user to an ubuntu image so that I could log in as that user on initial boot.
<smoser> What I would do is:
<smoser>  * launch a utility instance in EC2
<smoser> I'd pick a lucid 64 bit image, possibly even use an EBS root image and a t1.micro size.  The size would largely depend on what I wanted to do.
<smoser> once that image was up, I'd ssh to it.
<smoser> then, download an image tarball that I found a link to from https://uec-images.ubuntu.com/releases/lucid/
<smoser> $ wget https://uec-images.ubuntu.com/releases/lucid/release/ubuntu-10.04-server-uec-amd64.tar.gz
<smoser> then, extract the image, mount it loopback and make my modifications
<smoser> $ tar -Sxvzf ubuntu-10.04-server-uec-amd64.tar.gz
<smoser> $ sudo mkdir /target
<smoser> $ sudo mount -o loop *.img /target
<smoser> $ sudo chroot /target adduser foobar
<smoser> ... follow some prompts ...
<smoser> maybe make some other changes here
<smoser> $ sudo umount /target
<smoser> assuming you've also set up your credentials so that you can use euca-* or ec2-* tools, then you can do:
<smoser> uec-publish-image x86_64 *.img my-s3-bucket
<smoser> and out will pop an AMI-XXXXX id that you can then launch.
<smoser> This process lends itself *very* well to scripting.  You can launch the instance, connect to it, and do all the modifications via a program and revision control them.
<smoser> so you'll know exactly what you have
<smoser> Also, we make machine consumable information about how to download the images available at https://uec-images.ubuntu.com/query/
<smoser> For some things I was working on, I put together a script that does much of the above
<smoser> It assumes the instance is launched, and you're on it with credentials, but then does the rest
<smoser> http://bazaar.launchpad.net/~smoser/+junk/kexec-loader/view/head:/misc/publish-uec-image
<smoser> so...
<smoser> sorry for pushing through all that without taking interrupts, but I wanted to get through it.
<ClassBot> obino asked: is there a "preferred" file system for instances?
<smoser> 10.04 images I believe use ext3 filesystem.
<smoser> that should have been ext4, as the images are really intended to be "stock ubuntu installs", and ext4 was the default filesystem in 10.04
<smoser> the 10.10 images use ext4, and so does natty.
<smoser> it is possible that Ubuntu will move to btrfs as the default in 11.10. if that's the case, I'd like to follow that in the images (btrfs has some *really* nice features).
<smoser> so as to "preferred"....
<smoser> I know people use xfs, as it offers snapshotting functionality not available in ext4
<smoser> and Eric Hammond's "create-consistent-snapshot" is a popular tool that sits atop using xfs for your data partitions.
<smoser> I wrote a blog entry on how you can rebundle the Ubuntu images into an xfs based image at
<smoser> http://ubuntu-smoser.blogspot.com/2010/11/create-image-with-xfs-root-filesystem.html
<smoser> navanjr, regarding "are you suggesting we should use the CreateImage method on EC2 to create an image for my UEC Private Cloud?"
<smoser> i might have somewhat covered that.
<smoser> but you really cannot use CreateImage on EC2 to create an image for UEC
<smoser> one general approach that would work would be to get your instance into a state that you're happy with it
<smoser> then stop the instance
<smoser> snapshot its root volume
<smoser> start the instance
<smoser> attach that snapshot as another disk
<smoser> then copy the filesystem contents of the second disk to a disk image.
<smoser> that disk image then could be brought to UEC.
<smoser> i'd have to think through that a bit more, but i believe the general path is correct.
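smoser's sketch, spelled out as commands; every id, zone and device name here is a placeholder, so treat this as a shape rather than a recipe:

```shell
# Placeholders throughout: instance, volume and snapshot ids, zone, devices.
if command -v ec2-stop-instances >/dev/null; then
    ec2-stop-instances i-12345678
    ec2-create-snapshot vol-aaaaaaaa        # snapshot the root volume
    ec2-start-instances i-12345678
    # turn the snapshot into a fresh volume and attach it as a second disk:
    ec2-create-volume --snapshot snap-bbbbbbbb -z us-east-1a
    ec2-attach-volume vol-cccccccc -i i-12345678 -d /dev/sdf
    # then, on the instance, copy that disk's contents into a raw image:
    sudo dd if=/dev/xvdf of=/mnt/uec-image.img bs=1M
fi
```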
<smoser> semiosis pointed out: CreateImage will also snapshot any other (non-root) EBS volumes attached to the instance, and those snapshots are automatically restored & attached to new instances made from the AMI.
<smoser> CreateImage is *really* a handy wrapper.
<ClassBot> obino asked: thanks for cloud-init! Is there any plan to make cloud-init available for other distro?
<smoser> well, amazon has taken cloud-init to their CentOS derived "Amazon Linux".
<smoser> and I believe that they intend to continue doing so.
<smoser> i'm definitely interested in helping them, and have worked with some of their engineers.
<smoser> I'd also like to get cloud-init into debian.  I know of one person who was trying to do that, and one person who was interested in getting it into fedora.
<ClassBot> There are 10 minutes remaining in the current session.
<smoser> So, yes, I would like to see that.  I think consistent bootstrapping code across linuxes would be a general win.
<smoser> navanjr asked 'so there is no "createImage" similar for use on a running UEC instance?'
<smoser> there is no CreateImage-like functionality in UEC.  CreateImage relies upon EBS and block-level snapshotting, and Eucalyptus does not have EBS-root functionality in any released version that i'm aware of
<smoser> however, navanjr you might be able to get some more information out of obino
<smoser> (sorry, obino)
<smoser> I suspect that question might have been planted.
<smoser> it leads into the next session very well
<smoser> TeTeT will talk about "UEC Persistency", which offers a way to get EBS-root-like function on UEC, even on 10.04 LTS.
<smoser> i won't steal his thunder, though.
<ClassBot> There are 5 minutes remaining in the current session.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: UEC persistency - Instructors: tetet
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html following the conclusion of the session.
<TeTeT> Hello! Nice to have you in class today.
<TeTeT> It's a bit weird, but I'm as nervous as before giving a live class :)
<TeTeT> If you have any questions, ask them in #ubuntu-classroom-chat with the prefix
<TeTeT> QUESTION
<TeTeT> I am not very familiar with the classbot, but I'll do my best to check if there are any questions in the queue.
<TeTeT> A brief introduction: I'm Torsten Spindler, been working for Canonical since December 2006 and I am part of the corporate services team, so unlike most other presenters here, I'm on the commercial side.
<TeTeT> I bring this up as one of my responsibilities is to maintain and deliver the Ubuntu Enterprise Cloud classes.
<TeTeT> This is directly related to this session, as I present one of the case study exercises found in the UEC class; it's the latest addition to the course material and brand new
<TeTeT> So what do I want to present here?
<TeTeT> During the UEC class I often got the question: 'How can I have an instance on UEC that stores all of its information on an EBS volume, so I can simply use it like a regular virtualized server?'
<TeTeT> Why is that of interest? The data on an instance is volatile, e.g. if the instance dies, all data of it is gone.
<TeTeT> Unless you store the data on a persistent cloud datastore; in UEC we have two of them: Walrus S3 and EBS volumes.
<TeTeT> An EBS volume is a device in an instance that serves as a disk, very much like a USB stick you insert in your system.
<TeTeT> you can see a graphic for this at http://people.canonical.com/~tspindler/UEC/attach-volume.png
<TeTeT> keep in mind that the disk actually attaches to an instance running on the node controller, not on the node controller itself
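In euca2ools terms, creating and attaching such a volume looks like this (the IDs, zone name, and device are placeholders; a sketch, not runnable without a cloud):

```
# create a 1 GB EBS volume in the cloud's availability zone
euca-create-volume --size 1 --zone mycloud
# attach it to a running instance, where it shows up as a disk (e.g. /dev/sdb)
euca-attach-volume -i i-aaaa1111 -d /dev/sdb vol-bbbb2222
```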
<TeTeT> the technology used in UEC is 'ATA over Ethernet - AoE', which means that the EBS volume is exported via network from a server, the EBS storage controller.
<TeTeT> With Amazon Web Services (AWS) it is possible to boot from an EBS volume in the public cloud, with a technology named 'EBS root'. For more info on this, please see http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?Concepts_BootFromEBS.html
<TeTeT> With UEC for the private cloud we use Eucalyptus as the upstream technology. Eucalyptus does not offer booting an instance straight from an EBS volume.
<TeTeT> So the situation looks a bit like this: http://people.canonical.com/~tspindler/UEC/01-instance-volume.png
<TeTeT> forgive my lack of design skills ;)
<TeTeT> in UEC people usually have this: http://people.canonical.com/~tspindler/UEC/02-instance-volume-standard.png
<TeTeT> an instance holds the kernel and /, while the data is saved on a persistent EBS volume
<TeTeT> UEC persistency is about moving the kernel and / to the EBS volume
<TeTeT> depicted in http://people.canonical.com/~tspindler/UEC/03-instance-volume-ebs-based-instance.png
<TeTeT> that is, the kernel of the instance launches a kernel stored on an EBS volume
<TeTeT> the kernel on the EBS volume will use / from the ebs volume and run completely from there
<TeTeT> I asked the Ubuntu server team for advice on realizing such a service back in January 2011. It was motivated by the questions I received during the UEC class.
<TeTeT> My initial thought was to have a specialized initrd that calls pivot_root on a root filesystem stored in an EBS volume.
<TeTeT> But Scott Moser (smoser) had a much better idea: Why not use kexec to load the kernel and root from the EBS volume?
<TeTeT> More information on kexec-loader can be found at http://www.solemnwarning.net/kexec-loader/
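The kexec step at the heart of this can be sketched roughly as follows (run as root inside the loader instance; the device and file names are illustrative, and as noted later in the session the real implementation only resembles kexec-loader):

```
# mount the EBS volume that carries the target kernel and root filesystem
mount /dev/sdb1 /mnt
# stage that kernel, pointing its root= at the EBS volume
kexec -l /mnt/vmlinuz --initrd=/mnt/initrd.img \
      --command-line="root=/dev/sdb1 ro console=ttyS0"
# jump straight into the staged kernel, no BIOS reboot needed
kexec -e
```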
<TeTeT> Scott went ahead and implemented this and my colleague Tom Ellis tested it.
<TeTeT> Then I used the docs from Scott, tested them and created an exercise for the UEC class out of it.
<TeTeT> We decided to publish this work in Launchpad and you can find the result in https://launchpad.net/uec-persistency.
<TeTeT> The branch under lp:uec-persistency contains the needed code and the exercise in the docs directory.
<TeTeT> smoser just told me that we use something like kexec-loader, not exactly the same tech
<TeTeT> see the chat channel for more background info ;)
<TeTeT> If you don't have bazaar installed right now, you can also take a look at the exercise PDF, found at http://people.canonical.com/~tspindler/UEC/ebs-based-instance.pdf
<TeTeT> the odt file is also freely available
<TeTeT> I will now cover this exercise step by step.
<TeTeT> No questions so far on what the aim is?
<TeTeT> to repeat, we want to use the kernel and root filesystem found on an EBS volume, not that in the instance itself
<TeTeT> The steps during the exercise have to be conducted from two different systems, one I named the [cloud control host], the other the [utility instance].
<TeTeT> it would be useful to have the PDF or ODT open for the exercise now
<TeTeT> With cloud control host I mean any system that holds your UEC credentials, so you can issue commands to the front end.
<TeTeT> The utility instance is created during the exercise and is used to populate an EBS volume with an Ubuntu image.
<TeTeT> The first two steps are preparing your cloud with a loader emi. This loader emi will later be used to kexec the kernel on your EBS volume.
<TeTeT> Steps three and four set up the utility instance. This is a regular UEC instance that is large enough to hold an Ubuntu image and store it on the attached EBS volume.
<TeTeT> The EBS volume created and attached in step three will be the base for your EBS based instances later on.
<TeTeT> Steps five to nine are needed to copy an Ubuntu image to the EBS volume and ensure it boots fine later on.
<TeTeT> In step 11 a snapshot of the pristine Ubuntu EBS volume is made. While one could use the EBS volume right away, it's much nicer to clone it via the snapshot mechanism of EBS.
<TeTeT> Just in case you later want another server based on that image.
<TeTeT> Steps 12 and 13 are there to launch an instance based on an EBS volume.
<TeTeT> The final step 14 describes how to ssh to the instance and check if it is really based on the EBS volume, e.g. /dev/sdb1.
<TeTeT> That's really all there is to it, thanks to smoser's work. Perfectly doable by yourself within 2-4 hours, starting with two bare metal servers
<TeTeT> Once you've been through all the steps and you want more EBS based instances of the same image, simply repeat from step 11 boot_vol.
<TeTeT> With this you should have virtualized servers in your UEC within a few minutes, quite a nice time for provisioning.
<TeTeT> Especially useful might be to assign permanent public addresses to those instances.
<TeTeT> This can be done with help of euca-allocate-address and euca-associate-address.
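As a sketch (the address and instance ID are placeholders):

```
# reserve a public address from the cloud's pool
euca-allocate-address
# associate it with the (re)launched EBS-based instance
euca-associate-address -i i-aaaa1111 192.0.2.10
```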
<TeTeT> any questions so far? Everything crystal clear?
<TeTeT> Well, it was a very short session then I fear...
<TeTeT> Closing words: we're looking into automating the storage of the Ubuntu image on the EBS volume to make this step less labor-intensive. So keep an eye on the Launchpad project.
<TeTeT> In the end you should be able to use any Ubuntu image within UEC on Ubuntu 10.04 LTS on an EBS volume in a few minutes, rather than hours
<TeTeT> if there's anyone interested in contributing to the project or UEC in general, please get in touch with kim0
<TeTeT> we're looking for people with any skills, from coding to writing documentation
<TeTeT> we'd also love to hear from you if you try UEC persistency and it works or doesn't work for you
<TeTeT> I tested the exercise a few times, but you never know
<TeTeT> you can touch base with us in #ubuntu-cloud and myself also in #ubuntu-training
<ClassBot> obino asked: have you ever RAIDed the EBS volumes? Is there any advantage in doing so?
<TeTeT> nope, I've never RAIDed the EBS volumes, but would think there might be a bit of a performance hit due to the ATA over Ethernet protocol
<TeTeT> also keep in mind that while served via network, the EBS volumes are likely to come from the same Storage Controller (SC)
<TeTeT> so not sure if this is a good approach
<TeTeT> might be interesting to use drbd though, and maybe use the EBS volume as well as the ephemeral storage and see how that goes
<TeTeT> I guess there's quite a bit of room for experimentation for EBS based instances
<TeTeT> guess you now have 30 minutes left in the session, so enough time to actually do the exercise if you have a UEC ready :)
<TeTeT> thanks for attending, catch me in #ubuntu-training if you run into problems with the exercise, bye now
<kim0> So we finished a bit early on this session
<kim0> Daviey starts in less than 30 mins with a puppet session
<kim0> Time for a coffee break :)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> There are 5 minutes remaining in the current session.
<Daviey> kim0, Are you managing the session?
<kim0> It's pretty much self managing .. topic will be changed in a few seconds
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Puppet Configuration Management - Instructors: Daviey
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html following the conclusion of the session.
<Daviey> My name is Dave Walker (Daviey), and I am a member of the Ubuntu Server Team.
<Daviey> Welcome to the puppet classroom session.  This session is mainly targeted at those that have had minimal or no exposure to puppet.
<Daviey> Puppet allows reproducible, consistent deployments, which is good for horizontal scaling and for replacing machines which have malfunctioned.
<Daviey> A good reference for more details about the project is at:
<Daviey> http://projects.puppetlabs.com/projects/puppet/wiki/About_Puppet
<Daviey> Please take a few moments to grok the content of that page, there is little point in my reproducing the content here.
 * Daviey waits a few minutes.
<Daviey> Now, some of that might sound a little complicated, but it really is simple when you get started.
 * Daviey continues.
<Daviey> Puppet focuses on 'configuration' management.  The initial operating system deployment is usually done by preseeding the installer, cobbler, FAI, or simply spawning a cloud machine, such as on EC2.
<Daviey> In regards to EC2.. people tend to use user-scripts or increasingly cloud-init.
<Daviey> Once the base operating system is installed, there are always some changes that need to be made to make the server usable for production.  This varies from performance tweaks to application configuration and even custom versions of packages.  This could all be handled with scripts and such, but that is less than clean and near impossible to maintain.  This is where puppet provides a clean solution.
<Daviey> Puppet generally acts on a client/server method, to manage multiple nodes.  However, it is also possible to use puppet on a single host.  For simplicity, this session will demonstrate a single host deployment example and some of the features of puppet via their configuration format - called a manifest.
<Daviey> In this session, we will do the following:
<Daviey> â¢ Connect to an instance in the cloud
<Daviey> â¢ Install puppet
<Daviey> â¢ Initial configuration
<Daviey> â¢ Configure the same node to install and create a basic apache virtual host
<Daviey> Firstly, i hope everyone will be able to look at a console window, and this IRC session concurrently.
<Daviey> I'm going to invite everyone to connect via ssh to a cloud instance:
<Daviey> $ ssh demo@demo.daviey.com
<Daviey> You'll need to accept the host key
<Daviey> I don't think it really requires verification in this instance.
<Daviey> (Although, it's generally good practice to compare the fingerprint)
<Daviey> The password is 'demo'
<Daviey> (Secure huh?)
 * Daviey waits for a confirmation.
<Daviey> I will type in the IRC channel comments, so please multi-task by looking at both.. Thanks :)
<Daviey> So, i just checked to see if we have apache2 installed... we do not!
<Daviey> (You can check there is nothing running as an httpd on port 80 by visiting http://demo.daviey.com)
<Daviey> (You should get a failure)
<Daviey> I'm running sudo apt-get update, to make sure our indexes are updated
<Daviey> The observant amongst you will notice i'm running Natty
<Daviey> The current development version
<Daviey> (I must be crazy doing a demo on this! :)
<Daviey> So, i just, sudo apt-get install puppet
<Daviey> This installs the puppet application and its dependencies.
<Daviey> This stage, would normally be done automatically during installation
<Daviey> (if you preseed it such)
<Daviey> You'll notice the output here:
<Daviey> puppet not configured to start, please edit /etc/default/puppet to enable
<Daviey> Did you all see the START=no option?
<Daviey> This means that the puppet client agent will not run automatically
<Daviey> My intention is to invoke puppet manually.. so i do not need the client to be running
<Daviey> (one moment please)
<Daviey> (slight technical issue, please hold)
<Daviey> Okay!
<Daviey> we are back
<Daviey> okay, this is the directory structure we should see
<Daviey> on a fresh installation
<Daviey> Okay, i have just copied a manifest to /etc/puppet/manifests
<Daviey> I hope everyone can see this
<Daviey> It's quite a quick one i have thrown together
<Daviey> It should:
<Daviey> Install apache2
<Daviey> add a virtual host, called demo.daviey.com
<Daviey> and enable it
<Daviey> (I'll make it available afterwards via a pastebin)
<Daviey> The stanza towards the bottom mentions, ip-10-117-82-138
<Daviey> (for the observant, you'll notice this is the hostname of the machine)
<Daviey> I could equally have put 'default' here... which would mean that it would do it for every machine connected
<Daviey> (in this instance, i am only using one machine)
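The pastebin itself never made it into the log; a minimal manifest in the shape Daviey describes might look like this (resource names, paths, and the site layout are illustrative, reconstructed from the `Apache2::Simple-vhost` resource visible in the notice output later on):

```puppet
# /etc/puppet/manifests/site.pp -- sketch only
package { 'apache2':
  ensure => installed,
}

service { 'apache2':
  ensure  => running,
  require => Package['apache2'],
}

# wraps the per-site config file plus the a2ensite step
define apache2::simple-vhost() {
  file { "/etc/apache2/sites-available/${name}":
    content => template('apache2/vhost.erb'),
    require => Package['apache2'],
    notify  => Service['apache2'],
  }
  exec { "a2ensite ${name}":
    path => '/usr/sbin:/usr/bin:/bin',
  }
}

node 'ip-10-117-82-138' {
  apache2::simple-vhost { 'demo.daviey.com': }
}
```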
<Daviey> Now, the actual virtual host needs a template...
<Daviey> lets create it.
<Daviey> puppet uses Ruby's ERB template system:
<Daviey> You'll notice that there are parts which can be expanded.
<Daviey> So, this is a generic apache virtual host template, that could be used for other virtualhosts
<Daviey> other than demo.daviey.com
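A vhost template in that spirit might look like this (illustrative; note the /var/log/apache2 log path, which is exactly where the typo discussed a few lines below crept in):

```erb
# templates/apache2/vhost.erb -- sketch; <%= name %> is filled in by puppet
<VirtualHost *:80>
    ServerName <%= name %>
    DocumentRoot /var/www/<%= name %>
    ErrorLog /var/log/apache2/<%= name %>-error.log
    CustomLog /var/log/apache2/<%= name %>-access.log combined
</VirtualHost>
```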
<Daviey> Now... lets make puppet do its thing!
<Daviey> I love it when a plan comes together.
<Daviey> Essentially, i did a dry run with these configs before the session.. and didn't clean up properly!
<Daviey> This is why i should have used puppet to clean up, as it would have done it better than me!
<Daviey> So, puppet installed apache2 and enabled the virtual host
<Daviey> puppet knows which package handler to use
<Daviey> ie, apt, yum etc
<Daviey> Now... if we check to see if apache started.. we'll see it failed... one moment
<Daviey> So...
<Daviey> (2)No such file or directory: apache2: could not open error log file /var/log/apache/demo.daviey.com-error.log.
<Daviey> Unable to open logs
<Daviey> This means i made a typo in my template... suggestions on how i should fix this?
<Daviey> kim0, Is quite correct with:
<Daviey> <kim0> Daviey: should be "apache2" there
<Daviey> But... How should i *fix* it?
<Daviey> We edit the template of course!
<Daviey> Now, we should be able to go to http://demo.daviey.com/
<Daviey> (My simple Task didn't try to start apache if it wasn't already running!)
<Daviey> notice: /Stage[main]//Node[ip-10-117-82-138]/Apache2::Simple-vhost[demo.daviey.com]/File[/etc/apache2/sites-available/demo.daviey.com]/content: content changed '{md5}5047b9f9a688c04e2697d9fd961960ed' to '{md5}2c32102fd06543c85967276eeee797e2'
<Daviey> ^^ Puppet knew it should create a new virtual host, based on the template changing!
<Daviey> How neat is that?!
<Daviey> Now, in a real life example - puppet would also manage pulling in the website..
<Daviey> puppet provides a FileBucket interface..
<Daviey> This is similar to rsync, and allows files to be retrieved from there.
<Daviey> However, for large files - people often use an external application which is configured via puppet.
<Daviey> This could be anything from rsyncd, nfs or even torrent!
<Daviey> facter is a really useful tool.  This is where the variables used in the templates are expanded from...  I think of it as lsb_release supercharged.
<Daviey> Here is an example of the output, just generated:
<Daviey> http://paste.ubuntu.com/584952/
<Daviey> This is a list of 'facts' about the system
<Daviey> One of the really nice things about the manifests... is that they can be conditional
<Daviey> So, i could do a different task based on the virtualisation type (or lack of), for example.
<Daviey> There is no point trying to use this machine as a virtual machine server, if it doesn't fit the requirements
<Daviey> Usually whether it's bare metal - and the amount of memory free
<Daviey> The configuration files are largely inheritance based, which fully supports overriding of configurations from the base class.
<Daviey> When puppet is installed on a client / server basis... it uses SSL for secure communication between the elements
<Daviey> The server runs on port 8140, so make sure the firewall is open (or the ec2 security group allows communication!)
<Daviey> Client (Agent) - puppetd
<Daviey> Server - puppetmasterd
<Daviey> ^^ This is the name of the applications
<Daviey> The puppetd runs on all the clients, and polls the server every 30 minutes by default, looking for changes
<Daviey> It defaults to looking for the dns hostname of 'puppet'
<Daviey> So, it's a good idea for the puppet master to have that dns entry set for a local network
<Daviey> Equally, i could have set puppet.mydomain.com
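In configuration terms (both keys are real puppet settings; the hostname is illustrative):

```ini
# /etc/puppet/puppet.conf
[agent]
# where the agent looks for its master; defaults to the bare name "puppet"
server = puppet.mydomain.com
# poll interval in seconds; 1800 = the default 30 minutes
runinterval = 1800
```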
<Daviey> This is probably a good place to stop the demo.  I will make my puppet configuration available for others to experiment with.
<Daviey> It really is not as complicated as it seems to get started.
<Daviey> When i first tried puppet, i found the 'getting started' docs to be somewhat complicated.
<Daviey> I would recommend people start with a minimal example like this.. and then build from there.
<Daviey> The puppet website has some excellent recipes to use as an example... but it's probably a good idea to start simple.
<Daviey> I will now take questions, and answer them as best as i can
<Daviey> Annnnd. classbot, i hate you
<Daviey> classbot isn't +v
<Daviey> <ClassBot> sveiss asked: how large is 'large'? is there a rule of thumb as to how much data a FileBucket can cope with? -- There is 1 additional question in the queue.
<Daviey> sveiss, that is a good question.. I seem to remember reading that since 2.6... massive improvements have gone into increasing its efficiency
<Daviey> However, it is still believed to be the likely bottleneck
<Daviey> I haven't found the later versions to suffer too badly from this bottleneck
<Daviey> but others have commented.
<Daviey> I think it depends on load..
<Daviey> I would ask that if you do try the filebucket that you report back to the ubuntu server team with your success.
<Daviey> (We often don't get enough feedback)
<ClassBot> sveiss asked: how large is 'large'? is there a rule of thumb as to how much data a FileBucket can cope with?
<Daviey> kim0 asked: Wouldn't clients looking for dns name "puppet" and blindly following it .. be a security risk
<Daviey> Well yes.. this is true.. This is one of the reasons SSL is used.
<Daviey> Essentially, the puppet master usually has a self-signed key
<Daviey> but the client needs to accept it.
<Daviey> This would normally happen as part of the installation, or bootstrapping
<Daviey> Which is an area handled before puppet is up and running.
<ClassBot> kim0 asked: Wouldn't clients looking for dns name "puppet" and blindly following it .. be a security risk
<Daviey> Wow.. i now understand ClassBot
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> kim0 asked: Do you reuse ready made recipes
<Daviey> You would be foolish not to!
<Daviey> There is a true trove of samples on the puppet wiki, and in other locations.
<Daviey> There are also additional modules
<Daviey> Which allow you to reduce the burden of what you need to do
<Daviey> http://forge.puppetlabs.com/
<Daviey> If there are no more questions, i will end my session.
<Daviey> I would like to thank everyone for coming
<Daviey> Please do experiment with puppet, and report back to us.
<Daviey> We are a friendly team who hang around in #ubuntu-server
<Daviey> Thank you for your time.
 * kim0 claps
<kim0> Thanks Daviey for the awesome session
<kim0> Next up is Edulix .. Presenting "Hadoop" The ultimate hammer to bang on big data :)
<ClassBot> There are 5 minutes remaining in the current session.
<Edulix> hello people, thanks for attending. this is the session titled "Using hadoop, divide and conquer"
<Edulix> kim0 told me about these ubuntu cloud sessions, and kindly asked me to do a talk on hadoop, so here I am  =)
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud Days - Current Session: Using hadoop, divide and conquer - Instructors: edulix
<Edulix> first I must say that I am in no way a hadoop expert, as I have been working with hadoop just for a bit over a month
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html following the conclusion of the session.
<Edulix> but I hope that I can help to show you a bit of hadoop and ease the learning curve for those who want to use it
<Edulix> I'm going to base this talk on the hadoop tutorial available at http://developer.yahoo.com/hadoop/tutorial/ as it helped me a lot, but it's a bit dense, so I'll do a condensed version
<Edulix> So what's hadoop anyway?
<Edulix> it's a large-scale distributed batch processing infrastructure, designed to efficiently distribute large amounts of work across a set of machines
<Edulix> here large amounts of work means really really large
<Edulix> Hundreds of gigabytes of data is low end for hadoop!
<Edulix> hadoop supports handling hundreds of petabytes... Normally the input data is not that big, but the intermediate data is or can be
<Edulix> of course, all this does not fit on a single hard drive, much less in memory
<Edulix> so hadoop comes with support for its own distributed file system: HDFS
<Edulix> which breaks up input data and sends fractions  (blocks) of the original data to some machines in your cluster
<Edulix> everyone that has tried will know that performing large-scale computation is difficult
<Edulix> whenever multiple machines are used in cooperation with one another, the probability of failures rises: partial failures are expected and common
<Edulix> Network failures, computers overheating, disks crashing, data corruption, maliciously modified data..
<Edulix> shit happens, all the time (TM)
<Edulix> In all these cases, the rest of the distributed system should be able to recover and continue to make progress. the show must go on
<Edulix> Hadoop provides no security, and no defense to man in the middle attacks for example
<Edulix> it assumes you control your computers so they are secure
<Edulix> on the other hand, it is designed to handle hardware failure and data congestion issues very robustly
<Edulix> to be successful, a large-scale distributed system must be able to manage resources efficiently
<Edulix> CPU, RAM, Harddisk space, network bandwidth
<Edulix> This includes allocating some of these resources toward maintaining the system as a whole
<Edulix> ..... while devoting as much time as possible to the actual core computation
<Edulix> So let's talk about the hadoop approach to things
<Edulix> btw if you have any questions, just ask in #ubuntu-classroom-chat with QUESTION: your question
<Edulix> Hadoop uses a simplified programming model which allows the user to quickly write and test distributed systems
<Edulix> and to benefit from its efficient & automatic distribution of data and work across machines
<Edulix> and it also exploits the underlying parallelism of the CPU cores
<Edulix> In a hadoop cluster, data is distributed to all the nodes of the cluster as it is being loaded in
<Edulix> HDFS will split large data files into blocks which are managed by different nodes in the cluster
<Edulix> Also replicating data in different nodes, just in case
<ClassBot> kim0 asked: Does hadoop require certain "problems" that fits its model ? can I throw random computations to it
<Edulix> I'm going to answer that now =)
<Edulix> basically, hadoop uses the mapreduce programming paradigm
<Edulix> In hadoop, Data is conceptually record-oriented. Input files are split into input splits referring to a set of records.
<Edulix> The strategy of the scheduler is moving the computation to the data, i.e. which data will be processed by a node is chosen based on its locality to the node, which results in high performance.
<Edulix> Hadoop programs need to follow a particular programming model (MapReduce), which limits the amount of communication, as each individual record is processed by a task in isolation from one another
<Edulix> In MapReduce, records are processed in isolation by tasks called Mappers
<Edulix> The output from the Mappers is then brought together into a second set of tasks called Reducers
<Edulix> where results from different mappers can be merged together
<Edulix> Note that if, for example, you don't need the Reduce step, you can implement Map-only processing.
<Edulix> This simplification makes the Hadoop framework much more reliable, because if a node is slow or crashes, another node can simply replace it, taking the same InputSplit and processing it again
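The map/shuffle/reduce flow can be mimicked with an ordinary pipeline: plain shell standing in for Hadoop Streaming, with `sort` playing the shuffle phase (an analogy only, not Hadoop itself):

```shell
# mapper: processes each record in isolation, emitting "<word><TAB>1"
mapper() { tr -s ' ' '\n' | awk '{print $1 "\t1"}'; }
# reducer: receives records grouped by key and merges their counts
reducer() { awk -F'\t' '{c[$1] += $2} END {for (w in c) print w, c[w]}'; }

# "shuffle" = sort: brings identical keys together between map and reduce
echo "to be or not to be" | mapper | sort | reducer | sort
# -> be 2 / not 1 / or 1 / to 2
```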
<ClassBot> chadadavis asked: Is there any facility for automatically determining how to partition the data, i.e. based on how long one chunk of processing takes?
<Edulix> to be able to partition the data,
<Edulix> you need to have first a structure for that data. for example,
<Edulix> if you have a png image that you need to process, then the input data is the image file. you might partition your image in chunks that start at a given position (x,y) and have a height and a width
<Edulix> but the partitioning is usually done by you, the hadoop program developer
<Edulix> though hadoop is in charge of selecting where to send that partition, depending on data locality
<Edulix> when you partition the input data, you don't send the data (input split) to the node that will process it: ideally it will already have that data!
<Edulix> how is this possible?
<Edulix> because when you do the partition, the InputSplit only defines this partition (so it might be in the image example 4 numbers: x,y, height, width) and depending on which nodes the file blocks of the input data reside, hadoop will send that split to that node
<Edulix> and then the node will open the file in HDFS for reading starting (fseek) in there
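Following the image example, the splits really can be just four numbers each, computed without ever touching pixel data (the image and tile sizes here are arbitrary):

```shell
# describe 256x256 tiles of a 1024x768 "image" as "x y w h" records;
# only these tiny records get shipped to nodes, never the pixels themselves
for y in $(seq 0 256 767); do
  for x in $(seq 0 256 1023); do
    echo "$x $y 256 256"
  done
done | wc -l
# -> 12 (a 4x3 grid of splits)
```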
<Edulix> ok, I continue =)
<Edulix> separate nodes in a Hadoop cluster still communicate with one another, implicitly
<Edulix> pieces of data can be tagged with key names which inform Hadoop how to send related bits of information to a common destination node
<Edulix> Hadoop internally manages all of the data transfer and cluster topology issues
<Edulix> One of the major benefits of using Hadoop in contrast to other distributed systems is its flat scalability curve
<Edulix> Using other distributed programming paradigms, you might get better results for 2, 5, perhaps a dozen machines. But when you need to go really large scale, this is where hadoop excels
<Edulix> After your program is written and functioning on perhaps ten nodes (to test that it can be used on multiple nodes with replication etc and not only in standalone mode),
<Edulix> then very little --if any-- work is required for that same program to run on a much larger amount of hardware efficiently
<Edulix> == distributed file system ==
<Edulix> a distributed file system is designed to hold a large amount of data and provide access to this data to many clients distributed across a network
<Edulix> HDFS is designed to store a very large amount of information, across multiple machines, and also supports very large files
<Edulix> some of its requirements are:
<Edulix> it should store data reliably even if some machines fail
<Edulix> it should provide fast, scalable access to this information
<Edulix> And finally it should integrate well with Hadoop MapReduce, allowing data to be read and computed upon locally when possible
<Edulix> This last point is crucial. HDFS is optimized for MapReduce, and thus has made some decisions/tradeoffs:
<Edulix> Applications that use HDFS are assumed to perform long sequential streaming reads from files, because of MapReduce
<Edulix> so HDFS is optimized to provide streaming read performance
<Edulix> this comes at the expense of random seek times to arbitrary positions in files
<Edulix> i.e. when a node reads, it might start reading in the middle of a file, but then it will read byte after byte, not jumping here and there
<Edulix> Data will be written to the HDFS once and then read several times; AFAIK there is no file update support
<Edulix> due to the large size of files, and the sequential nature of reads, the system does not provide a mechanism for local caching of data
<Edulix> data replication strategies combat machines or even whole racks failing
<Edulix> hadoop comes configured to have each file block stored on three nodes by default: two in the same rack, and the third on a machine in another rack
<Edulix> if the first rack fails, speed might degrade relatively but information wouldn't be lost
<Edulix> BTW HDFS design is based on google file system (GFS)
<Edulix> and as you probably have guessed, in HDFS files are split into blocks of equal size, stored on DataNodes (machines in the cluster)
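The splitting and replica placement just described can be sketched in a few lines of Python (a toy illustration, not HDFS code; the tiny block size, rack layout and node names are made up for the demo):

```python
# Toy sketch of HDFS block splitting and replica placement.
# BLOCK_SIZE is shrunk for the demo; the real default discussed here is 64MB.
BLOCK_SIZE = 8

def split_into_blocks(data, block_size=BLOCK_SIZE):
    """Split data into equal-size blocks (the last one may be shorter)."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place_replicas(block_id, racks):
    """Pick 3 replica locations: two nodes on one rack, one on another rack."""
    first = racks[block_id % len(racks)]
    second = racks[(block_id + 1) % len(racks)]
    return [first[0], first[1], second[0]]

racks = [["r0n0", "r0n1"], ["r1n0", "r1n1"]]  # two racks, two nodes each
blocks = split_into_blocks(b"0123456789abcdef01")
print(len(blocks))               # 3
print(place_replicas(0, racks))  # ['r0n0', 'r0n1', 'r1n0']
```

Losing the whole first rack still leaves one replica alive on the second rack, which is the point of the placement policy.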
<ClassBot> gaberlunzie asked: would this sequential access mean hadoop can work with tape?
<Edulix> I haven't heard anyone doing such a thing,
<Edulix> and I don't think it's a good idea
<Edulix> why? because the reads are sequential, but you need to do the first seek to start reading at the point your inputsplit indicates
<Edulix> doing this first seek might be too slow for a tape, but I might be completely wrong  here
<Edulix> note that data stored in HDFS is mostly meant to be temporary, just working data
<Edulix> so you copy the data there, do your thing, then copy the output result back
<Edulix> in contrast, tapes are mostly used for long-term storage
<Edulix> (continuing) the default block size in HDFS is very large (64MB)
<Edulix> This decreases metadata overhead and allows for fast streaming reads of data
<Edulix> Because HDFS stores files as a set of large blocks across several machines, these files are not part of the ordinary file system
<Edulix> For each DataNode machine, the blocks it stores reside in a particular directory managed by the DataNode service, and these blocks are stored as files whose filenames are their blockid
<Edulix> HDFS comes with its own utilities for file management equivalent to ls, cp, mv, rm, etc
<Edulix> The metadata of the files (names of files and dirs, and where the blocks are stored) can be modified by multiple clients concurrently. To orchestrate this, metadata is stored and handled by the NameNode, which usually keeps it in memory (it's not much data), so that it's fast (because this data *will* be accessed randomly).
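The NameNode's in-memory metadata can be pictured as a couple of dicts mapping paths to block ids and block ids to DataNodes (a toy sketch, not the real implementation; the class and method names are invented):

```python
# Toy NameNode: file metadata lives in memory for fast random access,
# while block contents live out on the DataNodes.
class ToyNameNode:
    def __init__(self):
        self.files = {}      # path -> list of block ids
        self.locations = {}  # block id -> list of datanode names

    def add_file(self, path, block_ids, datanodes):
        self.files[path] = list(block_ids)
        for block_id in block_ids:
            self.locations[block_id] = list(datanodes)

    def lookup(self, path):
        """Return the (block id, datanodes) pairs a client would then contact."""
        return [(b, self.locations[b]) for b in self.files[path]]

nn = ToyNameNode()
nn.add_file("/data/input.txt", [101, 102], ["node1", "node2", "node3"])
print(nn.lookup("/data/input.txt"))
# [(101, ['node1', 'node2', 'node3']), (102, ['node1', 'node2', 'node3'])]
```

The client only asks the NameNode *where* the blocks are; the actual block reads go straight to the DataNodes.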
<ClassBot> chadadavis asked: If I first have to copy the data (e.g. from a DB) to HDFS before splitting, couldn't the mappers just pull/query the data directly from the DB as well?
<Edulix> yes you can =)
<Edulix> and if the data is in a DB, you should
<Edulix> input data is read from an InputFormat
<Edulix> and there are different input formats provided by hadoop: FileInputFormat for example to read from a single file
<Edulix> but there's also DBInputFormat, for example
<Edulix> in my experience, you will probably create your own =)
<Edulix> Deliberately I haven't explained any code, but I recommend that, if you're interested, you start playing with hadoop locally on your own machine
<Edulix> just download hadoop from http://hadoop.apache.org/ and follow the quickstart http://hadoop.apache.org/common/docs/r0.20.2/quickstart.html
<Edulix> for quickstart and for development, you typically use hadoop as standalone in your own machine
<Edulix> in this case HDFS will simply refer to your own file system
<Edulix> You just need to download hadoop, configure Java (because hadoop is written in java), and execute the example as mentioned in the quickstart page
<Edulix> as mentioned earlier, with hadoop you usually operate as follows, because of its batching nature: you copy input data to HDFS, then request to launch the hadoop task with an output dir, and when it's done, the output dir will have the task results
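The map -> shuffle -> reduce flow that hadoop runs for you across the cluster can be simulated locally in plain Python (a word-count sketch; the function names here are illustrative, not the hadoop API):

```python
# Local simulation of the MapReduce phases for a word count.
from collections import defaultdict

def map_phase(line):
    """Mapper: emit a (word, 1) pair for every word in the line."""
    for word in line.split():
        yield word, 1

def shuffle(pairs):
    """Shuffle: group all values by key, as hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["hello hadoop", "hello hdfs"]
pairs = [kv for line in lines for kv in map_phase(line)]
print(reduce_phase(shuffle(pairs)))  # {'hello': 2, 'hadoop': 1, 'hdfs': 1}
```

In real hadoop the mappers run on the nodes holding the input blocks and the shuffle moves data over the network, but the logical phases are the same.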
<Edulix> For starting to develop a hadoop app, this tutorial was helpful to me because it explains pretty much everything I needed: http://codedemigod.com/blog/?p=120
<Edulix> but note that it's a bit old
<Edulix> and one of the things that I found most frustrating while developing with hadoop was that there are duplicated classes, e.g. org.apache.hadoop.mapreduce.Job and org.apache.hadoop.mapred.jobcontrol.Job
<Edulix> In that case, always use org.apache.hadoop.mapreduce, because it is the new, improved API
<Edulix> be warned, the examples in http://codedemigod.com/blog/?p=120 use the old mapred api :P
<Edulix> and hey, now I'm open to even more questions !
<Edulix> and if you have questions later on, you can always join us in freenode.net, #hadoop, and hopefully someone will help you there =)
<ClassBot> There are 10 minutes remaining in the current session.
<ClassBot> gaberlunzie asked: does hadoop have formats to read video (e.g., EDLs and AAFs)?
<Edulix> most probably.. not, but maybe someone has done that before
<Edulix> anyway, creating a new input format is really easy
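At its core, an input format just turns a chunk of raw input into (key, value) records for the mapper. This toy Python class mimics the idea behind hadoop's TextInputFormat (byte-offset key, line value), though the class name and API here are invented for illustration:

```python
# Toy "input format": yield (byte offset, line) records from raw input,
# the way hadoop's TextInputFormat feeds records to a mapper.
class ToyLineInputFormat:
    def records(self, data):
        offset = 0
        for line in data.splitlines(keepends=True):
            yield offset, line.rstrip("\n")
            offset += len(line)

fmt = ToyLineInputFormat()
print(list(fmt.records("one\ntwo\n")))  # [(0, 'one'), (4, 'two')]
```

A real InputFormat additionally decides how to split the input into chunks for the mappers; the record-reading part is the piece sketched here.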
<ClassBot> chadadavis asked: Mappers can presumably also be written in something other than Java? Are there APIs for other languages (e.g. Python?) Or is managed primarily at the shell level?
<Edulix> good question!
<Edulix> yes, there are examples in python and in C++
<Edulix> I haven't used them though
<ClassBot> kim0 asked: Can I use hadoop to crunch lots of data running on Amazon EC2 cloud ?
<Edulix> heh I forgot to mention it =)
<Edulix> answer is yes!
<Edulix> more details in http://aws.amazon.com/es/elasticmapreduce/
<ClassBot> There are 5 minutes remaining in the current session.
<Edulix> that's one of the nice things about using hadoop: many big players in the industry use it. yahoo, for example, and amazon has support for it too
<Edulix> so you don't really need to have lots of machines for doing large computations
<Edulix> just use amazon =)
<ClassBot> gaberlunzie asked: is there a hadoop format repository?
<Edulix> I don't know huh
<Edulix> :P
<Edulix> I didn't investigate much about this because I needed to have my own
<Edulix> but probably in contrib there is
<Edulix> ok so that's it!
<Edulix> Thanks for attending the talk, and thanks to the organizers
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu Cloud  Days - Current Session: UEC/Eucalyptus Private Cloud - Instructors: obino
 * kim0 claps .. Thanks Edulix 
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html following the conclusion of the session.
<obino> thanks Edulix
<obino> very nice presentation
<obino> I am graziano obertelli and I work at eucalyptus systems
<obino> feel free to ask questions at any time
<obino> if they are about eucalyptus I may be able to answer them :)
<obino> Eucalyptus powers the UEC
<obino> Ubuntu added a nice theme to Eucalyptus, the image store, and a very nifty way to autoregister the components
<obino> which makes it a breeze to install UEC on Ubuntu clusters
<obino> at http://open.eucalyptus.com/learn/what-is-eucalyptus you can quickly check what Eucalyptus is
<obino> with it you can have your own private cloud
<obino> currently Eucalyptus supports the AWS EC2 and S3 APIs
<obino> thus a lot of client tools written for EC2 should work with Eucalyptus
<obino> minus minor changes like the endpoint URL
<obino> Eucalyptus has a modular architecture: there are 5 main components
<obino> the cloud controller (CLC)
<obino> walrus (W)
<obino> the cluster controller (CC)
<obino> the storage controller (SC)
<obino> and the node controller (NC)
<obino> the CLC and W are the user facing components
<obino> they are the endpoints for the client tools
<obino> respectively for the EC2 API and for the S3 API
<obino> there has to be 1 CLC and 1 W per installed cloud
<obino> and they need to be publicly accessible
<obino> the CC is the middle man
<obino> it controls a set of NCs
<obino> and reports them to the CLC
<obino> it controls the network for the instances running on its NCs
<obino> there can be multiple CCs in a cloud
<obino> the SC usually sits with the CC
<obino> there has to be one SC per CC
<obino> otherwise EBS won't be available for that cluster
<obino> the SC and CC need to be able to reach (talk to) the CLC and W
<obino> the NC is the real worker
<obino> instances run on the machine hosting the NC
<obino> the previous tutorials explained a great deal of the user interaction, so I'll talk a bit about what happens behind the scenes
<obino> for example what happens when an instance is launched
<obino> the CLC receives the request
<obino> depending on the 'zone' the request asks for, it will select the corresponding CC
<obino> after of course having checked that there is enough capacity left in that zone
<obino> with the request it sends information about the network to set up for the instance
<obino> since every instance belongs to a security group and each security group has its own private network
<obino> the CC will then decide which NC will run the instance
<obino> based on the selected scheduler
<obino> and it will setup the network for the security group
<obino> this step is dependent on how Eucalyptus is configured, since there are 4 different Networking Modes
<obino> once the NC receives the request it will need the emi file (the root fs of the future instance)
<obino> the NC keeps a local cache of the emis it has seen before
<obino> it's a LRU cache so the least used image will be evicted if the cache grows too big
<obino> so the NC will check first to see if the emi is in the cache
<obino> if not it will have to contact W to get it
<obino> this is why W needs to be accessible by the NCs
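The NC image cache behaviour described above is a classic LRU cache; a minimal Python sketch (illustrative only, not eucalyptus code; the emi names are made up) looks like:

```python
# Toy LRU cache along the lines of the NC image cache: the least
# recently used emi is evicted when the cache exceeds its capacity.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, emi):
        if emi not in self.items:
            return None              # miss: the NC would fetch from walrus
        self.items.move_to_end(emi)  # mark as most recently used
        return self.items[emi]

    def put(self, emi, image):
        self.items[emi] = image
        self.items.move_to_end(emi)
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("emi-1", "img1")
cache.put("emi-2", "img2")
cache.get("emi-1")          # touch emi-1, so emi-2 is now least recent
cache.put("emi-3", "img3")  # evicts emi-2
print(sorted(cache.items))  # ['emi-1', 'emi-3']
```

A cache hit means the NC skips both the network transfer from W and the extra disk copy, which is exactly why first boots are slower.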
<obino> of course it's not only the emi that the NC downloads but the eki and the eri too
<obino> once the image is transferred, it is copied into the cache first
<obino> after that the emi, eki and eri are assembled for the specific hypervisor the NC has access to
<obino> so, in the case of KVM, a single disk is created
<obino> the size of which depends on the configuration the cloud administrator gave to the instances
<obino> and the emi is copied into the first partition
<obino> the 3rd partition is populated with the swap
<obino> and the second one will be ephemeral
<obino> libvirt is finally instructed to start the instance
<obino> and of course the NC will take all the steps to setup the network  for the instance
<obino> from this quick run down, you can see why the first time an instance is booted on an NC it takes longer
<obino> there is an extra network transfer (from W) and an extra disk copy (to populate the cache) that take place
<ClassBot> smoser asked: is Eucalyptus expecting to have EBS-root and accompanying API calls (StartInstances StopInstances ...) ?
<obino> boot from EBS is expected to be in the next release
<obino> at least that's what they told me :)
<obino> I'm not sure about the start and stop instances call
<obino> the instance life cycle I went through above should help in understanding how to debug the most frequent problem in a Eucalyptus installation:
<obino> the instance won't reach the running state
<obino> from the above it is easy to see that working backward may be helpful
<obino> so starting from the NC logs to see if the instance started correctly (or at least libvirt tried to start it)
<obino> and if nothing is there, check the CC logs
<obino> to finish with the CLC logs
<obino> despite the complexity, eucalyptus is fairly easy to install
<obino> and the UEC has taken this step even further
<obino> so if you want to play with Eucalyptus or the UEC, you just need 2 machines available
<obino> if instead you want to play with Eucalyptus before installing, to see what it can do and how good the EC2/S3 API support is
<obino> then you can try our community cloud http://open.eucalyptus.com/CommunityCloud
<obino> called ECC
<obino> the ECC is available to everybody
<obino> the SLAs are designed to avoid abuse
<obino> so your instance(s) will be terminated after 6 hours of running time
<obino> you can of course re-run instances at will, but no more than 4 at any point in time
<obino> the same idea applies to buckets, volumes and snapshots
<obino> the ECC runs the latest stable version of Eucalyptus, currently 2.0.2
<obino> if you are a developer and you are more interested in the code and architecture, we have assembled a few pages at http://open.eucalyptus.com/participate/contribute
<obino> which may be useful
<obino> starting from our launchpad home, and the API version we support
<obino> we have 2 launchpad branches, for stable version and for the development of the next version
<obino> both are accessible of course
<obino> we provide also some 'nightly builds'
<obino> they are actually produced on a weekly basis, but they kept the name
<obino> finally we give some information on how to contribute back to eucalyptus
<obino> and the final page is an assortment of various tips which may be of use to developers
<obino> like debugging tricks, or using eclipse or partial compile/deploy
<obino> we are hoping to expand this area soon
<obino> finally under http://open.eucalyptus.com/participate you will see various ways to interact with us and Eucalyptus
<obino> in particular the forum is quite active and it is quite a resource to solve issues
<obino> as well as the community wiki
<obino> is there anything in particular that you want to hear about eucalyptus?
<obino> of questions?
<obino> *or*
<obino> well then, it looks like I managed to put everyone to sleep! :)
<obino> this http://open.eucalyptus.com/contact contains all the different ways to reach us in case you have questions
<obino> and of course there is the UEC page https://help.ubuntu.com/community/UEC
<obino> which contains very good information about Eucalyptus/UEC
<ClassBot> tonyflint1 asked: are there any plans for addition tools/utilities/documentation for creating custom images?
<obino> we currently have a few pages under our community wiki under the images tab http://open.eucalyptus.com/wiki/images
<obino> which could be a chore at times
<obino> but they are useful to understand how things work
<obino> most of the EC2 images should work with UEC/Eucalyptus
<obino> so any way you have to generate images should work
<obino> the kernel/initrd combination depends of course on the hypervisor the instances will use
<ClassBot> There are 10 minutes remaining in the current session.
<obino> in short we don't have short-term plans to generate new tools but we are working with the current tools to make sure they are compatible with eucalyptus
<obino> if you have a favorite tool to generate images, let us know :)
<obino> tonyflint1: does it answer your question?
<ClassBot> gholms|work asked: Boxgrinder seems to be a decent tool for building images.  Any idea if that works with Ubuntu?
<obino> good question
<obino> we are in contact with a developer (marek) of boxgrinder, who has been very helpful so we are hoping to have an official eucalyptus plugin soon
<obino> as for the question itself it could be interpreted in 2 ways: will boxgrinder create ubuntu images? or will boxgrinder be packaged for ubuntu?
<ClassBot> There are 5 minutes remaining in the current session.
<obino> both are probably better answered by the boxgrinder developers
<obino> I would hope yes
<obino> boxgrinder should be fairly portable
<obino> and it has a nice plugin structure
 * kim0 claps 
<kim0> Thanks a lot for this wonderful session
 * obino bows
<kim0> Alright everyone .. Thank you for attending Ubuntu Cloud Days
<kim0> I hope it was fun and useful
<kim0> You can find us at #ubuntu-cloud
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/24/%23ubuntu-classroom.html
<kim0> and feel free to ping me personally later
<kim0> Thanks .. best regards .. till next time
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
#ubuntu-classroom 2011-03-25
<basil_kurian_> \q
<j_hn> Hi all
<j_hn> I'm having problems with random freezes. Have installed multiple versions and tried un-installed nouveau. Anyone have a clue?
<nigelb> 5 more minutes folks!
<jcastro> Alright
<jcastro> let's get started!
<skaet> :)
<jcastro> Welcome everyone to our Weekly Q+A
<jcastro> This week we have skaet, who is the release manager for Ubuntu
<jcastro> skaet: ok, it's all yours!
<skaet> Thanks jcastro
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Current Session: Q and A with Kate Stewart, Ubuntu Release Manager - Instructors: skaet
<skaet> Hi,  my name is Kate Stewart and I'm having a lot of fun these days working with the release team and all the development team to get Ubuntu release out for Natty.   Beta-1 will be coming out next week!  :)
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/25/%23ubuntu-classroom.html following the conclusion of the session.
<skaet> As release manager,  I mostly try to view the release as a whole and make sure there's some balance between teams as the code enters the archive.  This means making sure the important bugs aren't lost in limbo,  getting the latest features in from the various development teams and upstream projects, and having something stable enough to ship.   My knowledge tends to be broad, covering how the key areas need to interact/depend on each other.  For specific details of the individual projects I lean on the tech leads and managers a lot (Ubuntu's got lots and lots of packages after all ;) ).
<skaet> The release team does most of the heavy lifting and are awesome to work with.  I couldn't do this job without them.
<skaet> Mostly day to day,  I herd cats, and ask questions,  lots of questions..
<skaet> Now its your turn...   are there any questions?
<skaet> !y
<skaet> !q
<jcastro> (one sec while she figures out the bot)
<ClassBot> mhall119 asked: What's the future plans for adding or updating packages in the archives mid-cycle?
<skaet> thanks mhall119
<skaet> packages are able to update now until feature freeze
<skaet> so, there's no plans to change that.
<skaet> as we get closer to shipping a release though
<skaet> we need to cut down on the amount of churn and interaction between packages
<skaet> so we enter a feature freeze mode,  where only approved packages are allowed
<skaet> to go into the archive.
<skaet> this is done by a Feature Freeze Exception, being filed.
<skaet> so, packages can still get updated.   The rate just slows down after mid cycle or so.
<ClassBot> IdleOne asked: Besides Unity what can we expect from the 11.04 release in terms of "new stuff" ?
<skaet> thanks for the question IdleOne.
<skaet> there will be lots of updates beyond Unity landing, although as you know that is a major one.
<skaet> We'll see updates to the toolchain and some of the infrastructure to support the next release
<skaet> there will also be a lot of key desktop packages updated, as well as server, etc.
<skaet> the kubuntu team and other flavors will be seeing their packages update as well.
<skaet> Probably best to refer to the release notes that will come out with the beta though, rather than me trying to go into them all here.
<ClassBot> jcastro asked: What kind of things do you do on release day before the final image is pushed to the servers?
<skaet> thanks jcastro
<skaet> on release day it's mostly a matter of making sure all the images are working as advertised, and we have evidence of that (thanks to the QA team and community volunteers)
<skaet> and then for the bugs we know about,  reviewing that we've got them documented so we can try to help folk avoid stumbling into them
<skaet> after it all looks good, and my typos/grammar errors/etc. have mostly been caught :P
<skaet> we work with the IS team to get the images up to the mirrors and start populating them.
<skaet> oh yeah,  and make sure the announcement goes out to the
<skaet> appropriate IRC channels ;)
<jcastro> For the question about what features are coming in 11.04: https://wiki.ubuntu.com/NattyNarwhal/TechnicalOverview
<skaet> thanks jcastro   :)
<skaet> and expect it to be changing a lot over the next week ;)
<ClassBot> semiosis asked: sometimes primary distribution site (archive.canonical.com/security.ubuntu.com) gets real busy... lots of people are starting to use ubuntu in EC2, how about making the EC2 mirror first-class official?
<skaet> thanks semiosis
<skaet> good question
<skaet> I'll make a note of that and look into it, with robbiew and the IS folk, as to what the space implications are like for supporting it on the mirrors.
<skaet> It's a balancing act here too, and there are economic considerations,  but if it's starting to be a bottleneck, yes, it is well worth revisiting.
<skaet> thanks again for excellent question.
<jcastro> Protip: We do support apt mirrors now, just not by default: http://mvogt.wordpress.com/2011/03/21/the-apt-mirror-method/
<skaet> good point.   Thanks for mentioning, Protip.   Will add mvo to the discussion ;)
<ClassBot> anadon asked: is there anything in this release to help foster or support resource sharing?
<skaet> hmm...
<skaet> anadon,  resources in which context?
<nigelb> resources such as disk storage pooling over a network, hopefully sharing processing power, sharing distributed resources to help  scalability at my university
<skaet> There are work groups investigating distributed development practices, that may be relevant.
<skaet> Let me check with robbiew and others and get back to you offline.
<skaet> We're mostly pulling in what debian supports and it probably depends on what you're already using as well.
<nigelb> nigelb asked: Do we have a plan for deb deltas like the rpm deltas that seem to have come out? Updates always seem to suck a lot of bandwidth -- There are 0 additional questions in the queue.
<skaet> Thanks nigelb
<skaet> hmm
<skaet> we tend to be updating our stable releases in regular point releases, as deltas already.
<skaet> and are pushing out fixes and updates to the supported releases as we get them figured out.
<skaet> not to mention security fixes :)
<skaet> is there some particular function that we're missing that deb deltas provide?
<jcastro> I think he means this spec: https://blueprints.launchpad.net/ubuntu/+spec/foundations-m-rsync-based-deb-downloads
<jcastro> where apt would download only the deltas instead of the full deb, at the expense of more CPU/IO consumption
<jcastro> Colin's already answered though: http://askubuntu.com/questions/10167/will-11-04-include-delta-updates/13198#13198
<jcastro> so you can go on to the next question. :)
<nigelb> skaet just lost her connection, she'll be right back.
<skaet> sorry about that
<skaet> typing error on my part.
<ClassBot> areloaded asked: I just heard about ubuntu planning for a rolling release instead of the 6 month release cycle, how do you plan on implementing that? How do you think that will be beneficial?
<skaet> thanks areloaded
<skaet> hmm
<skaet> thats news to me...
<skaet> we've got the schedules drafted for O, P, etc.  ;)
<skaet> can you provide a pointer to where you've heard that?
<jcastro> Yeah that was just a rumor where someone misquoted Mark
<skaet> maybe I can see if I can help figure out the disconnect?
<skaet> thanks jcastro
<skaet> anyhow,  to be clear
<jcastro> http://theravingrick.blogspot.com/2010/11/ubuntu-is-not-moving-to-rolling-release.html
<skaet> we're keeping to the 6 month release cadence for the "forseeable" future.
<skaet> lol
<skaet> any more questions?
<ClassBot> rickspencer3 asked: can you tell us a little bit about your background in release management, and how that helps you see opportunities for the Ubuntu Community to improve how we release?
<skaet> thanks rickspencer3
<skaet> prior to joining the ubuntu team,   my last 10 years was working on putting out an embedded distro with Freescale
<skaet> we were at the stage of putting out about 1 release a week
<skaet> so the one release per month (including alphas, beta, candidates, etc.) feels pretty familiar.
<skaet> however the breadth of the packages that ubuntu supports is QUITE different.
<skaet> lol
<skaet> In terms of opportunities,
<skaet> I'd like to work with the community to get earlier testing of the candidate integrated, and more automated testing to come on line.
<skaet> so we're less likely to introduce regressions, and then a bunch of them suddenly show up.
<skaet> by candidate integrated,  I mean the images that are about to be put out
<skaet> as alpha, betas, etc.
<skaet> and by community,  I'm meaning Ubuntu, Kubuntu, Xubuntu, Edubuntu,  Mythbuntu, etc.
<skaet> :)
<skaet> I think the more we have automated testing in place, and regression tests available for folks to run, before they submit packages that impact the system level due to dependencies,
<skaet> the more productive we'll all be.
<skaet> I'd also like to see us all work on making sure we've got launchpad being used effectively
<ClassBot> There are 10 minutes remaining in the current session.
<skaet> its an awesome tool for collaboration.
<skaet> but there are a lot of individual practices that have emerged on how to do certain things.
<skaet> finding the best practices and getting them more widely known would be awesome.
<skaet> I could probably go on for a bit more,  but should check if there are more questions.
<skaet> The other area I'm personally interested in is getting tools available to help with license and copyright identification
<skaet> so we can make sure the code base is easy to use for those building ontop of it.
<skaet> What debian's been doing with DEP-5 is a welcome addition to moving this down the road.
<ClassBot> fosterdv asked: I'm sorry if this is a dumb question, but I find most talk around the Desktop version of 11.04. I was wondering, what good is coming to the server version of 11.04?
<ClassBot> There are 5 minutes remaining in the current session.
<skaet> fosterdv,  thanks for asking.
<skaet> yeah, there is a lot of focus on desktop, since that's where Ubuntu's roots are.
<skaet> however the server team is busy making lots of plans.
 * skaet checks some notes
<skaet> 11.04 has the Open Stack technology preview in it.
<skaet> there are also several things that have been made easier to use.
<skaet> https://wiki.ubuntu.com/NattyNarwhal/TechnicalOverview
<skaet> and scroll down under the server section to see some more of the specifics.
<skaet> thanks for asking though.   Robbiew and the server team hang out in #ubuntu-server if you have questions about specific packages.  :)
<ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/25/%23ubuntu-classroom.html
<skaet> Thanks all for good questions.
* ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat ||
<EvilPhoenix> skaet:  thanks for coming in to answer our questions :)
<skaet> Thanks!   I'll move over to chat now and take questions informally for a bit,  before breaking.
<skaet> you can find me (or others of the release team) in #ubuntu-release
<jorge38> hola
#ubuntu-classroom 2011-03-26
<zammed> hello
<rajvi> i need  a teacher
#ubuntu-classroom 2012-03-19
<WilsonBradley> Are the classes dead? https://wiki.ubuntu.com/Classroom#Following_Ubuntu_Classroom_on_identi.ca_and_twitter
<WilsonBradley> the schedule is old
<WilsonBradley> ?
<Mkaysi> WilsonBradley: https://www.google.com/calendar/b/0/embed?src=canonical.com_qg6t4s8i7mg8d4lgfu9f93qid4@group.calendar.google.com&gsessionid=_DMYYGZbKhhzBlhEKIlTKg
<Mkaysi> There aren't just any classes planned.
#ubuntu-classroom 2012-03-21
<telcnas> hiiiiiiiii
<telcnas> hey anybody here could help me with my queries regarding wireshark app or can suggest me a right channel for that
#ubuntu-classroom 2012-03-22
<balus> hai
<balus> anyone?
<Adam_> Hello all
#ubuntu-classroom 2012-03-23
<caB00T> Hello, this is a support channel? :)
<pangolin> caB00T, no, try #ubuntu
<caB00T> They keep ignoring me there.
<caB00T> :(
<Mkays|2> !askubuntu | cab00t
<ubot2`> cab00t: AskUbuntu is a support resource that offers non-realtime support by the community! Can't get your problem fixed on IRC? Try AskUbuntu! - http://askubuntu.com/ You can discuss AskUbuntu in #ubuntu-stack
<caB00T> <3
#ubuntu-classroom 2012-03-24
<joo1> -no space left on device- !...usb stick don t boot !can t make image smaller,1 file (3,2gb ) ?
<joo1> thanks
<williammanda> anyone available to answer firewall questions...ufw...iptables?
<Mkays|> !crosspost | williammanda
<ubot2`> williammanda: Please don't ask the same question in multiple Ubuntu channels at the same time. Many helpers are in more than one channel and it's not fair to them or the other people seeking support.
<Mkays|> And you should ask at #ubuntu
<williammanda> sorry...learning
<williammanda> I did ask at ubuntu
<Mkays|> If you do not get answer there, try asking at
<Mkays|> !askubuntu
<ubot2`> AskUbuntu is a support resource that offers non-realtime support by the community! Can't get your problem fixed on IRC? Try AskUbuntu! - http://askubuntu.com/ You can discuss AskUbuntu in #ubuntu-stack
#ubuntu-classroom 2013-03-19
<Aceface> anyone able to give me a hand putting ubuntu on android
<holstein> Aceface: in what way?
<holstein> !touch
<ubot2> Information about the Ubuntu Touch platform for Phone and Tablet is available here https://wiki.ubuntu.com/Touch support and discussion in #ubuntu-touch
<Aceface> anyone able to give me a hand putting ubuntu on android
<Aceface> http://pastebin.com/WH1NE7fG. : error
<Aceface> http://pastebin.com/JXRZjy5r  ; script
<Aceface> droid razr  rom: AOKP and kernal: 3.0.8-g29cf5e7 /hudsoncm@il93xdroid54 #1
<holstein> Aceface: what does that mean?
<holstein> Aceface: what are you trying to do? what does "put ubuntu on android" mean?
<Aceface> trying to install ubuntu on my phone
<holstein> !touch | Aceface
<ubot2> Aceface: Information about the Ubuntu Touch platform for Phone and Tablet is available here https://wiki.ubuntu.com/Touch support and discussion in #ubuntu-touch
<mackattack> leave
