[12:56] <rrrrrrr23> exit
[12:56] <rrrrrrr23> hiii
[12:56] <rrrrrrr23> any one der
[12:56] <rrrrrrr23> wat is dis abt
[13:00] <Ampelbein> rrrrrrr23: this is the classroom where sessions for ubuntu are held. https://wiki.ubuntu.com/Classroom has more information.
[13:00] <rrrrrrr23> but no one is teaching
[13:01] <Ampelbein> rrrrrrr23: you can see the schedule at http://is.gd/8rtIi
[15:56] <dholbach> HELLO EVERYBODY! WELCOME TO THE LAST DAY OF UBUNTU DEVELOPER WEEK!
[15:56] <dholbach> I know it's unfortunate that it's the last day, but I promise it's going to be action packed and loads of fun.
[15:56] <dholbach> if you haven't joined #ubuntu-classroom-chat yet, please do that now
[15:56] <dholbach> because that's where all the chatter and the questions go
[15:56] <dholbach> you know how it works... please prefix your question with QUESTION:
[15:57] <dholbach> first up are Brian Murray and Nigel Babu who are going to tell us how to get better bug reports
[15:57] <dholbach> we still have a few minutes left, so go and grab yourself some coffee, tea or water and let's get cracking in 3 minutes :)
[16:01] <dholbach> alright... bdmurray, nigelb: the stage is yours
[16:01] <nigelb> ok, that's our cue!
[16:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html following the conclusion of the session.
[16:01] <nigelb> Hello and welcome everyone!
[16:02] <nigelb> I'm nigel from the Bug Control Team and with me is bdmurray who's with the QA Team and the Bug Master for Ubuntu
[16:02] <nigelb> We're both going to talk to you folks today about getting better bug reports for your application
[16:02] <nigelb> Bugs happen with software.  We can't control it.
[16:03] <nigelb> Getting good bug reports is essential for us developers.
[16:03] <nigelb> Sometimes along with bugs we want the user to give us certain debugging information.
[16:03] <nigelb> Sometimes we want to talk to users just before and just after they file a bug.
[16:03] <nigelb> One of the first things we'd like to show you is a newish LP feature.
[16:04] <nigelb> This new feature lets us set package specific bug reporting guidelines.
[16:04] <nigelb> For example, the ubiquity package can have specific guidelines (and it does!).
[16:04] <nigelb> Let's take a look at the information displayed for ubiquity.
[16:04] <nigelb> Let's all look at https://bugs.staging.launchpad.net/ubuntu/+source/ubiquity/+filebug.  This is LP's staging interface, so feel free to play with it.
[16:05] <nigelb> If you enter a summary and click next, you'll get to see what I'm talking about
[16:06] <nigelb> If you get a list of duplicate bugs, click "No, I want to file a new bug" and you should see it
[16:07] <nigelb> Here's a screenshot of what I see http://people.ubuntu.com/~nigelbabu/Screenshot.png
[16:08] <nigelb> So, the cool bit is we can tell our users that "Please attach the files /var/log/syslog and /var/log/partman to your bug....
[16:08] <nigelb> and here we get a huge advantage, the bug report now has some information that is probably absolutely necessary for triaging it
[16:09] <nigelb> Launchpad also lets developers set the bug reporting acknowledgement.
[16:09] <nigelb> This acknowledgment might be a good way to tell people to test this package from a PPA or tell them what to expect next
[16:10] <nigelb> This would be awesome if you have a daily build PPA for your package
[16:10] <nigelb> (If you dont, hang around and listen to Quintasan_ later today ;) )
[16:10] <nigelb> The Bug Supervisor for Ubuntu, which is set to Bug Control, has access to edit the bug reporting information and the acknowledgement
[16:10] <nigelb> Since I'm part of bug control, I can see this edit page
[16:10] <nigelb> Here's a screenshot of that page: http://goo.gl/t73Lj
[16:11] <nigelb> Any questions so far?
[16:11] <nigelb> Moving on then.
[16:12] <nigelb> The next very important way to get good bug reports is through apport.
[16:12] <bdmurray> nigelb: if I may
[16:12] <nigelb> bdmurray: please :)
[16:13] <bdmurray> If you can not set the bug reporting guidelines or the acknowledgement for a particular package please let nigelb or me know, or email the bug squad, and we'll be happy to set it for you.
[16:13] <bdmurray> Anything we can do to get higher quality bug reports is a win for everyone! ;-)
[16:15] <ClassBot> techbreak asked: can you explain bit more no syslog and partman ?
[16:16] <bdmurray> So ubiquity logs information to /var/log/partman and /var/log/syslog and the developers want/need this information to sort out what is going on
[16:17] <bdmurray> After filling in the title and the description, at the bottom you'll notice an attach-files dialog.  There you can then add a log file as an attachment.
[16:17] <bdmurray> However, we have just the tool for you techbreak which makes bug reporting much easier.
[16:18] <bdmurray> Which we'll get to shortly.
[16:18] <nigelb> Ok, so apport is that mystery tool :)
[16:18] <nigelb> Apport is a system which intercepts crashes right when they happen, in development releases of Ubuntu
[16:18] <nigelb> and gathers useful information about the crash and the operating system environment.
[16:19] <nigelb> Additionally, it is used as a mechanism to file non-crash bug reports about software so that we receive more detailed reports.
[16:19] <nigelb> Precisely what we're talking about.
[16:19] <nigelb> Let's take a look at a bug filed by apport.
[16:20] <nigelb> I filed a test bug in the staging environment.
[16:20] <nigelb> https://bugs.staging.launchpad.net/ubuntu/+source/squid3/+bug/724766
[16:20] <nigelb> This bug is filed against the squid3 package (the proxy server for those curious).
[16:20] <nigelb> The title and description aren't very helpful, but hey, it's a test bug!
[16:21] <nigelb> Of note here is that the squid3 package does not have an apport hook. (we'll get to what that is later)
[16:21] <nigelb> So the information we have here is generic information that apport would collect for any package.
[16:22] <nigelb> We have the architecture, the release being used, the package version and the source package name.
[16:22] <nigelb> Additionally, in the Dependencies.txt attachment we have information about the versions of packages upon which squid3 depends.
[16:22] <nigelb> Really cool isn't it?
[16:22] <nigelb> I was talking about hooks earlier.
[16:23] <nigelb> Apport lets us write 2 types of hooks,
[16:23] <nigelb> one is package specific hooks that run for a particular package.
[16:23] <nigelb> like for example a squid3 package
[16:23] <nigelb> The other is symptom based hooks like audio or storage or display.
[16:24] <nigelb> audio is the problem but the actual bug may be in different packages depending on the symptoms
[16:24] <nigelb> So, questions on apport so far?
[16:24] <nigelb> netsplit's playing havoc today, stay with us :)
[16:25] <nigelb> Let's take a look at /usr/share/apport/package-hooks to see what hooks are on your computer right now.
[16:26] <nigelb> One thing common among a lot of files in that folder is that a majority of them were probably written by bdmurray ;)
[16:26] <nigelb> When you open that folder there's going to be a lot of python files
[16:27] <nigelb> almost all of them are of the pattern source_foo.py
[16:27] <nigelb> Well, apport hooks are all written in python because apport itself, I believe, is written in python
[16:28] <bdmurray> nigelb: it is written in python
[16:28] <nigelb> Aha!
[16:28] <nigelb> I wasn't sure ;)
[16:28] <bdmurray> and now you are! ;-)
[16:28] <nigelb> heh, thanks
[16:28] <nigelb> Let's take a look at one of the hooks
[16:29] <nigelb> There are some really simple ones and some really complicated ones in there
[16:29] <nigelb> Open the source_totem.py file
[16:29] <nigelb> That's the hook for totem, the media player
[16:29] <nigelb> It's in the default install, so the hook should be on most of your computers
[16:30] <nigelb> In case you don't have the file, head over to http://goo.gl/UUWpt
[16:30] <nigelb> techbreak> can you tell me what actually a hook is ? a bug ? a problem ?
[16:31] <nigelb> I figure this question would benefit a common answer.
[16:31] <nigelb> A hook is a python script that gets executed when you run ubuntu-bug <packagename>
[16:31] <nigelb> So, we do smart things in the hook like ask you questions, run certain processes, gather certain log files, and get them all onto launchpad
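As a sketch of what nigelb describes: a package hook is just a Python file exporting an add_info() function. The log path and report key below are made up for illustration, and real hooks use the apport.hookutils helpers (e.g. attach_file_if_exists) rather than open():

```python
import os

# Hypothetical minimal package hook.  apport imports such a file and
# calls add_info() with a dict-like report object; every key set here
# is uploaded along with the bug report.

def add_info(report, ui=None):
    log = '/tmp/exampleapp.log'  # hypothetical log file for our package
    if os.path.exists(log):
        with open(log) as f:
            report['ExampleAppLog'] = f.read()
```

apport would pick such a file up from /usr/share/apport/package-hooks/source_&lt;package&gt;.py when you run ubuntu-bug &lt;package&gt;.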
[16:34] <nigelb> Let's look at the import statements in the totem hook first
[16:34] <nigelb> apport.hookutils and apport.packaging are the significant ones
[16:34] <nigelb> You can look at 'pydoc apport.hookutils' and 'pydoc apport.packaging' to see the functions in both these imports
[16:35] <nigelb> Basically both of them provide a collection of readymade and safe functions for many commonly used things.
[16:35] <nigelb> There are functions for attaching a file's contents, getting a command's output, grabbing hardware information and much more.
[16:36] <nigelb> In this hook, we start with asking the user a question and providing 3 choices for an answer.
[16:36] <nigelb> The question is "How would you describe the issue?" and the options are "The totem interface is not working correctly", "No sound is being played", "Some audio files or videos are not being played correctly".
[16:37] <nigelb> While the first one is definitely something to be filed against totem, the other 2 may not be a totem problem.
[16:37] <nigelb> I mean, how can you blame the media player if your sound driver itself has a bug ;)
[16:37] <nigelb> If the user selected "No sound is being played", we'd come to "if response[0] == 1:", which means the problem is actually something to do with audio and not totem, so we open the audio hook.
[16:38] <nigelb> there, we straightforwardly use a python function to execute another program
[16:38] <nigelb> And if the user selected "Some audio files or videos are not being played correctly", we come to "if response[0] == 2:" where we add the gstreamer package info.
[16:39] <nigelb> Here we use a function provided by apport to add information about a package
[16:39] <nigelb> Now turn your attention to lines 16, 17, and 18.
[16:39] <nigelb> The lines are self explanatory, but I'll take a moment to name those functions
[16:39] <nigelb> apport.hookutils.command_output, apport.hookutils.package_versions, and apport.hookutils.read_file
[16:40] <nigelb> Very self explanatory and the pydoc command I gave earlier should explain all of these functions
[16:40] <nigelb> With these 3 commands we've added valuable debugging information for the developer.  All this information will be uploaded to launchpad and attached to the bug report.
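Those hookutils calls roughly behave like the stand-ins below (a sketch only, not apport's actual implementation, which also guards against hangs and sanitizes the collected data):

```python
import subprocess

# Rough stand-ins for apport.hookutils.command_output and
# apport.hookutils.read_file, to show the kind of data they add.

def command_output(cmd):
    # Run a command list and return its stdout as text.
    return subprocess.run(cmd, capture_output=True, text=True).stdout

def read_file(path):
    # Return the file's contents, or an error marker if unreadable.
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        return 'Error: %s' % e
```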
[16:40] <nigelb> As a developer you'll surely recognize the value of this information.
[16:40] <nigelb> That was a bit of information cramming.  Are there any questions so far?
[16:42] <ClassBot> techbreak asked: i just checked /var/log.. but there is neither syslog or partman directory..
[16:42] <nigelb> err, sorry
[16:42] <ClassBot> techbreak asked: can we edit this .py files if we think it can be better and submit ?
[16:43] <nigelb> In short, yes!
[16:43] <nigelb> All these files belong to a particular package.  Like the source_totem.py in the totem package.
[16:43] <nigelb> If you'd like to help make it better, get the code, open a bug, and submit a patch
[16:44] <bdmurray> report a bug about the sourcepackage providing the file so in this example totem
[16:44] <nigelb> You can poke bdmurray or me in #ubuntu-bugs and we'd be happy to review and help get it packaged
[16:44] <nigelb> you can use dpkg S /path/to/file to figure out which package
[16:45] <nigelb> err, that was dpkg -S /path/to/file
[16:45] <nigelb> One of my favorite hooks is the new audio hook, http://goo.gl/r4dFW
[16:45] <nigelb> Don't get alarmed by its size though
[16:46] <nigelb> It asks good questions and manages to be very friendly in diagnosing audio problems.
[16:46] <nigelb> There are some other neat things you could do with hooks, like add tags to the bug report.
[16:47] <nigelb> For example, if a particular question is answered, you want to add a particular tag
[16:47] <nigelb> This helps reduce the load with triaging
[16:47] <nigelb> To see an example of that, see the cheese hook, http://goo.gl/tsJ6G
[16:47] <nigelb> See line numbers 9, 42, 45, 72-76 for the bits where the tags come into play.
[16:47] <nigelb> If you know python, it's kind of simple to read what's done.
[16:49] <nigelb> With the cheese hook tags are added based on symptoms, so the devs have a generic idea of the bugs
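The tagging pattern can be sketched like this; the report is modeled as a plain dict (apport's real report object is dict-like) and the tag names are hypothetical:

```python
# apport keeps tags as a space-separated string under the report's
# 'Tags' key; a hook appends a tag once a symptom question is answered.

def add_tag(report, tag):
    current = report.get('Tags', '')
    report['Tags'] = (current + ' ' + tag).strip()
```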
[16:50] <nigelb> There's developer information for Apport available at https://wiki.ubuntu.com/Apport/DeveloperHowTo.
[16:50] <nigelb> There are lots of packages without an apport hook.
[16:51] <bdmurray> Well, that's partially because there are a lot of packages. ;-)
[16:51] <nigelb> Clearly we've shown how much an apport hook helps get better bug reports for a package
[16:51] <ClassBot> There are 10 minutes remaining in the current session.
[16:51] <nigelb> heh, that too ;)
[16:51] <nigelb> If you'd like to write a hook for your own package and need help, feel free to poke either me (nigelb) or Brian Murray (bdmurray) in #ubuntu-bugs.
[16:52] <nigelb> Some of the fairly new hooks we've managed over the last few cycles are the rhythmbox book (I wrote that one), the cheese hook (kermiac, a fellow bug control member wrote that one), and a few more that I now forget
[16:53] <nigelb> ah, thanks techbreak.  rhythmbox *hook* :0
[16:53] <nigelb> :)
[16:53] <nigelb> I fail at typing :p
[16:53] <nigelb> Anyway, that's the end of our session.
[16:54] <nigelb> We'll take questions now :)
[16:54] <nigelb> bdmurray: Anything you want to add?
[16:55] <bdmurray> Once you have bugs reported by apport in Launchpad, because the data is in a regular format, it is also possible to do some automated processing of those bug reports using the Launchpad API.
[16:56] <ClassBot> There are 5 minutes remaining in the current session.
[16:56] <bdmurray> With the combination of high quality reports and some automatic processing your bug life can become much easier.
[16:56] <nigelb> and our triaging life too ;)
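As a sketch of the automated processing bdmurray mentions: in practice you would fetch the bug tasks with the launchpadlib library; here the bug records are plain dicts so the filtering logic stands on its own:

```python
# Find bugs whose apport data is missing a field we care about, so a
# script could follow up with the reporter automatically.  The record
# shape ('id', 'apport_data') is hypothetical.

def bugs_missing_field(bugs, field):
    return [b['id'] for b in bugs if field not in b.get('apport_data', {})]
```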
[16:58] <bdmurray> Another neat thing about the cheese hook is that it asks you to close cheese so that it gets run in debug mode.
[16:58] <bdmurray> The possibilities of what you can do with apport really are amazing.
[16:59] <bdmurray> Feel free to ask nigelb or me any questions.  We are always in #ubuntu-bugs.
[17:00] <nigelb> ok, I guess we're done
[17:00] <nigelb> Next up is kim0, the man behind the clouds ;)
[17:01] <kim0> Hello Hello everyone o/
[17:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html following the conclusion of the session.
[17:01] <nigelb> Kim0 is going to talk about Introducing boto EC2 Cloud API.  kim0, the stage is all yours :)
[17:01] <kim0> Thanks nigelb
[17:02] <kim0> It's really a pleasure to be presenting in UDW
[17:02] <kim0> So the topic of the session is "intro to boto ec2 api"
[17:02] <kim0> basically what that means is
[17:02] <kim0> how to control the Amazon EC2 cloud
[17:02] <kim0> from the ease and comfort of your python shell
[17:03] <kim0> Please feel free to shoot any questions to me in #ubuntu-classroom-chat
[17:03] <kim0> just prepend your questions with QUESTION:
[17:03] <kim0> I'll give a few lines intro
[17:03] <kim0> since many might not be totally familiar with what cloud means
[17:04] <kim0> basically, cloud computing has tons of different definitions
[17:04] <kim0> however, almost everyone will agree
[17:04] <kim0> resources have to be allocated by an api call
[17:04] <kim0> and that resource allocation is instantaneous
[17:04] <kim0> and that it should be elastic and almost infinite
[17:05] <kim0> Amazon ec2 cloud meets those conditions
[17:05] <kim0> so basically
[17:05] <kim0> through an api call
[17:05] <kim0> you're able to very quickly allocate computing resources
[17:05] <kim0> i.e. servers, networking gear, IPs, storage space
[17:05] <kim0> ...etc
[17:05] <kim0> you use them for as much as you want
[17:05] <kim0> then you simply "delete" them
[17:06] <kim0> So in real life
[17:06] <kim0> assuming you (as a developer) were given the compute-heavy task of say converting 100k text files into PDFs
[17:07] <kim0> a "typical" implementation would be to spawn 20 servers in the cloud
[17:07] <kim0> kick them crunching on the conversion .. and finish in say 2 hours
[17:07] <kim0> then delete that infrastructure!
[17:07] <kim0> and yes, you only pay for the used resources (40 hours of compute time)
[17:07] <kim0> lovely, isn't it :)
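The arithmetic behind that "40 hours" figure, spelled out (the hourly rate below is purely hypothetical):

```python
# 20 servers running for 2 hours each is 40 instance-hours of billed time.
servers = 20
hours_each = 2
instance_hours = servers * hours_each    # 40
hourly_rate = 0.10                       # hypothetical $/instance-hour
total_cost = instance_hours * hourly_rate
```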
[17:08] <kim0> Any questions so far on the general concepts
[17:08] <kim0> before I start digging into boto
[17:09] <kim0> Ok .. assuming no one has questions
[17:09] <kim0> let's get started
[17:09] <kim0> the first step would be to install the python-boto package
[17:10] <kim0> sudo apt-get install python-boto
[17:10] <kim0> I also prefer to work in the ipython environment
[17:10] <kim0> sudo apt-get install ipython
[17:11] <kim0> short break .. taking questions
[17:11] <ClassBot> techbreak asked: short introduction to urself please.. we wonder who our session master is :)
[17:11] <kim0> hi techbreak :) I'm Ahmed Kamal, the ubuntu cloud community liaison
[17:11] <kim0> My role is to help the ubuntu cloud community grow and succeed
[17:12] <kim0> I always hang-out at #ubuntu-cloud
[17:12] <kim0> feel free to ping me anytime
[17:12] <ClassBot> techbreak asked: i have seen examples of cloud to be "facebook" too.. how facebook can be a cloud  ?
[17:13] <kim0> well yes, FB is considered a cloud app
[17:13] <kim0> It is a cloud "application"
[17:13] <kim0> not a cloud platform
[17:13] <kim0> the likes of Google apps (gmail, gdocs...) facebook ..
[17:13] <kim0> are all considered cloud apps
[17:13] <kim0> because the data and code are spread across a highly distributed system
[17:14] <kim0> I could go into all sorts of details, but that question is a bit offtopic .. feel free to pick it up later :)
[17:14] <ClassBot> techbreak asked: ipython ? iron ptyhon ?
[17:14] <kim0> ipython is "interactive python shell" afaik
[17:14] <kim0> ok .. assuming you're all set
[17:15] <kim0> → back to boto
[17:15] <kim0> In order to actually execute the steps here .. you'd need to have an amazon ec2 account
[17:15] <kim0> setup and have generated secret keys and stored your variables in ~/.ec2/ec2rc
[17:16] <kim0> this is outside the scope of this tutorial however
[17:16] <kim0> for those interested
[17:16] <kim0> follow along with https://help.ubuntu.com/community/EC2StartersGuide
[17:16] <kim0> If you spot any wrong info, let me know
[17:16] <kim0> In all examples, I will copy/paste
[17:16] <kim0> the api calls and the results
[17:17] <kim0> so that everyone can follow along easily
[17:17] <kim0> $ ipython
[17:17] <kim0> we're now inside the ipython interpreter
[17:17] <kim0> import boto
[17:17] <kim0> the boto python module is imported (ready to be used)
[17:17] <kim0> The Amazon cloud is composed of multiple regions
[17:17] <kim0> similar to "data centers" all around the world
[17:18] <kim0> we need to pick one to connect to
[17:18] <kim0> let's see how to do that
[17:19] <kim0> from boto import ec2
[17:19] <kim0> regions = ec2.regions()
[17:19] <kim0> Out[7]:
[17:19] <kim0> [RegionInfo:eu-west-1,
[17:19] <kim0>  RegionInfo:us-east-1,
[17:19] <kim0>  RegionInfo:ap-northeast-1,
[17:19] <kim0>  RegionInfo:us-west-1,
[17:19] <kim0>  RegionInfo:ap-southeast-1]
[17:19] <kim0> What you see are the Amazon regions (data-centers) around the world
[17:19] <kim0> we'll pick the one in us-east-1 to work with and connect to!
[17:20] <kim0> >> useast = regions[1]
[17:20] <kim0> I will prepend all python input code with (>>) to make it easily distinguishable
[17:21] <kim0> >>useast.endpoint
[17:21] <kim0>  u'ec2.us-east-1.amazonaws.com'
[17:21] <kim0> Let's connect
[17:21] <kim0> >> useconn = useast.connect()
[17:21] <kim0> Awesome
[17:22] <kim0> we're now connected
[17:22] <kim0> let's do something useful
[17:22] <kim0> by default .. Amazon configures its cloud firewall
[17:22] <kim0> to block all incoming port connections
[17:22] <kim0> the rule-sets .. are called "security groups" in its jargon
[17:23] <kim0> let's get a list of security groups available
[17:23] <kim0> >> useconn.get_all_security_groups()
[17:23] <kim0> Out[14]: [SecurityGroup:default]
[17:23] <kim0> the result is a single group called "default" ... makes sense!
[17:25] <kim0> just a little note, for anyone trying to setup a new account with amazon, you're gonna need a credit card
[17:26] <kim0> while they offer a completely free instance (micro type) for a year
[17:26] <kim0> i.e. you most likely won't be charged anything .. but you still need a valid one
[17:26] <kim0> try to follow along with me in the session, I'll be pasting all input and output
[17:26] <kim0> back on track
[17:26] <kim0> so let's get our security group (firewall rule)
[17:26] <kim0> sg=useconn.get_all_security_groups()[0]
[17:27] <kim0> let's "open" port 22
[17:27] <kim0> that's for ssh
[17:27] <kim0> >> sg.authorize('tcp', 22, 22, '0.0.0.0/0')
[17:28] <kim0> Ok ..
[17:28] <kim0> so let's quickly recap
[17:29] <kim0> what we've done
[17:29] <kim0> we've enumerated amazon cloud datacenters .. and chose to connect to the one in us-east
[17:29] <kim0> and we've manipulated the firewall to open port 22
[17:29] <kim0> all using API calls .. all on-demand and elastic
[17:30] <kim0> let's start the really cool stuff :)
[17:30] <kim0> let's start our own cloud server
[17:30] <kim0> a little intro
[17:30] <kim0> Ubuntu created official ubuntu cloud images, and publishes them to the amazon cloud
[17:30] <kim0> each published image is called AMI
[17:30] <kim0> Amazon Machine Image
[17:31] <kim0> and each AMI .. has its own "id" .. which is a unique string identifying it
[17:31] <kim0> when you want to start a new cloud server, you tell amazon what ami you want to use .. then it clones that ami and starts an "instance" of that ami for you!
[17:31] <kim0> so let's do that
[17:32] <kim0> >> ubuimages=useconn.get_all_images(owners= ['099720109477', ])
[17:32] <kim0> note that useconn .. is the us-east connection we had setup
[17:32] <kim0> get_all_images() is the call to get a list of images from amazon
[17:32] <kim0> the "owners=['099720109477', ]" part .. is basically a filter .. that number is the ID for Canonical .. so that you only get official ubuntu images
[17:33] <kim0> while you can use fancy code to filter the huge list
[17:33] <kim0> of images down to what you want!
[17:33] <kim0> You could use code like
[17:33] <kim0> nattymachines = [ x for x in ubuimages if (x.type == 'machine' and re.search("atty", str(x.name))) ]
[17:33] <kim0> I prefer a simpler approach
[17:34] <kim0> Visit the Ubuntu cloud portal
[17:34] <kim0> http://cloud.ubuntu.com/ami/
[17:34] <kim0> this page shows a listing of all publicly available ubuntu images
[17:34] <kim0> You just use the search box on the top right to search for what you want
[17:35] <kim0> In my case, I searched for "us-east natty"
[17:35] <kim0> the ami ID is shown in the table and you can simply copy it!
[17:35] <kim0> For me it's    ami-7e4ab917
[17:36] <kim0> so let's filter the list using that ID
[17:36] <kim0> >> natty = [ x for x in ubuimages if x.id == 'ami-7e4ab917' ][0]
[17:36] <kim0> >>  natty.name
[17:36] <kim0> Out[22]: u'ebs/ubuntu-images-milestone/ubuntu-natty-alpha3-amd64-server-20110302.2'
[17:36] <kim0> voila .. as you can see
[17:36] <kim0> it's a natty image .. alpha3!
[17:36] <kim0> hot from the oven :)
[17:37] <kim0> Let's go ahead and start a server
[17:37] <kim0> >> reservation = natty.run(key_name='default',instance_type='t1.micro')
[17:37] <kim0> natty.run() starts an image
[17:37] <kim0> the key_name .. is your ssh key .. this is part of setting up your Amazon ec2 account
[17:38] <kim0> that key is injected into the ubuntu instance as it boots, so that you're able to ssh into it later
[17:38] <kim0> The instance_type parameter .. is the "size" of the server you want
[17:38] <kim0> in my case, I'm starting the smallest one .. micro
[17:38] <kim0> since I'm executing this live
[17:38] <kim0> the server must have been created right now
[17:39] <kim0> in a matter of seconds
[17:39] <kim0> the API call
[17:39] <kim0> returns as "reservation"
[17:39] <kim0> let's interrogate that
[17:40] <kim0> >> instance = reservation.instances[0]
[17:40] <kim0> Let's see if the server is ready
[17:40] <kim0> >>  instance.state
[17:40] <kim0> Out[26]: u'pending'
[17:40] <kim0> oh .. that's interesting
[17:40] <kim0> state pending means it's still being allocated
[17:40] <kim0> any questions so far ?
[17:41] <kim0> now is a good time while amazon allocates the instance :)
[17:41] <kim0> not that it takes more than a few seconds actually
[17:41] <kim0> Great
[17:41] <kim0> it's ready
[17:41] <kim0> >> instance.update()
[17:41] <kim0> Out[28]: u'running'
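The pending-to-running wait can be wrapped in a small polling loop; fetch_state below stands in for boto's instance.update(), which re-queries EC2 and returns the new state string:

```python
import time

# Poll until the instance leaves the 'pending' state or the timeout
# expires; fetch_state is any callable returning the current state.

def wait_until_running(fetch_state, timeout=120, interval=5):
    deadline = time.time() + timeout
    state = fetch_state()
    while state == 'pending' and time.time() < deadline:
        time.sleep(interval)
        state = fetch_state()
    return state
```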
[17:42] <kim0> The server has been created, booted, my ssh key "default" injected into it
[17:42] <kim0> how do we login you say
[17:42] <kim0> let's ask about the server's name
[17:42] <kim0> >>  instance.public_dns_name
[17:42] <kim0> Out[29]: u'ec2-184-72-132-193.compute-1.amazonaws.com'
[17:43] <kim0> I can now ssh into that Ubuntu cloud server just like that
[17:43] <kim0> ssh ubuntu@ec2-184-72-132-193.compute-1.amazonaws.com
[17:44] <kim0> If I had used a different ssh key, I would need the ssh "-i" parameter .. but I'm using my default "~/.ssh/id_rsa" .. so no need to do anything
[17:46] <kim0> I've just configured the server to allow
[17:46] <kim0> you guys to ssh into it
[17:46] <kim0> Go ahead
[17:46] <kim0> ssh session@ec2-184-72-132-193.compute-1.amazonaws.com
[17:47] <kim0> password: session
[17:47] <kim0> please don't do anything nasty :)
[17:47] <kim0> Once logged in .. feel free to start byobu
[17:47] <kim0> a nice colorful gnu/screen customization
[17:47] <kim0> I'm ready to take some questions
[17:49] <ClassBot> ranamalo asked: when new packages are released are they incorporated in the ubuntu ami's?  Like if openssh is currently 5.3 and 5.4 is released today, if fire up the ubuntu ami tomorrow will i have 5.4 installed?
[17:49] <kim0> ranamalo: Well, with package being updated
[17:49] <kim0> new AMIs are pushed
[17:50] <kim0> remember that AMI-ID we got   ami-7e4ab917
[17:50] <kim0> that we got from cloud.ubuntu.com/ami
[17:50] <kim0> every couple of weeks or so
[17:50] <kim0> updates will be pushed
[17:50] <kim0> and a new image will be created
[17:50] <kim0> however
[17:50] <kim0> if you're running an older image .. there's nothing preventing you from apt-get dist-upgrade 'ing it
[17:51] <ClassBot> There are 10 minutes remaining in the current session.
[17:51] <kim0> this is especially true if you're running natty or a recent maverick (pvgrub booted instance) .. not too important for now
[17:52] <ClassBot> techbreak asked: if you are copy pasting can you link us to the python code ? so that we can have the code and you can explain one by one /
[17:52] <kim0> Here you are
[17:52] <kim0> http://paste.ubuntu.com/575608/
[17:52] <kim0> That's all the commands I've written so far
[17:52] <kim0> you can practice on your own later
[17:53] <kim0> and if you need any help .. everyone at #ubuntu-cloud is more than helpful (at least I hope so) :)
[17:54] <ClassBot> mhall119 asked: does boto work on UEC as well?
[17:54] <kim0> mhall119: yes it does AFAIK
[17:54] <kim0> UEC is Ubuntu Enterprise Cloud
[17:54] <kim0> it's a private cloud product
[17:54] <kim0> that you can use to run your own cloud (like Amazon)
[17:54] <kim0> on your own hardware
[17:54] <kim0> It is based on the eucalyptus open source project
[17:54] <kim0> for more info on using boto with UEC .. check out this link http://open.eucalyptus.com/wiki/ToolsEcosystem_boto
[17:55] <ClassBot> ranamalo asked: what advantages are there to using boto over ec2-ami-tools and ec2-api-tools?
[17:56] <ClassBot> There are 5 minutes remaining in the current session.
[17:56] <kim0> ranamalo: well, you can use command line tools in those packages
[17:56] <kim0> to effectively do the same thing
[17:56] <kim0> however the benefit of using python bindings
[17:56] <kim0> is clear .. if you're writing a tool
[17:56] <kim0> or some program
[17:56] <kim0> running external commands
[17:56] <kim0> and parsing results from the stdout text
[17:56] <kim0> is effectively hell
[17:56] <kim0> for programmers
[17:57] <kim0> an API provides a consistent clean interface
[17:57] <ClassBot> akshatj asked: What would be the advantages of deploying dmedia( https://launchpad.net/dmedia ) on clous?
[17:57]  * kim0 checking that out
[17:58] <kim0> akshatj: I'm not familiar with that project
[17:58] <kim0> however the generic advantages would be
[17:58] <kim0> you don't worry about hardware
[17:58] <kim0> you don't manage the infrastructure
[17:58] <kim0> you can scale-up/down easily cheaply
[17:58] <ClassBot> ranamalo asked: Is there a howto url you can give us?
[17:59]  * kim0 racing to answer :)
[17:59] <kim0> howto url for ?
[17:59] <kim0> a boto howto
[17:59] <kim0> well .. nothing special .. just googling you'll find tons of stuff
[18:00] <kim0> there's lots of useful info on the wiki
[18:00] <kim0> like https://help.ubuntu.com/community/UEC
[18:00] <kim0> If you're interested in learning more or contributing
[18:00] <kim0> ping me anytime
[18:00] <kim0> on #ubuntu-cloud
[18:00] <kim0> Thanks everyone
[18:00] <kim0> hope this was useful
[18:00] <kim0> don't forget to terminate your instances
[18:01] <kim0> or you keep paying for them :)
[18:01] <kim0> bye bye
[18:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html following the conclusion of the session.
[18:01] <lukasz> Hello everybody, my name is Łukasz Czyżykowski, I'm part of ISD (Infrastructure Systems Development) team at Canonical. This will be a short introduction for creating web applications with Django framework.
[18:02] <lukasz> I assume that everybody is using Maverick and has Django installed. If not then:
[18:02] <lukasz> $ sudo apt-get install python-django
[18:02] <lukasz> will do the trick.
[18:03] <lukasz> btw, if I'll be going too fast or something is unclear do not hesitate to ask
[18:03] <lukasz> This is the last stable version of Django. All documentation for it can be found at http://docs.djangoproject.com/en/1.2/ .
[18:03] <lukasz> Now it's time to start coding. Or at least start working on the project. In the beginning we need to create a Django project. This is something which, in theory, should be connected to the site.
[18:04] <lukasz> For the purpose of this tutorial we'll build a simple web application, using most bits of Django. Our app will be a partial Twitter/status.net clone.
[18:04] <lukasz> All code for this project is accessible at https://launchpad.net/twitbuntu; you can either download it and look at the revisions, which move the app forward in (almost) the same way as this session is planned, or you can just follow the IRC session, as all required code will be presented here.
[18:04] <lukasz> So, the first step is to create Django project:
[18:04] <lukasz> $ django-admin startproject twitbuntu
[18:04] <lukasz> $ cd twitbuntu
[18:06] <ClassBot> hugohirsch asked: will the sources be available somewhere later? (in case I'm not fast enough to get a database up'n'running)
[18:06] <lukasz> Yes, as I mentioned earlier the code is already on Launchpad
[18:06] <lukasz> A project is a container for database connection settings, your web server configuration and stuff like that.
[18:06] <lukasz> Now twitbuntu contains some basic files:
[18:06] <lukasz> - manage.py: you'll use this script to invoke various Django commands on this project,
[18:07] <lukasz> - settings.py: here are all settings connected to your project,
[18:07] <lukasz> - urls.py: mapping between urls of your application and Python code, either created by you or already existing
[18:07] <lukasz> and the last one
[18:07] <lukasz> - __init__.py: which marks this directory as a Python package
[18:07] <lukasz> Next is setting up the database
[18:07] <lukasz> Open settings.py file in your favourite text editor.
[18:07] <lukasz> For the purposes of this tutorial we'll use a very simple SQLite database: it holds all of its data in one file and doesn't require any fancy setup. Django can of course use other databases, MySQL and PostgreSQL being the most popular choices.
[18:08] <lukasz> Modify your file so that the DATABASES setting looks exactly like this: http://pastebin.ubuntu.com/575582/
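In case the pastebin is no longer available, here is a minimal sketch of a Django 1.2-style SQLite DATABASES setting; the database file name is my own example, not necessarily what the session used:

```python
# settings.py -- minimal SQLite configuration (sketch)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',  # file-based backend, no server needed
        'NAME': 'twitbuntu.db',                  # path to the database file (example name)
        'USER': '',      # unused by SQLite
        'PASSWORD': '',  # unused by SQLite
        'HOST': '',      # unused by SQLite
        'PORT': '',      # unused by SQLite
    }
}
```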
[18:08] <lukasz> To test that those settings are correct we'll issue the syncdb management command. It creates any missing tables in the database, which in our case is exactly what we want:
[18:08] <lukasz> $ ./manage.py syncdb
[18:09] <lukasz> If everything went right you should see a bunch of "Creating table" messages and a prompt asking whether to create a superuser. We want to be able to administer our own application, so it's good to create one. Answer yes to the first question and proceed with the others.
[18:09] <lukasz> My answers to those questions are:
[18:09] <lukasz> (just if anybody wonders)
[18:09] <lukasz> Would you like to create one now? (yes/no): yes
[18:09] <lukasz> Username (Leave blank to use 'lukasz'): admin
[18:09] <lukasz> E-mail address: admin@example.com
[18:09] <lukasz> Password: admin
[18:09] <lukasz> Password (again): admin
[18:10] <lukasz> The email address is not too important at this stage
[18:10] <lukasz> later you can configure Django to automatically send crash reports to that address, but that's a bit more advanced.
[18:10] <lukasz> Next bit is to create an application, something where you put your code.
[18:12] <lukasz> By design you should separate different site modules into their own applications; that way it's easier to maintain them later, and if you create something usable outside of your project you can share it with others without necessarily publishing your whole project.
[18:12] <lukasz> Sharing apps is pretty popular in the Django community, so it's always a good idea to check whether somebody has already created something useful. That way you can save yourself reinventing the wheel.
[18:12] <lukasz> For this there's the startapp management command:
[18:12] <lukasz> $ ./manage.py startapp app
[18:13] <lukasz> In this simple case we're calling our application just 'app', but normally it should be called something more descriptive, like 'blog', 'gallery', etc. In our case it could be called 'updates'.
[18:13] <lukasz> This creates an 'app' directory in your project. Inside of it there are files created for you by Django.
[18:13] <lukasz> - models.py: is where your data model definitions go,
[18:13] <lukasz> - views.py: place to hold your views code.
[18:14] <lukasz> Some short clarification about naming.
[18:14] <lukasz> Django is (sort of) a Model/View/Controller framework
[18:15] <lukasz> The idea is that you split your application into separate layers. But Django's naming is a bit confusing compared to the standard:
[18:15] <lukasz> Models are called what they should be.
[18:15] <lukasz> Views (in the MVC sense) are called templates in Django
[18:15] <lukasz> and Controllers are called view functions.
[18:15] <lukasz> ok, quick break for questions
[18:16] <ClassBot> wolfrage76 asked: Is it possible to setup Django to use more than one database, for different sections of the site? For instance SQlite as default for the site, but MySQL for the forums?
[18:16] <lukasz> wolfrage76: Yes, you can do that. The details (as this is a somewhat more advanced topic) are in the documentation.
[18:16] <ClassBot> abhinav asked: what is a superuser ? a database admin ?
[18:17] <lukasz> abhinav: it's an admin for the whole web application; it's separate from a database administrator
[18:17] <lukasz> abhinav: basically, this user can access Django admin and do anything there
[18:17] <lukasz> continuing
[18:18] <lukasz> The first layer is models, where the data definitions live. That's what you put into the models.py file: definitions of the objects your application will manipulate.
[18:19] <lukasz> The next step is to add this new application to the list of installed apps in settings.py, so Django knows which parts your project is assembled from.
[18:19] <lukasz> In settings.py file find variable named INSTALLED_APPS
[18:19] <lukasz> Add to the list: 'twitbuntu.app'
[18:19] <lukasz> It should look like that:
[18:19] <lukasz>    INSTALLED_APPS = (
[18:19] <lukasz>      'django.contrib.auth',
[18:19] <lukasz>      'django.contrib.contenttypes',
[18:19] <lukasz>      'django.contrib.sessions',
[18:19] <lukasz>      'django.contrib.sites',
[18:19] <lukasz>      'twitbuntu.app',
[18:19] <lukasz>    )
[18:19] <lukasz> or pastebin: http://pastebin.ubuntu.com/575583/
[18:20] <lukasz> You can see that there are already things here, mostly things which give your project some out-of-the-box functionality
[18:20] <ClassBot> chadadavis asked: judging by the tables created, users and authentication are built in. I.e. it doesn't require any external modules (as Catalyst does)?
[18:21] <lukasz> chadadavis: yes, the authentication is built in
[18:21] <lukasz> The names are pretty descriptive, so you shouldn't have a problem figuring out what each bit does
[18:21] <lukasz> although contenttypes and sites can be a bit confusing
[18:21] <lukasz> as those are bits of underlying machinery required by most of the other Django add-ons
[18:22] <ClassBot> chadadavis asked: my settings.py also has 'django.contrib.messages' (Natty). Is that going to cause any problems?
[18:22] <lukasz> chadadavis: not a problem
[18:22] <lukasz> all the defaults are good to go
[18:22] <lukasz> Now we start building the actual application. The first thing is to create a model which will hold user updates. Open the file app/models.py
[18:22] <lukasz> You define models in Django by writing classes with special attributes. Django translates these into table definitions and creates the appropriate structures in the database.
[18:23] <lukasz> For now add following lines to the end of the models.py file: http://paste.ubuntu.com/575584/
[18:24] <lukasz> Now some explanations. You can see that you define model attributes by using data types defined in django.db.models module
[18:24] <lukasz> Full list of types and options they can take is documented here: http://docs.djangoproject.com/en/1.2/ref/models/fields/#field-types
[18:24] <lukasz> The ForeignKey bit links our model to the User model supplied by Django; that way we can have multiple users with their updates on our site
[18:25] <lukasz> The class Meta bit is the place for settings affecting the whole model. In this case we are saying that whenever we get a list of updates we want them ordered by the created_at field in descending order (by default the ordering is ascending, and the '-' prefix reverses it).
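The model itself lives in the pastebin; as a rough sketch (field names taken from the session's narration, max_length my own guess), an Update model along these lines would match the description:

```python
# app/models.py -- sketch of the Update model described above
from django.contrib.auth.models import User
from django.db import models

class Update(models.Model):
    owner = models.ForeignKey(User)                       # which user posted the update
    status = models.CharField(max_length=140)             # the update text (length is a guess)
    created_at = models.DateTimeField(auto_now_add=True)  # set automatically on creation

    class Meta:
        ordering = ('-created_at',)  # '-' prefix: newest updates first
```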
[18:26] <lukasz> Now we have to synchronize data definition in models.py with what is in database. For that we'll use already known command: syncdb
[18:26] <lukasz> $ ./manage.py syncdb
[18:26] <lukasz> You should get following output:
[18:26] <lukasz> Creating table app_update
[18:26] <lukasz> Installing index for app.Update model
[18:26] <lukasz> As you can see, each table name has two parts: the app name and the model name.
[18:27] <lukasz> A great thing about Python is its interactive shell. You can easily use it with Django.
[18:27] <lukasz> But because Django requires a bit of setup, there's a shortcut that sets up the proper environment within your project:
[18:27] <lukasz> $ ./manage.py shell
[18:27] <lukasz> This runs interactive shell configured to work with your project. From here we can play with our models and create some updates.
[18:28] <lukasz> first we get the user we created when first running syncdb
[18:28] <lukasz> >>> from django.contrib.auth.models import User
[18:28] <lukasz> >>> admin = User.objects.get(username='admin')
[18:28] <lukasz> Here 'admin' is whatever you've chosen when asked for admin username.
[18:29] <lukasz> The first thing is to get hold of our admin user, because every update belongs to someone. Notice that we used the 'objects' attribute of the model class.
[18:29] <lukasz> >>> from twitbuntu.app.models import Update
[18:29] <lukasz> >>> update = Update(owner=admin, status="This is first status update")
[18:29] <lukasz> At this point we have an instance of the Update model, but it's not yet saved in the database; you can see that by checking the update.id attribute
[18:29] <lukasz> Currently it's None
[18:29] <lukasz> but when we save that object in the database
[18:30] <lukasz> >>> update.save()
[18:30] <lukasz> the update.id attribute has a value
[18:30] <lukasz> >>> update.id
[18:30] <lukasz> 1
[18:30] <lukasz> That's only one of many ways to create instances of models; this one is the easiest.
[18:30] <lukasz> Now that we have some data in the database, it's time to display it to the user.
[18:31] <lukasz> The first step for a view to work is to tell Django which URL the view should respond to.
[18:31] <lukasz> For that we have to modify urls.py file.
[18:31] <lukasz> Open it and add the following line just under the line with 'patterns' in it, so the whole bit looks like this:
[18:31] <lukasz> urlpatterns = patterns('',
[18:31] <lukasz>     (r'^$', 'twitbuntu.app.views.home'),
[18:31] <lukasz> )
[18:31] <lukasz> The first bit is the regular expression this view responds to; in our case it matches the empty string (^ means beginning of the string and $ means end, so there's nothing between them). The second bit is the name of the function which will be called.
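As an aside, these URL patterns are ordinary Python regular expressions matched against the request path with the leading slash stripped, so you can experiment with them in the standard re module:

```python
import re

# The pattern from urls.py: matches only the empty string,
# i.e. the site root once Django strips the leading '/'.
pattern = re.compile(r'^$')

print(bool(pattern.match('')))                 # True: the root URL matches
print(bool(pattern.match('accounts/login/')))  # False: other paths do not
```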
[18:32] <lukasz> Now open app/views.py file. Here all code responsible for responding to users' requests will live.
[18:32] <lukasz> First bit is to import required bit from Django:
[18:32] <lukasz> from django.http import HttpResponse
[18:32] <lukasz> Now we can define our (very simple) view function:
[18:32] <lukasz> def home(request):
[18:32] <lukasz>     return HttpResponse("Hello from Django")
[18:33] <lukasz> As you can see, every view function takes at least one argument: the request object
[18:33] <lukasz> It contains lots of useful information about the request, but our simple example won't use it.
[18:33] <ClassBot> chadadavis asked: Is that an instance of the *model* or a record in the update table?
[18:33] <lukasz> chadadavis: both
[18:33] <lukasz> saved instances are records in the database
[18:34] <lukasz> but you access all data on that record as convenient python attributes on that object
[18:34] <lukasz> After that we can start our app and check that everything is correct; to do that, run:
[18:34] <lukasz> $ ./manage.py runserver
[18:34] <lukasz> If everything went OK you should see the following output:
[18:34] <lukasz> Validating models...
[18:34] <lukasz> 0 errors found
[18:34] <lukasz> Django version 1.2.3, using settings 'twitbuntu.settings'
[18:34] <lukasz> Development server is running at http://127.0.0.1:8000/
[18:34] <lukasz> Quit the server with CONTROL-C.
[18:35] <lukasz> Now you can access it at the provided URL
[18:35] <lukasz> What you should see is "Hello from Django" text.
[18:35] <lukasz> any questions/problems/comments?
[18:36] <lukasz> continuing
[18:36] <lukasz> It would be nice to be able to log in to our own application; fortunately Django already has the required pieces, and the only thing left for us is to hook them up.
[18:36] <lukasz> Add following two lines to the list of urls:
[18:37] <lukasz> (r'^accounts/login/$', 'django.contrib.auth.views.login'),
[18:37] <lukasz> (r'^accounts/logout/$', 'django.contrib.auth.views.logout'),
[18:37] <lukasz> Next we need to create a templates directory and enter its location in the settings.py file
[18:37] <lukasz> $ mkdir templates
[18:38] <lukasz> In settings.py file find TEMPLATE_DIRS setting:
[18:38] <lukasz> import os
[18:38] <lukasz> TEMPLATE_DIRS = (
[18:38] <lukasz>     os.path.join(os.path.dirname(__file__), 'templates'),
[18:38] <lukasz> )
[18:38] <lukasz> This ensures that Django can always find the templates directory even if the current working directory is not the one containing the application (for example when run from the Apache web server).
[18:38] <lukasz> Next, create a registration directory inside templates and put a login.html file there with the following content: http://paste.ubuntu.com/575631/
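If the pastebin is gone, a minimal login template along these lines should work (this is my own sketch, not necessarily the session's exact file; Django's login view passes in a 'form' variable):

```html
<!-- templates/registration/login.html (sketch) -->
<html>
<body>
  <form method="post" action="">{% csrf_token %}
    {{ form.as_p }}
    <input type="submit" value="Log in" />
  </form>
</body>
</html>
```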
[18:39] <lukasz> Last bit is to set up LOGIN_REDIRECT_URL in settings.py to '/':
[18:39] <lukasz> LOGIN_REDIRECT_URL = '/'
[18:39] <lukasz> That way, after logging in the user will be redirected to the '/' URL instead of the default '/accounts/profile', which we don't have.
[18:39] <lukasz> Now going to http://127.0.0.1:8000/accounts/login should present you with the login form and you should be able to log in to the application.
[18:40] <lukasz> Now it's time to use information about logged in user in our view.
[18:40] <lukasz> Django provides a very convenient way of accessing the logged-in user: it adds a 'user' attribute to the request object.
[18:40] <lukasz> It's either a model instance representing the logged-in user or an instance of the AnonymousUser class, which has the same interface as the model.
[18:41] <lukasz> The easiest way to distinguish between them is the user.is_authenticated() method
[18:41] <lukasz> Modify our home view function so it looks like this: http://paste.ubuntu.com/575585/
[18:41] <lukasz> That way logged-in users will be greeted and anonymous users will be sent to the login form. You should see "Hello username" at http://127.0.0.1:8000/
[18:42] <lukasz> Using that information we can restrict access to some parts of an application.
[18:42] <lukasz> Fortunately Django already has a lot of machinery built for that purpose.
[18:42] <lukasz> Add following line to the top of the views.py file:
[18:42] <lukasz> from django.contrib.auth.decorators import login_required
[18:43] <lukasz> This decorator does exactly what we just did manually, but with less code that doesn't hide what the view is doing. Now we can shorten it to: http://paste.ubuntu.com/575586/
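A sketch of what the shortened view might look like (the greeting text is my own placeholder):

```python
# app/views.py -- sketch of the decorated view
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required  # anonymous users are redirected to the login page
def home(request):
    # request.user is guaranteed to be an authenticated User here
    return HttpResponse("Hello %s" % request.user.username)
```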
[18:43] <lukasz> Test the view in your browser; nothing should have changed
[18:43] <lukasz> Now that we have a reliable way of getting to the user instance, we can return all of the user's updates.
[18:44] <lukasz> When creating the Update model we used the ForeignKey type, which connects two models together.
[18:44] <lukasz> Later, when we created updates, we used a user instance as the value of this attribute.
[18:44] <lukasz> That's one way of accessing this data (every update has an owner attribute).
[18:44] <lukasz> Because the ForeignKey points to the User model, every User instance also gets an update_set attribute, which contains every update assigned to that user.
[18:44] <lukasz> The clean way of getting all of a user's updates is:
[18:45] <lukasz> >>> admin.update_set.all()
[18:45] <lukasz> [<Update: Update object>]
[18:45] <lukasz> But we can also get to the same information from Update model:
[18:45] <lukasz> >>> Update.objects.filter(owner=admin)
[18:45] <lukasz> (btw, those are only examples, you don't have to type them)
[18:45] <lukasz> Both of those will return the same data, but the first one is cleaner.
[18:46] <lukasz> That's just a very simple example of getting data from the database.
[18:46] <lukasz> Django's functionality in that regard is way more sophisticated, but we don't have time now to dive into that.
[18:46] <lukasz> Now that we know how to get the necessary data, we can send it to the browser by modifying the home function: http://paste.ubuntu.com/575587/
[18:47] <lukasz> Here we set the content type of the response to text/plain so we can see the angle brackets in the output; without that the browser would hide them by default.
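A hedged sketch of such a view (the exact output formatting is my own guess; the angle brackets come from the queryset's repr):

```python
# app/views.py -- sketch: show the raw queryset as plain text
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def home(request):
    updates = request.user.update_set.all()
    # repr() of a queryset looks like [<Update: Update object>] --
    # the angle brackets are why we ask for text/plain here
    return HttpResponse(repr(updates), content_type="text/plain")
```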
[18:47] <lukasz> Now, when we have data we can work on spicing it up a little. For that we'll use templates.
[18:47] <lukasz> Templates in Django have their own syntax. It's really simple, as it was created for designers, people not used to programming languages.
[18:47] <lukasz> We already have templates configured due to requirements of auth system, so it will be very easy to get started.
[18:48] <ClassBot> hugohirsch asked: I'm too slow to follow .... get an error msg: TemplateDoesNotExist at /accounts/login/. Looking for a file in /home/hirsch/twitbuntu/templates/registration/login.html - how can I remove the registration thing?
[18:48] <lukasz> hugohirsch: sorry for being too fast
[18:48] <lukasz> basically you can't, that's hardcoded in the auth application itself
[18:49] <lukasz> that's how the templates are looked up, usually they are specified as apptemplatedir/templatename.html
[18:49] <lukasz> so registration templates are in registration/ directory
[18:49] <lukasz> admin templates land in admin/ dir, etc
[18:50] <lukasz> Any other problems?
[18:50] <lukasz> I'll gladly help to resolve any issues
[18:51] <ClassBot> There are 10 minutes remaining in the current session.
[18:53] <lukasz> ok, continuing
[18:54] <lukasz> Now we need a file for the template
[18:54] <lukasz> create templates/home.html and put the following content in it: http://paste.ubuntu.com/575588/
[18:54] <lukasz> Every tag in the Django template language is contained between {% %} markers, and every opening tag is closed by adding end<tag> (like endfor in this case).
[18:54] <lukasz> To output the content of a variable we use the {{ }} syntax.
[18:54] <lukasz> We can also use filters, passing a value through a named filter with the | character. We use that to format the date as a nice textual description of the time passed.
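Since the pastebin may be unavailable, here's a plausible sketch of such a template; the variable names (user, updates) and the use of the timesince filter are my assumptions based on the narration:

```html
<!-- templates/home.html (sketch) -->
<html>
<head><title>Updates for {{ user.username }}</title></head>
<body>
  <ul>
  {% for update in updates %}
    <li>{{ update.status }} ({{ update.created_at|timesince }} ago)</li>
  {% endfor %}
  </ul>
</body>
</html>
```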
[18:54] <lukasz> That's the template; now let's write the view code to use it.
[18:55] <lukasz> There's a very convenient function for using templates in views: render_to_response
[18:55] <lukasz> add the following line to the top of the views.py file
[18:55] <lukasz> from django.shortcuts import render_to_response
[18:55] <lukasz> This function takes two arguments: the name of the template to render (usually its file name) and a dictionary of values to pass to the template. With this in mind, our home view looks like this: http://paste.ubuntu.com/575589/
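A sketch of the view at this stage, under the same assumptions about variable names:

```python
# app/views.py -- sketch: render the template with the user's updates
from django.contrib.auth.decorators import login_required
from django.shortcuts import render_to_response

@login_required
def home(request):
    return render_to_response('home.html', {
        'user': request.user,
        'updates': request.user.update_set.all(),
    })
```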
[18:56] <ClassBot> There are 5 minutes remaining in the current session.
[18:56] <lukasz> Now, running $ ./manage.py runserver, you can see that the page in the browser has a proper title
[18:56] <lukasz> I guess we don't have enough time for the rest of the things I've planned
[18:56] <lukasz> we'll stop here
[18:56] <lukasz> Are there any questions?
[18:57] <lukasz> ok, going forward, going fast
[18:58] <lukasz> It would be really nice to be able to add status updates from the web page. For that we need a form. There are a couple of ways of doing that in Django, but we'll show the way that's most useful for forms used to create or modify model instances.
[18:58] <lukasz> By convention form definitions go in a forms.py file in your app directory. Put the following in there: http://paste.ubuntu.com/575590/
[18:58] <lukasz> This is very simple form which has only one field in it.
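Given that description, the form is presumably a ModelForm; a sketch might look like this (the fields tuple is my assumption):

```python
# app/forms.py -- sketch: a ModelForm exposing only the status field
from django import forms
from twitbuntu.app.models import Update

class UpdateForm(forms.ModelForm):
    class Meta:
        model = Update
        fields = ('status',)  # owner and creation time are set elsewhere
```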
[18:58] <lukasz> Now in views.py we need to instantiate this form and pass it to the template. After modifications this file should look like this: http://paste.ubuntu.com/575591/
[18:58] <lukasz> One thing that's new here is the RequestContext. It's connected to the automatic CSRF (cross-site request forgery) protection which Django enables by default. Basically it provides templates with a richer set of accessible data, of which we'll use only the csrf_token tag.
[18:59] <lukasz> Last bit is to display this form in template. Add this bit just after <body> tag:
[18:59] <lukasz> http://paste.ubuntu.com/575650/
[18:59] <lukasz> Now that the form is properly displayed, it would be useful to actually create updates from the data entered by the user. That requires a little work inside our home view. Fortunately it's pretty straightforward: http://paste.ubuntu.com/575594/
[18:59] <lukasz> The first thing is to check whether we're processing a POST or a GET request; POST means the user pressed the 'Update' button on our form and we can start processing the submitted data.
[18:59] <lukasz> All POST data is conveniently gathered by Django in a dictionary at request.POST. In this case it's not really critical to know exactly what is sent; UpdateForm will handle that. The instance= bit automatically sets the update's owner; without it the form would not validate and nothing would be saved to the database.
[19:00] <lukasz> Checking whether the form is valid is very simple: just invoke its .is_valid() method. If it returns True we save the form to the database, which returns an Update instance. It's not really needed anywhere, but I wanted to show you that you can do something with it.
[19:00] <lukasz> The last bit is to create an empty form, so the status field will be clear, ready for the next update.
[19:00] <lukasz> If you try to send an update without any content you'll see the error message 'This field is required'. All of that is handled automatically by the forms machinery.
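Putting the narration together, the POST-handling view presumably looks roughly like this (a sketch; exact details may differ from the pastebin):

```python
# app/views.py -- sketch of the final POST-handling view
from django.contrib.auth.decorators import login_required
from django.shortcuts import render_to_response
from django.template import RequestContext
from twitbuntu.app.forms import UpdateForm
from twitbuntu.app.models import Update

@login_required
def home(request):
    if request.method == 'POST':
        # instance= presets the owner, otherwise the form would not validate
        form = UpdateForm(request.POST, instance=Update(owner=request.user))
        if form.is_valid():
            update = form.save()  # returns the saved Update instance
            form = UpdateForm()   # fresh empty form for the next update
    else:
        form = UpdateForm()
    return render_to_response('home.html', {
        'user': request.user,
        'updates': request.user.update_set.all(),
        'form': form,
    }, context_instance=RequestContext(request))
```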
[19:00] <lukasz> It's nice to be able to see our own status updates, but currently they're only viewable by the logged-in user.
[19:00] <lukasz> but that's a homework
[19:00] <lukasz> or a thing to look into existing code
[19:00] <lukasz> thank you for your attention :)
[19:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html following the conclusion of the session.
[19:01] <Quintasan> Thanks lukasz!
[19:02] <Quintasan> So, hi there people. My name is Michał Zając I am a Kubuntu Developer, MOTU and leader of (not famous yet :) Project Neon.
[19:02] <Quintasan> and today I will be talking about Recipes (or Source Builds)  in Launchpad
[19:03] <Quintasan> We are going to use pbuilder and Launchpad; you should already have those things if you attended "Getting Started with Development" by dholbach
[19:05] <Quintasan> I think most people have an idea what {daily,weekly,monthly} builds are; if not, here's a quick explanation: we grab code from the code repository the project uses, build it and release it as packages
[19:07] <Quintasan> You probably get the idea that doing that manually every day would be at least a little annoying, so our ingenious Launchpad developers introduced the Recipes feature, letting us focus more on testing than on writing complicated scripts or doing the builds by hand
[19:09] <Quintasan> Why should you bother with setting up source builds? Well, testing bleeding edge software goes faster because the packages are very quickly available
[19:09] <Quintasan> Getting testers is easier too because they just add a PPA instead of compiling the whole source themselves
[19:10] <Quintasan> Any questions so far?
[19:11] <Quintasan> Okay, so let's proceed.
[19:12] <Quintasan> What do you need to do to set up your daily builds on Launchpad?
[19:12] <Quintasan> 1. You need to have your source code on Launchpad (either use code.launchpad.net for developing or request a source import)
[19:13] <Quintasan> 2. Write a recipe
[19:13] <Quintasan> 3. Test build it locally (we don't want to stuff Launchpad with failing builds, do we?)
[19:13] <Quintasan> 4. Upload and trigger the recipe
[19:14] <Quintasan> Well, I forgot: you also need working packaging for that particular software, which is very important
[19:16] <Quintasan> Now we're going through steps 2 and 3 because they are essential; setting up a recipe is really easy, so I will show that later
[19:17] <Quintasan> So, go to your working directory and do
[19:17] <Quintasan> bzr branch lp:~neon/project-neon/kdewebdev-ubuntu
[19:18] <Quintasan> That's our packaging branch for kdewebdev module, and it's responsible for getting our code compiled and put into packages
[19:19] <Quintasan> and if you go to
[19:19] <Quintasan> https://code.launchpad.net/~neon/kdewebdev/trunk
[19:19] <Quintasan> You can see the already imported code from KDE to Launchpad which we are going to use to get a source build of kdewebdev
[19:20] <Quintasan> Now we are going to write a recipe, so fire up your favorite text editor
[19:20] <Quintasan> and paste in the following
[19:20] <Quintasan> # bzr-builder format 0.2 deb-version 2+svn{date}+r{revno}-{revno:packaging}
[19:20] <Quintasan> lp:~neon/kdewebdev/trunk
[19:20] <Quintasan> nest packaging lp:~neon/project-neon/kdewebdev-ubuntu debian
[19:21] <Quintasan> The first line tells bzr-builder what the versioning of the package is going to look like
[19:22] <Quintasan> the stuff between { and } is going to expand to
[19:22] <Quintasan> {date} to date - like 20110301
[19:23] <Quintasan> {revno} to revision number of the source so it's also going to be a number like 1677
[19:23] <Quintasan> and {revno:packaging} will be substituted with the revno for the branch named packaging in the recipe.
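The substitution itself is done by bzr-builder, but the result can be illustrated with plain Python string formatting (I renamed the last placeholder, since {revno:packaging} is bzr-builder syntax, not Python's; the values are made-up examples):

```python
# Illustration only: bzr-builder performs this expansion itself.
template = "2+svn{date}+r{revno}-{packaging_revno}"
version = template.format(date="20110301", revno=1677, packaging_revno=42)
print(version)  # -> 2+svn20110301+r1677-42
```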
[19:24] <Quintasan> lp:~neon/kdewebdev/trunk <--- this tells the builder to grab the source from ~neon/kdewebdev/trunk branch
[19:25] <Quintasan> abhinav: the recipe file can have any name, though I usually name it <project>.recipe
[19:26] <Quintasan> nest packaging lp:~neon/project-neon/kdewebdev-ubuntu debian
[19:26] <Quintasan> This line places our packaging in source directory in debian/
[19:27] <Quintasan> Please note that lp:~neon/project-neon/kdewebdev-ubuntu doesn't contain a debian folder, but its contents
[19:27] <Quintasan> Otherwise the packaging would land under ./debian/debian and LP wouldn't be able to build it
[19:28] <Quintasan> Now save the file and we are going to test build it
[19:28] <Quintasan> Launch a terminal and go to the directory where you saved the recipe file
[19:29] <Quintasan> make another directory called "build" for example
[19:29] <Quintasan> hmm, we are actually going to need bzr-builder
[19:29] <Quintasan> sudo apt-get install bzr-builder
[19:29] <Quintasan> should install it
[19:30] <Quintasan> Any questions so far?
[19:34] <Quintasan> Well, moving on: assuming you have a working pbuilder, we have to make a small change to sources.list inside it so the build can pull project-neon libs. Be sure to revert the change after the session
[19:34] <Quintasan> sudo pbuilder --login --save-after-login <--- that will log in to your pbuilder chroot and save any changes you made after exiting
[19:35] <Quintasan> you will have to add two entries to /etc/apt/sources.list inside your pbuilder
[19:36] <Quintasan> so open it up for editing and paste
[19:36] <Quintasan> deb http://ppa.launchpad.net/neon/ppa/ubuntu natty main
[19:36] <Quintasan> deb-src http://ppa.launchpad.net/neon/ppa/ubuntu natty main
[19:36] <Quintasan> replace natty with maverick if you have a maverick pbuilder
[19:36] <Quintasan> save the file and exit the pbuilder
[19:37] <Quintasan> sorry, if you do not use pbuilder hooks then do "apt-get update" after adding the entries
[19:38] <Quintasan> now, back to the recipe directory
[19:38] <Quintasan> issue the following command
[19:39] <Quintasan> bzr dailydeb <your recipe file> <build directory we created earlier>
[19:39] <Quintasan> here it looks like: bzr dailydeb kdewebdev.recipe build
[19:40] <Quintasan> What it is going to do is grab the source code, stuff the packaging inside it and create a dsc file which you can build with pbuilder
[19:43] <Quintasan> after it finishes its work you can build it with pbuilder like this
[19:44] <Quintasan> sudo pbuilder --build build/*.dsc
[19:44] <Quintasan> I just finished building it and it should work for you too.
[19:45] <Quintasan> now that we know the recipe is working we can put it up on Launchpad
[19:45] <Quintasan> To be able to use Recipes you need to add your Launchpad account to the Recipe beta users team
[19:46] <Quintasan> https://launchpad.net/~launchpad-recipe-beta
[19:46] <Quintasan> It's an open team so anyone can join
[19:47] <Quintasan> Now what we want to do is to go to the branch with the source code which we are going to use for daily building
[19:48] <Quintasan> https://code.launchpad.net/~neon/kdewebdev/trunk
[19:48] <Quintasan> in this case
[19:48] <Quintasan> If you joined the recipe beta users team you should see "1 recipe using this branch."
[19:49] <Quintasan> clicking the "1 recipe" link will redirect you to https://code.launchpad.net/~neon/+recipe/project-neon-kdewebdev
[19:50] <Quintasan> You can see Latest builds section and Recipe contents which contains the exact recipe I gave you
[19:50] <Quintasan> As you can see there are some successful builds
[19:51] <Quintasan> Now if you were setting a new daily build then you would click the "Create packaging recipe" button on https://code.launchpad.net/~neon/kdewebdev/trunk
[19:51] <ClassBot> There are 10 minutes remaining in the current session.
[19:52] <Quintasan> Set the Description and Name fields to your liking
[19:52] <Quintasan> The Owner field says who can manage the recipe in Launchpad
[19:53] <Quintasan> The Built daily field has a nice explanation under it: Automatically build each day, if the source has changed.
[19:53] <Quintasan> And we have to select to which PPA we are going to push the packages
[19:53] <Quintasan> You can use an existing one or create a new one
[19:54] <Quintasan> Later you can set the series for which the package will be built, like natty, maverick, lucid and so on, back to dapper
[19:54] <Quintasan> In the last field you paste the recipe you wrote and click Create Recipe
[19:55] <Quintasan> You should be redirected to your recipe page, where you can manually trigger the first build by pressing the Request build(s) link under the Latest builds section
[19:56] <ClassBot> There are 5 minutes remaining in the current session.
[19:56] <Quintasan> If you did everything correctly then it should start building and place the resulting packages in selected PPA
[19:57] <Quintasan> I'm done, you can find more information about Source Builds at Launchpad Help -> https://help.launchpad.net/Packaging/SourceBuilds
[19:57] <Quintasan> You can also find me on #project-neon and #kubuntu-devel channels if you need more explanations
[19:58] <Quintasan> Oh, and there is also a (probably not complete) list of existing Daily Builds that are set up on Launchpad
[19:58] <Quintasan> you can find it on -> https://wiki.ubuntu.com/DailyBuilds/AvailableDailyBuilds
[19:59] <Quintasan> Well, we are almost out of time and I'm already done, if you have any questions then ask them in #ubuntu-classroom-chat or find me on the channels I mentioned
[20:00] <Quintasan> Thanks for listening, hope to see some new builds after this session
[20:01] <yofel> here's a more complete list: https://code.launchpad.net/+daily-builds which shows all existing daily build recipes
[20:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html following the conclusion of the session.
[20:01] <nigelb> ok, hello again!
[20:02] <nigelb> This time we have something special for Ubuntu Developer Week
[20:02] <nigelb> We're going to close with project lightning talks
[20:02] <nigelb> We have a few people coming on to talk about their projects for about 5 minutes each
[20:02] <nigelb> They will tell you all about it and how you can help them with the project
[20:03] <nigelb> First up is stgraber!
[20:03]  * stgraber waves
[20:03] <nigelb> He'll talk to you about arkose
[20:03] <nigelb> All yours stgraber :)
[20:03] <stgraber> Hey everyone !
[20:03] <stgraber> So I just wanted to quickly introduce you to a pet project of mine called arkose
[20:03] <stgraber> Arkose's goal is to do sandboxing of desktop applications
[20:04] <stgraber> with it you can easily start any binary in a sandbox and choose what kind of access it has
[20:04] <stgraber> this includes forcing it to use an overlay file system (aufs), blocking network access, or blocking access to the X server
[20:04] <stgraber> the project itself can be found at https://launchpad.net/arkose
[20:05] <stgraber> I also blogged about it here: http://www.stgraber.org/category/arkose/
[20:05] <stgraber> it's in the archive for natty
[20:05] <stgraber> and is included by default in Edubuntu
[20:05] <stgraber> it's made of 3 different packages
[20:05] <stgraber>  - arkose (command line tool)
[20:05] <stgraber>  - arkose-gui (similar to the Run dialog in gnome except it starts everything in a container)
[20:05] <stgraber>  - arkose-nautilus (lets you start any binary in a sandbox from Nautilus)
[20:06] <stgraber> the sandboxing itself is done using some new flags of the clone() command, similar to what lxc (https://lxc.sf.net) does (except lxc does it for a full system)
[20:06] <ClassBot> chadadavis asked: Can you restrict what libraries and what versions it has access to?
[20:07] <stgraber> by default it just uses an aufs overlay so it has the exact same packages as your system, though you can call dpkg in the sandbox to install/remove/upgrade/downgrade packages
[20:08] <stgraber> it's ideal when you want to run some untrusted binary (game or similar)
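The clone()-flag mechanism stgraber mentions can be sketched in a few lines. This is a hypothetical illustration, not arkose's actual code: on Linux, unshare() accepts the same namespace flags as clone(), so a child process can be given, say, a private (empty) network stack before exec'ing the untrusted binary. Running it for real requires root or CAP_SYS_ADMIN.

```python
# Sketch of the namespace trick arkose builds on (illustrative only):
# clone()/unshare() flags move a process into private namespaces.
import ctypes
import os

# Flag values from <linux/sched.h>
CLONE_NEWNS = 0x00020000   # private mount namespace
CLONE_NEWNET = 0x40000000  # private network namespace


def run_sandboxed(argv, flags=CLONE_NEWNET):
    """Fork, unshare the given namespaces in the child, then exec argv.

    Requires root (or CAP_SYS_ADMIN); returns the child's exit status.
    """
    pid = os.fork()
    if pid == 0:
        libc = ctypes.CDLL(None, use_errno=True)
        if libc.unshare(flags) != 0:
            os._exit(1)  # not privileged enough to unshare
        os.execvp(argv[0], argv)
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

arkose layers an aufs overlay on top of this, so writes inside the sandbox never touch the real filesystem.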
[20:08] <stgraber> ok, I guess I'm done ;) next !
[20:09] <UndiFineD> :)
[20:09] <nigelb> w00t w00t
[20:09] <nigelb> thanks stgraber
[20:09] <nigelb> Next we have UndiFineD!
[20:09] <nigelb> Stage's yours UndiFineD :)
[20:09] <UndiFineD> Hello everyone,
[20:09] <UndiFineD> I welcome you all for this short introduction to SpeechControl.
[20:09] <UndiFineD> If you have questions, please wait a while, I will be brief, thank you.
[20:09] <UndiFineD> This project started in November 2010 with a vision from wiki.ubuntu.com/hajour. I am Keimpe de Jong, better known as wiki.ubuntu.com/UndiFineD and her partner. We have 4 girls with either a form of AD(H)D, Dyslexia or poor sight.
[20:09] <UndiFineD> SpeechControl is an accessibility project. wiki.ubuntu.com/SpeechControl It aims to make controlling your computer easier. In the distant future we hope to reach Star Trek like capabilities.
[20:10] <UndiFineD> How it began ...
[20:10] <UndiFineD> We began by writing to capable people in this area, explaining hajour's vision, and asking if they would be willing to contribute to it. Then we asked to be placed under the accessibility team and ubuntu beginners team flags.
[20:10] <UndiFineD> Ubuntu Accessibility
[20:10] <UndiFineD> Ubuntu Beginners Team
[20:10] <UndiFineD> And the roundup of currently capable developers with an interest amazed us. We are grateful for all our team members that help us and we welcome new ones with open arms.
[20:10] <UndiFineD> Key features:
[20:10] <UndiFineD> Speech recognition: Using the available tools at our disposal, SpeechControl will be able to comprehend your spoken word.
[20:10] <UndiFineD> Preconfigured Command Execution: knowing what to do when you order it to.
[20:10] <UndiFineD> Virtual Assistant: guidance for repetitive tasks, for faster and smoother process execution.
[20:10] <UndiFineD> Speech Synthesis: More commonly known as TTS (Text-To-Speech), the voice that communicates with the end-user will be as natural and inviting as technology permits.
[20:11] <UndiFineD> So how is this done ?
[20:11] <UndiFineD> After some research we found Simon to currently be the most capable open source software for controlling a computer. www.Simon-Listens.org
[20:11] <UndiFineD> Simon is/was not in the Debian repositories due to licensing issues. We believe that is cleared up by now and Debian can take on Simon, as it uses the same license type as SSH.
[20:11] <UndiFineD> So what is left to be done ?
[20:11] <UndiFineD> Well there are several disabilities, and we wish to make it good for everyone. In order to do that we need to:
[20:11] <UndiFineD> extend Simon (API work),
[20:11] <UndiFineD> talk to all the system busses (dbus, at-spi2, ...),
[20:12] <UndiFineD> analyse Speech-to-text and make some context (Wintermute),
[20:12] <UndiFineD> execute commands based upon input given,
[20:12] <UndiFineD> help to reduce repeating work,
[20:12] <UndiFineD> communicate back to the user (via Text-to-Speech).
[20:12] <UndiFineD> Blueprints:
[20:12] <UndiFineD> A lot of blueprints need work, we would love to gather all your input and make things better.
[20:12] <UndiFineD> Requests come in for simple but related tasks, like reminding a user to take their medicine, or reading out an e-book.
[20:13] <UndiFineD> Besides this, it bothered us that we could not properly have an accessible meeting. So there is work being done on a speech-capable chat client plugin.
[20:13] <UndiFineD> The possibilities are open and endless, and would make using your computer so much easier. To prevent a wild bloom of small projects, our team currently focuses on specifying blueprints and defining the path to take. Our initial goal is a proof of concept; optimization comes later.
[20:13] <UndiFineD> Progress:
[20:13] <UndiFineD> In a few months good work has been done;
[20:13] <UndiFineD> the team is slowly growing and could use extra help writing specifications. After that we will develop all the libraries and applications.
[20:13] <UndiFineD> If you would like to learn more, feel free to come and visit us in #SpeechControl, and read more about us
[20:13] <UndiFineD> on wiki.ubuntu.com/SpeechControl; our team is located here: launchpad.net/~speechcontrolteam
[20:14] <UndiFineD> I would like to thank everyone in our team, for the great work they have done so far.
[20:14] <UndiFineD> If you have questions feel free to ask them now, but for the sake of the sessions I recommend longer talks to be held in #SpeechControl.
[20:14] <nigelb> Thanks UndiFineD for the wonderful talk
[20:14] <nigelb> That was loaded with information
[20:15] <nigelb> Next up, we have AlanBell.  He's going to talk about a bot he's written
[20:15] <AlanBell> yay
[20:15] <AlanBell> Ubuntu project communication happens in meetings, lots of them
[20:15] <AlanBell> mostly in the #ubuntu-meeting channel and there is a bot in there which kinda records meetings a bit
[20:15] <AlanBell> however normally someone ends up writing up a summary minutes and emailing it out or adding to the wiki
[20:15] <AlanBell> taking minutes is a dull and menial task, and rather an undignified thing to expect a human to do
[20:16] <AlanBell> I believe that the post-meeting procedure should be copy-paste-done
[20:16] <AlanBell> the bot should make nicely formatted minutes
[20:16] <AlanBell> so, I wrote a little extension to the existing mootbot
[20:16] <AlanBell> which is available in some channels as mootbot-UK
[20:17] <AlanBell> and then I rewrote the thing based on a debian fork which is a python supybot bot
[20:17] <AlanBell> which is here now
[20:17] <AlanBell> #startmeeting
[20:17] <meetingology> Meeting started Fri Mar  4 20:17:25 2011 UTC.  The chair is AlanBell. Information about MeetBot at http://wiki.ubuntu.com/AlanBell.
[20:17] <meetingology> Useful Commands: #topic #action #link #idea #voters #vote #chair #action #agreed #help #info #endmeeting.
[20:17] <MootBot> Meeting started at 14:17. The chair is AlanBell.
[20:17] <MootBot> Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
[20:17] <AlanBell> do feel free to talk here, the channel has been unmuted
[20:18] <maco> do the mootbot versus mootbot-uk changes include en_US versus en_GB?
[20:18] <AlanBell> #topic do the mootbot versus mootbot-uk changes include en_US versus en_GB
[20:18] <meetingology> TOPIC: do the mootbot versus mootbot-uk changes include en_US versus en_GB
[20:18] <AlanBell> interesting question maco, there has been some translation activity done on mootbot, I think there is a Hebrew port of it now
[20:19] <AlanBell> I don't think I mentioned programmes or colours in the messages it says anywhere
[20:19] <AlanBell> you can do funky stuff with this like multiple chairs
[20:19] <AlanBell> #chair maco
[20:19] <meetingology> Current chairs: AlanBell maco
[20:19] <AlanBell> you can use [topic] syntax or #topic it doesn't matter
[20:20] <maco> should make use of that in meetings where the chair has to leave before discussion ends
[20:20] <AlanBell> yup
[20:20] <AlanBell> there is an awesome new feature in votes too
[20:20] <nigelb> oh, I like the chair functionality
[20:20] <AlanBell> #voters AlanBell maco nigelb
[20:20] <meetingology> Current voters: AlanBell maco nigelb
[20:20] <nigelb> that helps a lot for the council meetings
[20:20] <AlanBell> #vote this house declares cake to be the food of the gods
[20:20] <meetingology> Please vote on: this house declares cake to be the food of the gods
[20:20] <meetingology> Public votes can be registered by saying +1, +0 or -1 in channel, (private votes don't work yet, but when they do it will be by messaging the channel followed by +1/-1/+0 to me)
[20:20] <AlanBell> +1
[20:20] <meetingology> +1 received from AlanBell
[20:20] <nigelb> -1 chocolate!
[20:20] <Quintasan> +1
[20:20] <nigelb> -1
[20:20] <meetingology> -1 received from nigelb
[20:21] <AlanBell> nigelb: you just found a bug!
[20:21] <nigelb> I did!
[20:21] <AlanBell> someone else try to vote please
[20:21] <Quintasan> +1
[20:21] <nigelb> he isn't a voter
[20:21] <mhall119> -0
[20:21] <AlanBell> oh Quintasan sucks to be you, your vote doesn't count!
[20:21] <maco> neat!
[20:21] <Quintasan> :<
[20:21] <AlanBell> #endvote
[20:21] <meetingology> Voting ended on: this house declares cake to be the food of the gods
[20:21] <meetingology> Votes for:1 Votes against:1 Abstentions:0
[20:21] <meetingology> Deadlock
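The vote demo above can be mirrored by a small tally function. This is a hypothetical sketch, not meetingology's actual code: only nicks registered with #voters count (which is why Quintasan's +1 was dropped), and a tie is reported as a deadlock.

```python
def tally(voters, ballots):
    """Count +1/-1/+0 ballots, ignoring anyone not in voters.

    ballots is a list of (nick, vote) pairs; a nick's last ballot wins.
    """
    last = {}
    for nick, vote in ballots:
        if nick in voters:  # non-voters (like Quintasan above) are dropped here
            last[nick] = vote
    for_, against, abstain = 0, 0, 0
    for vote in last.values():
        if vote == "+1":
            for_ += 1
        elif vote == "-1":
            against += 1
        else:
            abstain += 1
    if for_ > against:
        verdict = "Carried"
    elif against > for_:
        verdict = "Defeated"
    else:
        verdict = "Deadlock"
    return for_, against, abstain, verdict
```

Replaying the cake vote: `tally({"AlanBell", "maco", "nigelb"}, [("AlanBell", "+1"), ("Quintasan", "+1"), ("nigelb", "-1")])` reproduces the bot's "Votes for:1 Votes against:1 Abstentions:0 / Deadlock".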
[20:21]  * maco will need to remember that for RMB meetings
[20:21] <nigelb> but do we use it in -meeting yet?
[20:22] <AlanBell> so this is a supybot plugin, the idea is that it could be incorporated into the regular channel bots which run on the same framework
[20:22] <AlanBell> so every channel just grows meeting facilities
[20:22] <nigelb> w00t, that rocks
[20:22] <AlanBell> and it could replace the bot in -meeting
[20:22] <mhall119> when will it integrate with loco-directory meetings?
[20:22] <AlanBell> I need some help
[20:22] <AlanBell> mhall119: patches welcome!
[20:22] <mhall119> :)
[20:22] <AlanBell> and yes, I need to get back into hacking it further
[20:22] <nigelb> there ya go, so if anyone wants to help AlanBell, help integrate it with LD \o/
[20:23]  * mhall119 recommends nigelb 
[20:23] <AlanBell> code is here https://code.launchpad.net/~ubuntu-bots/ubuntu-bots/meetingology
[20:23] <nigelb> I knew you'd do that.  I'll probably take a look ;)
[20:23] <AlanBell> and I am quite approachable, come poke me if you want to have a play with it
[20:23] <AlanBell> #endmeeting
[20:23] <meetingology> Meeting ended Fri Mar  4 20:23:51 2011 UTC.  Information about MeetBot at http://wiki.ubuntu.com/AlanBell . (v 0.1.4)
[20:23] <meetingology> Minutes:        http://mootbot.libertus.co.uk/ubuntu-classroom/2011/ubuntu-classroom.2011-03-04-20.17.moin.txt
[20:24] <nigelb> and there is the log feature which I loove!
[20:24] <AlanBell> thanks for your time everyone, the minutes of this meeting are at https://wiki.ubuntu.com/AlanBell/mointesting
[20:24] <Quintasan> cool
[20:25] <AlanBell> post meeting procedure completed!
[20:25] <nigelb> That was fun playing with the bot AlanBell
[20:26] <nigelb> Thank you for it.
[20:26] <nigelb> Next up is mhall119! He's going to talk about XDG Launcher \o/
[20:26] <nigelb> All yours mhall119 :)
[20:26] <mhall119> yay!
[20:27] <mhall119> XDG Launcher is very simple, you give it a menu path, and it gives you a panel full of launchers from that menu
[20:27] <mhall119> you can see it running against /Games here: http://img718.imageshack.us/i/xdglauncher.png/
[20:27] <mhall119> as far as panels go, it's about as simple as they come, there's no transparency, no auto-hide, no gradients
[20:28] <mhall119> xdg-launcher was developed for the Qimo linux desktop (which I also made)
[20:28] <mhall119> http://qimo4kids.com/
[20:28] <mhall119> as you can see from its screenshots, it has a very similar bottom panel http://qimo4kids.com/post/Qimo-20-is-now-available!.aspx
[20:29] <mhall119> but previously, that panel was static, if you added a new game, you had to manually add it to the panel
[20:29] <mhall119> that's why I made xdg-launcher, so that when someone adds a game through software center, and that game adds a menu entry, it'll automatically show up in the bottom panel
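The mechanism mhall119 describes, rebuilding the panel from whatever menu entries are installed, comes down to reading freedesktop .desktop files. A minimal stdlib-only sketch (hypothetical, not xdg-launcher's actual code, which uses the GMenu library):

```python
import configparser
from pathlib import Path


def scan_launchers(directory):
    """Return (name, command) pairs for every .desktop file in directory.

    Real implementations also honour menu categories, NoDisplay, and
    localized names; this sketch reads only the two keys a launcher
    button strictly needs. interpolation=None keeps configparser from
    choking on Exec field codes like %U.
    """
    launchers = []
    for path in sorted(Path(directory).glob("*.desktop")):
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(path)
        entry = parser["Desktop Entry"]
        launchers.append((entry["Name"], entry["Exec"]))
    return launchers
```

With this, a newly installed game's .desktop file shows up on the next scan, which is exactly the auto-update behaviour the Qimo panel wanted.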
[20:29] <mhall119> xdg-launcher is hosted on launchpad: https://launchpad.net/xdg-launcher
[20:30] <mhall119> and will be part of the Qimo 3.0 release sometime in May
[20:30] <ClassBot> nigelb asked: What language is it written in?
[20:30] <mhall119> it's written in Python, and uses GTK and GMenu
[20:30] <mhall119> it is as light weight and simple as I could make it
[20:31] <mhall119> Qimo is also developed on launchpad: https://code.launchpad.net/qimo
[20:31] <mhall119> that's it!
[20:32] <mhall119> nigelb: next!
[20:32] <nigelb> mhall119: you have a question!
[20:32] <mhall119> xdg-launcher isn't in the repos yet
[20:33] <mhall119> in fact, it won't be in the repos at all under that name, as I was informed by a MOTU that the xdg- prefix implied that it comes from the XDG project
[20:33] <mhall119> so future development on it will likely fall under the name qimo-launcher
[20:33] <mhall119> in fact, there's already a branch by that name under the qimo LP project
[20:34] <nigelb> thanks mhall119!
[20:34] <nigelb> next up is jderose
[20:34] <jderose> okay...
[20:34] <jderose> dmedia == Distributed Media Library
[20:34] <jderose> dmedia is the foundation of the Novacut distributed video editor, and the Novacut player
[20:34] <jderose> But dmedia is an independent component designed to work with any app, for both content creation and content consumption
[20:34] <jderose> A big goal is getting this important user data out of application-specific silos, into a common freedesktop service
[20:35] <jderose> Early on I started talking to the Shotwell and PiTiVi developers... they will likely be some of the first dmedia enabled apps (along with the Novacut apps, of course)
[20:35] <jderose> In a nutshell, dmedia is a simple distributed filesystem
[20:35] <jderose> The metadata (small) for your *entire* library is stored in CouchDB and synced between *all* your devices
[20:35] <jderose> dmedia uses desktopcouch (which is awesome, use it for all your apps!), so you get slick UbuntuOne sync, say:
[20:35] <jderose> Tablet <=> UbuntuOne <=> Workstation
[20:36] <jderose> However, the files (big) for your entire library certainly won't fit on a device with limited storage, like a phone
[20:36] <jderose> Or similarly, the files generated by a pro TV or movie production probably won't fit on any single device, not even a big file server
[20:36] <jderose> And this is where dmedia gets awesome... a given device can contain any arbitrary *subset* of the files (including no files at all)
[20:36] <jderose> Files are loaded from peers or the cloud as needed
[20:36] <jderose> Yet as the metadata is always available locally, you can still browse through a huge library as if all those files were actually there
[20:36] <jderose> Each file has a document in CouchDB, which among other things tracks all the places the file is stored
[20:36] <jderose> Your personal media files (photos, videos, etc) are treated specially, and dmedia will strive to maintain a configurable level of durability for all your personal files
[20:36] <jderose> So dmedia knows when it should copy the new videos you shot from your laptop to your workstation, or upload them to the cloud
[20:37] <jderose> And dmedia also knows when it can safely delete files on a given device to free up space for files currently needed
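The "safe to delete" decision jderose describes can be sketched from the per-file CouchDB document he mentions. This is a loose illustration with a made-up document shape (the real schema lives in the dmedia source): each file document lists the devices holding a copy, and a device may only reclaim space when enough other copies remain to preserve the configured durability.

```python
def can_delete(file_doc, device, durability=2):
    """Decide whether a device may reclaim a file's space.

    file_doc loosely mimics a dmedia CouchDB document: its "stored" key
    lists every device/cloud location holding a copy. Deleting the local
    copy is safe only if at least `durability` other copies survive.
    """
    stored = set(file_doc["stored"])
    if device not in stored:
        return False  # no local copy, nothing to reclaim
    return len(stored - {device}) >= durability
```

So a laptop with a video also stored on a workstation and in the cloud may drop it, while the last remaining holder of a copy never may.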
[20:37] <jderose> This month's release (dmedia 0.5) will probably be the first that is complete enough to be useful to the end user... so it's an exciting time to join in on the development
[20:37] <jderose> The dmedia backend is written in Python... currently 6,193 lines of code and docstrings, 6,719 lines of unit tests (I really like unit tests)
[20:37] <jderose> And the UI is done with HTML5 and JavaScript, talking directly to CouchDB using XMLHttpRequest
[20:37] <jderose> The project is coordinated on Launchpad - https://launchpad.net/dmedia
[20:37] <jderose> To learn more, please stop by #novacut on irc.freenode.net!
[20:37] <jderose> questions?  :)
[20:38] <jderose> (01:38:15 PM) nigelb: QUESTION: How can we help?
[20:38] <ClassBot> nigelb asked: How can we help?
[20:38] <jderose> oops :)
[20:38] <nigelb> heh
[20:39] <jderose> well, if you're a Python dev, lots of fun stuff there... and probably the most exciting stuff is for people who are fluent with HTML5/JavaScript...
[20:39] <jderose> because that's what the user sees :)
[20:39] <jderose> and i think things are getting to the point where you can do really cool things without much work... ui stuff that is
[20:40] <jderose> and if you know/like CouchDB... you'll fall in love with dmedia pretty quick, i think :)
[20:40] <jderose> we do monthly releases, so your code will be in users' hands quickly
[20:41] <nigelb> ok, thanks jderose
[20:41] <nigelb> Great to hear about dmedia
[20:41] <nigelb> And thanks for making it here today :)
[20:41] <nigelb> Next up yofel and Quintasan are back again!
[20:41] <jderose> hehe.. .np... thank you everyone for listening in :)
[20:41] <nigelb> Over to you guys :)
[20:41] <Quintasan> Sup, it's me and yofel again
[20:41]  * yofel waves
[20:42] <Quintasan> So we are going to talk about Project Neon I mentioned in my Source Builds session, so yofel, what is Project Neon?
[20:42] <yofel> Project Neon provides Daily Builds of the KDE trunk using the launchpad recipes that Quintasan presented earlier
[20:43] <Quintasan> Yeah, so basically we grab the KDE bleeding edge source code, compile it, and put it into a PPA so you can experience the awesomeness without compiling it yourself
[20:44] <Quintasan> Our technical home is -> https://wiki.kubuntu.org/Kubuntu/ProjectNeon
[20:44] <Quintasan> If you want to know where to send the beer -> https://launchpad.net/~neon
[20:44] <Quintasan> The magic used is available at -> https://code.launchpad.net/~neon
[20:44] <Quintasan> and here are the results -> https://launchpad.net/~neon/+archive/ppa
[20:44] <Quintasan> We support natty and maverick for now.
[20:45] <Quintasan> Installing those packages will not break your KDE settings (if you have any) and they can peacefully coexist with the distro's default KDE version
[20:45] <Quintasan> It can be used for testing, screencasting, bug fixing, development and so on
[20:46] <Quintasan> We compiled it with ALL dependencies which we could get so users will get EVERY available feature to test (and probably break their computers ;)
[20:48] <Quintasan> So, if you ever wanted to give latest KDE a test-drive and you didn't want to compile the whole stuff yourself you can use our packages
[20:48] <Quintasan> Be sure to drop us a line at #project-neon how did it work
[20:48] <Quintasan> It's still a work in progress as we are working on daily builds of Amarok
[20:48] <ClassBot> gmargo2 asked: "peacefully coexist"... how do you do that?  A different KDM option?
[20:48] <Quintasan> Yeah
[20:49] <yofel> A different KDM option and we install our files in /opt/project-neon
[20:49] <Quintasan> gmargo: We use a separate X session entry and a separate environment config
[20:49] <Quintasan> and of course install everything to /opt/project-neon as yofel said
[20:49] <ClassBot> monish005 asked: which development language?
[20:50] <Quintasan> monish005: There is no language used, we use Launchpad Daily Builds feature to build the packages
[20:50] <yofel> if anything we have a few bash utilities, other than that only debian packaging and the recipes
[20:51] <ClassBot> There are 10 minutes remaining in the current session.
[20:51] <Quintasan> So, I think we are done, questions and requests welcome at #project-neon
[20:52] <nigelb> Thank you Quintasan and yofel!
[20:52] <nigelb> Unfortunately, our last host kirkland couldn't make it
[20:52] <yofel> when you want to use Project Neon visit our usage instructions on http://techbase.kde.org/Getting_Started/Using_Project_Neon_to_contribute_to_KDE - thanks!
[20:53] <nigelb> Dustin Kirkland was to talk about project bikeshed
[20:53] <nigelb> I'll link you to the project
[20:53] <nigelb> https://launchpad.net/bikeshed
[20:53] <nigelb> The project is about a bunch of scripts which are helpful, but no one is sure which package they go into
[20:54] <nigelb> The name of the project is inspired by the very famous color of the bikeshed mail thread
[20:54] <nigelb> I'll leave you all to explore the package
[20:54] <nigelb> And with that our project lightning talks come to an end
[20:55] <nigelb> Thank you stgraber, UndiFineD, AlanBell, mhall119, jderose, yofel, and Quintasan for making this a grand success
[20:55] <nigelb> We loved listening to your projects and let's all continue to build cool stuff :)
[20:55] <UndiFineD> thanks for being a good host
[20:55] <nigelb> :)
[20:55] <Quintasan> \o/
[20:56] <ClassBot> There are 5 minutes remaining in the current session.
[20:56] <nigelb> and, this means... ANOTHER UBUNTU DEVELOPER WEEK HAS COME TO A CLOSE \O/
[20:56] <jderose> thanks again nigelb, you're one rockin host :)
[20:56] <yofel> :D
[20:56] <Quintasan> nigelb++
[21:01] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/03/04/%23ubuntu-classroom.html
[21:02] <nigelb> and it's over...
[21:02] <darkdevil666> :(
[21:02] <nigelb> a few more weeks and we have app developer week and then we'd have open week again
[21:02] <darkdevil666> :D
[21:05] <darkdevil666> its not been put up on the schedule list
[21:05] <darkdevil666> where do i get the schedule?
[21:12] <darkdevil666> @all: where can i get the next "event week" schedule?
[21:12] <meetingology> darkdevil666: Error: "all:" is not a valid command.
[21:13] <Mkaysi> http://is.gd/8rtIi ?
[21:14] <darkdevil666> thanks Mkaysi. but it doesn't show schedules for app developer week and open week