=== MichiSoft is now known as mthalmei === mthalmei is now known as MichiSoft === dholbach_ is now known as dholbach [10:45] hi [13:44] When is the class on django development going to happen that dholbach blogged about? [13:46] !schedule? [13:46] Factoid 'schedule?' not found [13:46] !schedule [13:46] Ubuntu releases a new version every 6 months. Each version is supported for 18 months to 5 years. More info at http://www.ubuntu.com/ubuntu/releases & http://wiki.ubuntu.com/TimeBasedReleases [13:47] Yeah daniel said it was happening today but didn't give times :/ [13:48] https://wiki.ubuntu.com/UbuntuDeveloperWeek [13:49] dholbach, thanks! [14:09] hey, anyone from Italy? === openweek6 is now known as bikedog === MichiSoft is now known as mthalmei === mthalmei is now known as MichiSoft === starcraftman-mob is now known as starcraft-mobile [15:43] is there a discussion? has the meeting started? [15:52] not here === Ursinha is now known as Ursinha-nom [16:00] padu, has the meeting started? [16:01] I know that 'Building websites with Django' will start at 17.00 UTC === Jonnie_Simpson is now known as JSimpson [16:59] When is the next class??? [17:00] X3MBoy: In about 30 seconds :) [17:00] erm...now I think [17:01] Ok. [17:01] Thx [17:01] „Getting started with Launchpad development“ [17:01] Hello everybody. [17:01] Hello everybody. [17:01] Well, I didn't expect that to appear twice. [17:01] hello [17:02] Hmm. [17:02] Anyway [17:02] hi [17:02] My name's Graham Binns. I'm a member of the Launchpad Bugs development team. [17:02] hai [17:02] hi [17:02] hi [17:02] hello [17:02] I'm going to talk today about getting started with Launchpad development, in the hope that it might make it easier for you guys to contribute patches to scratch your least favourite itches. [17:02] hi [17:02] Hopefully you'll have all completed the instructions at http://dev.launchpad.net/Getting so that you can follow along with this session. If not, you might struggle a bit, but you can always go back once the session is over and follow it through on your own time. [17:03] Note: chatter and questions please in #ubuntu-classroom-chat [17:03] If you've any questions, please shout them out in #ubuntu-classroom-chat and prefix them with QUESTION so that I can see them more easily :) [17:03] Okay, so, first things first, we need to find us a bug to fix. For the purposes of this session I've filed a made-up bug on staging for us to fix: https://staging.launchpad.net/bugs/422299. I've gone with this because: [17:04] 1) It's fairly simple to fix. 2) It's easy to demonstrate our test-driven development process whilst we fix it, which is why I didn't pick a bug in the UI. 3) There were no really trivial bugs available for us to try this out on :). [17:04] When you're working on fixing a bug in Launchpad, you nearly always want to be doing it in a new branch. [17:05] We try to keep to one bug per branch, because that means that it's much easier to review the patches when they're done (because they're smaller, natch :)) [17:05] So, let's create a branch in which to fix the bug. [17:05] If you've set up the Launchpad development environment properly according to http://dev.launchpad.net/Getting, you should be able to run the following command: [17:05] $ rocketfuel-branch getting-started-with-lp-bug-422299 [17:05] Note that I've appended the bug number to the branch [17:05] so that I can always refer to it if I need to [17:06] but I've also given the branch a useful name to help me remember what it's for if I have to leave it for a while.
[17:06] rocketfuel-branch takes a few seconds, so I'll just wait a minute for everyone to catch up. [17:07] (By the way, if anyone has any problems with rocketfuel-get or any other part of this lesson, please come find me afterwards in #launchpad and I'll try to help you out) [17:07] s/-get/-branch/ there, sorry. [17:08] Okay. [17:08] Now, at this point, once you'd decided how to fix the bug [17:08] but - importantly - before you start coding [17:08] you'd ideally have a chat with a member of the Launchpad development team about your intended fix. [17:08] We normally do this either on IRC or on Skype, depending on your preference. [17:09] You can usually find a Launchpad developer in #launchpad-dev on Freenode who'll be available for one of these calls. [17:13] The call gives you a chance to ensure that what you're doing is actually sane. [17:13] For some bugs there's only one possible fix, complex or otherwise. For others there may be many ways to do it, and it's important to pick the right one. [17:13] If your solution is particularly complex or you need to demonstrate *why* you want to do things the way you do, it may help to write some tests to reproduce the bug before you have the call. [17:13] Note that the tests should always fail at this point; [17:13] you shouldn't make any changes to the actual code until you've had the pre-implementation call or chat with an LP developer. [17:13] Okay, so that's the info-dumpy bit of this session over for now :) [17:14] (gmb is having lag issues, please stand by) [17:15] Sorry about that, all. [17:15] I have a rather flaky connection today :) [17:15] As I was saying... [17:16] Under lib/lp you'll find most of the Launchpad code, split up into its applications. [17:16] So, `ls lib/lp` in your new getting-started-with-lp-bug-422299 branch should give you something like this: [17:16] $ ls lib/lp [17:16] answers archiveuploader buildmaster coop registry soyuz [17:16] app blueprints code __init__.py scripts testing [17:16] archivepublisher bugs codehosting __init__.pyc services translations [17:16] Now, we know that we're working in the bugs application, so lets take a look in there to see where to put our tests: [17:17] $ ls lib/lp/bugs [17:17] adapters emailtemplates help model stories windmill [17:17] browser event __init__.py notifications subscribers xmlrpc [17:17] configure.zcml externalbugtracker __init__.pyc pagetests templates [17:17] doc feed interfaces scripts tests [17:18] There are three types of test in Launchpad: doctests, which live in lib/lp/$app/doc; stories, which live in lib/lp/$app/stories and unittests, which live in lib/lp/$app/tests. [17:18] In this case we want to add to an existing doctest, so I'll stick with that for now and we can come back to what the others are for later. [17:18] So, in lib/lp/bugs/doc/ you'll find a file called externalbugtracker-trac.txt. [17:18] This is the test we want to modify, so feel free to open it in your text editor and take a look at line 110, which is where we're going to add our test. [17:19] For the sake of making this quicker, I've already created a diff of the change that I'd make here: http://pastebin.ubuntu.com/263869/plain/ [17:19] You can save that to disk somewhere (e.g. /tmp/diff) and then apply it as a patch using `bzr patch /tmp/diff` in the root of your new Launchpad branch. [17:20] The test we've just added is really simple. 
[17:20] It passes 'frobnob' to the convertRemoteStatus() method of a Trac instance (which is just an abstraction that lets us talk to an actual Trac server) [17:20] and expects to get "Fix Released" back. [17:21] Of course, it doesn't since we haven't implemented that yet :). [17:21] Once we've written the test, we run it to make sure it fails. [17:21] This part is very important: your tests should always fail first and only after they fail do you write the code to make them pass. [17:21] That means that you can use the tests to build a good spec of how your module / class / function / whatever should behave. [17:22] It also means that, like I said before, you can use the failing tests to demonstrate what your fix will actually change to whoever you have a call with. [17:22] To run this specific test only, we use the `bin/test` command: [17:22] $ bin/test -vvt externalbugtracker-trac.txt [17:23] That might take a short while to run (Launchpad's test suite can be frustratingly slow sometimes, but don't let that put you off; the payoff is worth it) [17:23] The output should look something like this: http://pastebin.ubuntu.com/263874/ [17:23] Note the important bit: [17:23] File "lib/lp/bugs/tests/../doc/externalbugtracker-trac.txt", line 111, in externalbugtracker-trac.txt [17:23] Failed example: [17:23] trac.convertRemoteStatus('frobnob').title [17:23] Exception raised: [17:23] Traceback (most recent call last): [17:23] File "/home/graham/canonical/lp-sourcedeps/eggs/zope.testing-3.8.1-py2.4.egg/zope/testing/doctest.py", line 1361, in __run [17:23] compileflags, 1) in test.globs [17:24] File "", line 1, in ? [17:24] File "/home/graham/canonical/lp-branches/lesson/lib/lp/bugs/externalbugtracker/trac.py", line 265, in convertRemoteStatus [17:24] raise UnknownRemoteStatusError(remote_status) [17:24] UnknownRemoteStatusError: frobnob [17:24] This tells us that the test failed, which is exactly what we wanted. [17:24] (Yes, copying and pasting in IRC makes me a bad man.) [17:24] convertRemoteStatus() raised an UnknownRemoteStatusError instead of giving us back the status we wanted. [17:24] Which was, of course, the 'Fix Released' status. [17:24] At this point, you might want to commit the changes: [17:24] $ bzr commit -m "Added tests for bug 422299." [17:25] Again - I can't emphasise this enough - the fact that your test fails is a Good Thing. If it didn't fail, it wouldn't be a good test, since we know that the bug actually exists in the code. [17:25] Now that we have a test that fails, we want to add some code to make it pass [17:26] We want to add this to lib/lp/bugs/externalbugtracker/trac.py. [17:26] Now, as it happens, I knew that before I started, but you can work it out by looking at the top of the doctest file that we just edited. [17:27] So, open lib/lp/bugs/externalbugtracker/trac.py now and take a look at line 258. We'll add our fix here. [17:27] The fix is really simple, and we can pretty much copy line 255 and alter it to suit our needs. [17:27] We want 'frobnob' to map to 'Fix Released', so we add the following line: [17:28] ('frobnob', BugTaskStatus.FIXRELEASED), [17:28] I'll not go into the nitty-gritty of how status lookups work here, because it's unimportant. [17:28] Suffice it to say that in Trac's case it's a simple pair of values, (remote_status, launchpad_status).
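(The pastebin links above have since expired. Going by the failure output and the description, the check added to externalbugtracker-trac.txt boils down to something like the following sketch; the exact surrounding prose in the doctest is an assumption. The one-line fix itself is the ('frobnob', BugTaskStatus.FIXRELEASED) entry quoted above.)

    A remote status of 'frobnob' maps to Launchpad's Fix Released status.

        >>> trac.convertRemoteStatus('frobnob').title
        'Fix Released'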
[17:29] Here's a diff of that change: http://pastebin.ubuntu.com/263882/ [17:29] Now that we've added a fix for the bug, we run the test again: [17:29] $ bin/test -vvt externalbugtracker-trac.txt [17:29] This time, it should pass without any problems... [17:30] and it does === Traveler is now known as Guest89726 [17:30] http://pastebin.ubuntu.com/263885/ [17:30] So, now we commit our changes: [17:30] $ bzr ci -m "Fixed bug 422299" [17:30] (Note that this is a lame description of the fix; you should use something more descriptive). [17:31] So, we now have a branch that fixes a bug. Hurrah and all that. [17:31] Now we need to get it into the Launchpad tree. [17:31] Launchpad developers use the Launchpad code review system to review Launchpad branches. [17:31] You can't land a branch without having it reviewed first [17:32] This allows us to ensure that code quality stays high [17:32] And it also acts as a sanity check to make sure that the developer hasn't done something unnecessarily odd in their fix. [17:33] So at this point, you need to push your branch to Launchpad using the `bzr push` command: [17:33] $ bzr push [17:33] Once the branch has been pushed up to Launchpad it gets its own page in the Launchpad web interface, which you can look at by running: [17:33] $ bzr lp-open [17:33] This should open the page in your default browser. [17:34] Now that you've fixed the bug and pushed the branch to Launchpad you need to request a review for it. [17:34] To do this, go to the branch page in your browser and click the "Propose for merging into another branch" link. [17:35] This will take you to a page that looks like this: [17:35] http://people.ubuntu.com/~gbinns/propose-merge.png [17:36] In the "Initial comment" box, you need to type a description of the branch. [17:36] For example, for this branch I'd write something like: [17:36] "This branch fixes bug 422299 by making Trac.convertRemoteStatus() map the "frobnob" status to Launchpad's Fix Released status." [17:38] After you've typed in your description, hit the "Propose merge" button and you should see a page that looks something like this: https://code.edge.launchpad.net/~gmb/launchpad/lesson/+merge/11068 [17:38] You then need to head on over to #launchpad-reviews on Freenode and ask if anyone's available to review your branch. [17:38] If there's no-one available at the time, don't worry. [17:39] We have a reviewer schedule: http://dev.launchpad.net/ReviewerSchedule, so someone should take a look at it within 24 hours. [17:39] The reviewer may ask you to make changes to your branch [17:39] To bring your fix into line with our coding standards [17:40] Or maybe to fix a bug that they've spotted in your fix. [17:40] Once the reviewer has signed off on the changes, they'll submit the branch for merging for you. [17:41] When a branch gets merged, the entire test suite is run against it [17:41] If any of the tests fail [17:41] The reviewer may ask you to help fix them [17:41] But it's likely that someone else will take care of it if you're not around at the time [17:42] And that's about all there is to simple Launchpad development :) [17:42] Are there any questions? Please shout them out in #ubuntu-classroom-chat [17:47] < ahe> QUESTION: When will launchpad be available as a package in the standard distribution? [17:48] ahe: At this point, there aren't any plans for that. We released the code for Launchpad because we wanted to let people help to improve the service, but we've no plans as far as I'm aware to distribute it as a package.
[17:52] < Andphe> question: have you guys planned to offer launchpad in languages other than English, for example Spanish? [17:53] Andphe: It's something that we've considered and that we would like to do at some point, at least for certain parts of the interface. [17:53] The problem is that launchpad is meant to be a global collaboration tool, and if we translate it wholesale into other languages that automatically means that a certain amount of collaboration will be lost [17:54] For example, if a user reads the interface in Spanish and files a bug in Spanish, how am I, a non-Spanish speaker, going to be able to deal with that bug report? [17:54] However, internationalisation would work quite well for the Answers application, and it's already built with that in mind. [17:54] < ahe> QUESTION: Do you deploy launchpad manually or are there some helper scripts or stuff like that to ease the deployment in a production environment? [17:55] It's a combination of the two. [17:55] edge.launchpad.net is deployed by a script every night, as is staging.launchpad.net. [17:55] The production servers are updated manually by our sysadmins at least once per cycle (though it's usually more than that since we discover urgent bugs that need to be fixed). [17:57] < Andphe> question: if Answers already supports other languages, how can we help to translate it? [17:58] Andphe: It's built with translation in mind, but I don't know what work needs doing to make it translatable. [17:58] Andphe: Your best bet would be to join the Launchpad Developers mailing list (http://launchpad.net/~launchpad-dev) and post a question about it there. [17:59] I think that's about all we've got time for. [18:00] If you've any further questions, please feel free to join the Launchpad Dev list (above) [18:00] And ask there. [18:00] Everyone's welcome to contribute. [18:00] Thanks very much for your time. [18:00] thanks gmb [18:00] (and hi everybody) [18:01] Hi everybody, my name is Łukasz Czyżykowski. I work on the ISD (Infrastructure Systems Development) team at Canonical. My colleague Anthony Lenton (achuni) and I will be talking about developing web sites with Django. [18:01] that's me. hi, I'm Anthony Lenton and I also work at ISD. [18:01] this talk is mostly going to be given by Łukasz. [18:01] I'm going to be here to answer questions, and maybe interrupt Łukasz just to bother. [18:01] For the purpose of this tutorial we'll build a simple web application, using most bits of Django. Our app will be a partial Twitter/Identi.ca clone. [18:01] All the code for this project is accessible at https://launchpad.net/twitbuntu; you can either download it and look at the revisions, which move the app forward in the same order as this session is planned, [18:02] or just follow the IRC session, as all the required code will be presented here. [18:02] I assume that everybody is using Jaunty and has Django installed. If you still don't have it: [18:02] $ sudo apt-get install python-django [18:02] will do the trick. [18:03] The first step is to create a Django project: [18:03] (as usual, or in case you've just arrived: if you have questions, shout them out in #ubuntu-classroom-chat) [18:03] $ django-admin startproject twitbuntu [18:03] $ cd twitbuntu [18:04] A project is a container for database connection settings, your web server setup and stuff like that.
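(For anyone checking along at home: the freshly created project directory should look roughly like this; the listing simply reflects the files described next, nothing more.)

    $ ls
    __init__.py  manage.py  settings.py  urls.py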
[18:04] Now twitbuntu contains some basic files: [18:05] - manage.py: you'll use this script to invoke various Django commands on this project, [18:05] - settings.py: here are all the settings connected to your project, [18:05] - urls.py: the mapping between the URLs of your application and Python code, either created by you or already existing, [18:05] - __init__.py: which marks this directory as a Python package. [18:05] Next we'll set up the database connection. [18:06] Open the settings.py file in your favourite text editor. [18:06] For the purpose of this tutorial we'll use a very simple sqlite database; it holds all of its data in one file and doesn't require any fancy setup. Django can of course use other databases, MySQL and PostgreSQL being the most popular choices. [18:06] Enter sqlite3 in the DATABASE_ENGINE setting. The line should look like this: [18:06] DATABASE_ENGINE = 'sqlite3' [18:06] [18:06] Also set the file name in DATABASE_NAME to db.sqlite (it can be whatever you like): [18:06] DATABASE_NAME = 'db.sqlite' [18:07] To test that those settings are correct we'll issue the syncdb management command. It creates any missing tables in the database, which in our case is exactly what we want: [18:07] $ ./manage.py syncdb [18:07] If everything went right you should see a bunch of "Creating table" messages and a prompt about creating a superuser. We want to be able to administer our own application, so it's good to create one. Answer yes to the first question and proceed with the other questions. [18:07] My answers to those questions are: [18:08] Would you like to create one now? (yes/no): yes [18:08] Username (Leave blank to use 'lukasz'): admin [18:08] E-mail address: admin@example.com [18:08] Password: admin [18:08] Password (again): admin [18:08] The email address is not too important at this stage; [18:08] later you can configure Django so that you automatically receive crash reports at that address, but that's something more advanced. [18:09] The next bit is to create an application, somewhere to put your code. By design you should separate different site modules into their own applications; that way it's easier to maintain them later, and if you create something which can be used outside of your project you can share it with others without necessarily putting all of your project out there. This is pretty popular in the Django community, so it's always a good idea to check [18:09] whether somebody hasn't already created something useful. That way you can save yourself reinventing the wheel. [18:10] For this there's the startapp command: [18:10] $ ./manage.py startapp app [18:10] In this simple case we're calling our application just 'app'. [18:11] This creates an 'app' directory in your project. Inside it there are files created for you by Django: [18:11] - models.py: is where your data model definitions go, [18:11] - views.py: the place to hold your views code. [18:12] Maybe some short definitions of terms here. Django is sort of a Model/View/Controller framework (not really, according to its creators). Basically it separates all your code into three separate layers, and in principle only code from a layer above should get access to the one below. [18:12] The first layer is the models, where the data definitions live. That's the thing you put into the models.py file. You define the objects your application will manipulate. [18:12] Above that are the controllers, which in Django are called views. This code responds to requests from users, manipulates the data and sends it to be rendered by the last layer, which would be [18:13] the view in the standard world, but here that role is taken by templates.
[18:13] The next bit is to add this new application to the list of installed apps in settings.py; that way Django knows which parts your application is assembled from. [18:14] In the settings.py file find the variable named INSTALLED_APPS [18:14] Add to the list: 'twitbuntu.app' [18:14] It should look like this: [18:14] INSTALLED_APPS = ( [18:14] 'django.contrib.auth', [18:14] 'django.contrib.contenttypes', [18:14] 'django.contrib.sessions', [18:14] 'django.contrib.sites', [18:14] 'twitbuntu.app', [18:14] ) [18:15] You can see that there are already things here, mostly things giving your project ready-built functionality. [18:15] The names are pretty descriptive so you shouldn't have a problem figuring out what each bit does. [18:16] Now we start making the actual application. The first thing is to create a model which will hold user updates. Open the file app/models.py [18:17] You define models in Django by defining classes with special attributes. Django can translate these into table definitions and create the appropriate structures in the database. [18:17] For now add the following lines to the end of the models.py file: http://paste.ubuntu.com/263851/ [18:17] (btw, bigger chunks of code are on pastebin) [18:17] Now some explanations. You can see that you define model attributes by using data types defined in the django.db.models module. The full list of types and the options they can take is documented here: http://docs.djangoproject.com/en/dev/ref/models/fields/#ref-models-fields [18:18] The ForeignKey bit links our model with the User model supplied by Django; [18:18] that way we can have multiple users with their updates on our site. [18:19] Another bit of magic is the auto_now_add setting of the DateTimeField: it means that whenever we create a new instance of this model, this field will be set to the current date and time. That way we don't have to worry about it. There's also an auto_now option, which sets such a field to now whenever the instance is modified. [18:19] The class Meta bit is the place for settings for the whole model. In this case we are saying that whenever we get a list of updates we want them ordered by the created_at field in descending order (by default the order is ascending, and the '-' prefix reverses it, so the newest updates come first). [18:20] Now we have to synchronise the data definition in models.py with what is in the database. For that we'll use the already familiar command: syncdb [18:20] $ ./manage.py syncdb [18:20] You should get the following output: [18:20] Creating table app_update [18:20] Installing index for app.Update model [18:20] A great thing about Python is its interactive shell. You can easily use it with Django. [18:20] You start it by running [18:20] $ ./manage.py shell [18:21] This runs an interactive shell configured to work with your project. From here we can play with our models and create some updates. [18:21] >>> from django.contrib.auth.models import User [18:21] >>> admin = User.objects.get(username='admin') [18:21] Here 'admin' is whatever you chose when asked for the admin username. [18:22] The first thing is to get hold of our admin user, because every update belongs to someone. You can see that we used the 'objects' attribute of the model class.
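(The paste linked above is no longer online. The Update model we're about to import most likely looks something like this sketch; the owner, status and created_at field names are the ones used elsewhere in the session, while the max_length value is an assumption:)

    from django.db import models
    from django.contrib.auth.models import User

    class Update(models.Model):
        # every update belongs to a Django user
        owner = models.ForeignKey(User)
        # the text of the status update; the exact max_length is assumed
        status = models.CharField(max_length=140)
        # filled in automatically when the instance is first saved
        created_at = models.DateTimeField(auto_now_add=True)

        class Meta:
            # newest updates first; '-' reverses the default ascending order
            ordering = ['-created_at']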
[18:22] >>> from twitbuntu.app.models import Update [18:22] >>> update = Update(owner=admin, status="This is first status update") [18:22] At this point we have an instance of the Update model, but it's not saved in the database; [18:23] you can see that by checking the update.id attribute. [18:23] Currently it's None [18:23] >>> update.save() [18:24] Now, when you've saved it in the database, it has an id: [18:24] >>> update.id [18:24] 1 [18:24] That's only one of many ways to create instances of the models; this one is the easiest. [18:24] You can check that update.created_at was set properly to the current date: [18:24] >>> update.created_at [18:24] datetime.datetime(2009, 9, 2, 12, 23, 58, 659426) [18:25] You can also see that you get back a nice Python datetime object instead of having to process whatever the database returned for that field. [18:25] Now that we have some data in the database, it's time to display it to the user somehow. [18:25] The first bit needed for a view to work is to tell Django which URL the view should respond to. For that we have to modify the urls.py file. [18:25] Open it and add the following line just under the line with 'patterns' in it, so the whole bit should look like this: [18:25] urlpatterns = patterns('', [18:25] (r'^$', 'twitbuntu.app.views.home'), [18:25] ) [18:26] The first bit there is the regular expression to which this view will respond; in our case this is the empty string (^ means the beginning of the string and $ means the end, so there's nothing in it). The second bit is the name of the function which will be called. [18:26] Now go to the app/views.py file. This is where all the code responsible for responding to users' requests will live. [18:26] The first bit is to import the required bit from Django: [18:26] from django.http import HttpResponse [18:26] Now we can define our (very simple) view function: [18:26] def home(request): [18:26] return HttpResponse("Hello from Django") [18:27] As you can see, every view function has at least one argument, the request object, which contains lots of useful information about the request; for our simple example we won't use it for now. [18:27] After that we can start our app and check if everything is correct; to do that run: [18:27] $ ./manage.py runserver [18:28] If everything went ok you should see the following output: [18:28] Validating models... [18:28] 0 errors found [18:28] [18:28] Django version 1.0.2 final, using settings 'twitbuntu.settings' [18:28] Development server is running at http://127.0.0.1:8000/ [18:28] Quit the server with CONTROL-C. [18:29] As you can see, Django first checks if the model definitions are correct and then starts our application. You can access it by going to http://127.0.0.1:8000/ in your browser of choice. What you should see is the "Hello from Django" text. [18:29] It would be nice to be able to log in to our own application; fortunately Django already has the required pieces inside and the only thing left for us is to hook them up. [18:29] Everything else was already set up when we first ran the syncdb command.
[18:29] Add the following two lines to the list of urls: [18:30] (r'^accounts/login/$', 'django.contrib.auth.views.login'), [18:30] (r'^accounts/logout/$', 'django.contrib.auth.views.logout'), [18:31] The next bit is to create a templates directory and enter its location in the settings.py file: [18:31] $ mkdir templates [18:31] In the settings.py file find the TEMPLATE_DIRS setting: [18:31] import os [18:31] TEMPLATE_DIRS = ( [18:31] os.path.join(os.path.dirname(__file__), 'templates'), [18:31] ) [18:31] This will ensure that Django can always find the templates directory even if the current working directory is not the one containing the application (for example when run from the Apache web server). [18:32] Next is to create a registration dir in the templates directory and put a login.html file there with the following content: http://paste.ubuntu.com/263833/ [18:32] The last bit is to set LOGIN_REDIRECT_URL in settings.py to '/': [18:32] LOGIN_REDIRECT_URL = '/' [18:32] That way, after login the user will be redirected to the '/' url instead of the default '/accounts/profile', which we don't have. [18:33] Now going to http://127.0.0.1:8000/accounts/login should present you with the login form and you should be able to log in to the application. [18:33] Now that we can log in, it's time to use that information in our views. [18:34] Django provides a very convenient way of accessing the logged-in user by adding a 'user' attribute to the request object. It's either a model instance representing the logged-in user or an instance of the AnonymousUser class, which has the same interface as the model. The easiest way to distinguish the two is by using the .is_authenticated() method on it. [18:34] Modify our home view function so it looks like this: http://paste.ubuntu.com/263835/ [18:35] That way logged-in users will be greeted and anonymous users will be sent to the login form. You should see "Hello username" at http://127.0.0.1:8000/ [18:35] Using that we can restrict access to our application. But it would be very repetitive having to enter the same if statement in every function you want to protect, so there is a more convenient way of doing the same thing. [18:35] Add the following line to the top of the views.py file: [18:35] from django.contrib.auth.decorators import login_required [18:35] This decorator does exactly what we have done manually, but with less code that doesn't hide what the view is doing; now we can shorten it to: http://paste.ubuntu.com/263836/ [18:36] Test the view in your browser; nothing should have changed. [18:37] Now that we have a reliable way of getting to the user instance, we can return all of the user's updates. [18:37] When designing the Update model we used the ForeignKey type, which creates a connection between two models. Later, when we created updates, we used a user instance as the value of this attribute. That's one way of accessing this data (every update has an owner attribute). Because of the ForeignKey pointing to the User model, every instance of User also got an update_set attribute which contains every update assigned to that user. [18:37] A clean way of getting all a user's updates is: [18:37] >>> admin.update_set.all() [18:37] [<Update: Update object>] [18:38] But we can also get to the same information from the Update model: [18:38] >>> Update.objects.filter(owner=admin) [18:38] (btw, those are only examples, you don't have to type them) [18:38] Both of those will return the same data, only the first way is cleaner IMHO. [18:38] That's just a simple example of a way to get data from the database.
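(A couple of extra query examples, not from the session itself, just to show that the same API supports richer lookups; they assume the shell session above:)

    >>> Update.objects.filter(status__contains='first')   # field lookups, e.g. substring matching
    >>> Update.objects.filter(owner=admin).count()         # count the matching rows
    >>> admin.update_set.all().order_by('created_at')      # override the model's default ordering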
You have far greater power over that aspect of your application, but as time is short for us I won't get much deeper into it. [18:39] Now that we know how to get to the necessary data, we can send it to the browser by modifying the home function: http://paste.ubuntu.com/263837/ [18:39] Here we set the content type of the response to text/plain so we can see angle brackets in the output; without that, by default, the browser would hide them. [18:39] Now that we have data we can work on dressing it up a little bit. For that we'll use templates. [18:39] Templates in Django have their own syntax; it's really simple, as it was designed to be used by designers, people not used to programming languages. [18:40] We already have templates configured due to the requirements of the auth system, so it will be very easy to get started. [18:40] First we need a template we can use. Create the file templates/home.html and put the following content in it: http://paste.ubuntu.com/263839/ [18:40] Every tag in the Django template language is contained between {% %} elements, and every opening tag is closed by adding end(thing) (like endfor in this case). [18:40] To output the content of a variable we use the {{ }} syntax. We can also use something called filters, using | to pass a value through the named filter. We're using that to format the date as a nice text description of the time passed. [18:41] That's the template; now let's write the view code to use it. [18:41] There's a very convenient function for using templates in views: render_to_response [18:41] add the following line to the top of the views.py file [18:41] from django.shortcuts import render_to_response [18:42] This function takes two arguments: the name of the template to render (usually its file name) and a dictionary of arguments to pass to the template. With this in mind, our home view looks like this: http://paste.ubuntu.com/263840/ [18:42] Now, running $ ./manage.py runserver, you can see that the page in the browser has a proper title. [18:43] It would be really nice to be able to add status updates from the web page. For that we need a form. There are a couple of ways of doing that in Django, but we'll show the way which is most useful for forms used to create or modify instances of the models. [18:43] By convention, form definitions go in the forms.py file in your app directory. Put the following bits in there: http://paste.ubuntu.com/263841/ [18:43] This is a very simple form which has only one field in it. [18:43] Now in views.py we need to instantiate this form and pass it to the template. After the modifications this file should look like this: http://paste.ubuntu.com/263842/ [18:43] The last bit is to display this form in the template. Add this bit just after the tag: [18:44]
<form method="post" action=""> [18:44] <table> [18:44] {{ form }} [18:44] </table> [18:44] <input type="submit" value="Update" /> [18:44] </form>
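(The template pastes are offline and the log has eaten the HTML tags around {{ form }} above, so the form markup shown there and the fuller sketch below are reconstructions. Here is roughly what home.html might look like at this point: the {% for %} loop, the {{ }} output syntax and the use of a date filter match what was described; the overall page structure, the 'updates' variable name and the choice of the timesince filter are assumptions:)

    <html>
      <head><title>Twitbuntu</title></head>
      <body>
        <h1>Twitbuntu</h1>
        <form method="post" action="">
          <table>{{ form }}</table>
          <input type="submit" value="Update" />
        </form>
        <ul>
        {% for update in updates %}
          <li>{{ update.status }} ({{ update.created_at|timesince }} ago)</li>
        {% endfor %}
        </ul>
      </body>
    </html>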
[18:44] Now that we have the form properly displayed, it would be useful to actually create updates based on the data entered by the user. That requires a little bit of work inside our home view. Fortunately this is pretty straightforward to do: http://paste.ubuntu.com/263843/ [18:44] The first thing is to check whether we're processing a POST or GET request; if POST, that means the user pressed the 'Update' button on our form and we can start processing the submitted data. [18:44] All POST data is conveniently gathered by Django in a dictionary in request.POST. For this case it's not really critical to know what exactly is sent; UpdateForm will handle that. The bit with instance= is there to automatically set the update's owner; without it the form would not be valid and nothing would be saved in the database. [18:45] Checking if the form is valid is very simple: just invoke the .is_valid() method on it. If True is returned then we save the form to the database, which returns an Update instance. It's not really needed anywhere, but I wanted to show you that you can do something with it. [18:45] The last bit is to create an empty form, so that the status field will be clear, ready for the next update. [18:45] If you try to send an update without any content you'll see an error message displayed: 'This field is required'. All of that is automatically handled by the forms machinery. [18:45] It's nice to be able to see our own status updates, but currently they're only viewable by the logged-in user. [18:46] To implement this feature we'll start by adding a new entry in urls.py. Add the following entry there: [18:46] (r'^(?P<username>\w+)$', 'twitbuntu.app.views.user'), [18:46] This bit, (?P<username>), is a Python extension to regular expressions; it names a bit of the matched string. Using this name will enable us to write a pretty convenient view function in views.py [18:46] First we'll import a very convenient shortcut function, get_object_or_404, which gets you a model instance if it exists in the database or returns a 404 Page Not Found page if such an object doesn't exist. [18:47] Then add the user function as shown here: http://paste.ubuntu.com/263844/ [18:48] The last bit is to create the 'user.html' template which will display this data properly. [18:48] Quickly doing this yields something like this: http://paste.ubuntu.com/263845/ [18:48] Now you can go to http://127.0.0.1:8000/username and see your updates. [18:49] (substitute username with the one you've chosen) [18:49] It's all nice with templates, but as you have noticed there are common things in both of our templates. We now have two of them, but imagine a project with tens of templates; making a change to some common thing would be really painful in such a situation. [18:49] Fortunately Django is designed to help in this area too. The feature we're talking about now is template inheritance. The idea is that you can have a base template which defines holes to be filled in by the more specific templates. [18:50] Those holes are called blocks in Django. You define them like this: [18:50] {% block some_block %} [18:50] Block content [18:50] {% endblock %} [18:50] When a template which inherits from such a base template provides content for a block, it replaces anything defined before; but if you omit that block, the default content is rendered to the end user. [18:50] We'll start by defining our base template in the templates/base.html file: http://paste.ubuntu.com/263846/ [18:50] This has some styling introduced, so our app will not look so ugly (not that the improvement is dramatic ;D).
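(Looking back at the form handling above before we move on: the forms.py and views.py pastes are offline, so here is a sketch of what they plausibly contain. UpdateForm and get_object_or_404 are named in the session; the 'updates' and 'viewed_user' context variable names and the exact field list are assumptions:)

    # forms.py - a ModelForm exposing only the status field
    from django import forms
    from twitbuntu.app.models import Update

    class UpdateForm(forms.ModelForm):
        class Meta:
            model = Update
            fields = ('status',)

    # views.py - the home view handling both GET and POST
    from django.contrib.auth.decorators import login_required
    from django.contrib.auth.models import User
    from django.shortcuts import render_to_response, get_object_or_404
    from twitbuntu.app.forms import UpdateForm
    from twitbuntu.app.models import Update

    @login_required
    def home(request):
        if request.method == 'POST':
            # instance= pre-sets the owner, so the saved Update belongs to the logged-in user
            form = UpdateForm(request.POST, instance=Update(owner=request.user))
            if form.is_valid():
                update = form.save()
                form = UpdateForm()  # fresh, empty form, ready for the next update
        else:
            form = UpdateForm()
        return render_to_response('home.html',
                                  {'form': form, 'updates': request.user.update_set.all()})

    # views.py - the per-user page; returns a 404 if no such user exists
    def user(request, username):
        viewed_user = get_object_or_404(User, username=username)
        return render_to_response('user.html',
                                  {'viewed_user': viewed_user, 'updates': viewed_user.update_set.all()})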
[18:50] The next bit is to update the home.html and user.html templates to use this base template. [18:51] The most important bit is the extends tag, which tells Django which template is the base for the current one. For brevity I'll present only the user.html template, and you should be able to modify home.html by yourself: http://paste.ubuntu.com/263847/ [18:51] For the last bit I've left the Django admin interface, a web application which enables you to manage the data in the database without needing to use the Python shell or database access tools. It's a distinctive feature of Django which really speeds up web application development. [18:52] The first bit is to add admin as an installed app in your settings.py file: [18:52] INSTALLED_APPS = ( [18:52] ... [18:52] 'django.contrib.admin', [18:52] ) [18:52] In urls.py uncomment this at the top of the file, so it looks like this: [18:52] from django.contrib import admin [18:52] admin.autodiscover() [18:52] The last bit is to add the admin urls to the list of patterns: [18:52] (r'^site-admin/(.*)', admin.site.root), [18:52] In this case I changed the default /admin/ url into /site-admin/, as my user is named admin and there would be a collision (the order of patterns in the urls.py file matters here, as ones higher up have precedence). [18:53] The last bit is to invoke syncdb, which will create some tables necessary for the admin to work properly. [18:53] Now when you go to http://127.0.0.1:8000/site-admin/ you should see the admin interface for your database. [18:53] Right now pretty much the only thing you can do with the admin is manage users; we don't see our own model there. For that we have to tell Django how we want our data to be displayed. [18:53] By convention, the bits connected with the admin interface go in the admin.py file in your app directory. First enter its content and then I'll describe exactly what is there. [18:54] from django.contrib import admin [18:54] from twitbuntu.app.models import Update [18:54] [18:54] admin.site.register(Update) [18:54] This is the simplest possible way of telling Django about our model. After that you should be able to see it in the Django admin interface. [18:55] Now you're able to add new Updates and delete/change existing ones. [18:55] But this is really simple and doesn't show us its full potential. We'll change one bit to show how easy it is to customise it. [18:55] For that we'll create an UpdateAdmin class which will hold all the customisations: http://paste.ubuntu.com/263848/ [18:56] A short description of the fields: [18:56] list_display is the list of model fields which should be displayed as columns on the list of objects, [18:56] search_fields is the list of fields which will be checked when you try to search using the search box, [18:56] list_filter is the list of fields which will create nifty right-side filters for your objects, which really speeds up looking through big sets of data. [18:58] And that's all I prepared for today's session. You can find detailed information about every aspect of Django in the documentation. The Django documentation is great; you can almost always find everything you need there: http://docs.djangoproject.com/. The tutorial presented there is also very good, and goes into much more detail than we had time for today. [18:58] (we're pretty much out of time.... questions?) [19:00] k people, thanks! [19:00] Thank you everybody for your time, hope you enjoyed it [19:00] and I hope you have some info you can start with [19:04] Hi, all. Welcome to Stuart's House Of Desktop Couch Knowledge. [19:04] I'm Stuart Langridge, and I hack on the desktopcouch project!
[19:04] Over the next hour I'm going to explain what desktopcouch is, how to use it, who else is using it, and some of the things you will find useful to know about the project. [19:04] I'll talk for a section, and then stop for questions. [19:05] Please feel free to ask questions in #ubuntu-classroom-chat, and I'll look at the end of each section to see which questions have been posted. Ask at any time; you don't have to wait until the end of a section. [19:05] You should prefix your question in #ubuntu-classroom-chat with QUESTION: so that I notice it :-) [19:05] So, firstly, what's desktopcouch? [19:05] Well, it's giving every Ubuntu user a CouchDB on their desktop. [19:05] CouchDB is an Apache project to provide a document-oriented database. If you're familiar with SQL databases, where you define a table and then a table has a number of rows and each row has the same columns in it...this is not like that. [19:06] Instead, in CouchDB you store "documents", where each document is a set of key/value pairs. Think of this like a Python dictionary, or a JSON document. [19:06] So you can store one document like this: [19:06] { "name": "Stuart Langridge", "project": "Desktop Couch", "hair_colour": "red" } [19:06] and another document which is completely different: [19:06] { "name": "Stuart Langridge", "outgoings": [ { "shop": "In and Out Burger", "cost": "$12.99" } , { "shop": "Ferrari dealership", "cost": "$175000" } ] } [19:07] The interface to CouchDB is pure HTTP. Just like the web. It's RESTful, for those of you who are familiar with web development. [19:07] This means that every programming language already knows how to speak it, at least in basic terms. [19:07] CouchDB also comes with an in-built in-browser editor, so you can look at and browse around and edit all the data stored in it. [19:07] So, the desktopcouch project is all about providing these databases for every user, so each user's applications can store their data all in one place. [19:07] You can have as many databases in your desktop Couch as you or your applications want, and storage is unlimited. [19:08] Desktop Couch is built to do "replication", synchronizing your data between different machines. So if you have, say, Firefox storing your bookmarks in your desktop Couch on your laptop, those bookmarks could be automatically synchronized to your Mini 9 netbook, or to your desktop computer. [19:08] They can also be synchronized to Ubuntu One, or another running-in-the-cloud service, so you can see that data on the web, or synchronize between two machines that aren't on the same network. [19:08] So you've got your bookmarks everywhere. Your own personal del.icio.us, but it's your data, not locked up solely on anyone else's servers. [19:08] Imagine if your apps stored their preferences in desktop Couch. Santa Claus brings you a new laptop, you plug it in, pair it with your existing machine, and all your apps are set up. No work. [19:09] But sharing data between machines is only half the win. The other half is sharing data between applications. [19:09] I want all my stuff to collaborate. I don't want to have to "import" data from one program to another, if I switch from Thunderbird to Evolution to KMail to mutt. [19:09] I want any application to know about my address book, to allow any application to easily add "send this to another person", so that I can work with people I know. 
[19:09] I want to be able to store my songs in Banshee and rate them in Rhythmbox if I want -- when people say that the Ubuntu desktop is about choice, that shouldn't mean choosing between different incompatible data silos. I can choose one application and then choose another, you can choose a third, and we can all cooperate on the data. [19:09] My choice should be how I use my applications, and how they work; I shouldn't have to choose between underlying data storage. With apps using desktopcouch I don't have to. [19:09] All my data is stored in a unified place in a singular way -- and I can look at my data any time I want, no matter which application put it there! Collaboration is what the open source desktop is good at, because we're all working together. It should be easy to collaborate on data. [19:09] That's a brief summary of what desktopcouch *is*: any questions so far before we get on to the meat: how do you actually Use This Thing? [19:10] mandel_macaque (hey, mandel :)) -- that's what the desktopcouch mailing list is for, so people can get together and talk about what should be in a standard record [19:11] there's no ivory tower which hands down standard formats from the top of the mountain :) [19:11] mandel_macaque's question was: will there be a "group" that will try to define standard records? [19:12] QUESTION: how does desktopcouch differ from/replace gconf? [19:12] mhall119|work, desktopcouch is for storing all sorts of user data. It's not just about preferences, although you could store preferences in it [19:13] QUESTION: What about performance? Why would Banshee/rhythmbox switch to a slower way to store metadata? [19:13] sandy|lu1k, performance hasn't really been an issue in our testing, and couchdb provides some serious advantages over existing things like sqlite or text files, like replication and user browseability [19:14] QUESTIONS: Is desktopcouch creating the required infrastructure to allow user sync, or should applications take care of that? [19:14] desktopcouch is providing infrastructure and UI to "pair" machines and handle all the replication; applications do not have to know or worry about data being replicated to your other computers [19:14] QUESTION: can you store media like images, audio and video? [19:15] jopojop, not really -- couchdb is designed for textual, key/value pair, dictionary data, not for binary data [19:16] it's possible to store binary data in desktopcouch, but I'd suggest not importing your whole mp3 collection into it; store the metadata. The filesystem is good at handling binary data [19:16] QUESTION the real performance concern that media apps have is query speed for doing quick searches [19:17] sandy|lu1k, that's something we'd really like to see more experimentation with. couchdb's views architecture makes it really, really quick for some uses, [19:17] ok, let's talk about how to use it :) [19:17] The easiest way to use desktopcouch is from Python, using the desktopcouch.records module. [19:17] This is installed by default in Karmic. [19:17] An individual "document" in desktop Couch is called a "record", because there are certain extra things that are in a record over and above what stock CouchDB requires, and desktopcouch.records takes care of this for you. [19:18] First, a bit of example Python code! This is taken from the docs at /usr/share/doc/python-desktopcouch-records/api/records.txt. 
[19:18] >>> from desktopcouch.records.server import CouchDatabase [19:18] >>> from desktopcouch.records.record import Record [19:18] >>> my_database = CouchDatabase("testing", create=True) [19:18] # get the "testing" database. In your desktop Couch you can have many databases; each application can have its own with whatever name it wants. If it doesn't exist already, this creates it. [19:18] >>> my_record = Record({ "name": "Stuart Langridge", "project": "Desktop Couch", "hair_colour": "red" }, record_type='http://example.com/testrecord') [19:18] # Create a record, currently not stored anywhere. Records must have a "record type", a URL which is unique to this sort of record. [19:18] >>> my_record["weight"] = "too high!" [19:18] # A record works just like a Python dictionary, so you can add and remove keys from it. [19:19] >>> my_record_id = my_database.put_record(my_record) [19:19] # Actually save the record into the database. Records each have a unique ID; if you don't specify one, the records API will choose one for you, and return it. [19:19] >>> fetched_record = my_database.get_record(my_record_id) [19:19] # You can retrieve records by ID [19:19] >>> print fetched_record["name"] [19:19] "Stuart Langridge" [19:19] # and the record you get back is a dictionary, just like when you're creating it. [19:20] That's some very basic code for working with desktop Couch; it's dead easy to save records into the database. [19:20] You can work with it like any key/value pair database. [19:20] And then desktopcouch itself takes care of things like replicating your data to your netbook and your desktop without you having to do anything at all. [19:20] And the users of your application can see their data directly by using the web interface; no more grovelling around in dotfiles or sqlite3 databases from the command line to work out what an application has stored. [19:20] You can get at the web interface by browsing to file:///home/aquarius/.local/share/desktop-couch/couchdb.html in a web browser, which will take you to the right place. [19:21] (er, if your username is aquarius you can, anyway :)) [19:21] I'll stop there for some questions about this section! [19:21] ah, people in the chat channel are trying it out. YOu might need to install python-desktopcouch-records [19:22] the version in karmic right now has a couple of strange outstanding bugs which we're working on which might make it a little difficult to follow along [19:22] QUESTION: (about views) which is the policy for design documents (views), one per app? [19:23] mandel_macaque, no policy, thus far. Create whichever design docs you want to -- having one per app sounds sensible, but an app might want more than one [19:23] mandel_macaque, this is an ideal topic to bring up for discussion on the mailing list :) [19:23] QUESTION: Does desktopCouch/CouchDB provide a means controls access to my data on a per application basis? I would not necessarily want any application to be able to access any data - I might want to silo two mail apps to different databases, etc. [19:24] test1, at the moment it does not (in much the same way as the filesystem doesn't), but it would be possible to build that in [19:24] QUESTION: how does the HTML interact with couchdb? Javascript? [19:25] mhall119|work, (I assume you mean: how does the HTML web interface for browsing your data interact with couchdb?) yes, JavaScript [19:25] QUESTION: so when I do CRUD, it's done locally, then replicated on the web DB? 
(and replicated locally from the web some other time to keep sync?) [19:25] AntoineLeclair, yes, broadly [19:25] QUESTION: So far, this sounds a bit like the registry which we all know and hate from the Windows world: Do you really think all applications should put their data into one monolithic database, which in the end gets messed up? [19:27] F30, having data in one place allows you to do things like replicate that data and make generalisations about it. We have the advantage that desktopcouch is built on couchdb, which is not only dead robust but also open source, unlike the registry :) [19:27] In terms of replication - does CouchDb automate data merging (i.e. how does it handle conflict resolution) if I were to modify my bookmarks on multiple machines before replication took place? [19:28] test1, couch's approach is "eventual consistency". In the case of actual conflicts, desktopcouch stores both versions and marks them as conflicting; it's up to the application that uses the data to resolve those conflicts in some way [19:28] perhaps by asking the user, or applying some algorithmic knowledge [19:29] the application knows way more about what the data is than couch itself does [19:29] Next, on to views. [19:29] Being able to retrieve records one at a time is nice, but it's not what you want to do most of the time. [19:30] To get records that match some criteria, use views. [19:30] Views are sort of like SQL queries and sort of not. Don't try and think in terms of a relational database. [19:30] The best reference on views is the CouchDB book, available for free online (and still being worked on): the views chapter is at http://books.couchdb.org/relax/design-documents/views [19:30] Basically, a view is a JavaScript function. [19:30] When you request the records from a view, desktopcouch runs your view function against every document in the database and returns the results. [19:31] So, to return all documents with "name": "Stuart Langridge", the view function would look like this: [19:31] function(doc) { if (doc.name == "Stuart Langridge") emit(doc._id, doc) } [19:31] This sort of thinking takes a little getting used to, but you can do anything you want with it once you get into it [19:31] desktopcouch.records helps you create views and request them [19:31] # creating a view [19:31] >>> map_js = """function(doc) { emit(doc._id, null) }""" [19:31] >>> db.add_view("name of my view", map_js, None, "name of the view container") [19:31] # requesting the records that the view returns [19:31] >>> result = db.execute_view("name of my view", "name of the view container") [19:32] The "view container", called a "design doc", is a collection of views. So you can group your views together into different design docs. [19:32] (hence mandel_macaque's question earlier about whether each app that uses the data in a database should have its own design doc(s). I suggest yes.) [19:32] Advanced people who know about map/reduce should know that this is a map/reduce approach. [19:33] You can also specify a reduce function (that's the None parameter in the add_view function above) [19:33] The CouchDB book has all the information you'll need on views and the complexities of them. [19:33] Questions on views? :-) [19:33] QUESTION: taking as an example the contacts record, when we have to perform a diff we will have to take into account the application_annotations key, which is shared among apps. How can my app know what to do with other app data?
[19:34] (bit of background for those not quite as au fait with desktopcouch: each desktopcouch record has a key called "application_annotations", and under that there is a key for each application that wants to store data specific to that application about this record) [19:35] (so Firefox, for example, while storing a bookmark, would store url and title as top-level fields, and the Firefox internal ID of the bookmark as application_annotations.Firefox.internal_id or similar) [19:35] mandel_macaque, what you have to do with data in application_annotations is preserve it. You are on your honour to not delete another app's metadata :) [19:35] QUESTION: might it be better to standardize on views, rather than records? So, Evolution and TBird might have their own database, with their own Contact record, but a single "All Contacts" view would aggregate both? [19:36] mhall119|work, the idea behind collaboration is that everyone co-operates on the actual data rather than views. So it's better if each app stores the data in a standard format on which they collaborate, and then has its own views to get that data how *it* wants. [19:38] mandel_macaque: what if I wanted to wipe all Firefox data because I want a fresh start? right now, I can just delete ~/.mozilla/firefox/myProfile [19:38] I'm concerned that as a power user, I lose direct access [19:38] FND, you can delete the firefox database from the web interface, or from the command line. "curl -X delete http://localhost:5984/firefox" [19:39] or using desktopcouch.records, which is nicer -- python -c "from desktopcouch.records.server import CouchDatabase; db = CouchDatabase('firefox'); db.delete()" [19:39] QUESTION: Wouldn't deleting your profile simply reflect as deleted records on the CouchDB instance? [19:40] mgunes, how deletions affect applications that used the deleted data depends on the application. For example, there's obviously a distinction between "I deleted this because I want to create a new one" and "I deleted this but I want to be able to get it back later" [19:41] the couchdb upstream team are currently working on having full history for all records, which will make this sort of work easier [19:41] QUESTION: if collaboration is to be done on the database level, there wouldn't be a "Firefox" database, there would be a "Bookmarks" database, correct? [19:41] mhall119|work, yes, absolutely. My mistake in typing, sorry :) [19:42] QUESTION: for those that don't want to mess with python of curl, will there be a CLI program for manipulating couchdb? [19:42] mhall119|work, there isn't at the moment (curl or desktopcouch.records are pretty easy, we think) but I'm sure the bunch of talented people I'm talking to could whip up a program (or a set of bash aliases) in short order if there was desire for it [19:42] :-) [19:42] that would be a cool addition to desktopcouch [19:43] QUESTION: Since couchdb stores all the version of my documents, will we have something like time machine in OS X? The data will already be there :D [19:43] mandel_macaque, certainly the infrastructure for that would be there once couchdb has full history and lots of apps are using desktopcouch [19:43] if someone writes it I'll use it ;-0 [19:44] It's not just Python, though. The Python Records API is in package python-desktopcouch-records, but there are also others. [19:45] couchdb-glib is a library to access desktopcouch from C. 
[19:45] Some example code (I don't know much about C, but rodrigo_ wrote couchdb-glib and can answer all your questions :-)) [19:45] couchdb = couchdb_new (hostname); [19:45] Create a database -> couchdb_create_database() [19:45] Delete a database -> couchdb_delete_database() [19:45] List documents in a database -> couchdb_list_documents() [19:45] More details are available for couchdb-glib at http://git.gnome.org./cgit/couchdb-glib/tree/README [19:46] We're also working on a library to access desktopcouch from JavaScript, so you can use it from things like Firefox extensions of gjs. [19:46] er, *or* gjs :) [19:46] And because the access method for desktop Couch is HTTP, it's easy to write an access library for any other language that you choose. [19:46] You can, of course, talk directly to desktop Couch using HTTP yourself, if you choose; you don't have to use the Records API, or you might be implementing an access library for Ruby or Perl or Befunge or Smalltalk or Vala or something. [19:47] desktopcouch.records (and couchdb-glib) do a certain amount of undercover work for you which you'll need to do, and to explain that I need to delve into some deeper technical detail. [19:47] Your desktop Couch runs on a TCP port, listening to localhost only, which is randomly selected when it starts up. There is a D-Bus API to get that port. [19:47] So, to find out which port you need to connect to by HTTP, call the D-Bus API. (This API will also start your desktop Couch if it's not already running.) [19:48] $ dbus-send --session --dest=org.desktopcouch.CouchDB --print-reply --type=method_call / org.desktopcouch.CouchDB.getPort [19:48] (desktopcouch.records does this for you.) [19:48] You must also be authenticated to read any data from your desktop Couch. Authentication is done with OAuth, so every HTTP request to desktopcouch must have a valid OAuth signature. [19:48] The OAuth details you need to sign requests are stored in the Gnome keyring. [19:48] (again, desktopcouch.records takes care of this for you so you don't have to think about it.) [19:49] As I said above, every record must have a record_type, a URL which identifies what sort of record this is. So, if your recipe application stores all your favourite recipes in desktopcouch, you need to define a URL as the record type for "recipe records". [19:49] That URL should point to a human-readable description of the fields in records of that type: so for a recipe document you might have name, ingredients, cooking instructions, oven heat. [19:49] The URL is there so other developers can find out what should be stored in a record, so more than one application can collaborate on storing data. [19:49] If I write a different recipe application, mine should work with records of the same format; that way I don't lose all my recipes if I change applications, and me and the developers of the first app can collaborate. [19:49] Let's take some more questions. [19:50] QUESTION: Is there any plan/need for Desktopcouch itself to talk to Midgard, for access to data stored by applications that use it? And did you investigate Midgard before going with CouchDB? 
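For anyone planning to talk to desktop Couch over raw HTTP, the getPort call shown above with dbus-send looks roughly like this from Python using dbus-python (a sketch of the same call; desktopcouch.records normally does this, plus the OAuth signing, for you):

    import dbus

    bus = dbus.SessionBus()
    couchdb_obj = bus.get_object("org.desktopcouch.CouchDB", "/")
    # same call as the dbus-send example above; starts desktopcouch if needed
    port = couchdb_obj.getPort(dbus_interface="org.desktopcouch.CouchDB")
    print "desktopcouch is listening on http://localhost:%s/" % port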
[19:50] There's been a lot of conversation between Midgard and CouchDB and desktopcouch and others [19:50] midgard implements the CouchDB replication API, so you can replicate your desktopcouch data to a midgard server [19:51] to clarify, another way to express my concerns - and I hate to be such a nagging naysayer here - is "transparency" - inspecting files is generally a whole lot more obvious than inspecting a DB (even if there's a nifty web UI) [19:51] FND, applications are increasingly using databases rather than flat files anyway, because of the advantages you get from a database -- as was asked about above, media players are using sqlite DBs and so on for quick searchability and indexability [19:52] QUESTION: is CouchDB an ubuntu-only project or will it be available on fedora or my mobile phone? [19:52] couchdb runs, like, everywhere. It's available on Ubuntu, Fedora, other Linux distros, Windows, OS X... [19:53] the couchdb upstream project love the idea of things like mobile phones running couch, and they're working on that :) [19:54] desktopcouch, which sets up an individual couchdb for every user, is all written in Python and doesn't do anything Ubuntu-specific, so it should be perfectly possible to run it on other Linux distros (and there's a chap looking at getting it running on fedora) [19:54] and since it's all Python it should be possible to have it on other platforms too, like Windows or the Mac. [19:54] QUESTION: by making applications rely on CouchDB, isn't there a risk of diverging from other distros? [19:55] desktopcouch isn't Ubuntu-specific. There was lots of interest at the Gran Canaria Desktop Summit this year [19:57] There is an Even Easier way to have applications use desktop Couch for data storage. [19:57] One of the really cool things in karmic is Quickly: https://wiki.ubuntu.com/Quickly [19:57] quickly helps you make applications...quickly. :-) [19:57] and apps created with Quickly use desktopcouch for data storage. [19:57] If you haven't seen Quickly, it's a way of easily handling all the boilerplate stuff you have to do to get a project going; "quickly create ubuntu-project myproject" gives you a "myproject" folder containing a Python project that works but doesn't do anything. [19:57] So you can concentrate on writing the code to do what you want, rather than boilerplate to get started. [19:57] It's dead neat :) [19:57] Anyway, quickly projects are set up to save application preferences into desktop Couch by default. So you get the advantages of using desktop Couch (replication, browsing of data) for every quickly project automatically. [19:57] The quickly guys have also contributed CouchGrid, a gtk.TreeView which is built on top of desktopcouch, so that it will display records from a desktopcouch database. [19:58] "quickly tutorial ubuntu-project" has lots of information about CouchGrid and how to use it. [19:58] Any questions about quickly? (I can't guarantee to be able to answer them, but #quickly is great for this.) [19:58] I'm going to race through the last section since I have 3 mins, and then try and answer the last few questions :) [19:58] So, who's already using desktopcouch? [19:58] Quickly, as mentioned, uses desktopcouch for preferences in projects it creates. 
[19:58] The Gwibber team are working on using desktopcouch for data storage [19:58] Bindwood (http://launchpad.net/bindwood) is a Firefox extension to store bookmarks in desktopcouch [19:58] Macaco-contacts is transitioning to work with desktopcouch for contacts storage (http://www.themacaque.com/?p=248) [19:58] (perhaps :-)) [19:58] Evolution can now, in the evolution-couchdb package, store all contacts in desktopcouch [19:58] Akonadi, the KDE project's contacts and PIM server, can also store contacts in desktopcouch [19:58] These last three are interesting, because everyone's collaborating on a standard record type and record format for "contacts", so Evolution and Akonadi and Macaco-contacts will all share information. [19:58] So if you switch from Gnome to KDE, you won't lose your address book. [19:59] I'm really keen that this happens, that applications that store similar data (think of mail clients and addressbooks, as above, or media players storing metadata and ratings, for example) should collaborate on standard formats. [19:59] Details about the desktopcouch project can be found at http://www.freedesktop.org/wiki/Specifications/desktopcouch [19:59] There's a mailing list at http://groups.google.com/group/desktop-couchdb [19:59] The code is developed in Launchpad: http://launchpad.net/desktopcouch [19:59] The best place to ask questions generally is the #ubuntuone channel; all the desktopcouch developers are hanging out there [19:59] The best place to ask questions that you have right now is...right now, so go ahead and ask in #ubuntu-classroom-chat, and I'll answer any other questions you have! [19:59] in the two minutes I have remaining ;-) [19:59] QUESTION: what about akonadi? is there competition? [20:00] akonadi has a desktopcouch back end for contacts, which was demonstrated at the Gran Canaria Desktop Summit -- it's dead neat to save a contact with Akonadi and then load it with Evolution :) [20:00] aquarius: QUESTION: does that mean that ubuntuone also uses it? [20:01] desktopcouch lets you replicate your data between all your machines on your network -- Ubuntu One has a cloud service so you can also send your data up into the cloud, so you can get at it from the web and replicate between machines anywhere on the internet [20:01] QUESTION: Do you expect Bindwood and evolution-couchdb to be reliable enough for daily use in Karmic final? (I'll help either way ;) ) [20:02] mgunes, yes indeed :) [20:02] ok I need to stop now, out of time. Next is kees, who I hope will forgive me for overrunning! [20:05] Hello! [20:05] so, if I understand correctly, discussion and questions are in #ubuntu-classroom-chat [20:06] I'll be watching in there for stuff marked with QUESTION: so feel free to ask away. :) [20:06] this session is a relatively quick overview on ways to try to keep software more secure. [20:06] I kind of think of it as a "best-practices" review. [20:07] given that there is a lot of material in this area, I try to tailor my topics to languages people are familiar with. [20:08] as a kind of "show of hands", out of HTML, JavaScript, C, C++, Perl, Python, SQL, what are people familiar with? (just shout out on the -chat channel) [20:08] (oh, and Ruby) [20:09] okay, cool, looks like a pretty wide variety. :) [20:09] I'm adapting this overview from some slides I used to give as a talk at Oregon State University. 
[20:09] you can find that here: http://outflux.net/osu/oss-security.odp [20:10] the main thing about secure coding is to take an "offensive" attitude when testing your software. [20:10] if you think to yourself "the user would never type _that_", then you probably want to rethink it. :) [20:11] I have two opposing quotes: "given enough eyeballs all bugs are shallow" - Eric Raymond, and "most people ... don't explicitly look for security bugs" - John Viega [20:11] I think both are true -- if enough people start thinking about how their code could be abused by some bad-guy, we'll be better able to stop them. [20:12] so, when I say "security", what do I mean? [20:12] basically... [20:12] I mean a bug with how the program functions that allows another person to change the behavior against the desire of the main user [20:12] if someone can read all my cookies out of firefox, that's bad. [20:13] if someone can become root on my server, that's bad, etc. [20:13] so, I tend to limit this overview to stuff like gaining access, reading or writing someone else's data, causing outages, etc. [20:13] I'll start with programming for the web. [20:14] input in CGIs, etc, needs to be carefully handled. [20:14] the first example of mis-handling input is "Cross Site Scripting" ("XSS"). [20:15] if someone puts something like <script>alert('hi')</script> in some form data, and the application returns exactly that, then the bad-guy can send arbitrary HTML [20:15] output needs to be filtered for HTML entities. [20:15] luckily, a lot of frameworks exist for doing the right thing: Catalyst (Perl), Smarty (PHP), Django (Python), Rails (Ruby). [20:16] another issue is Cross Site Request Forgery (CSRF). [20:16] the issue here is that HTTP was designed so that "GET" (urls) would be for reading data, and "POST" (forms) would be used for changing data. [20:17] if back-end data changes as a result of a "GET", you may have a CSRF. [20:17] I have a demo of this here: http://research.outflux.net/demo/csrf.html [20:17] imdb.com lets users add "favorite" movies to their lists. [20:17] but it operates via a URL http://imdb.com/rg/title-gold/mymovies/mymovies/list?pending&add=0113243 [20:18] so, if I put that URL on my website, and you're logged into imdb, I can make changes to your imdb account. [20:18] so, use forms. :) [20:18] (or "nonces", though I won't go into that for the moment) [20:19] another place input handling matters is SQL. [20:19] if SQL queries aren't escaped, you can end up in odd situations [20:19] SELECT secret FROM users [20:19] WHERE password = '$password' [20:20] with that SQL, what happens if the supplied password is ' OR 1=1 -- [20:20] it'll be true and will allow logging in. [20:20] my rule of thumb is to _always_ use the SQL bindings that exist for your language, and to never attempt to manually escape strings. [20:20] so, for perl [20:21] my $query = $self->{'dbh'}->prepare( [20:21] "SELECT secret FROM users [20:21] WHERE password = ?"); [20:21] $query->execute($password); [20:21] this lets the SQL library you're using do the escaping. it's easier to maintain, and it's much safer in the long-run. [20:22] some examples of SQL and XSS are seen here: http://research.outflux.net/demo/sql-bad.cgi [20:22] If I put: <blink>oh my eyes</blink> in the form, it'll pass through [20:23] if I put: ' OR 1=1 -- in the form, I log in, etc [20:23] http://research.outflux.net/demo/sql-better.cgi seeks to solve these problems. 
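A rough Python equivalent of the two rules above, using only the standard library (cgi.escape for output escaping and sqlite3 parameter binding, rather than the exact code from the demo CGIs; the table and values are made up):

    import cgi
    import sqlite3

    # XSS: escape anything you echo back, so "<script>" becomes "&lt;script&gt;"
    def render_name(name):
        return "<p>Hello, %s</p>" % cgi.escape(name, quote=True)

    # SQL injection: let the bindings do the quoting, never build the string yourself
    def check_password(conn, password):
        cur = conn.execute("SELECT secret FROM users WHERE password = ?",
                           (password,))
        return cur.fetchone()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (secret TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('s3kr1t', 'hunter2')")

    print render_name("<script>alert('hi')</script>")   # escaped, not executed
    # "' OR 1=1 --" is now just a (wrong) password, not part of the query
    print check_password(conn, "' OR 1=1 --")           # prints None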
[20:23] another thing about web coding is to think about where files live [20:24] yet another way around the sql-bad.cgi example is to just download the SQLite database it's using. [20:24] so, either keeping files out of the documentroot, or protecting them: http://research.outflux.net/demo/htaccess-better [20:25] so, moving from web to more language agnostic stuff [20:25] when you need to use "system()", go find a better method. [20:26] if you're constructing a system()-like call with a string, you'll run into problems. you always want to implement this with an array. [20:26] python's subprocess.call() for example. [20:26] this stops the program from being run in a shell (where arguments may be processed or split up) [20:27] for example, http://research.outflux.net/demo/progs/system.pl [20:27] no good: system("ls -la $ARGV[0]"); [20:27] better: system("ls","-la",$ARGV[0]); [20:27] best: system("ls","-la","--",$ARGV[0]); [20:28] in array context, the arguments are passed directly. in string context, the first argument may be processed in other ways. [20:28] handling temporary files is another area. [20:29] static files or files based on process id, etc, shouldn't be used since they are easily guessed. [20:29] all languages have some kind of reasonable safe temp-file-creation method. [20:29] File::Temp in perl, tempfile in python, "mktemp" in shell, etc [20:30] i.e. bad: TEMPFILE="/tmp/kees.$$" [20:30] good: TEMPFILE=$(mktemp -t kees-XXXXXX) [20:30] examples of this as well as a pid-racer are in http://research.outflux.net/demo/progs/ [20:30] keep data that is normally encrypted out of memory. [20:31] so things like passwords should be erased from memory (rather than just freed) once they're done being used [20:31] example of this is http://research.outflux.net/demo/progs/readpass.c [20:31] once the password is done being used: [20:31] fclose(stdin); // drop system buffers [20:31] memset(password,0,PASS_LEN); // clear out password storage memory [20:32] then you don't have to worry about leaving it in core-dump files, etc [20:32] for encrypted communications, using SSL should actually check certificates. [20:33] clients should use a Certificate Authority list (apt-get install ca-certificates, and use /etc/ssl/certs) [20:33] servers should get a certificate from a certificate authority. [20:33] the various SSL bindings will let you define a "check cert" option, which is, unfortunately, not on by default. :( [20:34] one item I mentioned early on as a security issue is blocking access to a service, usually through a denial of service. [20:34] one accidental way to make a server program vulnerable to this is to use "assert()" or "abort()" in the code. [20:34] normally, using asserts is a great habit to catch errors in client software. [20:35] unfortunately, if an assert can be reached while you're processing network traffic, it'll take out the entire service. [20:35] those kinds of programs should abort only if absolutely unable to continue (and should gracefully handle unexpected situations) [20:36] switching over to C/C++ specific issues for a bit... [20:37] one of C's weaknesses is its handling of arrays (and therefore strings). since it doesn't have built-in boundary checking, it's up to the programmer to do it right. [20:37] as a result, lengths of buffers should always be used when performing buffer operations. [20:37] functions like strcpy, sprintf, gets, strcat should not be used, because they don't know how big a buffer might be [20:38] using strncpy, snprintf, fgets, etc is much safer. 
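A short Python rendering of the system() and temp-file advice above (the hostile filename is made up purely to illustrate why the argument-list form is safer):

    import os
    import subprocess
    import tempfile

    filename = "some file; rm -rf ~"   # hostile input, for illustration only

    # argument list, no shell: the whole filename is passed as a single argument,
    # so the "; rm -rf ~" part is never interpreted by a shell
    subprocess.call(["ls", "-la", "--", filename])

    # safe temp file: unpredictable name, created atomically
    # (the Python equivalent of mktemp -t kees-XXXXXX)
    fd, path = tempfile.mkstemp(prefix="kees-")
    os.write(fd, "scratch data\n")
    os.close(fd)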
[20:38] though be careful you're measuring the right buffer. :) [20:38] char buf[80]; [20:38] strncpy(buf,argv[1],strlen(argv[1])) is no good [20:39] you need to use buf's len, not the source string. [20:39] it's not "how much do I want to copy" but rather "how much space can I use?" [20:40] another tiny glitch is with format strings. printf(buffer); should be done with printf("%s", buffer); otherwise, whatever is in buffer would be processed as a format string [20:40] instead of "hello %x" you'd get "hello 258347dad" [20:40] I actually have a user on my system named %x%x%n%n just so I can catch format string issues in Gnome more easily. :) [20:41] the last bit to go over for C in this overview is calculating memory usage. [20:41] if you're about to allocate memory for something, where did the size come from? [20:42] malloc(x * y) could wrap around an "int" value and result in less than x * y being allocated. [20:42] this one is less obvious, but the example is here: http://research.outflux.net/demo/progs/alloc.c [20:43] malloc(5 * 15) will be safe, but what about malloc (1294967000 * 10) [20:44] using INT_MAX to get it right helps [20:44] (I need to get an example of _good_ math ) [20:45] so, the biggest thing to help defend against these various glitches is testing. [20:45] try putting HTML into form data, URLs, etc [20:45] see what kinds of files are written in /tmp [20:46] try putting giant numbers through allocations [20:46] put format strings as inputs [20:46] try to think about how information is entering a program, and how that data is formulated. [20:46] there are a lot of unit-test frameworks (python-unit, Test::More, CxxTest, check) [20:47] give them a try. :) [20:47] as for projects in general, it's great if a few days during a development cycle can be dedicated to looking for security issues. [20:48] that's about all I've got for this quick overview. I've left some time for questions, if there are any? [20:49] 19:48 < AntoineLeclair> QUESTION: how could the malloc thing be a security problem? [20:49] so, the example I tried to use (http://research.outflux.net/demo/progs/alloc.c) is like a tool that processes an image [20:49] in the example, it starts by reading the size [20:50] then allocates space for it [20:50] and then starts filling it in, one row at a time. [20:50] if we ended up allocating 10 bytes where we're reading 100, we end up with a buffer overflow. [20:50] in some situations, those can be exploitable. [20:51] 19:50 < bas89> QUESTION: what security issues are there with streams? [20:51] (in C++) [20:51] I'm not aware of anything in that implementation to shy away from. [20:52] obviously, where the stream is attached (/tmp/prog.$$) should be examined [20:52] but I haven't seen issues with streams before. (maybe I'm missing something in how C++ handles formatting) [20:53] as it happens, Ubuntu's compiler will try to block a lot of the more common C buffer mistakes, including stack overflows. glibc will block heap overflows, and the kernel is set up to block execution of heap or stack memory. [20:54] so a lot of programs that would have had security issues are just crashes instead. [20:54] this can't really help with design failures, though. [20:55] well, that's about it, so I'll clear out of the way. Thanks for listening, and if other questions pop to mind, feel free to catch me on freenode or via email @ubuntu.com [20:56] 19:56 < henkjan> kees: QUESTION: will ubuntu stay with apparmor or will it move to SELinux? 
[20:56] both are available in Ubuntu (and will remain available). There hasn't been a good reason to leave AppArmor as a default yet, so we're sticking with that. [21:00] ok thanks kees! [21:01] ok thanks everyone for showing up [21:02] This next session is "Bugs lifecycle, best practices, workflow, tags, upstream, and big picture" [21:02] with myself and pedro_ [21:02] * pedro_ waves [21:03] ok, so the idea for this session is we want to familiarize you with general bug "workflow" stuff [21:03] so that you're aware of tools and techniques we use to make better bugs [21:03] and how to make that process efficient so bugs get fixed quicker [21:03] ubuntu is a "distribution", which means we bundle a bunch of software from what we call "upstreams" [21:03] so like, GNOME, KDE, Xorg, Firefox, Openoffice, etc. [21:04] since we have lots of users and sometimes things go wrong, those users report the bugs to us. [21:04] and what people like pedro_ do is to ensure that reports get to the right people [21:04] this is important because not all upstreams can keep track of bugs they get from distros [21:05] so what we try to do is act as a collection filter and then forward the good bug reports to these upstream projects [21:06] and to "close the loop", part of the process is checking to make sure that bugs that are fixed upstream get out to users. [21:06] this involves working closely with upstreams to make sure everyone is getting the right information [21:06] so, we can start off with the bug lifecycle [21:06] which pedro can tell you about [21:07] Yeah, the Bug workflow on Ubuntu is not that different from everything else out there [21:07] so when Bugs get filed on Ubuntu they are assigned the "New" status [21:08] this is not like the "New" on Bugzilla [21:08] this is more like the Unconfirmed there [21:08] meaning that nobody else has confirmed the bug yet [21:09] it might confuse people a bit if you're used to Bugzilla workflow [21:09] ok so how to open a new bug in Ubuntu? [21:09] Best way is to go to the application menu -> Help -> Report a bug [21:10] or execute on the command line: ubuntu-bug $package_name ; ie: ubuntu-bug nautilus [21:10] apport will show up and start collecting information about your system, which it is going to submit to launchpad along with your description of the problem [21:10] wanna know more? https://help.ubuntu.com/community/ReportingBugs is good reading [21:11] So we have a New bug on Launchpad. Now what? [21:11] that bug needs to be Triaged [21:11] most bugs on Ubuntu are triaged by the Ubuntu BugSquad: https://wiki.ubuntu.com/BugSquad [21:12] and also some of the products out there are triaged by their maintainers, so we're always looking for help to avoid that and let the developers concentrate on just that: fixing bugs and developing new features for Ubuntu [21:12] wanna help on that? 
easy, join the BugSquad ;-) [21:13] ok so if that bug you reported is missing some information: [21:13] the report Status is changed to "Incomplete" [21:13] again, this is not like the Incomplete in Bugzilla, the bug is not closed [21:13] this is more like the "NeedInfo" there [21:14] If a Triager or Developer thinks that probably the report you opened is not a bug [21:14] that report is marked as "Invalid" [21:15] or if it's a feature request you want to see implemented but the maintainer doesn't want to implement it because it's too crazy or too controversial [21:15] the bug is marked as "Won't Fix" [21:16] when someone other than the reporter is having the same issue, that report is marked as "Confirmed" [21:16] this is a recommendation that fits all the bug trackers out there: please do not confirm your own reports [21:16] every time you do that, a kitten dies [21:17] ok so if someone from the Ubuntu Bug Control team [21:17] thinks that the report has enough information for a developer to start to work on it [21:17] the report is marked as Triaged [21:18] and yes you need extra powers to do that [21:18] how do you request those rights? have a look at -> http://wiki.ubuntu.com/UbuntuBugControl [21:19] ooh, a question! [21:19] QUESTION what should we do with upstream packages that are dead, or orphaned (like gnome-volume-manager)? [21:19] Usually I try to find the project that supersedes that [21:20] so for example, gvm is replaced by something (part of the utopia stack I can't remember right now) [21:20] and then ask the reporter if it happens in that [21:20] if the project is really dead upstream then usually it just sits there. :-/ [21:21] let's continue [21:21] most of the developers look into the Triaged bugs to see what to fix next [21:21] so if one of them is working on a bug, they change the status to "In Progress" [21:22] And I've seen some confusion here [21:23] in a few reports I've seen that when the reporter is asked to provide more information and they're looking for it [21:23] they change the status to "In Progress" [21:24] don't do that, the status is still Incomplete, so if you as a triager see something like that, please educate them [21:24] when the fix that the developer was working on gets committed into a bzr branch, for example [21:24] the status of that report is changed to "Fix Committed" [21:25] if that fix that was committed is uploaded to an Official Ubuntu repository the status is changed to "Fix Released" [21:25] QUESTION Should a triager set the status to InProgress? 
(if working on the triage) [21:26] no, if you're doing triage on a report (requesting more info, etc) the status should be Incomplete [21:26] never In Progress, which is used by the developers instead [21:27] working with the BugSquad is a good way to give some love back to your adorable Ubuntu project [21:27] if you want to learn more about Triage: https://wiki.ubuntu.com/Bugs/HowToTriage [21:28] and if you have doubts about a status just ask on the #ubuntu-bugs channel, don't be afraid [21:28] Ok so on the upstream side [21:28] if you think that a bug that is already marked as Triaged should go upstream [21:29] because that feature wasn't developed by Ubuntu, the crash is not produced by an Ubuntu patch, etc, etc [21:29] first thing is to: Check if the bug is already filed there [21:29] let's take Gnome as an example [21:29] so as said first thing, search for a duplicate on the upstream tracker, Gnome uses Bugzilla as their BTS: http://bugzilla.gnome.org/query.cgi [21:30] you might want to go there and search [21:30] QUESTION: should the status be set to "Fix Committed" if a patch is included in the comments? [21:30] no, only if that patch was committed to a branch [21:31] the status of that report should remain the same until that happens [21:31] ok so let's continue with the upstream side [21:31] you found a report upstream that is similar to the one you are triaging on Ubuntu [21:31] what to do now? [21:32] you might say, ok i'll add a comment with the bug number [21:32] well that's correct, but let's do something else first [21:32] i'll show you a trick: [21:33] we might want to know if there's any report on Launchpad that links to that report on the Upstream Bug Tracker [21:33] so let's find that out [21:33] if you go to https://bugs.edge.launchpad.net/bugs/bugtrackers/ [21:33] you'll see a huge list of bugtrackers [21:33] Gnome Bugzilla, the Kernel one, Freedesktop, etc, etc, etc [21:34] <^arky^> QUESTION: What is a 'merge request'? [21:34] if you do something like: https://bugs.edge.launchpad.net/bugs/bugtrackers/gnome-bugs/<upstream bug number> [21:34] you'll be redirected to a bug on launchpad which links to that report [21:35] example: https://bugs.edge.launchpad.net/bugs/bugtrackers/gnome-bugs/570329 [21:36] ^arky^: a merge request is when someone grabs the code from launchpad, fixes a bug, then publishes the source code [21:36] then they ask for someone to merge in their fix [21:36] so like, the package maintainer would look at that, review it, test it, and then merge it in [21:38] ok so as said, before filing anything upstream search if there's a bug on launchpad linking to that report [21:38] if there's one, well mark the bug as a duplicate of that [21:38] and if there is not, open a new bug there on the upstream BTS [21:38] grgr, I mean link the report [21:38] I just did a screencast on how to link reports! 
[21:39] http://blip.tv/file/2527267 [21:40] awesome ;-)) [21:40] on the Gnome bugzilla side you also need to link the report [21:40] there's a new and shiny feature which allows you to Add Bug URLs on the Gnome Bugzilla [21:41] there's a tiny box on the right side which says "Add Bug URLs"; if you don't find it, well, look at Jorge's blog post about that: [21:41] http://castrojo.wordpress.com/2009/08/29/gnome-bugzilla-update/ [21:42] there's no automatic way on Launchpad (yet) to just say, this is affecting upstream and add a comment there with our Bug url [21:42] right now you need to manually do that, so please: Add the bug url to the url lists there and add a nice comment as well [21:42] gmb says he's working on it though if you want to send love/hate mail [21:43] \o/ [21:43] ok [21:43] so ... some ways to find bugs to link up [21:43] https://edge.launchpad.net/ubuntu/+upstreamreport [21:43] (please go there) [21:44] sometimes developers know that the problem is upstream [21:44] and mark the problem with an upstream task [21:44] however sometimes they can't find or don't know where in the upstream bug tracker this might be [21:44] so gmb created this report here [21:44] (for the purposes of this talk let's just look at the last column) [21:45] those are bugs that have been marked as an upstream problem, but NOT linked upstream [21:45] so, /potentially/ those are bugs where we have failed to communicate with an upstream project. [21:45] which is bad. [21:45] however, bug work being what it is, sometimes something is marked wrong [21:46] or someone thinks it's upstream and it's not [21:46] or sometimes someone makes a mistake [21:46] so what I do is check that last column [21:46] and when you click on them you get a list of bugs [21:46] so if you're interested in VLC [21:46] you'll see it has 6 possible bugs that could (or could not) be upstream related [21:46] so you can start with that list of 6 and work on them [21:47] when we do bug days we check these all the time [21:47] and we like to see over 90% of the bugs linked [21:47] so as we get closer to release I am usually going around to people who triage certain bugs reminding them to get those bugs forwarded upstream [21:48] i've started this section of the wiki https://wiki.ubuntu.com/Upstream [21:48] for people who are interested in helping get the bugs and patches that people submit to the right places [21:49] So if you're interested in becoming an upstream contact for your favorite project, let me know! https://wiki.ubuntu.com/Upstream/Contacts [21:49] so, as another example [21:49] in that report, you see openoffice.org with 67 bugs that could be upstreamed [21:50] ooo is "special" because in many ways it has 2 bugtrackers upstream, the go-ooo one and the sun one [21:50] so in a lot of ways that's double the work. [21:50] also, don't get too discouraged by the kernel bugs, they're on a sharp decline (there used to be over 8,000!) [21:51] any questions so far? [21:51] ok [21:51] another great resource I use is this [21:51] http://qa.ubuntu.com/reports/launchpad-database/unlinked-bugwatch.html [21:51] lots of times users Do The Right Thing(tm) and DO find if a bug is reported upstream [21:51] or in another distro [21:51] you've probably seen these before "This bug is also in Debian!" and then a URL [21:52] or, "This bug is fixed in debian!" 
and then a URL [21:52] it really helps ubuntu developers if those bugs are linked [21:52] sometimes people will just post the URL but not actually link the bug in launchpad [21:52] so this page is every bug that is not linked, but has a URL in the comments that is a bug tracker URL [21:53] so sometimes it might be a false alarm like "I think this is a bug here" and it's not [21:53] but a lot of times it is a person who just didn't link the bug [21:53] so I go through this list here and I find a surprising number of bugs where everyone is doing the right thing and just forgot to link the bugs [21:53] so I double-check that the bugs are indeed the same and then I link them [21:53] for other distros, upstream, whatever [21:54] then what happens is when launchpad goes and gets the status of the remote bugs it updates the bug in LP. [21:54] and it's MUCH easier for ubuntu developers to look at piles of bugs that are fixed upstream or might have a patch in another distro or whatever. [21:54] I've seen bugs where a person finds the bug fixed in debian but doesn't know what to do [21:55] if it's linked it gets on the right radar and we can get those bugs fixed much quicker [21:55] and my last tip, of course getting involved in the bug and hug days is a great way to contribute [21:55] there are many upstreams that aren't as large as GNOME, KDE, etc. that need someone in Ubuntu to be their go-to person [21:56] so if you have a project that you're passionate about and want to be the bridge between the distro and that package then Go For It, and let me know and I can help you [21:56] whoa! [21:56] we're the last session of the day [21:56] thanks everyone for coming, hope you learned a bunch and had a good time [21:56] smoke if you got em [21:57] thanks folks! [21:57] aids [21:57] thanks jcastro and pedro_ [21:58] thanks guys good session! [21:59] <^arky^> thanks jcastro pedro_ [22:00] thanks for attending, if you have further doubts just show up at #ubuntu-bugs and ask :-) [22:00] ausimage: you're a log hero, I was going to do it but you're so fast === mimor is now known as sir_mimor === sir_mimor is now known as mimor [23:34] whois pedro_ === Geep_ is now known as help === help is now known as Guest18796