[13:04] <g4ur4v> hello peeple...!
[13:16] <Martiini> anyone use Twitter ??
[14:35] <bugo> when this stuff starts?
[14:38] <nigelb> 3.5 hours to go.
[14:38] <nigelb> err 2.5
[14:44] <shadowking> there are still 4 hrs to start
[14:47] <shadowking> can anyone tell me what tools will be required during this session
[15:35] <techbreak> hi, this is the place where class will go on right ?
[15:36] <crazedpsyc1> yep, in approx 20 minutes the first one starts
[15:36] <techbreak> how long to go yet ?
[15:36] <techbreak> crazedpsyc1: cool thanks :)
[15:50] <risky> hello world
[15:50] <lithpr> howdy
[15:50] <akzfowl> hey everybody
[15:58] <Upirate> hello guys, what is going on?
[15:59] <Upirate> has the class started yet?
[15:59] <c2tarun> Upirate: Nope, still one hour left
[16:06] <Upirate> thanks, c2tarun:
[16:14] <Guest1225> class started ?
[16:14] <shadowking> still waiting 4 it guest
[16:14] <Guest1225> shadowking: ok :)
[16:14] <shadowking> its gonna start in abt 45 min
[16:15] <Guest1225> 45 min ? or 4 to 5 min ?
[16:15] <Guest1225> shadowking:
[16:15] <shadowking> "45" mins
[16:16] <Guest1225> shadowking:  ok :)
[16:17] <deuxpi> you can all follow "ubuntuclassroom" on identi.ca for the announcements :)
[16:17] <shadowking> and twitter too
[16:19] <shadowking> i guess i'm gonna go and take a nap till it starts
[16:19] <shadowking> :P
[16:37] <shadowking> just half an hour more
[16:44] <crazedpsyc> I thought it was starting 44 min ago
[16:44] <Arcidias> *no daylight saving time
[16:44] <Arcidias> that means it will start in 15 minutes
[16:45] <nigelb> 15 mins to go people!
[16:45] <rigved> cool
[16:44] <crazedpsyc> i must have calculated the absence of DST the wrong way...
[16:46] <Arcidias> I did it, too
[16:48] <crazedpsyc> will the actual class be mirrored on twitter and identica, or just announcements
[16:49] <nigelb> Just the announcements
[16:49] <crazedpsyc> ok
[16:49] <Bannaz> hey everyone
[16:49] <g4ur4v> hi
[16:50] <Bannaz> hey there
[16:50] <crazedpsyc> hello
[16:50] <Arcidias> I'm sure there'll be logs later
[16:50] <Bannaz> are you all waiting for a class??
[16:50] <Arcidias> yep
[16:50] <Arcidias> 10 minutes to go
[16:50] <deuxpi> crazedpsyc: the session logs will be at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html
[16:50] <dArKd3ViL_> yes
[16:50] <Bannaz> so the class is gonna be here?!!
[16:50] <Arcidias> duh
[16:51] <Andy80> #ubuntu-classroom-chat
[16:51] <Bannaz> lol
[16:51] <c2tarun> Bannaz: everyone thinks so ;)
[16:51] <Bannaz> alright
[16:51] <Andy80> sorry ;)
[16:51] <Bannaz> hmm
[16:51] <Bannaz> then we'll never know
[16:51] <nigelb> ok, guys, please take chatter to #ubuntu-classroom-chat
[16:54] <Bannaz> its a web chat ...how am I supposed to take a class here ?
[16:54] <Arcidias> the speaker of the session talks here
[16:54] <deuxpi> Bannaz: you follow in #ubuntu-classroom, and you may ask questions in here
[16:54] <Arcidias> you ask your questions in ubuntu-classroom-chat
[16:55] <goliath_> hi
[16:55] <Arcidias> hi
[16:55] <antonioJASR> hi
[16:55] <goliath_> how can I search for channels that have freelance in their names?
[16:57] <cnd> Hi! I'm Chase Douglas, and I've been working on bringing multitouch input to X.org, moving the uTouch st
[16:57] <cnd> ugh
[16:57] <cnd> oh google calendar
[16:57] <cnd> always in the way with your pop ups
[16:59] <Jrsquee> haha
[17:00] <Error404NotFound> I am here from twitter after hearing about uTouch :)
[17:01] <dholbach> WELCOME EVERYBODY TO UBUNTU APP DEVELOPER WEEK!
[17:01] <dholbach>  I'll just do a very quick intro and then quickly pass on the mic
[17:01] <dholbach>  all the info you need should be on https://wiki.ubuntu.com/UbuntuAppDeveloperWeek - which session is next, what it is about, etc
[17:01] <dholbach>  we'll also put logs of the sessions on there, so if you couldn't make it, you can still read up on the session afterwards
[17:01] <dholbach> if you want to chat about what's going on or ask questions, please head to #ubuntu-classroom-chat
[17:01] <dholbach> if you have questions, please prefix them with QUESTION:
[17:02] <dholbach> for example:         QUESTION: What is a great device to try out multitouch with?
[17:02] <dholbach> etc
[17:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session.
[17:02] <dholbach> without further ado, I'll clear the stage for Chase Douglas and Stephen Webb
[17:02] <dholbach> they'll talk about Enabling Multitouch and Gestures Using uTouch
[17:02] <cnd> Hi! I'm Chase Douglas, and I've been working on bringing multitouch input to X.org, making the uTouch stack use the new X MT work, and making improvements throughout the stack from the kernel to the toolkits
[17:03] <bregma> I'm Stephen Webb, I designed the geis API used in utouch
[17:03] <bregma> we're dividing the session into two parts
[17:03] <bregma> first, I'll discuss gestures and utouch overall
[17:04] <bregma> then Chase will chime in with multi-touch
[17:04] <bregma> first off, I'd like to clarify what we mean when we say "gestures"
[17:04] <bregma> there are fancy complex gestures like spirals and zorros made with the mouse
[17:04] <bregma> we refer to those as 'stroke gestures'
[17:05] <bregma> Those are not the gestures we are handling in uTouch.
[17:05] <bregma> They lack the discoverability and generality for general-purpose gestures.
[17:05] <bregma> The uTouch stack currently recognizes what we call 'gesture primitives'.
[17:05] <bregma> These are "drag", "pinch/expand", "rotate", "tap", and "touch".
[17:05] <bregma> The first three correspond to the linear transformations of "translate", "scale", "rotate".
[17:05] <bregma> The last is similar to mouse-down and mouse-up events, and a "tap" is like a single mouse click.
[17:06] <bregma> It is possible to build stroke gestures from these primitives if that's what you want.
[17:06] <bregma> we have also been discussing the use of a 'gesture language' to do just that
[17:06] <bregma> The current focus in uTouch is on multi-touch gestures.
[17:06] <bregma> A gesture appears to an application as a stream of events.
[17:06] <bregma> Each event is a snapshot in time of the current status of the gesture,
[17:07] <bregma> including properties such as velocity, change in radius, and finger positions if available
[17:07] <bregma> There is a lot of hardware with a wide range of capabilities, including the number of touches supported.
[17:07] <bregma> Not all multi-touch devices provide all the information required for all gestures.
[17:07] <bregma> For example, some common notebook touchpads provide only the bounding box of the multiple touches, which is inadequate to determine rotation angles.
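To make the hardware point concrete: drag, pinch, and rotate can each be recovered from consecutive frames of per-finger positions, but the rotation angle needs the individual points, which a bounding box alone cannot supply. A stdlib-only sketch (illustrative, not grail's actual code):

```python
import math

def gesture_deltas(prev, curr):
    """Derive drag/pinch/rotate deltas from two frames of two finger positions.

    prev, curr: [(x1, y1), (x2, y2)] at consecutive instants.
    """
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    def spread(pts):  # distance between the two fingers
        return math.hypot(pts[1][0] - pts[0][0], pts[1][1] - pts[0][1])

    def angle(pts):   # orientation of the line joining the fingers
        return math.atan2(pts[1][1] - pts[0][1], pts[1][0] - pts[0][0])

    (cx0, cy0), (cx1, cy1) = centroid(prev), centroid(curr)
    # normalize the angle difference into (-pi, pi]
    rotate = (angle(curr) - angle(prev) + math.pi) % (2 * math.pi) - math.pi
    return {
        "drag": (cx1 - cx0, cy1 - cy0),        # translate
        "pinch": spread(curr) / spread(prev),  # scale factor
        "rotate": rotate,                      # radians
    }

# Two fingers rotating 90 degrees around the origin at constant spread:
d = gesture_deltas([(1, 0), (-1, 0)], [(0, 1), (0, -1)])
```

A bounding box would report the same box for both frames here, so the rotation would be invisible; the centroid (drag) and spread (pinch) survive, which is why such touchpads can still support those two primitives.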
[17:08] <bregma> So, the uTouch stack consists of three basic layers
[17:08] <bregma> (1) the input layer (evdev in the kernel, through the /dev/input interface, and the mtdev userspace library to homogenize input)
[17:08] <bregma> (2) the gesture recognition engine, utouch-grail
[17:08] <bregma> (3) the application programming interface, utouch-geis
[17:09] <bregma> it's a little more complex than that in implementation, but that's the basic structure.
[17:09] <bregma> a very rough diagram: <https://docs.google.com/drawings/edit?id=1isTrkSWDH7OWKLi_aialasw9xjb0pziK9yjKUR1yt9c&hl=en&authkey=CLiA9fEG>
[17:10] <bregma> In maverick and natty, grail runs in the X server so it has easy access to
[17:10] <bregma> window geometry.
[17:10] <bregma> the latest versions of grail are using XInput 2.1 to get full multi-touch support
[17:11] <bregma> the geis API currently connects to grail over a private X connection
[17:11] <bregma> applications and libraries using geis do not need to know this
[17:12] <bregma> in oneiric, grail will probably move into the compiz process as the X server gets replaced by "something else"
[17:12] <bregma> included as part of the uTouch stack are a set of diagnostic tools
[17:13] <bregma> gesturetest (which talks to the X server directly)
[17:13] <bregma> grail-gesture (which runs grail directly)
[17:13] <bregma> geistest (built on the API)
[17:13] <bregma> and others are already available, like xinput and lsinput, for examining hardware traits
[17:14] <bregma> the single programmable access to uTouch is through the GEIS API
[17:14] <bregma> API docs are available online at <http://people.canonical.com/~stephenwebb/geis-v2-api/> or in the libutouch-geis-doc package
[17:15] <bregma> the simplified interface was developed first and is sufficient for very basic gesture operations
[17:15] <bregma> the advanced interface was developed in response to initial feedback from developers and
[17:15] <bregma> gives finer control over the types of gestures reported and how the data are reported
[17:15] <bregma> the simplified interface requires a connection to grail (an "instance") for each window, and a list of gestures of interest
[17:16] <bregma> all feedback is through callbacks
[17:16] <bregma> an example of using the simplified interface is here: <http://pastebin.com/ju1Tgq4N>
[17:17] <bregma> the advanced interface requires only a single connection to grail and a set of subscriptions
[17:17] <bregma> each filtering on window, gestures, and input device attributes
[17:18] <bregma> feedback is through event delivery or, optionally, callbacks
[17:18] <bregma> example code using the advanced interface can be found at <http://people.canonical.com/~stephenwebb/geis-v2-api/geis2_8c-example.html>
[17:18] <bregma> the advanced API is required for handling upcoming work on "gesture languages"
[17:19] <bregma> I believe these examples should be fairly self-explanatory, I'm going to gloss over them because we have a lot of ground to cover
[17:21] <bregma> there are some easier ways to take advantage of uTouch without programming to geis
[17:21] <bregma> first, there is libgrip, an add-on for GTK-based applications
[17:22] <bregma> it features a singleton GestureManager that a widget registers with and receives callbacks from
[17:22] <bregma> examples of libgrip use include the eog and evince packages found in the utouch PPA at https://launchpad.net/~utouch-team/+archive/utouch
[17:22] <bregma> future work also includes Qt and native python bindings
[17:23] <bregma> Qt has agreed to integrate utouch-geis into their gesture infrastructure
[17:23] <bregma> Python bindings for utouch-geis will be available in the utouch PPA soon and should be available with oneiric
[17:24] <bregma> we also have ginn, which can be used to retrofit utouch gestures into applications that were not programmed to accept them
[17:24] <bregma> ginn uses utouch-geis to subscribe to gestures, converts them into keystrokes or mouse movements, and reinjects them into the application's input stream
[17:24] <bregma> it uses an XML file, /etc/ginn/wishes.xml, to define the set of conversion rules for each application
[17:24] <bregma> ginn ships with natty today
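For a flavour of what such a conversion rule looks like, here is a hypothetical wish. The element and attribute names below are illustrative guesses, not the verified schema; the authoritative reference is the wishes.xml that ginn installs under /etc/ginn/.

```xml
<!-- Illustrative sketch only: check the wishes.xml shipped with ginn
     for the real element names and structure. -->
<ginn>
  <applications>
    <application name="evince">
      <wish gesture="Pinch" fingers="2">
        <!-- convert a two-finger pinch into a zoom keystroke -->
        <action name="zoom" when="update">
          <key>plus</key>
        </action>
      </wish>
    </application>
  </applications>
</ginn>
```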
[17:25] <ClassBot> pecisk asked: what do you mean by uTouch being included in compiz? Shouldn't it be left separate?
[17:26] <bregma> utouch-grail, the recognition engine, needs to know about window geometry and ordering
[17:26] <bregma> the easiest way to do that is to stick it some place that already has the information, like the X server
[17:27] <bregma> except the X server isn't really where it belongs
[17:27] <bregma> so, compiz for those desktops that use compiz (like Unity), and we'll try to come up with alternate solutions for others
[17:28] <ClassBot> tomeu asked: what are the native python bindings for? isn't enough to access that functionality through pyqt, pygobject, etc?
[17:28] <bregma> the utouch-geis library is not gobject-based
[17:28] <bregma> it's as lightweight as possible so it can be included anywhere, like games
[17:29] <bregma> there are already pygobject bindings for libgrip
[17:30] <cnd> ok, so we're out of questions on the gestures for now
[17:30] <cnd> so I'm going to start talking about raw multitouch events
[17:31] <cnd> over this past cycle for natty, we've been working hard to bring real multitouch input through the X server
[17:31] <cnd> in 11.04 we'll be the first linux distro with multitouch support!
[17:31] <cnd> but it's just the ground work for now
[17:31] <cnd> and it's not quite finalized yet
[17:32] <cnd> so it's considered a prototype, or pre-release for now
[17:32] <cnd> but we do have some support for developers who want to write applications to take advantage of the new functionality
[17:32] <cnd> there are two layers you can develop at
[17:33] <cnd> you can develop at the XInput level, which we don't recommend for now but does provide some extra functionality
[17:33] <cnd> or you can develop using the Qt touch framework
[17:33] <cnd> first, I'll go over the XInput work just to give some background
[17:33] <cnd> XInput is an extension to the X server
[17:33] <cnd> X is almost 30 years old now
[17:34] <cnd> and no one was trying to integrate multitouch with X way back when it was created :)
[17:34] <cnd> over time, the X Input extension has grown to allow for various input related functionality
[17:34] <cnd> at first it allowed for multiple mice and keyboards to be used at the same time
[17:35] <cnd> this allows you to control the cursor with your trackpad and your usb mouse without having to toggle one or the other
[17:35] <cnd> then, support was added for grouping keyboards and mice, and for creating more than one cursor on the screen
[17:35] <cnd> you can try this out with the xinput command line utility, it's kinda fun to have multiple cursors :)
[17:36] <cnd> now, we're extending XInput to version 2.1 to add multitouch support
[17:36] <cnd> here's the link to the current protocol document that's in development: http://cgit.freedesktop.org/xorg/proto/inputproto/tree/specs/XI2proto.txt?h=inputproto-2.1-devel
[17:37] <cnd> I don't recommend trying to understand it though :)
[17:37] <cnd> so I'll just skip over it for now and hit a few key points
[17:37] <cnd> first, there's a touch "lifetime"
[17:37] <cnd> every touch has an event stream associated with it
[17:37] <cnd> when the touch begins, a TouchBegin event is generated
[17:38] <cnd> when the touch changes in any way, i.e. it moved, or the pressure changed, a TouchUpdate event is sent
[17:38] <cnd> when the touch leaves the touch surface, a TouchEnd event is sent
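The lifetime just described can be modelled in a few lines (a generic sketch of the event grouping, not the XInput 2.1 protocol or any real API):

```python
# A minimal model of the touch lifetime: every touch gets a TouchBegin,
# any number of TouchUpdates, and exactly one TouchEnd.
BEGIN, UPDATE, END = "TouchBegin", "TouchUpdate", "TouchEnd"

def track(events):
    """Group a flat (touch_id, kind, pos) event stream into one position
    history per touch id."""
    active, finished = {}, {}
    for touch_id, kind, pos in events:
        if kind == BEGIN:
            active[touch_id] = [pos]
        elif kind == UPDATE:
            active[touch_id].append(pos)
        elif kind == END:
            history = active.pop(touch_id)   # the touch's lifetime is over
            history.append(pos)
            finished[touch_id] = history
    return finished

stream = [
    (0, BEGIN, (10, 10)), (1, BEGIN, (50, 50)),  # two fingers down
    (0, UPDATE, (12, 10)),                       # first finger moves
    (0, END, (14, 10)), (1, END, (50, 50)),      # both lift
]
histories = track(stream)
```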
[17:39] <cnd> the second major point about touch input is that there are two classes of devices that affect how touch events are handled
[17:39] <cnd> direct touch devices are basically touchscreens
[17:39] <cnd> where you touch on the surface is where the touch events are sent
[17:39] <cnd> so if you touch with one finger over the terminal, and you touch another finger over the web browser
[17:39] <cnd> then each application will receive the touch event for their respective touches
[17:40] <cnd> in contrast, there are dependent touch devices
[17:40] <cnd> these comprise trackpads and devices like the Apple Magic Mouse
[17:40] <cnd> when you touch the surface of these devices, the touches are sent to the window that is under the cursor on the screen
[17:41] <cnd> Lastly, there's a layer of mouse event emulation for direct touch devices
[17:41] <cnd> if your application subscribes to mouse events and not touch events, and someone touches your application using a touchscreen
[17:41] <cnd> a mouse event stream is generated
[17:41] <cnd> the primary mouse button is "pressed" when you touch the screen
[17:42] <cnd> and the cursor moves with your finger
[17:42] <cnd> and then the primary mouse button is "released" when the touch ends
[17:42] <cnd> this allows us to add touch capabilities to new applications while not breaking mouse usage for older applications
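The emulation rule above can be sketched in plain Python (an illustration of the described behaviour, not the X server's implementation; the assumption that only the first touch drives the emulated pointer is mine):

```python
# Map a raw touch stream onto an emulated mouse stream: press on begin,
# motion on update, release on end; extra concurrent touches are ignored.
def emulate_pointer(touch_events):
    emulated, owner = [], None
    for touch_id, kind, pos in touch_events:
        if kind == "TouchBegin" and owner is None:
            owner = touch_id
            emulated.append(("ButtonPress", pos))    # primary button "pressed"
        elif touch_id != owner:
            continue                                 # no pointer events for extra touches
        elif kind == "TouchUpdate":
            emulated.append(("MotionNotify", pos))   # cursor follows the finger
        elif kind == "TouchEnd":
            emulated.append(("ButtonRelease", pos))  # primary button "released"
            owner = None
    return emulated

events = [(0, "TouchBegin", (1, 1)), (1, "TouchBegin", (9, 9)),
          (0, "TouchUpdate", (2, 2)), (0, "TouchEnd", (3, 3))]
pointer_events = emulate_pointer(events)
```

An application that only understands mouse events sees a perfectly ordinary click-drag-release, which is exactly why older applications keep working.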
[17:43] <cnd> That's enough for now about X though
[17:43] <cnd> I want to move on to Qt
[17:43] <cnd> in 11.04, we will also have a pre-release addition to the Qt framework that will support multitouch
[17:43] <cnd> if you have a multitouch device and want to test it out, install qt4-demos
[17:44] <cnd> then try out the applications in /usr/lib/qt4/examples/touch/
[17:44] <cnd> there are four examples: dials, fingerpaint, knobs, and pinchzoom
[17:44] <cnd> I like fingerpaint the most :)
[17:45] <cnd> note that with the Qt framework, a trackpad device does not emit touch events by default until two or more fingers are touching the surface
[17:45] <cnd> here's a link to the documentation for reference: http://doc.qt.nokia.com/latest/qtouchevent.html
[17:46] <cnd> If you want to take a look at an example source code file for handling multitouch data,  see http://doc.qt.nokia.com/latest/touch-fingerpaint-scribblearea-cpp.html
[17:46] <cnd> this is the fingerpaint application source code for the canvas area
[17:47] <cnd> if you scroll down near the bottom you'll see the ScribbleArea::event() function
[17:47] <cnd> this is where the multitouch events are received
[17:47] <cnd> in Qt, touches sent to a widget are grouped together
[17:47] <cnd> so once you get a touch event, you can get a list of all the touch points
[17:47] <cnd> and then you can iterate over them to find which of the touchpoints have changed
[17:48] <cnd> this is what the fingerpaint application does
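That pattern, an event carrying all current touch points each tagged with a state, can be mirrored generically (plain Python with made-up field names; this is not the Qt API itself):

```python
# Fingerpaint-style handling: each event delivers every current touch
# point; the handler draws a segment only for the ones that changed.
def paint_segments(touch_points):
    """Return the (last, current) line segments a canvas would draw."""
    return [(p["last"], p["pos"])
            for p in touch_points
            if p["state"] != "stationary"]

event_points = [
    {"id": 0, "state": "moved", "last": (1, 1), "pos": (3, 4)},
    {"id": 1, "state": "stationary", "last": (9, 9), "pos": (9, 9)},  # skipped
    {"id": 2, "state": "pressed", "last": (5, 5), "pos": (5, 5)},
]
segments = paint_segments(event_points)
```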
[17:48] <cnd> with that, I'll move on from qt to get to some more stuff :)
[17:48] <cnd> there's also a niche library called libavg
[17:48] <cnd> this library is often used for games
[17:49] <cnd> and there are a handful of multitouch games available for it
[17:49] <cnd> they are all written in python, so it's a very accessible library
[17:49] <cnd> I won't spend any more time on it today, but you can try one of the games in natty by installing empcommand
[17:49] <cnd> it's a multitouch version of missile command :)
[17:49] <cnd> there are more games to try out in the libavg ppa
[17:50] <cnd> hmm... seems they aren't there yet
[17:50] <cnd> we'll get them uploaded soon though :)
[17:50] <cnd> lastly, I wanted to mention a few advanced things you can do with the XInput 2.1 extension
[17:50] <cnd> the first is that you can do touch "grabs"
[17:51] <cnd> this allows one application to control the event stream of touch events before they reach the destination application
[17:51] <cnd> second, though this won't be available until ubuntu 11.10, an application can "observe" touches
[17:52] <cnd> this may allow for ripple effects in compiz when you touch the screen, for example
[17:52] <ClassBot> There are 10 minutes remaining in the current session.
[17:52] <cnd> lastly, you can receive "unowned" events, which allow applications to peek at events
[17:52] <cnd> for more details, see the XInput 2.1 spec
[17:52] <cnd> with that, I'll open it up for questions :)
[17:52] <ClassBot> rydberg asked: what can you do with multitouch that you cannot do with single touch?
[17:53] <cnd> too many things :)
[17:53] <cnd> obviously all the multitouch gestures are available only with multitouch
[17:53] <cnd> but there are other possibilities
[17:54] <cnd> one can envision something like a conference table where the table is a big multitouch screen
[17:54] <cnd> each participant in the conference may interact with the table
[17:54] <cnd> there are also possibilities with object manipulation
[17:54] <cnd> for 3d applications
[17:55] <cnd> we're focusing on enabling all these, so we're hoping others have good ideas too :)
[17:55] <ClassBot> pecisk asked: what kind of licensing does uTouch have?
[17:55] <cnd> uTouch is licensed under GPL v3
[17:55] <ClassBot> crazedpsyc asked: do you think multitouch will ever be added to compiz? If compiz did it I'm sure the animations (eg squishing a window) would be amazing
[17:56] <cnd> natty already brings multitouch gestures to unity
[17:56] <cnd> which is based on compiz as the window manager
[17:56] <cnd> for example, if you touch with three fingers over a window, you'll see the resize handles
[17:56] <cnd> let me find a picture
[17:56] <cnd> http://www.omgubuntu.co.uk/2011/03/unity-love-handles-resizing-in-ubuntu-just-got-sexy/
[17:57] <ClassBot> There are 5 minutes remaining in the current session.
[17:57] <cnd> we plan on adding more functionality in further releases too
[17:57] <bregma> we do have plans for a compiz plugin to expose multi-touch and gesture data directly
[17:58] <cnd> compiz support will also allow for something microsoft has termed "no touch left behind"
[17:58] <cnd> http://venturebeat.com/2009/05/11/with-surfaces-new-features-microsoft-wants-to-leave-no-touch-behind/
[17:59] <cnd> where there is feedback to provide the user with context about where a touch occurred
[17:59] <cnd> and what action it performed
[17:59] <cnd> btw, I realized I messed up on the licensing question :)
[17:59] <cnd> it's LGPL v3
[18:00] <cnd> the parts that are in the X.org server, such as the XInput 2.1 multitouch extension are under the X licensing (MIT/X11/BSD)
[18:00] <cnd> I think we're out of time now
[18:00] <cnd> so I want to thank everyone for participating!
[18:01] <cnd> come find us in #ubuntu-touch!
[18:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session.
[18:03] <tomeu> Hi, I'm a GNOME developer working at Collabora.
[18:03] <tomeu> During the past few years it has been evident that trying to keep the GNOME APIs available to languages other than C required more resources than were available.
[18:03] <tomeu> If you were working on a Python app using GNOME stuff, you probably noticed that.
[18:03] <tomeu> GObject Introspection's main goal is to radically lower the amount of effort required to do that.
[18:04] <tomeu> Feel free to ask questions at any point; I will address them as I see them fitting into the plan of the talk.
[18:04] <tomeu> == The problem ==
[18:04] <tomeu> Before introspection was available for GObject-based APIs, bindings maintainers had to produce C code that would bridge between the host language and each C API that would be made accessible.
[18:04] <tomeu> There were code generators that saved a lot of time, but still, corner cases had to be handled manually and the generated code had to be maintained.
[18:05] <tomeu> The total amount of work required to keep the C APIs callable from other languages scaled with the size of the APIs, the number of languages that would call into them, and the number of distros where such bindings had to be packaged.
[18:05] <tomeu> As you can see, the amount of work to be done was growing very quickly, far faster than resource availability in a mature project such as GNOME.
[18:06] <tomeu> == The solution ==
[18:06] <tomeu> The reason why bindings weren't able to generate code that wouldn't need manual modifications is that by scanning the C sources there's only so much information available.
[18:07] <tomeu> There is critical information that bindings need that was only available in natural-language API documentation or in the code itself.
[18:07] <tomeu> Some bindings allowed this information to be manually added and fed to the generator, but it meant that each binding had to write that extra information by themselves, maintain it, etc.
[18:07] <tomeu> Based on that experience, it turned out to be clear that the extra information required to call the C API had to be added to the API sources themselves, so all bindings could benefit. This extra information is added in what we call "annotations" and we'll get into a bit of detail later.
[18:08] <tomeu> But that's not enough to reduce the workload at the distro level: if each binding had to generate code based on that extra information, distros would still need to package each binding for each API and each language.
[18:09] <tomeu> This is the reason why all the introspection information needs to be available at runtime, so a system which has bindings for, say Python, can call some new C API without having to write, package and deploy specific software for it.
[18:09] <tomeu> So that introspection information is available at runtime with acceptable performance, it is compiled into "typelibs": tightly packed files that can be mmapped and queried with low overhead.
[18:10] <tomeu> == Workflow changes ==
[18:10] <tomeu> ASCII art overview of GI's architecture: http://live.gnome.org/GObjectIntrospection/Architecture
[18:11] <tomeu> (I'm going to go through it, so I recommend giving it a look now)
[18:11] <tomeu> When building a shared library, g-ir-scanner is called, which will scan the source code, query GType for some more bits of info, and produce an XML file with the .gir suffix.
[18:12] <tomeu> The .gir file is then compiled into a typelib, with the .typelib suffix.
[18:12] <tomeu> The typelib is distributed along with the shared library and the .gir file is distributed with the header files.
[18:12] <tomeu> When an application that uses introspection is running, the introspection bindings for its programming language will use the information in the typelib to find out how it should call the C API, being helped by the GType system and by libraries such as libffi.
[18:13] <tomeu> Now I'm going to make a pause until someone says in #*-chat that I'm not going too fast :)
[18:14] <tomeu> == Annotations ==
[18:14] <tomeu> The information that bindings need but that is missing from the signature of the C API mainly includes:
[18:14] <tomeu> * details about the contents of collections (element-type),
[18:15] <tomeu> * memory management expectations (transfer xxx),
[18:15] <tomeu> * which functions are convenience API for C users and should not be exposed (skip),
[18:15] <tomeu> * scope of callbacks (scope),
[18:15] <tomeu> * auxiliary arguments (closure, array length),
[18:15] <tomeu> * is NULL accepted as an input argument (allow-none),
[18:15] <tomeu> * and more.
[18:15] <tomeu> For more details: http://live.gnome.org/GObjectIntrospection/Annotations
[18:15] <tomeu> Example from GTK:
[18:15] <tomeu> --------------------- 8< -----------------
[18:15] <tomeu> /**
[18:15] <tomeu>  * gtk_tree_model_filter_new:
[18:15] <tomeu>  * @child_model: A #GtkTreeModel.
[18:15] <tomeu>  * @root: (allow-none): A #GtkTreePath or %NULL.
[18:16] <tomeu>  *
[18:16] <tomeu>  * Creates a new #GtkTreeModel, with @child_model as the child_model
[18:16] <tomeu>  * and @root as the virtual root.
[18:16] <tomeu>  *
[18:16] <tomeu>  * Return value: (transfer full): A new #GtkTreeModel.
[18:16] <tomeu>  */
[18:16] <tomeu> GtkTreeModel *
[18:16] <tomeu> gtk_tree_model_filter_new (GtkTreeModel *child_model,
[18:16] <tomeu>                            GtkTreePath  *root)
[18:16] <tomeu> --------------------- 8< -----------------
[18:16] <tomeu> == Other benefits ==
[18:16] <tomeu> Having all the required information available at runtime means that bindings can decide more freely when to allocate resources such as data structures and function stubs; this allows bindings to address long-standing issues such as slow startup and high memory usage.
[18:16] <tomeu> Another consequence of bindings calling the C APIs as exposed by upstream is that documentation can be generated directly from the introspection information, without any per-API work.
[18:17] <tomeu> By lowering the barrier to expose APIs to other languages, more applications are being made extensible through the use of plugins.
[18:18] <tomeu> Libpeas helps your application to expose some extension points that can be used by plugins written in C, JavaScript and Python. It is already being used by Totem, GEdit, Vinagre, Eye of GNOME, etc
[18:18] <tomeu> == Changes for library authors ==
[18:18] <tomeu> Library authors who wish to make their API available to other languages mainly need to do these four things:
[18:18] <tomeu> * mark all the API that cannot be called from other languages with (skip) so it doesn't appear in the typelib,
[18:19] <tomeu> that API could be considered as a convenience for C users
[18:19] <tomeu> * make sure all the functionality is available to bindings (by adding overlapping API),
[18:19] <tomeu> * modify their build system to generate and install the .gir and .typelib files (http://live.gnome.org/GObjectIntrospection/AutotoolsIntegration),
[18:19] <tomeu> * add annotations as mentioned before.
[18:19] <tomeu> In practical terms, and for existing libraries, it tends to be better if the people trying to use your API are the ones who submit patches adding annotations, as they have a more readily available way to check their correctness.
[18:20] <tomeu> But for the author of the API it should generally be obvious which annotations are needed, given some exposure to how bindings use the introspected information.
[18:20] <tomeu> == Changes for application authors ==
[18:20] <tomeu> Application authors need to be aware that, until the whole callable API becomes used through introspection by applications, they cannot expect the annotations to be perfect.
[18:20] <tomeu> So instead of waiting for introspection to "mature", consider starting right now and get involved upstream by pointing out issues and proposing annotations and alternative API when needed.
[18:21] <tomeu> For now, it may be best to look at the .gir to figure out how to call something, if the C docs aren't enough.
[18:21] <tomeu> In the future there will be documentation generated for each language from the .gir files, but nobody has got anything usable yet.
[18:22] <tomeu> I don't have any more text to copy & paste, so I will gladly answer any questions
[18:23] <ClassBot> crazedpsyc asked: is this available for languages other than C?
[18:24] <tomeu> no, it would be really hard, depending on the particular language
[18:24] <tomeu> and the payoff would be smaller, because platform code tends to be written in C in the GObject world
[18:25] <ClassBot> patrickd asked: Are there examples any where of getting started using these bindings in say, something like python?
[18:25] <tomeu> we have some material at http://live.gnome.org/PyGObject/IntrospectionPorting
[18:26] <tomeu> but tomorrow you will get a session here by pitti just about python and introspection
[18:26] <tomeu> this was intended to present the basic concepts, tomorrow will be more about practical stuff
[18:28] <ClassBot> abhinav81 asked: so a language binding (say python) for a library ultimately calls the C API ?
[18:28] <tomeu> yes, there will be some glue code in python that will be calling the same API that C programs use
[18:29] <tomeu> PyGObject uses libffi directly, there's another alternative implementation that uses python's ctypes (which in turn also uses libffi)
[18:29] <tomeu> we also have an experimental branch of pygobject by jdahlin that uses LLVM to generate wrappers
[18:30] <tomeu> I know python best, but I guess other languages will have other mechanisms to call into C code at runtime
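The "glue at runtime" idea tomeu describes can be shown in miniature with ctypes (which, as he notes, is itself built on libffi). This is a stdlib-only sketch of the general mechanism, assuming a Unix libc in the process; it is not PyGObject's actual code:

```python
import ctypes

# On Unix, CDLL(None) gives access to the symbols already loaded into the
# process, which includes the C library.
libc = ctypes.CDLL(None)

# Declaring the C signature by hand is the role the typelib plays for GI:
# it tells the FFI layer how to marshal arguments and the return value.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

n = libc.strlen(b"introspection")
```

With GObject Introspection, those argtypes/restype declarations come from the typelib instead of being written by hand, per function, per binding.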
[18:30] <ClassBot> chadadavis asked: is the plan to currently move everything to PyGI then? What types of applications would be better off staying with PyGTK?
[18:31] <tomeu> at this moment, pygtk won't be updated to support gtk3
[18:31] <tomeu> also, pygobject+introspection doesn't support gtk2
[18:32] <tomeu> so my recommendation is to do what most GNOME apps do: branch and keep a maintenance branch which still uses pygtk/gtk2, and move master to introspection and gtk3
[18:32] <ClassBot> geojorg asked: What is the current status of PyGI in Python 3 ?
[18:33] <tomeu> I haven't been personally involved in that, but I think someone at the last hackfest rebased the python3 branch
[18:33] <tomeu> I think fedora is aiming for gtk3+python3 for their next release
[18:35] <tomeu> I will hang around for a while in case there's some more questions before the next talk starts
[18:35] <ClassBot> JanC asked: how similar are code for PyGtk & PyGI (and thus how much work is it to port an application and keep parallel branches)?
[18:36] <tomeu> IMO it's not that dissimilar; there are some tips about porting here: http://live.gnome.org/PyGObject/IntrospectionPorting
[18:37] <tomeu> and you can get an idea of the kind of transformations needed by reading this script: http://git.gnome.org/browse/pygobject/tree/pygi-convert.sh
[18:37] <tomeu> you may find that the changes between gtk2 and gtk3 are more worrying, depending on how much of the API your app uses
[18:37] <ClassBot> pecisk asked: are there any deadlines for when all base apps should be correctly supported by g-i?
[18:38] <tomeu> you say you meant base libs, so the deadline was GNOME 3 for all libraries in GNOME
[18:39] <tomeu> no doubt some libraries will have better annotations than others
[18:40] <tomeu> as I said before, the quality of a library's introspection support depends greatly on contributions from application authors, who submitted annotations for the API that their apps use
[18:40] <ClassBot> crazedpsyc asked: can I get PyGI in maverick? how?
[18:40] <tomeu> I have heard you can, but I'm not sure how (I don't use ubuntu)
[18:41] <tomeu> but even then, maverick has gtk2 afaik, so I would recommend trying to move to natty for development
[18:41] <tomeu> gtk2 lacks a lot of annotations because the focus has been on gtk3
[18:44] <tomeu> there may exist a PPA, not sure
[18:49] <tomeu> == Where to go from here? ==
[18:49] <tomeu> http://live.gnome.org/GObjectIntrospection
[18:49] <tomeu> In GIMPNet: #introspection, #python, #javascript, ...
[18:50] <tomeu> Thanks for the attention and the questions, I also have to thank Martin Pitt for passing me his notes on GI
[18:51] <tomeu> as said, tomorrow he will be giving a session focused on python and introspection
[18:52] <tomeu> laszlok: QUESTION: is there a bug report or a wiki page about the status of generating documentation for the new API?
[18:52] <tomeu> let me get some links for you
[18:52] <tomeu> http://blog.tomeuvizoso.net/2011/02/generating-api-docs-from-gir-files.html
[18:52] <tomeu> https://bugzilla.gnome.org/show_bug.cgi?id=625494
[18:53] <ClassBot> There are 10 minutes remaining in the current session.
[18:53] <ClassBot> laszlok asked: is there a bug report or a wiki page about the status of generating documentation for the new API?
[18:53] <tomeu> http://live.gnome.org/Hackfests/Introspection2011#line-19
[18:57] <ClassBot> There are 5 minutes remaining in the current session.
[19:00] <dpm> hey, hello everyone!
[19:01] <dpm> thanks for joining in this session on how to internationalize your applications
[19:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session.
[19:02] <dpm> ok, now that classbot is done...
[19:03] <dpm> first of all, thanks to tomeu for a great session
[19:03] <dpm> And now let's get started with translations
[19:03] <dpm> First the introductions
[19:03] <dpm> I'm David Planella, and I work in the Community team at Canonical as the Ubuntu Translations Coordinator
[19:04] <dpm> Usually I work more on the community side of things, with the always awesome Ubuntu translation teams
[19:05] <dpm> But today I've put my developer hat to show you how easy it is to get your app ready to speak a multitude of languages
[19:05] <dpm> and set up so that the community can translate it.
[19:05] <dpm> Regardless of the programming language, the process of adding internationalization support to an application is not only fairly easy
[19:06] <dpm> but also, on a high level view, the same for all programming languages.
[19:06] <dpm> This means that after this session you should have a pretty good overview on what it takes to make your application translatable
[19:06] <dpm> and you can apply this to any programming language, slightly adapting the syntax, of course.
[19:07] <dpm> In order for you to see how it all fits together, I've based the talk on a common framework that can get you quickstarted in just a few minutes
[19:07] <dpm> I've used the Python programming language and Quickly
[19:08] <dpm> https://wiki.ubuntu.com/Quickly
[19:08] <dpm> Let's start with some background concepts to make it easier to understand the steps we'll be doing later on
[19:09] <dpm> So let's have a quick look at the main players involved in the internationalization game:
[19:09] <dpm>  
[19:09] <dpm> Background Concepts
[19:09] <dpm>  
[19:09] <dpm> GNU Gettext
[19:09] <dpm> -----------
[19:09] <dpm> Gettext is the underlying and most widely used technology to enable translations of Open Source projects.
[19:09] <dpm> It defines a standard format for the translation files translators do their work with (PO files, more on them in a minute)
[19:10] <dpm> and lets applications load those translations compiled in a binary format (MO files) at runtime.
[19:10] <dpm> It has implementations for many programming languages, and amongst them, of course, Python.
[19:10] <dpm> You'll find that the comprehensive gettext manual at http://www.gnu.org/software/gettext/manual/gettext.html can be a very useful reference.
[19:10] <dpm> The Python implementation of the gettext API is what we'll use to internationalize our project with Quickly today.
[19:11] <dpm> Needless to say, it also comes with some nifty documentation at http://docs.python.org/library/gettext.html
[19:11] <dpm> * {i} In short, gettext does all the heavy lifting involved in exposing your application for translation and loading the translations for the end user
[19:12] <dpm>  
[19:12] <dpm> intltool
[19:12] <dpm> --------
[19:12] <dpm> Intltool is a higher level tool that adds functionality to gettext by allowing the extraction of translatable strings from a variety of file formats
[19:13] <dpm> It has also become a standard tool when implementing internationalization for OSS projects.
[19:13] <dpm> Nearly all (if not all) GNOME projects, for example, use intltool.
[19:14] <dpm> * {i} intltool handles the translations of things such as the desktop shortcut of your application
[19:14] <dpm>  
[19:14] <dpm> python-distutils-extra
[19:14] <dpm> ----------------------
[19:14] <dpm> Python-distutils-extra is a python package that makes it easy to integrate themable icons, documentation and gettext based translations in your python install and build tools, and it's basically an enhancement to python-distutils.
[19:15] <dpm> The project's page is at http://www.glatzor.de/projects/python-distutils-extra/
[19:15] <dpm> * /!\ Note that this tool is Python-specific. I'm mentioning it here because we're going to be talking through a practical example in Python. If your application were a C application, you'd probably use autotools rules to achieve the same result
[19:15] <dpm>  
[19:15] <dpm> The three above technologies (gettext, intltool, python-distutils-extra) are transparently used by Quickly, so we won't get into much more detail for now.
[19:16] <dpm> I just want you to get an idea of what we're talking about
[19:16] <dpm> There are also more aspects involved in internationalizing applications, such as font rendering, input methods, etc., but this should get you started for now.
[19:16] <dpm>  
[19:16] <dpm> Quickly
[19:16] <dpm> -------
[19:16] <dpm> I'll be very brief here and let you figure out more on quickly as we go along
[19:16] <dpm> For now, it will suffice to give you a teaser and tell you that it is the tool which brings back the fun in writing applications! ;-)
[19:17] <dpm>  
[19:17] <dpm> Finally, a tool that is not strictly needed for internationalization (or the shorter form: i18n), but that can help you build an active translation community around your project
[19:18] <dpm> which will be the next step after your project adds i18n support
[19:18] <dpm>  
[19:18] <dpm> Launchpad Translations
[19:18] <dpm> ----------------------
[19:18] <dpm> Launchpad Translations (https://translations.launchpad.net/) is the collaborative online tool which allows translation communities to be brought together and translate applications online through its web UI.
[19:18] <dpm> Apart from the very polished UI to provide translations, it has other nice features such as message sharing across project series (translate one message in a series and it instantly propagates to all other shared series),
[19:19] <dpm> global suggestions (suggestions of translations across _all_ projects in Launchpad), automatic imports of translations and automatic commits to bzr branches, several levels of permissions, and a huge translator base.
[19:20] <dpm> On the right-hand side of the page at that URL you can see that there are quite a lot of projects using Launchpad to make translations easy both for developers and translators.
[19:20] <dpm> Bear with me: we're nearly there - let's also quickly throw in and review a couple of concepts related to the gettext technology
[19:20] <dpm>  
[19:20] <dpm> Gettext: MO files
[19:20] <dpm> -----------------
[19:21] <dpm> The message catalog, or MO file (for Machine Object) is the binary file that is actually used to load translations in a running system.
[19:21] <dpm> It is created from a textual PO file (more on those in a bit), which is used as the source, generally by a tool called msgfmt.
[19:21] <dpm> Message catalogs are used for performance reasons, as they are implemented as a binary hash table that is much more efficient to look up at runtime than textual PO files
[19:22] <dpm> * {i} .mo files are the binary files installed in the system where the application loads the translations from, using gettext
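To make the binary format less abstract, here is a toy Python sketch that packs a one-entry catalog into the MO layout and loads it back with the gettext module. Real msgfmt also writes the hash table; it is omitted here (size 0), which Python's reader ignores:

```python
# Build a minimal little-endian MO image in memory and read it back.
import gettext
import io
import struct

def make_mo(catalog):
    # The empty msgid carries the catalog metadata (charset etc.)
    catalog = {"": "Content-Type: text/plain; charset=UTF-8\n", **catalog}
    keys = sorted(catalog)  # original strings must be sorted in a real MO
    originals = [k.encode("utf-8") for k in keys]
    translations = [catalog[k].encode("utf-8") for k in keys]
    n = len(keys)
    otab_off = 7 * 4                 # originals table right after the 7-word header
    ttab_off = otab_off + n * 8      # translations table follows it
    data_off = ttab_off + n * 8      # string data follows both tables
    tables, data = b"", b""
    for strings in (originals, translations):
        for s in strings:
            tables += struct.pack("<2I", len(s), data_off + len(data))
            data += s + b"\x00"
    # magic, revision, count, orig table, trans table, hash size 0, hash offset
    header = struct.pack("<7I", 0x950412DE, 0, n, otab_off, ttab_off, 0, 0)
    return header + tables + data

mo = make_mo({"English message": "Traducció al català"})
t = gettext.GNUTranslations(io.BytesIO(mo))
print(t.gettext("English message"))  # -> Traducció al català
print(t.gettext("Unknown message"))  # -> Unknown message (falls back to the msgid)
```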
[19:22] <dpm>  
[19:22] <dpm> Gettext: PO files
[19:22] <dpm> -----------------
[19:22] <dpm> PO stands for Portable Object. PO files are the textual files translators work with to provide translations. They are plain text files with a special format:
[19:22] <dpm>   msgid "English message"
[19:22] <dpm>   msgstr "Traducció al català" <- Translated string
[19:22] <dpm> (message pairs containing the original messages from the application, and its corresponding translation)
[19:23] <dpm> You can see an example of a PO file here:
[19:23] <dpm> http://bazaar.launchpad.net/~synaptic-developers/synaptic/trunk/view/head:/po/zh_CN.po
[19:23] <dpm> * {i} Translators provide translations in PO files. If they use an online translation system they won't directly work with them, but your project sources will still contain them
[19:23] <dpm> * {i} In each application source tree there is generally a PO file per language, named after the language code. E.g. ca.po, de.po, zh_CN.po, etc.
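As a toy illustration of those message pairs (real PO files also have headers, comments, plural forms and escaping, so use the gettext tools for real work):

```python
# Extract msgid/msgstr pairs from the simple PO fragment shown above.
import re

ca_po = '''msgid "English message"
msgstr "Traducció al català"
'''

pairs = dict(re.findall(r'msgid "(.*)"\s*msgstr "(.*)"', ca_po))
print(pairs["English message"])  # -> Traducció al català
```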
[19:23] <dpm>  
[19:23] <dpm> Gettext: POT files
[19:23] <dpm> ------------------
[19:23] <dpm> Once your project has added i18n support, you'll need to give translators an updated list of translatable messages they can start translating.
[19:23] <dpm> You'll also need to update this list whenever you add new messages to your application.
[19:24] <dpm> You achieve that through POT files (Portable Object Template), or simply "templates" in l10n (localization) slang.
[19:24] <dpm> They are textual files with the same format as PO files,
[19:24] <dpm> but they are empty of translations and are used as a template or stencil to create PO files from.
[19:24] <dpm> * {i} There are special tools to update templates. Generally intltool is used, often called from a build system rule or a higher level tool such as python-distutils-extra
[19:24] <dpm>  
[19:24] <dpm> Gettext: Translation domain
[19:24] <dpm> ---------------------------
[19:25] <dpm> The translation domain is a unique string identifier assigned by the programmer in the code (usually in the build system)
[19:25] <dpm> and used by the gettext functions to locate the message catalog where translations will be loaded from.
[19:25] <dpm> The general form to compute the catalog’s location is:
[19:26] <dpm>     locale_dir/locale_code/LC_category/domain_name.mo
[19:26] <dpm> which in Ubuntu generally expands to /usr/share/locale/locale_code/LC_MESSAGES/domain_name.mo
[19:26] <dpm> The locale_code part refers to the language code for the particular language. As an example, when using Nautilus in a Catalan locale with Ubuntu,
[19:26] <dpm> the gettext functions will look for the message catalogue at:
[19:26] <dpm>     /usr/share/locale-langpack/ca/LC_MESSAGES/nautilus.mo
[19:27] <dpm> That's where the translations for your application will be installed and searched for
[19:27] <dpm> Note that for your app this location might be slightly different:
[19:27] <dpm>      /usr/share/locale/ca/LC_MESSAGES/myapp.mo
[19:28] <dpm> * {i} The corresponding translation template should have the same translation domain in its filename, e.g. nautilus.pot.
[19:28] <dpm> * {i} The translation domain must be unique across all applications and packages. I.e. something generic like messages.pot won’t work.
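That location rule, spelled out as code; the paths and the "myapp" domain are just the examples from above:

```python
# Compute where gettext will look for a compiled catalog:
# locale_dir/locale_code/LC_category/domain_name.mo
import os

def catalog_path(locale_dir, locale_code, domain, category="LC_MESSAGES"):
    return os.path.join(locale_dir, locale_code, category, domain + ".mo")

print(catalog_path("/usr/share/locale", "ca", "myapp"))
# -> /usr/share/locale/ca/LC_MESSAGES/myapp.mo
```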
[19:28] <dpm>  
[19:29] <dpm> Ok, done with the concepts, let's get down to work and to questions
[19:29] <dpm> Generic Steps to Internationalize an Application
[19:29] <dpm> * Integrate gettext into the application. Initialize gettext in your main function, most especially the translation domain
[19:30] <dpm> * Integrate gettext into the build system. There are generally gettext rules in the most common build systems. Use them.
[19:30] <dpm> * Mark translatable messages. Use the _() gettext call to mark all translatable messages in your application
[19:31] <dpm> * Care for your translation community. Not necessarily a step related to adding i18n support, but you'll want an active and healthy translation community around your project. Keep the templates with translatable messages up to date. Announce these updates and give translators time to do their work before a release. Be responsive to feedback.
[19:31] <dpm>  
[19:31] <dpm> Hands-on: creating an internationalized app
[19:31] <dpm> Ok, enough theory, let's have a go at using quickly to create your first internationalized application
[19:32] <dpm> You can install Quickly on any recent Ubuntu release by simply firing up a terminal and executing:
[19:32] <dpm>     sudo apt-get install quickly
[19:33] <dpm> Once you've done that, you can run quickly to create your first project:
[19:34] <dpm>     quickly create ubuntu-application awesometranslations
[19:34] <dpm> (if you like, substitute 'awesometranslations' with your favourite project name)
[19:34] <dpm> We've just told Quickly to use the ubuntu-application template, and to call what is created "awesometranslations"
[19:35] <dpm> you should probably have an open dialog from your new app in front of you.
[19:35] <dpm> Quickly has created all that for you!
[19:35] <dpm> You can close the dialog to continue
[19:35] <dpm> What Quickly did was to copy over basically a sample application, and do some text switcheroos to customize the app
[19:36] <dpm> What you could see there was the ui containing some text that needs translation.
[19:36] <dpm> To start making changes to your app, go to the directory where it's stored. Generally by simply running
[19:36] <dpm>     cd awesometranslations
[19:37] <dpm> You can then edit your code with $ quickly edit, change the UI with $ quickly glade, and try your changes with $ quickly run
[19:37] <dpm> You can save your change with $ quickly save
[19:37] <dpm> Finally, to package, share, and release your app so that others can use it, there are the following commands (not all are necessary): $ quickly package / $ quickly share / $ quickly release
[19:37] <dpm> As it stands now, the application has nearly all you need to make it translatable
[19:38] <dpm> which is the great thing about quickly
[19:38] <dpm> From now on, while I'll let you play and investigate the application you've created, we'll be looking at the one I created for the purpose of this session
[19:39] <dpm> and I'll show you the i18n bits
[19:39] <dpm> so that you can add them to your existing applications if you want
[19:39] <dpm> For new applications, I'd simply recommend you to use quickly
[19:39] <dpm> and start from there
[19:40] <dpm> which will set up everything for you, so that you can forget about it and concentrate on all those new cool functions your new app is going to provide :-)
[19:40] <dpm> So let's have a look at:
[19:40] <dpm> http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/files
[19:41] <dpm> Notice that we could just call the session finished at this point, as quickly did all the work for us :)
[19:41] <dpm> Also notice how easy it is. I've just created and pushed the application to Launchpad a few minutes ago
[19:42] <dpm> Remember we were talking about PO and POT files?
[19:42] <dpm> They are right there, under the po/ folder:
[19:42] <dpm> http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/files/head:/po/
[19:43] <dpm> the po/ folder could be called something else, but it is customary to call it that, as some tools rely on this convention
[19:43] <dpm> Notice the .pot file named after your app
[19:43] <dpm> and an example translation file (ca.po) submitted by a translator
[19:44] <dpm> Now to the interesting bits:
[19:44] <dpm> Remember the generic steps for internationalization we were talking about earlier on?
[19:44] <dpm> * Initializing gettext:
[19:44] <dpm> http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/awesometranslations/__init__.py#L8
[19:45] <dpm> so here we include the gettext module
[19:45] <dpm> and we define a function called simply _()
[19:45] <dpm> And finally we define the translation domain
[19:46] <dpm> which will be the name of the .mo file installed in the system and the name of the .pot file
[19:47] <dpm> * Integrating gettext in the build system:
[19:47] <dpm> http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/setup.py#L13
[19:47] <dpm> Here the integration happens automagically by using python-distutils-extra
[19:47] <dpm> C programs using autotools might need a more complex integration
[19:48] <dpm> * Mark translatable messages:
[19:48] <dpm> http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/awesometranslations/__init__.py#L23
[19:49] <dpm> For every message you want to expose for translation, you simply have to wrap it in the _() function, e.g. _("Translatable message here")
[19:49] <dpm> And that's basically it, really
[19:50] <dpm> easy, isn't it?
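For reference, the two code-level steps can be condensed into a few lines of plain Python. This is a simplified sketch, not the exact code Quickly generates; with no catalog installed, _() just returns the original string:

```python
# Step 1: initialize gettext with the translation domain.
# Step 3: mark translatable strings with _().
import gettext

DOMAIN = "awesometranslations"  # must match the .mo / .pot file name
gettext.install(DOMAIN, "/usr/share/locale")  # puts _() into builtins

print(_("Translatable message here"))  # -> Translatable message here
```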
[19:50] <dpm> ok, so we're running out of time, let's see if there are questions!
[19:52] <ClassBot> There are 10 minutes remaining in the current session.
[19:52] <ClassBot> bdfhjk asked: What is the best way to translate QT application? Gettext or QTLinquist?
[19:52] <dpm> bdfhjk asked: What is the best way to translate QT application? Gettext or QTLinquist?
[19:52] <dpm> Tough question :)
[19:53] <dpm> Both Gettext and QT Linguist are excellent i18n frameworks
[19:53] <dpm> With similar functionality
[19:53] <dpm> But I would personally use gettext
[19:54] <dpm> Because it is framework-agnostic and used by the vast majority of Open Source projects
[19:54] <dpm> Not only that, but most online translation tools rely on gettext
[19:54] <dpm> KDE itself uses gettext, for example
[19:54] <ClassBot> bulldog98_konv asked: what’s the difference between GNOME's and KDE's handling of translations in code?
[19:55] <dpm> Not much really
[19:55] <dpm> As I said, both KDE and GNOME use gettext
[19:55] <dpm> The majority of GNOME is written in C, and KDE in C++
[19:56] <dpm> I gather that in KDE they wrap Qt Linguist calls through kdelibs to actually use gettext to load the translations
[19:56] <ClassBot> bdfhjk asked: Is Gettext working in windows?
[19:57] <ClassBot> There are 5 minutes remaining in the current session.
[19:57] <dpm> Yes, gettext works on any platform where glibc can run, including Windows
[19:58] <dpm> There is still time to answer one last question if you've got one
[19:59] <dpm> Ok, so I think I'll use the last minutes to thank everyone for their participation, and remind you that if you've got any questions on translations, feel free to ping me any time!
[19:59] <dpm> I usually hang out on #ubuntu-devel
[20:00] <dpm> Now time for KDE/Kubuntu rockstar apachelogger, who'll tell you about the secret art of writing plasma widgets
[20:00] <apachelogger> thank you dpm :)
[20:01] <dpm> the floor is yours!
[20:01] <apachelogger> salut, bonjour and welcome to an introduction to Widgetcraft oh my :)
[20:01] <apachelogger> ...also known as the art of creating Plasma Widgets.
[20:01] <apachelogger> my name is Harald Sitter and I am developer of KDEish things
[20:02] <apachelogger> for this session you will need a couple of packages and any handy editor you like
[20:02] <apachelogger> sudo apt-get install kdebase-workspace-bin kdebase-runtime plasma-dataengines-workspace
[20:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session.
[20:02] <apachelogger> this command will make sure you get the packages necessary
[20:02] <apachelogger> if the editor can do syntax highlighting for javascript it would be good :)
[20:02] <apachelogger> meanwhile I am going to talk a bit about the technology we are going to work with
[20:03] <apachelogger> plasma is the technology most people refer to as "the KDE desktop"
[20:03] <apachelogger> or "the KDE workspace" if you will
[20:03] <apachelogger> Plasma is just about everything you see when you log into a newly installed KDE system (such as Kubuntu ;))
[20:04] <apachelogger> it is the wallpaper, and the panel at the bottom, and every icon and element within that panel and so on
[20:04] <apachelogger> it comes in many amazing flavors and creating new ones is not all that difficult
[20:04] <apachelogger> by flavors I mean specific versions of plasma for different form factors (i.e. devices)
[20:05] <apachelogger> currently there are plasma versions for desktop systems, netbook systems, mobile devices (such as phones) and even tablets
[20:05] <apachelogger> although the latter 2 are actually more like tech previews and not terribly usable at this time
[20:06] <apachelogger> plasma widgets are widgets for plasma (surprise ;))
[20:06] <apachelogger> they are also called plasmoids... in particular plasma widget usually means a widget that can run in plasma
[20:06] <apachelogger> this includes apple dashboard widgets, google gadgets and native plasma widgets
[20:07] <apachelogger> those native widgets are the ones called plasmoids
[20:08] <apachelogger> plasmoids make the best use of plasma's abilities and can be written in javascript (including qml in KDE 4.6+), c++, ruby and python
[20:08] <apachelogger> however only javascript and c++ are builtin (thus always available)
[20:08] <apachelogger> so, usually you want to use one of those
[20:09] <apachelogger> personally I would even go as far as saying that javascript is the weapon of choice unless you have good reasons to choose another language
[20:09] <apachelogger> the reason for this is that javascript is of course easier to deploy (as it does not need compilation, unlike c++) and is always available on every plasma system (unlike ruby and python)
[20:10] <apachelogger> we also use javascript in this session ;)
[20:10] <apachelogger> plasmoids are distributed as so called "plasmagik packages" (what a name!)
[20:11] <apachelogger> they essentially contain one metadata file to describe the plasmoid at hand as well as code, images and other magic files
[20:11] <apachelogger> for more information have a look at http://community.kde.org/Plasma/Package
[20:11] <apachelogger> QUESTION: can we use Qt/C++
[20:12] <apachelogger> as explained, one can use C++, however there is no particular gain from this for the usual plasmoid
[20:12] <apachelogger> as the javascript API is very powerful
[20:12] <apachelogger> If there are no moar questions we can move on to hacking
[20:13] <apachelogger> a common step when creating a new plasmoid is setting up the folder structure, for this you can use this magic command sequence of mine:
[20:13] <apachelogger> NAME=dont-blink                   # Set a shell variable
[20:13] <apachelogger> mkdir -p $NAME/contents/code/     # Create everything up to the code dir.
[20:13] <apachelogger> touch $NAME/metadata.desktop      # Create the metadata file, which contains name and description...
[20:13] <apachelogger> touch $NAME/contents/code/main.js # Create main code file of the plasmoid.
[20:13] <apachelogger> this will create a bare setup for a new plasmoid
[20:13] <apachelogger> in the folder dont-blink
[20:14] <apachelogger> QUESTION: can we create plasmoids in Ubuntu/Gnome and of course, test it on Kubuntu?
[20:14] <apachelogger> yes
[20:14] <apachelogger> testing can be done in gnome too
[20:14] <apachelogger> however unfortunately at this point there is no actual widget integration, so the plasmoids will only work in KDE with Plasma
[20:15] <apachelogger> moving on
[20:15] <apachelogger> first we will need to setup our metadata file
[20:15] <apachelogger> http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/metadata.desktop
[20:15] <apachelogger> is a good starting point
[20:15] <apachelogger> I believe the file is pretty easy to understand, it simply defines the general properties of our plasmoid
[20:16] <apachelogger> name, license, author, version etc.
[20:16] <apachelogger> usually you will want to change at least Name and X-KDE-PluginInfo-Name
[20:16] <apachelogger> now we can already get our hands dirty
[20:17] <apachelogger> Please open the contents/code/main.js code file in your editor.
[20:17] <apachelogger> let's start with a semi-helloworld thing :)
[20:17] <apachelogger>     // First create a layout we can stuff things into
[20:17] <apachelogger>     layout = new LinearLayout(plasmoid);
[20:18] <apachelogger> layouts are very handy as we can put just about anything in there and they will automagically figure out how to align stuff
[20:18] <apachelogger> (well, almost automagically ;))
[20:18] <apachelogger>     // Then create a label to display our text
[20:18] <apachelogger>     label = new Label(plasmoid);
[20:18] <apachelogger>     // Add the label to the layout
[20:18] <apachelogger>     layout.addItem(label);
[20:18] <apachelogger>     // Set the text of our Label
[20:18] <apachelogger>     label.text = 'Don\'t Even Blink';
[20:18] <apachelogger>     // Done
[20:18] <apachelogger> not terribly difficult, right? :)
[20:19] <apachelogger> you can now run this using plasmoidviewer . or plasmoidviewer PATHTOTHEPLASMOID (depending on where you are in a terminal right now).
[20:19] <apachelogger> this works on both KDE and GNOME
[20:19] <apachelogger> and XFCE and ....
[20:20] <apachelogger> plasmoidviewer is a very nice app to test plasmoids as you do not need to install the plasmoid to test it
[20:20] <apachelogger> Here is a trick: if you have KDE 4.5 (default on Kubuntu 10.10) you will have a new command called 'plasma-windowed'. Using this command you can run most Plasmoids just like any other application in a window, is that not brilliant?
[20:20] <apachelogger> for example you can try that on our new plasmoid
[20:21] <apachelogger> or if you have the facebook plasmoid installed, you can try it with that
[20:21] <apachelogger> very handy to run plasmoids as sort-of real applications
[20:21] <apachelogger> I hope everyone got our first code working by now
[20:22] <apachelogger> maybe let us continue with a button
[20:22] <apachelogger> buttons are cool
[20:22] <apachelogger> oh, in case you have not noticed, code lines are always indented by 4 characters for your reading pleasure
[20:22] <apachelogger>     // Create a new button
[20:22] <apachelogger>     button = new PushButton;
[20:22] <apachelogger>     // Add the button to our layout
[20:22] <apachelogger>     layout.addItem(button);
[20:22] <apachelogger>     // Give the button some text
[20:22] <apachelogger>     button.text = 'Do not EVER click me';
[20:23] <apachelogger> if you try the plasmoid now you will notice quite the silliness
[20:23] <apachelogger> the layout placed the button next to the text
[20:24] <apachelogger> not so awesome :(
[20:24] <apachelogger> and apachelogger claimed layouts are awesome -.-
[20:24] <apachelogger> oh well
[20:24] <apachelogger> easily fixable
[20:24] <apachelogger> the problem is that the layout by default tries to place things next to each other rather than align them vertically
[20:24] <apachelogger>     // Switch our layout to vertical alignment
[20:24] <apachelogger>     layout.orientation = QtVertical;
[20:24] <apachelogger> now this should look *much* better
[20:25] <apachelogger> well
[20:25] <apachelogger> our button does not do anything yet
[20:25] <apachelogger> that is a bit boring I might say ... and useless
[20:26] <apachelogger> how about adding an image into the mix? ;)
[20:26] <apachelogger> QUESTION: why is it QtVertical? and not just Vertical like other widgets?
[20:27] <apachelogger> QtVertical is actually coming from Qt and not from Plasma, as to avoid name clashes in the future I suppose it got prefixed with Qt ;)
[20:27] <apachelogger> generally speaking layout orientation in C++ Qt is also an enum in the Qt namespace, so it looks pretty much the same
[20:27] <apachelogger> but now for our image
[20:28] <apachelogger> if you still have the same terminal you created the bare folder structure in you can use the following:
[20:28] <apachelogger> mkdir -p $NAME/contents/images/
[20:28] <apachelogger> wget -O $NAME/contents/images/troll.png http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/contents/images/troll.png
[20:28] <apachelogger> otherwise just navigate to your plasmoid folder, go to contents and create an images folder, then download http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/contents/images/troll.png into that folder
[20:29] <apachelogger> Now for the code...
[20:29] <apachelogger>     // Labels can also contain images, so we will use a label again
[20:29] <apachelogger>     troll = new Label(plasmoid);
[20:29] <apachelogger>     // But this time we set an image. The image path is constructed automatically by us telling it in what directory it is and what name it has
[20:29] <apachelogger>     troll.image = plasmoid.file("images", "troll.png");
[20:29] <apachelogger>     // So that our image fits in we need to tell the label to consume as much space as possible and necessary
[20:29] <apachelogger>     troll.sizePolicy = QSizePolicy(QSizePolicyMaximum, QSizePolicyMaximum);
[20:29] <apachelogger>     // We only want to show the image after the user dared pressing the button, so we set it not visible and also do not add it to our layout
[20:29] <apachelogger>     troll.visible = false;
[20:30] <apachelogger> that will not actually do anything
[20:30] <apachelogger> as the troll is set to invisible by default
[20:31] <apachelogger> we only show it once the user clicked on the button
[20:32] <apachelogger> so this leads us to a very interesting part of Plasma in particular and Qt in general
[20:32] <apachelogger> connecting a state change on one thing to an action
[20:32] <apachelogger> usually in Qt we call this the signal and slot system
[20:33] <apachelogger> in javascript plasmoids we have almost the same thing, in fact it is even simpler than in standard c++
[20:33] <apachelogger> so
[20:33] <apachelogger> let us try that
[20:33] <apachelogger>     // First add a function to handle clicking on the button
[20:33] <apachelogger>     function onClick()
[20:33] <apachelogger>     {
[20:34] <apachelogger> within that function we put our logic for changing the visibility of our troll
[20:35] <apachelogger>         // Either the troll is shown or it is not...
[20:35] <apachelogger>         // If it is visible -> hide it
[20:35] <apachelogger>         if (troll.visible) {
[20:35] <apachelogger> ah, this is getting complicated
[20:35] <apachelogger> lets stop here for a bit
[20:35] <apachelogger> so, our troll can be visible and invisible and we change this via .visible and we read it via .visible
[20:36] <apachelogger> this might be a bit confusing for those of us who drive ourselves crazy with C++ ;)
[20:37] <apachelogger> however, it is really something very Qt
[20:37] <apachelogger> Qt adds property functionality to objects, which is really what we are seeing here
[20:37] <apachelogger> our label has a property visible
[20:37] <apachelogger> and this property has a "setter" and a "getter"
[20:38] <apachelogger> depending on the context we can therefore use .visible as getter or setter
[20:38] <apachelogger> very handy :D
[20:38] <apachelogger> (as we will have some QML sessions later this week ... this is also how QML elements work for the better part ;))
[20:38] <apachelogger> now, moving on...
[20:39] <apachelogger> we were writing our onClick function, in particular the logic for when the troll is already visible
[20:39] <apachelogger>             // Make it invisible
[20:39] <apachelogger>             troll.visible = false;
[20:39] <apachelogger>             // And remove it from the layout, so that it does not take up space
[20:39] <apachelogger>             layout.removeItem(troll);
[20:39] <apachelogger> I think changing the visibility should be clear now ... but that removing there is a bit confusing
[20:39] <apachelogger> apachelogger apparently did not prepare very well :P
[20:40] <apachelogger> so let me show you the rest of the onClick function and explain this afterwards
[20:40] <apachelogger>         } else { // If it is not visible -> show it
[20:40] <apachelogger>             // Once our button gets clicked we want to show an image.
[20:40] <apachelogger>             troll.visible = true;
[20:40] <apachelogger>             // Finally we add the new image to our layout, so it gets properly aligned
[20:40] <apachelogger>             layout.addItem(troll);
[20:40] <apachelogger>         }
[20:40] <apachelogger>     }
[20:40] <apachelogger> so, depending on the state of visibility we simply do inverted actions
[20:40] <apachelogger> possibly you noticed earlier on that we did not add the troll to our layout
[20:41] <apachelogger> this was very intentional
[20:41] <apachelogger> as soon as you add something to your layout it will usually consume space
[20:41] <apachelogger> visible or not
[20:41] <apachelogger> so, whenever our troll is not visible it also must not be part of the layout
[20:41] <apachelogger> hence the logic in our onClick
[20:42] <apachelogger> if visible -> make invisible and remove from layout || if invisible -> make visible and add to layout
[20:42] <apachelogger> the daring programmer will now try this and complain that it is not working
[20:42] <apachelogger> oh my
[20:43] <apachelogger> we did not yet define that onClick should do something upon button click
[20:43] <apachelogger>     // Now we just tell our button that once it was clicked it shall run our function
[20:43] <apachelogger>     button.clicked.connect(onClick);
[20:44] <apachelogger> well then
[20:44] <apachelogger> for me it works \o/
[20:44] <apachelogger> very useful plasmoid we created there :D
[20:45] <apachelogger> you can find a version of this I created earlier here : http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/
[20:46] <apachelogger> it also contains additional magic that should trigger a notification on click and display your location as detected by gps or ip lookup ;)
[20:46] <apachelogger> now that we have a wonderful plasmoid we will need to package it properly
[20:47] <apachelogger> as mentioned earlier, plasmoids are distributed in super cool special packages
[20:47] <apachelogger> although....
[20:47] <apachelogger> actually they are just zip files with .plasmoid as suffix
[20:47] <apachelogger> so let us create such a nice package from our plasmoid
[20:48] <apachelogger> if you are still in the same terminal we started off with, the following should do the job:
[20:48] <apachelogger> cd $NAME &&
[20:48] <apachelogger> zip -r ../$NAME.plasmoid . &&
[20:48] <apachelogger> cd ..
[20:48] <apachelogger> $NAME is simply the name of our plasmoid, so you can easily enough create the zip manually too :)
[20:49] <apachelogger> please note that plasma does expect the contents and metadata to be in the top level of the zip though, so you must not package the plasmoid directory (in our case dont-blink) but only the files
[20:49] <apachelogger> that is really what that fancy zip command there does
[20:50] <apachelogger> Once you have your plasmoid you can install it using the regular graphical ways on your plasma version or by using the command line tool plasmapkg.
[20:50] <apachelogger> plasmapkg -i $NAME.plasmoid
[20:50] <apachelogger> now the plasmoid should show up in your widget listing.
[20:50] <apachelogger> consequently you should be able to use plasmoidviewer $NAME and plasma-windowed $NAME to view the plasmoid without plasma
[20:51] <apachelogger> QUESTION: are there any particular style guidelines for writing code for plasmoids beyond what you have shown us? for those that are used to MVC kinda stuff
[20:51] <apachelogger> not really
[20:51] <apachelogger> if you are using C++ you can do just about anything ... in the future the javascript plasmoids will use QML quite a bit (you will hear about QML later this week)
[20:52] <ClassBot> There are 10 minutes remaining in the current session.
[20:52] <apachelogger> QML usually wants people to use Qt's Model/View system (which is pretty close to MVC)
[20:52] <apachelogger> especially if you are working with lists of course :)
[20:52] <apachelogger> QUESTION: will it be possible to compress the package with bzip2 in the future?
[20:53] <apachelogger> not planned in particular, but if you ask in #plasma I am sure someone could tell you whether that would be desirable
[20:53] <apachelogger> as plasmoids are usually atomic it does not make much of a difference though
[20:53] <apachelogger> any other questions?
[20:54] <apachelogger> if not, then let me give you some additional resources where you can find handy super nice things :)
[20:54] <apachelogger> Where the Plasma community collects its information: http://community.kde.org/Plasma
[20:54] <apachelogger> General tutorials on JavaScript Plasmoids: http://techbase.kde.org/Development/Tutorials/Plasma#Plasma_Programming_with_JavaScript
[20:54] <apachelogger> Plasma and KDE development examples: http://quickgit.kde.org/?p=kdeexamples.git&a=summary
[20:54] <apachelogger> Some general guidelines for Plasmoid programming: http://community.kde.org/Plasma/PlasmoidGuidelines
[20:55] <apachelogger> Information on Plasma packages: http://community.kde.org/Plasma/Package
[20:55] <apachelogger> AND
[20:55] <apachelogger> last, but not least
[20:55] <apachelogger> *super important*
[20:55] <apachelogger> the JavaScript API: http://techbase.kde.org/Development/Tutorials/Plasma/JavaScript/API
[20:55] <apachelogger> if you compare this API to what you can do in C++ you will notice that the JavaScript API is really sufficient for most things :)
[20:56] <apachelogger> On IRC you can get help in #plasma most of the time
[20:56] <apachelogger> Good luck with creating your brilliant Plasmoids :)
[20:56] <apachelogger> you can find me in just about every KDE and Kubuntu IRC channel after the sessions if you have any additional questions
[20:57] <ClassBot> There are 5 minutes remaining in the current session.
[20:58] <apachelogger> if you are interested in KDE software development I'd like to direct your attention to the KDE development session tomorrow, the various QML sessions and my talk on multimedia in Qt and KDE on friday :)
[20:58] <apachelogger> thanks everyone for joining and have a nice day
[21:01] <barry> welcome to "rock solid python development with unittest/doctest".  today i'm going to give a brief introduction to unit- and doc- testing your python applications, and hooking these into the debian packaging infrastructure.
[21:01] <barry> raise your hand if you're already unashamedly obsessed with testing :)
[21:02] <barry> since it would take way more than one hour, i'm not going to give a deep background on testing, or the python testing culture, or python testing tools.  there are a ton of references out there.  two resources i'll give right up front are the testing-in-python mailing list <http://tinyurl.com/2bl2gk> and the python testing tools taxonomy <http://tinyurl.com/msya4>.  python has a *very* rich testing culture, and i encourage you to explore
[21:02] <barry> it!
[21:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session.
[21:03] <barry> there's a lot you can do right out of the box, and that's where we'll start.  michael foord is hopefully here today too; he's the author of unittest2, a standalone version of all the wizzy new unittest stuff in python2.7
[21:03] <barry> for now, we'll keep things pretty simple, and the examples should run in python2.6 or python2.7 with nothing extra needed.
[21:03] <barry> for those of you with bazaar, the example branch can be downloaded with this command: bzr branch lp:~barry/+junk/adw
[21:04] <barry> if you can't check out the branch, you can view it online here:
[21:04] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/files
[21:04] <barry> i'll pause for a few moments so that you can grab the branch or open up your browser
[21:05] <barry> here's a quick overview of what we'll be looking at: a quick intro to unittesting, a quick intro to doctesting, hooking them together in a setup.py, hooking them into your debian packaging.
[21:06] <barry> let's first look at a simple unittest.  if you've downloaded the branch referenced above, you should open the file adw/tests/test_adding.py in your editor.
[21:06] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/tests/test_adding.py
[21:06] <barry> the adw package is really stupid.  it has one function which adds two integers together and returns the results.  i won't talk much about test driven development (tdd) here, but i highly encourage you to read up on that and to practice tdd diligently!  these tests were developed using tdd.
[21:07] <barry> anyway, looking at test_adding.py, you can see one test class, called TestAdding.  there is some other boilerplate stuff in test_adding.py that you can mostly ignore.  the TestAdding class has one method, test_add_two_numbers().  this method is a unittest.  you'll notice that it calls the add_two_numbers() function (called the "system under test" or sut), and asserts that the return value is equal to 20.  pretty simple.
[21:07] <barry> look below that at the test_suite() function.  this is mostly boilerplate used to get your unittest to run.  the function creates a TestSuite object and adds the TestAdding class to it.  python's unittest infrastructure will automatically run all test_*() methods in the test classes in your suite.
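For readers without the branch, here is a self-contained sketch of what such a test file can look like. The function body is a stand-in; only the overall shape (test class, test_*() method, test_suite() boilerplate) mirrors the description of test_adding.py:

```python
import unittest

# Stand-in for the adw package's "system under test"
def add_two_numbers(a, b):
    return a + b

class TestAdding(unittest.TestCase):
    def test_add_two_numbers(self):
        # Call the sut and assert on its return value
        self.assertEqual(add_two_numbers(13, 7), 20)

def test_suite():
    # Boilerplate: collect the test class into a suite so the unittest
    # infrastructure can discover and run all test_*() methods
    suite = unittest.TestSuite()
    suite.addTest(unittest.TestLoader().loadTestsFromTestCase(TestAdding))
    return suite

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(test_suite())
```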
[21:08] <barry> let's run the tests.  type this at your shell prompt:
[21:08] <barry> $ python setup.py test
[21:08] <barry> (without the $ of course)
[21:08] <barry> the first time you do this, your package will get built, then you'll see a little bit of verbose output you can ignore, and finally you'll see that two tests were run.  ignore the README.txt doctest for the moment.
[21:09] <barry> if you want to see what a failing test looks like, uncomment the test_add_two_numbers_FAIL() method and run `python setup.py test` again.  be sure to comment that back out afterward though! :)
[21:09] <barry> everybody with me so far?  i'll pause for a few minutes to see if there are any questions up to now
[21:10] <barry> !q
[21:11] <barry> so, obviously the more complicated your library is, the more test methods, test classes, and test_*.py files you'll have.  i probably won't have time to talk about test coverage much, but there are excellent tools for reporting on how much of your code is covered by tests.  you obviously want to aim for 100% coverage.
[21:11] <barry> okay, let's switch gears and look at a doctest now.  go ahead and open adw/docs/README.txt
[21:11] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/docs/README.txt
[21:12] <barry> doctests are *testable documentation*.  the emphasis is on the documentation aspects of these files, and in fact there are excellent resources for turning doctests into actual documentation, e.g. http://packages.python.org/flufl.i18n/
[21:12] <barry> doctests are written using restructured text, which is a very nice plain text human readable format.  the key thing for testing is to notice the last three lines of the file.  see the two lines that start with >>>
[21:12] <barry> (aside: sphinx is the tool to turn rest documentation into html, pdf, etc.)
[21:13] <barry> and it's very well integrated with setup.py and the python package infrastructure
[21:13] <barry> that's a python interpreter prompt, and doctests work by executing all the code in your file that start with the prompt.  any code that returns or prints some output is compared with text that follows the prompt.  if the text is equivalent, then the test passes, otherwise it fails.
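You can watch doctest do exactly this with a few lines of Python. The README text below is invented; the parsing and running machinery is doctest's real API:

```python
import doctest

def add_two_numbers(a, b):
    return a + b

# Documentation with an embedded example, shaped like docs/README.txt
README = """\
The add_two_numbers() function adds two integers:

    >>> add_two_numbers(5, 7)
    12
"""

# doctest extracts every >>> example, executes it, and compares what the
# code returns/prints against the text written under the prompt
parser = doctest.DocTestParser()
test = parser.get_doctest(README, {'add_two_numbers': add_two_numbers},
                          'README.txt', 'README.txt', 0)
runner = doctest.DocTestRunner()
runner.run(test)
print('failures:', runner.failures, 'tries:', runner.tries)
# prints: failures: 0 tries: 1
```

Changing the expected `12` to `13` would make `failures` go to 1, which is exactly the experiment suggested below.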
[21:14] <barry> oh, i should mention.  "doctests" can mean one of two things.  you can actually have testable sections in your docstrings, or you can have separate file doctests.  by personal preference, i always use the latter
[21:16] <barry> btw, the use of doctests is somewhat controversial in the python world.  i personally love them, others hate them, but i think everyone agrees they do not replace unittests; they are an excellent complement though.  anyway, if we have time at the end we can debate that :)
[21:16] <barry> !q
[21:17] <barry> in this example, add_two_numbers() is called with two integers, and it returns the sum.  if you were to type the very same code at the python interpreter, you'd get 12 returned too.  doctest knows this and compares the output
[21:17] <barry> run `python setup.py test` again and look carefully at the output.  you'll see that the README.txt doctest was run and passed.  if you change that 12 to a 13, you'll see what a failure looks like (be sure to change it back afterward!)
[21:17] <barry> i'll pause for a few minutes to let folks catch up
[21:18] <ClassBot> RawChid asked: So doctest is one way to do unittesting in Python?
[21:19] <barry> RawChid: i'd say one way to do *testing*, which i'm personally a big fan of.  i love writing documentation first because it ensures that i can explain what i'm doing.  if i can't explain it, i probably don't understand it.  but for really thorough testing, you must add unittests.  e.g. you typically do not want to do corner case and error cases in doctests.
[21:20] <barry> although the heretic in me says you should try :)
[21:20] <barry> unfortunately, python's unittest framework does not run doctests automatically.  if you look in the adw/tests directory, you'll see a file called test_documentation.py.  you don't need to study this much right now, and you are free to copy this into your own projects.  it's just a little boilerplate to hook up doctests with the rest of your test suite.  it looks for files inside docs/ directories that end in .txt or .rst (the reST
[21:20] <barry> standard extension) and adds them to the test suite.  once you have test_documentation.py, you never need to touch it.  just add more .txt and .rst files to your docs directories, and it will work automatically.
[21:21] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/tests/test_documentation.py
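The core of such a test_documentation.py is doctest's DocFileSuite, which wraps each doctest file as a unittest test case. A self-contained sketch, using a throwaway file in place of docs/README.txt (a real test_documentation.py would glob docs/ for *.txt and *.rst instead):

```python
import doctest
import os
import tempfile
import unittest

# Create a throwaway doctest file standing in for docs/README.txt
docdir = tempfile.mkdtemp()
readme = os.path.join(docdir, 'README.txt')
with open(readme, 'w') as f:
    f.write("A trivial example:\n\n    >>> 1 + 1\n    2\n")

def additional_tests():
    # DocFileSuite turns each doctest file into a unittest-compatible
    # suite; module_relative=False because we pass an absolute path
    return doctest.DocFileSuite(readme, module_relative=False)

result = unittest.TestResult()
additional_tests().run(result)
print(result.testsRun, 'run,', len(result.failures), 'failed')
# prints: 1 run, 0 failed
```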
[21:21] <ClassBot> tronda asked: Is doctest somewhat similar to the BDD movement?
[21:21] <barry> tronda: probably related.  i don't know too much of the details but i think there are better tools for doing bdd in python
[21:21] <barry> voidspace might know more about that
[21:22] <barry> everybody with me so far?
[21:22] <barry> time to switch gears a little.  how do you hook up your unittests and doctests to your setup.py file so that you also can run `python setup.py test`?  open up setup.py in your editor and we'll take a look
[21:23] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/setup.py
[21:23] <barry> notice first that setup.py imports distribute_setup and then makes a function call.  you'll see the file distribute_setup.py in the branch's top level directory.  this means my package uses *distribute* which is the successor to setuptools.  i highly recommend it, but if you don't know what that is, you can just cargo cult this bit.
[21:23] <barry> anyway, the setup.py is fairly simple.  you'll just notice one thing in the second to last line.  it sets `test_suite` to `adw.tests`.  the value is a python package path and it references the directory adw/tests.  this is how you hook up your test suite to setup.py.  when you run `python setup.py test` it looks at this test_suite key, and runs all the files that look like test_*.py
[21:24] <barry> voidspace mentions in #u-c-c that we're pretty sure this is a setuptools extension to the standard distutils setup.py.  so it'll probably work for either setuptools or distribute
[21:25] <barry> we have two of those of course!  test_adding.py and test_documentation.py, and the testing infrastructure automatically finds these, and makes the appropriate calls to find what tests to run.  so that little test_suite='adw.tests' line is all you need to hook your tests into setup.py
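Put together, the relevant shape of such a setup.py is short. This is a sketch, with the version and metadata invented; the distribute bootstrap lines assume the distribute_setup.py file mentioned above sits next to setup.py:

```python
from distribute_setup import use_setuptools  # bootstraps distribute if absent
use_setuptools()

from setuptools import setup, find_packages

setup(
    name='adw',
    version='1.0',
    packages=find_packages(),
    # The one line that hooks `python setup.py test` up to adw/tests/test_*.py
    test_suite='adw.tests',
)
```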
[21:25] <barry> so far so good.  now let's look at how to hook your python test suite into your debian packaging so that your tests always run when your package is built.  if you're not into debian packaging you can ignore the next couple of minutes.
[21:26] <barry> open up debian/rules in your editor.
[21:26] <barry> http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/debian/rules
[21:26] <barry> first thing to notice is that my package uses dh_python2, which is the new goodness replacing python-central and python-support.  i highly recommend it as it can make your debian packaging of python code really really easy.  you can see there's not much to my rules file.
[21:27] <barry> i won't go into dh_python2 much right now, but you can look at the debian wiki for more details http://wiki.debian.org/Python
[21:27] <barry> for today's class, there are really three parts to take a look at.  the first is the PYTHON2 line.  what this does is ensure that your tests will be run against all supported versions of python (2.x) on your system, not just python2.6 or python2.7.  the commented out line for PYTHON3 will do something similar for python3
[21:27] <barry> (e.g. line 3)
[21:27] <barry> remember that ubuntu 11.04 supports both python2.6 and 2.7
[21:27] <barry> aside: it is possible to write your package for both python2 and python3, and to run all the tests into both.  we won't have time to talk about that today though.
[21:27] <barry> so the next thing to look at is the line that starts `test-python%`.  this is a wildcard rule that is used to run the setup.py test suite with every supported version of python2 on your system.  you'll notice the -vv which just increases the verbosity.
[21:28] <barry> (e.g. line 9)
[21:28] <barry> override_dh_auto_test line then expands the PYTHON2 variable to include all the supported versions of python2, and it runs the test-python% wildcard rule for each of these.  thus this hooks in the setup.py tests for all versions of python2.  the override is currently needed because dh_auto_test doesn't know about `python setup.py test` yet.
[21:28] <barry> i won't go into the specifics of package building right now, but i've done a build locally, and the results are available here: http://pastebin.ubuntu.com/592711/
[21:28] <ClassBot> eolo999 asked: I'm very comfortable with nosetests; is there a particular reason why you left it out from the session?
[21:28] <barry> scroll down to line 251 and you'll see the test suite getting run for python2.7.  scroll down to line 268 and you'll see it getting run for python2.6.  the nice thing about this is that if you get a failure in either test suite, your package build will also fail.  this is a great way to ensure really high quality (i.e. rock solid :) python applications in ubuntu.
[21:29] <ClassBot> jderose asked: I got the impression that setuptools wasn't well maintained lately, wasn't regarded as the way forward, esp with Python3 - is that true, WWBWD? :)
[21:30] <barry> eolo999: mostly just to keep things simple.  voidspace in #u-c-c says that the main advantage of nosetests is the test runner, so you can basically use all the techniques here with nose
[21:30] <barry> i'm pretty sure that the future plans voidspace has for unittest2 include integrating nose more as a plugin than as a separate tool
[21:31] <barry> btw, that's about all the canned stuff i have prepared, so i welcome questions from here on out
[21:31] <barry> just ask them in #ubuntu-classroom-chat and we'll post the answers here
[21:32] <barry> jderose: i'd say that's correct, though setuptools does get occasional new releases.  distribute is the maintained successor to setuptools, but for python3 distutils2 will be the way forward
[21:32] <barry> i admit that it's all very confusing :)
[21:33] <barry> but my recommendation would be: use distribute for python2 stuff, and for python3 stuff if you want the same code base to be compatible with 2.x and 3.x (i.e. via 2to3).  this is a great way to support both versions of python
[21:34] <barry> oh yes, distutils2 will be called 'packaging' in python 3.3 and it will come in the stdlib
[21:35] <barry> from #u-c-c:
[21:35] <barry> "barry is correct, I have plans for unittest to become more extensible (plugins) that should allow nose to become much simpler and be implemented as plugins for unittest"
[21:35] <barry> "at the moment nose is convoluted and painful to maintain because unittest itself is not easy to extend"
[21:35] <barry> also lvh mentions trial, which is twisted's test runner.  for mailman3 i use zc.testing which is zope's test runner
[21:36] <barry> so yeah, there are lots of testing tools out there :)
[21:36] <barry> !q
[21:37] <barry> QUESTION: so is it okay/encouraged to run your python tests in PPA builds, say for daily recipes and whatnot?
[21:38] <barry> jderose: i don't recall a discussion about it one way or the other.  personally, i would enable tests for all package builds, just to ensure that what you deploy has not regressed.
[21:38] <barry> however, you do need to be careful that your test suite can *run* in a ppa environment
[21:39] <barry> this may not always be the case.  some test suites require resources that are not available on the buildds.  those tests would obviously cause problems when your ppa packages are built
[21:40] <barry> in those cases, it may be best to have more than one "layer" of tests.  one that gives good coverage and high confidence against regressions, but requires no expensive or external resources.  and a full test suite you can run locally with whatever crazy stuff you need
[21:40] <barry> mocks might be a good answer to help with that
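A quick sketch of the mocking idea, using unittest.mock (the standard-library descendant of the standalone `mock` library of the time); the function and URL here are made up:

```python
from unittest import mock

# Hypothetical code under test that depends on an external resource
def fetch_greeting(http_get):
    return http_get('http://example.com/greeting').upper()

# Stand in a Mock for the expensive dependency: no network required,
# so this test can run on a buildd with no external resources
fake_get = mock.Mock(return_value='hello')
assert fetch_greeting(fake_get) == 'HELLO'

# The mock also records how it was called, so we can assert on that too
fake_get.assert_called_once_with('http://example.com/greeting')
print('mocked test passed')
```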
[21:40] <barry> qwebirc57920: ppa == personal package archive
[21:41] <barry> https://help.launchpad.net/Packaging/PPA
[21:41] <barry> QUESTION: How do I know what resources are available on the buildds?
[21:41] <barry> yeah, good question :)  ask on #launchpad or launchpad-dev, or just try it and see what fails ;)  there should be better documentation about that on help.l.net
[21:42] <barry> QUESTION: so if `test` requires a lot more dependencies than `install`, should we just put those all in Build-Depends?  when will we get Test-Depends?  :)
[21:42] <barry> jderose: excellent question.  for now, i recommend build-depends
[21:43] <barry> Question: In the Java space there's a lot of mocking tools/libraries. Any need for that in Python - if so - which are the recommended ones?
[21:43] <barry> voidspace can tell you how many mock libraries are available in python!  answer is *lots*
[21:44] <barry> btw, please note that there are tools (such as pkgme and stdeb) that can debianize your setup.py based python project.  they do a pretty good job, though i'm not sure they turn test-requires into build-depends.
[21:45] <barry> QUESTION: you mentioned "layering" tests into light/heavy - what's a good way of doing that?
[21:47] <barry> jderose: i think this depends on the test runner you use.  python's stdlib for example uses the -u flag to specify additional resources to enable (e.g. largefile).  most test runners have some way of specifying a subset of all tests to run, and what i would do is, in your debian/rules file, pass the right arguments to your test runner to run the tests you can or want to run
[21:47] <barry> note that in my debian/rules file, i set it up to run 'python setup.py test -vv' but really, it can run any command with any set of options
[21:47] <barry> QUESTION: do different doctests share a common environment / namespace? Can I make them explicitly separate / explicitly shared?
[21:48] <barry> chadadavis: all the doctests in a single file or docstring share the same namespace.  one of the criticisms of doctests is that it builds up state as it goes so it can sometimes be difficult if a test later in the file fails, to determine what earlier state caused the failure.
[21:48] <barry> i think that just means you have to be careful, and also, keep your doctests focussed
[21:49] <barry> not too big
[21:49] <barry> you really just have to understand when and where each tool (unittest or doctest) is appropriate
[21:50] <barry> voidspace also points out that every line in a doctest gets executed, even if there are failures (though i *think* there's a flag to cause it to bail on the first failure)
[21:50] <barry> i'll just say that that can be an advantage or disadvantage depending on what you like and what you're trying to do :)
[21:51] <barry> looks like we have a few minutes left.  are there any other questions?
[21:52] <ClassBot> There are 10 minutes remaining in the current session.
 QUESTION - what's the status of 3to2?  writing Python3 is so wonderful, i'd rather go that way than 2to3
[21:52] <barry> i'll just say again what an excellent resource the testing-in-python mailing list is.  i highly recommend you join!
[21:53] <barry> voidspace answers this as well as i could:
[21:53] <barry> "jderose: packaging (distutils2) is now using 3to2 rather than 2to3"
[21:53] <barry> "jderose: so although I've not used it myself, it must be in a pretty good state"
[21:53] <barry> i've also not used 3to2 myself
[21:53] <barry> much
[21:54] <barry> fwiw, if you look at my test_documentation.py file, you'll see how you can do setups and teardowns for doctests
[21:54] <barry> it also does fun stuff like set __future__ flags for the doctest namespace
[21:56] <barry> voidspace says in #u-c-c that sphinx has support for doctests through its doctest:: directive
[21:57] <ClassBot> There are 5 minutes remaining in the current session.
[21:57] <barry> well, time is almost up, so let me thank you all for attending!  i know there was a lot of material and i blew through it pretty fast
[21:57] <barry> in closing, i'll say that while we can all debate this or that detail of testing, there's no debate that testing is awesome and we all should do more of it!
[21:58] <barry> big thanks to my colleague voidspace for helping out!
[22:02] <ClassBot> Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html