=== _LibertyZero is now known as LibertyZero === croppa is now known as croppa_ === croppa_ is now known as croppa === yofel_ is now known as yfoel === yfoel is now known as yofel [13:04] hello peeple...! [13:16] anyone use Twitter ?? === SquishyNotHere is now known as squishy === bregma_ is now known as bregma [14:35] when this stuff starts? [14:38] 3.5 hours to go. [14:38] err 2.5 [14:44] der r still 4 hrs to start [14:47] can anyone tell me wht tools will b reuired during dis session === hggdh_ is now known as hggdh [15:35] hi, this is the place where class will go on right ? [15:36] yep, in approx 20 minutes the first one starts [15:36] how long to go yet ? [15:36] crazedpsyc1: cool thanks :) === CodeBlock_ is now known as CodeBlock [15:50] hello world [15:50] howdy [15:50] hey everybody [15:58] hello guys, what is going on? [15:59] has the class started yet? [15:59] Upirate: Nope, still one hour left [16:06] thanks, c2tarun: === anonymous is now known as Guest1225 [16:14] class started ? [16:14] still waiting 4 it guest [16:14] shadowking: ok :) [16:14] its gonna start in abt 45 min [16:15] 45 min ? or 4 to 5 min ? [16:15] shadowking: [16:15] "45" mins [16:16] shadowking: ok :) [16:17] you can all follow "ubuntuclassroom" on identi.ca for the announcements :) [16:17] n twitter too [16:19] i guess m gonaa go n take a nap till it starts [16:19] :P === davidcalle_ is now known as davidcalle [16:37] just half n hr more === rmrf is now known as rmrf|NA === rmrf|NA is now known as rmrf === serial is now known as Guest5786 [16:44] I thought it was starting 44 min ago [16:44] *no daylight saving time [16:44] that means it will start in 15 minutes [16:45] 15 mins to go people! [16:45] cool [16:45] i must have calculated the absense of DST the wrong way... [16:46] I did it, too [16:48] will the actual class be mirrored on twitter and identica, or just announcements [16:49] Just the announcements [16:49] ok [16:49] hey everyone [16:49] hi [16:50] hey there [16:50] hello [16:50] I'm sure there'll be logs later [16:50] are you all waiting for a class?? [16:50] yep [16:50] 10 minutes to go [16:50] crazedpsyc: the session logs will be at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html [16:50] yes [16:50] so the class is gonna be here?!! [16:50] duh [16:51] #ubuntu-classroom-chat [16:51] lol [16:51] Bannaz: eveyone thinks so ;) [16:51] alright [16:51] sorry ;) [16:51] hmm [16:51] then we'll never know [16:51] ok, guys, please take chatter to #ubuntu-classroom-chat [16:54] its a web chat ...how am I supposed to take a class here ? [16:54] the speaker of the session talks here [16:54] Bannaz: you follow in #ubuntu-classroom, and you may ask questions in here [16:54] you ask your questions in ubuntu-classroom-chat [16:55] hi [16:55] hi [16:55] hi [16:55] how can I search for chanel that in their names have freelance? [16:57] Hi! I'm Chase Douglas, and I've been working on bringing multitouch input to X.org, moving the uTouch st [16:57] ugh [16:57] oh google calendar [16:57] always in the way with your pop ups [16:59] haha [17:00] I am here from twitter after hearing about uTouch :) [17:01] WELCOME EVERYBODY TO UBUNTU APP DEVELOPER WEEK! 
[17:01] I'll just do a very quick intro and then quickly pass on the mic [17:01] all the info you need should be on https://wiki.ubuntu.com/UbuntuAppDeveloperWeek - which session is next, what it is about, etc [17:01] we will also put logs of the sessions that happened on there, so if you couldn't make it, you can still read up on the session afterwards [17:01] if you want to chat about what's going on or ask questions, please head to #ubuntu-classroom-chat [17:01] if you have questions, please prefix them with QUESTION: === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu App Developer Week - Current Session: Enabling Multitouch and Gestures Using uTouch - Instructors: cnd, bregma [17:02] for example: QUESTION: What is a great device to try out multitouch with? [17:02] etc [17:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session. [17:02] without further ado, I'll clear the stage for Chase Douglas and Stephen Webb [17:02] they'll talk about Enabling Multitouch and Gestures Using uTouch [17:02] Hi! I'm Chase Douglas, and I've been working on bringing multitouch input to X.org, making the uTouch stack use the new X MT work, and making improvements throughout the stack from the kernel to the toolkits [17:03] I'm Stephen Webb, I designed the geis API used in utouch [17:03] we're dividing the session into two parts [17:03] first, I'll discuss gestures and utouch overall [17:04] then Chase will chime in with multi-touch [17:04] first off, I'd like to clarify what we mean when we say "gestures" [17:04] there are fancy complex gestures like spirals and zorros made with the mouse [17:04] we refer to those as 'stroke gestures' [17:05] Those are not the gestures we are handling in uTouch. [17:05] They lack the discoverability and generality for general-purpose gestures. [17:05] The uTouch stack currently recognizes what we call 'gesture primitives'. [17:05] These are "drag", "pinch/expand", "rotate", "tap", and "touch". [17:05] The first three correspond to the linear transformations of "translate", "scale", "rotate". [17:05] The last is similar to mouse-down and mouse-up events, and a "tap" is like a single mouse click. [17:06] It is possible to build stroke gestures from these primitives if that's what you want. [17:06] we have also been discussing the use of a 'gesture language' to do just that [17:06] The current focus in uTouch is on multi-touch gestures. [17:06] A gesture appears to an application as a stream of events. [17:06] Each event is a snapshot in time of the current status of the gesture, [17:07] including properties such as velocity, change in radius, and finger positions if available [17:07] There is a lot of hardware with a wide range of capabilities, including the number of touches supported. [17:07] Not all multi-touch devices provide all the information required for all gestures. [17:07] For example, some common notebook touchpads provide only the bounding box of the multiple touches, which is inadequate to determine rotation angles.
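For reference, a minimal Python sketch of the idea above, purely illustrative: it does not use any real uTouch or geis API, and every name in it is hypothetical. It only shows how an application might fold the per-event deltas of the drag, pinch/expand and rotate primitives into a translate/scale/rotate transform state.
--------------------- 8< -----------------
# Hypothetical 'event' objects stand in for the gesture-event snapshots
# described above (deltas for position, radius and angle); no real
# uTouch/geis names are used here.
class TransformState:
    def __init__(self):
        self.x, self.y = 0.0, 0.0   # accumulated translation (drag)
        self.scale = 1.0            # accumulated scaling (pinch/expand)
        self.angle = 0.0            # accumulated rotation in degrees

    def apply_gesture_event(self, event):
        if event.kind == "drag":
            self.x += event.delta_x
            self.y += event.delta_y
        elif event.kind == "pinch":
            self.scale *= event.radius_ratio
        elif event.kind == "rotate":
            self.angle += event.delta_angle
--------------------- 8< -----------------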
[17:08] So, the uTouch stack consists of three basic layers [17:08] (1) the input layer (evdev in the kernel, through the /dev/input interface, and the mtdev userspace library to homogenize input) [17:08] (2) the gesture recognition engine, utouch-grail [17:08] (3) the application programming interface, utouch-geis [17:09] it's a little more complex than that in implementation, but that's the basic structure. [17:09] a very rough diagram: [17:10] In maverick and natty, grail runs in the X server so it has easy access to [17:10] window geometry. [17:10] the latest versions of grail are using XInput 2.1 to get full multi-touch support [17:11] the geis API currently connects to grail over a private X connection [17:11] applications and libraries using geis do not need to know this [17:12] in oneiric, grail will probably move into the compiz process as the X server gets replaced by "something else" [17:12] included as part of the uTouch stack are a set of diagnostic tools [17:13] gesturetest (which talks to the X server directly) [17:13] grail-gesture (which runs grail directly) [17:13] geistest (built on the API) [17:13] and others are already available, like xinput and lsinput, for examining hardware traits [17:14] the single programmatic access to uTouch is through the GEIS API [17:14] API docs are available online at or in the libutouch-geis-doc package [17:15] the simplified interface was developed first and is sufficient for very basic gesture operations [17:15] the advanced interface was developed in response to initial feedback from developers and [17:15] gives finer control over the types of gestures reported and how the data are reported [17:15] the simplified interface requires a connection to grail (an "instance") for each window, and a list of gestures of interest [17:16] all feedback is through callbacks [17:16] an example of using the simplified interface is here: [17:17] the advanced interface requires only a single connection to grail and a set of subscriptions [17:17] each filtering on window, gestures, and input device attributes [17:18] feedback is through event delivery or, optionally, callbacks [17:18] example code using the advanced interface can be found at [17:18] the advanced API is required for handling upcoming work on "gesture languages" [17:19] I believe these examples should be fairly self-explanatory, I'm going to gloss over them because we have a lot of ground to cover [17:21] there are some easier ways to take advantage of uTouch without programming to geis [17:21] first, there is libgrip, an add-on for GTK-based applications === JasonO_ is now known as JasonO [17:22] it features a singleton GestureManager that a widget registers with and receives callbacks from [17:22] examples of libgrip use include the eog and evince packages found in the utouch PPA at https://launchpad.net/~utouch-team/+archive/utouch [17:22] future work also includes Qt and native python bindings [17:23] Qt has agreed to integrate utouch-geis into their gesture infrastructure [17:23] Python bindings for utouch-geis will be available in the utouch PPA soon and should be available with oneiric [17:24] we also have ginn, which can be used to retrofit utouch gestures into applications that were not programmed to accept them [17:24] ginn uses utouch-geis to subscribe to gestures and converts them into keystrokes or mouse movements and reinjects them into the application's input stream [17:24] it uses an XML file, /etc/ginn/wishes.xml, to define the set of conversion rules for each application
[17:24] ginn ships with natty today [17:25] pecisk asked: what do you mean with uTouch going included in compiz? Shouldn't it left seperated? [17:26] utouch-grail, the recognition engine, needs to know about window geometry and ordering [17:26] the easiest way to do that is to stick it some place that already has the information, like the X server [17:27] except the X server isn't really where it belongs [17:27] so, compiz for those desktops that use compiz (like Unity), and we'll try to come up with alternate solutions for others === qwebirc67471 is now known as Merhoc [17:28] tomeu asked: what are the native python bindings for? isn't enough to access that functionality through pyqt, pygobject, etc? [17:28] the utouch-geis library is not gobject-based [17:28] it's as lightweight as possible so it can be included anywhere, like games === rmrf is now known as rmrf|NA [17:29] there are already pygobject bindings for libgrip === beni is now known as Guest10897 [17:30] ok, so we're out of questions on the gestures for now [17:30] so I'm going to start talking about raw multitouch events [17:31] over this past cycle for natty, we've been working hard to bring real multitouch input through the X server [17:31] in 11.04 we'll be the first linux distro with multitouch support! [17:31] but it's just the ground work for now [17:31] and it's not quite finalized yet [17:32] so it's considered a prototype, or pre-release for now [17:32] but we do have some support for developers who want to write applications to take advantage of the new functionality [17:32] there are two layers you can develop at [17:33] you can develop at the XInput level, which we don't recommend for now but does provide some extra functionality [17:33] or you can develop using the Qt touch framework [17:33] first, I'll go over the XInput work just to give some background [17:33] XInput is an extension to the X server [17:33] X is almost 30 years old now [17:34] and no one was trying to integrate multitouch with X way back when it was created :) [17:34] over time, the X Input extension has grown to allow for various input related functionality [17:34] at first it allowed for multiple mice and keyboards to be used at the same time [17:35] this allows you to control the cursor with your trackpad and your usb mouse without having to toggle one or the other [17:35] then, support was added for grouping keyboards and mice, and for creating more than one cursor on the screen [17:35] you can try this out with the xinput command line utility, it's kinda fun to have multiple cursors :) [17:36] now, we're extending XInput to version 2.1 to add multitouch support [17:36] here's the link to the current protocol document that's in development: http://cgit.freedesktop.org/xorg/proto/inputproto/tree/specs/XI2proto.txt?h=inputproto-2.1-devel [17:37] I don't recommend trying to understand it though :) [17:37] so I'll just skip over it for now and hit a few key points [17:37] first, there's a touch "lifetime" [17:37] every touch has an event stream associated with it [17:37] when the touch begins, a TouchBegin event is generated [17:38] when the touch changes in any way, i.e. 
it moved, or the pressure changed, a TouchUpdate event is sent [17:38] when the touch leaves the touch surface, a TouchEnd event is sent [17:39] the second major point about touch input is that there are two classes of devices that affect how touch events are handled [17:39] direct touch devices are basically touchscreens [17:39] where you touch on the surface is where the touch events are sent [17:39] so if you touch with one finger over the terminal, and you touch another finger over the web browser [17:39] then each application will receive the touch event for their respective touches [17:40] in contrast, there are dependent touch devices [17:40] these comprise trackpads and devices like the Apple Magic Mouse [17:40] when you touch the surface of these devices, the touches are sent to the window that is under the cursor on the screen [17:41] Lastly, there's a layer of mouse event emulation for direct touch devices [17:41] if your application subscribes to mouse events and not touch events, and someone touches your application using a touchscreen [17:41] a mouse event stream is generated [17:41] the primary mouse button is "pressed" when you touch the screen [17:42] and the cursor moves with your finger [17:42] and then the primary mouse button is "released" when the touch ends [17:42] this allows us to add touch capabilities to new applications while not breaking mouse usage for older applications [17:43] That's enough for now about X though [17:43] I want to move on to Qt [17:43] in 11.04, we will also have a pre-release addition to the Qt framework that will support multitouch [17:43] if you have a multitouch device and want to test it out, install qt4-demos [17:44] then try out the applications in /usr/lib/qt4/examples/touch/ [17:44] there's four examples: dials, fingerpaint, knobs, and pinchzoom [17:44] I like fingerpaint the most :) [17:45] note that with the Qt framework, a trackpad device does not emit touch events by default until two or more fingers are touching the surface [17:45] here's a link to the documentation for reference: http://doc.qt.nokia.com/latest/qtouchevent.html [17:46] If you want to take a look at an example source code file for handling multitouch data, see http://doc.qt.nokia.com/latest/touch-fingerpaint-scribblearea-cpp.html [17:46] this is the fingerpaint application source code for the canvas area [17:47] if you scroll down near the bottom you'll see the ScribbleArea::event() function [17:47] this is where the multitouch events are received [17:47] in Qt, touches sent to a widget are grouped together [17:47] so once you get a touch event, you can get a list of all the touch points [17:47] and then you can iterate over them to find which of the touchpoints have changed [17:48] this is what the fingerpaint application does [17:48] with that, I'll move on from qt to get to some more stuff :) [17:48] there's also a niche library called libavg [17:48] this library is often used for games [17:49] and there are a handful of multitouch games available for it [17:49] they are all written in python, so it's a very accessible library [17:49] I won't spend any more time on it today, but you can try one of the games in natty by installing empcommand [17:49] it's a multitouch version of missile command :) [17:49] there are more games to try out in the libavg ppa [17:50] hmm... 
seems they aren't there yet [17:50] we'll get them uploaded soon though :) [17:50] lastly, I wanted to mention a few advanced things you can do with the XInput 2.1 extension [17:50] the first is that you can do touch "grabs" [17:51] this allows one application to control the event stream of touch events before they reach the destination application [17:51] second, though this won't be available until ubuntu 11.10, an application can "observe" touches [17:52] this may allow for ripple effects in compiz when you touch the screen, for example [17:52] There are 10 minutes remaining in the current session. [17:52] lastly, you can receive "unowned" events, which allow applications to peek at events [17:52] for more details, see the XInput 2.1 spec [17:52] with that, I'll open it up for questions :) [17:52] rydberg asked: what can you do with multitouch that you cannot do with single touch? [17:53] too many things :) [17:53] obviously all the multitouch gestures are available only with multitouch [17:53] but there are other possibilities [17:54] one can envision something like a conference table where the table is a big multitouch screen [17:54] each participant in the conference may interact with the table [17:54] there are also possibilities with object manipulation [17:54] for 3d applications [17:55] we're focusing on enabling all these, so we're hoping others have good ideas too :) [17:55] pecisk asked: what kind of licensing uTouch have? [17:55] uTouch is licensed under GPL v3 [17:55] crazedpsyc asked: do you think multitouch will ever be added to compiz? If compiz did it I'm sure the animations (eg squishing a window) would be amazing [17:56] natty already brings multitouch gestures to unity [17:56] which is based on compiz as the window manager [17:56] for example, if you touch with three fingers over a window, you'll see the resize handles [17:56] let me find a picture [17:56] http://www.omgubuntu.co.uk/2011/03/unity-love-handles-resizing-in-ubuntu-just-got-sexy/ [17:57] There are 5 minutes remaining in the current session. [17:57] we plan on adding more functionality in further releases too [17:57] we do have plans for a compiz plugin to expose multi-touch and gesture data directly [17:58] compiz support will also allow for something microsoft has termed "no touch left behind" [17:58] http://venturebeat.com/2009/05/11/with-surfaces-new-features-microsoft-wants-to-leave-no-touch-behind/ [17:59] where there is feedback to provide the user with context about where a touch occurred [17:59] and what action it performed [17:59] btw, I realized I messed up on the licensing question :) [17:59] it's LGPL v3 [18:00] the parts that are in the X.org server, such as the XInput 2.1 multitouch extension are under the X licensing (MIT/X11/BSD) [18:00] I think we're out of time now [18:00] so I want to thank everyone for participating! [18:01] come find us in #ubuntu-touch! === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu App Developer Week - Current Session: GObject Introspection: The New Way For Developing GNOME Apps in Python, JavaScript and Others - Instructors: tomeu [18:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session. [18:03] Hi, I'm a GNOME developer working at Collabora. 
[18:03] During the past few years it has been evident that trying to keep the GNOME APIs available to languages other than C required more resources than were available. [18:03] If you were working on a Python app using GNOME stuff, you probably realized about that. [18:03] GObject Introspection's main goal is to radically lower the amount of effort required to do that. [18:04] Feel free to make questions at any point, I will address them as I see them fitting in the plan of the talk. [18:04] == The problem == [18:04] Before introspection was available for GObject-based APIs, bindings maintainers had to produce C code that would bridge between the host language and each C API that would be made accessible. [18:04] There were code generators that saved a lot of time, but still, corner cases had to be handled manually and the generated code had to be maintained. [18:05] The total amount of work required to keep the C APIs callable from other languages was a factor of the size of the APIs, the number of languages that would be able to call into them, and the distros where such bindings had to be packaged. [18:05] As you can see, the amount of work to be done was growing very quickly, far faster than resource availability in a mature project such as GNOME. [18:06] == The solution == [18:06] The reason why bindings weren't able to generate code that wouldn't need manual modifications is that by scanning the C sources there's only so much information available. [18:07] There is critical information that bindings need that was only available in natural-language API documentation or in the code itself. [18:07] Some bindings allowed this information to be manually added and fed to the generator, but it meant that each binding had to write that extra information by themselves, maintain it, etc. [18:07] Based on that experience, it turned out to be clear that the extra information required to call the C API had to be added to the API sources themselves, so all bindings could benefit. This extra information is added in what we call "annotations" and we'll get into a bit of detail later. [18:08] But that's not enough to reduce the workload at the distro level, if each binding had to generate code based on that extra information, distros would still need to package each binding for each API and each language. [18:09] This is the reason why all the introspection information needs to be available at runtime, so a system which has bindings for, say Python, can call some new C API without having to write, package and deploy specific software for it. [18:09] So introspection information is available at runtime with acceptable performance, it is compiled into "typelibs": tightly packed files that can be mmapped and queried with low overhead. [18:10] == Workflow changes == [18:10] ASCII art overview of GI's architecture: http://live.gnome.org/GObjectIntrospection/Architecture [18:11] (I'm going to go through it, so I recommend to give it a look now) [18:11] When building a shared library, g-ir-scanner is called which will scan the source code, query GType for some more bits of info, and produce a XML file with the .gir suffix. [18:12] The .gir file is then compiled into a typelib, with the .typelib suffix. [18:12] The typelib is distributed along with the shared library and the .gir file is distributed with the header files. 
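Once the typelib is installed, consuming it from a dynamic language needs no generated glue code at all. A minimal hedged Python sketch, assuming pygobject with introspection support and the Gtk 3.0 typelib installed:
--------------------- 8< -----------------
# The gi.repository import machinery locates the Gtk typelib at run time
# and builds the Python wrappers on the fly from the introspection data.
from gi.repository import Gtk

window = Gtk.Window(title="Introspection demo")
window.connect("destroy", Gtk.main_quit)
window.add(Gtk.Label(label="Hello from a typelib"))
window.show_all()
Gtk.main()
--------------------- 8< -----------------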
[18:12] When an application that uses introspection is running, the introspection bindings for its programming language will use the information in the typelib to find out how it should call the C API, being helped by the GType system and by libraries such as libffi. === JasonO_ is now known as JasonO [18:13] Now I'm going to make a pause until someone says in #*-chat that I'm not going too fast :) [18:14] == Annotations == [18:14] The needed information that is missing in the signature of the C API includes mainly: [18:14] * details about the contents of collections (element-type), [18:15] * memory management expectations (transfer xxx), [18:15] * which functions are convenience API for C users and should not be exposed (skip), [18:15] * scope of callbacks (scope), [18:15] * auxiliary arguments (closure) (array length=2), [18:15] * is NULL accepted as an input argument (allow-none), [18:15] * and more. [18:15] For more details: http://live.gnome.org/GObjectIntrospection/Annotations [18:15] Example from GTK: [18:15] --------------------- 8< ----------------- [18:15] /** [18:15] * gtk_tree_model_filter_new: [18:15] * @child_model: A #GtkTreeModel. [18:15] * @root: (allow-none): A #GtkTreePath or %NULL. [18:16] * [18:16] * Creates a new #GtkTreeModel, with @child_model as the child_model [18:16] * and @root as the virtual root. [18:16] * [18:16] * Return value: (transfer full): A new #GtkTreeModel. [18:16] */ [18:16] GtkTreeModel * [18:16] gtk_tree_model_filter_new (GtkTreeModel *child_model, [18:16] GtkTreePath *root) [18:16] --------------------- 8< ----------------- [18:16] == Other benefits == [18:16] Having all the required information available at runtime means that bindings can decide more freely when to allocate resources such as data structures and function stubs, which allows bindings to address long-standing issues such as slow startup and high memory usage. [18:16] Another consequence of bindings calling the C APIs as exposed by upstream is that documentation can be generated directly from the introspectable information, without any per-API work. [18:17] By lowering the barrier to expose APIs to other languages, more applications are being made extensible through the use of plugins. [18:18] Libpeas helps your application to expose some extension points that can be used by plugins written in C, JavaScript and Python. It is already being used by Totem, GEdit, Vinagre, Eye of GNOME, etc [18:18] == Changes for library authors == [18:18] Library authors who wish their API to be available to other languages mainly need to do these things: [18:18] * mark all the API that cannot be called from other languages with (skip) so it doesn't appear in the typelib, [18:19] that API could be considered as a convenience for C users [18:19] * make sure all the functionality is available to bindings (by adding overlapping API), [18:19] * modify their build system to generate and install the .gir and .typelib files (http://live.gnome.org/GObjectIntrospection/AutotoolsIntegration), [18:19] * add annotations as mentioned before. [18:19] In practical terms and for existing libraries, it tends to be better if people trying to use your API are the ones that submit patches adding annotations, as they have a more readily available way to check for their correctness. [18:20] But for the author of the API it should be generally obvious which annotations are needed, provided some exposure to how bindings use the introspected information.
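As a small hedged illustration of how a binding consumes such annotations (assuming pygobject with the Gtk 3.0 introspection data installed): the (allow-none) on @root and the (transfer full) return value in the example above are what let Python pass None and own the returned object.
--------------------- 8< -----------------
from gi.repository import Gtk

store = Gtk.ListStore(str)      # a simple one-column model to filter
store.append(["hello"])

# gtk_tree_model_filter_new(child_model, root) is exposed through the
# typelib as a method on the child model; (allow-none) lets us pass None
# for root, and (transfer full) means the binding owns the result.
filtered = store.filter_new(None)
print(filtered)
--------------------- 8< -----------------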
[18:20] == Changes for application authors == [18:20] Application authors need to be aware that, until the whole callable API becomes used through introspection by applications, they cannot expect for the annotations to be perfect. [18:20] So instead of waiting for introspection to "mature", consider starting right now and get involved upstream by pointing out issues and proposing annotations and alternative API when needed. [18:21] For now, may be best to look at the .gir to figure out how to call something, if the C docs aren't enough. [18:21] In the future there will be documentation generated for each language from the .gir files, but nobody has got anything usable yet. [18:22] so I don't have any more text to copy&paste, I will gladly answer any questions [18:23] crazedpsyc asked: is this available for languages other than C? [18:24] no, it would be really hard depending on the particular language [18:24] and the turnout would be smaller because platform code tends to be written in C in the GObject world [18:25] patrickd asked: Are there examples any where of getting started using these bindings in say, something like python? [18:25] we have some material at http://live.gnome.org/PyGObject/IntrospectionPorting [18:26] but tomorrow you will get a session here by pitti just about python and introspection [18:26] this was intended to present the basic concepts, tomorrow will be more about practical stuff [18:28] abhinav81 asked: so a language binding (say python) for a library ultimately calls the C API ? [18:28] yes, there will be some glue code in python that will be calling the same API that C programs use === jhernandez is now known as jhernandez_afk [18:29] PyGObject uses libffi directly, there's another alternative implementation that uses python's ctypes (which in turn also uses libffi) [18:29] we also have an experimental branch of pygobject by jdahlin that uses LLVM to generate wrappers [18:30] I know python best, but I guess other languages will have other mechanisms to call into C code at runtime [18:30] chadadavis asked: is the plan to currently move everything to PyGI then? What types of applications would be better off staying with PyGTK? [18:31] at this moment, pygtk won't be updated to support gtk3 === dpm_ is now known as dpm [18:31] also, pygobject+introspection doesn't support gtk2 [18:32] so my recommendation is to do what most GNOME apps do: branch and keep a maintenance branch which still uses pygtk/gtk2, and move master to introspection and gtk3 [18:32] geojorg asked: What is the current status of PyGI in Python 3 ? [18:33] haven't been personally involved on that, but I think someone at the last hackfest rebased the python3 branch [18:33] I think fedora is aiming for gtk3+python3 for their next release [18:35] I will hang around for a while in case there's some more questions before the next talk starts [18:35] JanC asked: how similar are code for PyGtk & PyGI (and thus how much work is it to port an application and keep parallel branches)? [18:36] IMO is not that dissimilar, you have some tips about porting here:http://live.gnome.org/PyGObject/IntrospectionPorting [18:37] and you can get an idea of the kind of transformations needed by reading this script: http://git.gnome.org/browse/pygobject/tree/pygi-convert.sh [18:37] you may find that the changes between gtk2 and gtk3 are more worrying, depending on how much of the API your app uses [18:37] pecisk asked: is the any deadlines when all base apps should be correctly supported by g-i? 
[18:38] you say you meant base libs, so the deadline was GNOME 3 for all libraries in GNOME [18:39] no doubt some libraries will have better annotations than others [18:40] as I said before, the quality of their introspection support depends greatly on the contributions from application authors, which went on submitting annotations for the API that their app uses [18:40] crazedpsyc asked: can I get PyGI in maverick? how? [18:40] I have heard you can, but I'm not sure how (I don't use ubuntu) [18:41] but even then, maverick has gtk2 afaik, so I would recommend to try to move to natty for development [18:41] gtk2 lacks a lot of annotations because the focus has been on gtk3 [18:44] there may exist a PPA, not sure [18:49] == Where to go from here? == [18:49] http://live.gnome.org/GObjectIntrospection [18:49] In GIMPNet: #introspection, #python, #javascript, ... [18:50] Thanks for the attention and the questions, I also have to thank Martin Pitt for passing me his notes on GI [18:51] as said, he will be giving tomorrow a session focused on python and introspection [18:52] laszlok: QUESTION: is there a bug report or a wiki page about the status of generating documentation for the new API? [18:52] let me get some links for you [18:52] http://blog.tomeuvizoso.net/2011/02/generating-api-docs-from-gir-files.html [18:52] https://bugzilla.gnome.org/show_bug.cgi?id=625494 [18:53] There are 10 minutes remaining in the current session. [18:53] laszlok asked: is there a bug report or a wiki page about the status of generating documentation for the new API? [18:53] http://live.gnome.org/Hackfests/Introspection2011#line-19 [18:57] There are 5 minutes remaining in the current session. [19:00] hey, hello everyone! [19:01] thanks for joining in this session on how to internationalize your applications === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu App Developer Week - Current Session: From English to any language: internationalizing your apps - Instructors: dpm [19:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session. [19:02] ok, now that classbot is done... [19:03] first of all, thanks to tomeu for a great session [19:03] And now let's get started with translations [19:03] First the introductions [19:03] I'm David Planella, and I work in the Community team at Canonical as the Ubuntu Translations Coordinator [19:04] Usually I work more on the community side of things, with the always awesome Ubuntu translation teams [19:05] But today I've put my developer hat to show you how easy it is to get your app ready to speak a multitude of languages [19:05] and set up so that the community can translate it. [19:05] Regardless of the programming language, the process of adding internationalization support to an application is not only fairly easy [19:06] but also, on a high level view, the same for all programming languages. [19:06] This means that after this session you should have a pretty good overview on what it takes to make your application translatable [19:06] and you can apply this to any programming language, slightly adapting the syntax, of course. 
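As a preview of the pattern the rest of the session builds up to, here is a minimal hedged Python sketch of the core gettext idiom; the domain name and locale directory are illustrative, and the Quickly-generated project shown later sets this up for you.
--------------------- 8< -----------------
import gettext

# Illustrative values: the translation domain and locale directory are
# whatever your build system installs, e.g.
# /usr/share/locale/ca/LC_MESSAGES/awesometranslations.mo
t = gettext.translation('awesometranslations', '/usr/share/locale',
                        fallback=True)   # fall back to English if no catalog
_ = t.gettext

# Every user-visible string is wrapped in _() so it can be extracted into
# the POT template and looked up in the MO catalog at run time.
print(_("Translatable message here"))
--------------------- 8< -----------------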
[19:07] In order for you to see how it all fits together, I've based the talk on a common framework that can get you quickstarted in just a few minutes [19:07] I've used the Python programming language and Quickly [19:08] https://wiki.ubuntu.com/Quickly [19:08] Let's start with some background concepts to make it easier to understand the steps we'll be doing later on [19:09] So let's have a quick look at the main players involved in the internationalization game: [19:09] [19:09] Background Concepts [19:09] =================== [19:09] [19:09] GNU Gettext [19:09] ----------- [19:09] Gettext is the underlying and most widely used technology to enable translations of Open Source projects. [19:09] It defines a standard format of translation files translators can do their work with (PO files, more on them in a minute) [19:10] and lets applications load those translations compiled in a binary format (MO files) at runtime. [19:10] It has implementations for many programming languages, and amongst them, of course, Python. [19:10] You'll find that the comprehensive gettext manual at http://www.gnu.org/software/gettext/manual/gettext.html can be a very useful reference, [19:10] The Python implementation of the gettext API is what we'll use to internationalize our project with Quickly today. [19:11] Needless to say, it also comes with some nifty documentation at http://docs.python.org/library/gettext.html [19:11] * {i} In short, gettext does all the heavy lifting involved in exposing your application for translation and loading the translations for the end user [19:12] [19:12] intltool [19:12] -------- [19:12] Intltool is a higher level tool that adds functionality to gettext by allowing the extraction of translatable strings from a variety of file formats [19:13] It has also become a standard tool when implementing internationalization for OSS projects. [19:13] Nearly all (if not all) GNOME projects, for example, use intltool. [19:14] * {i} intltool handles the translations of things such as the desktop shortcut of your application [19:14] [19:14] python-distutils-extra [19:14] ---------------------- [19:14] Python-distutils-extra is a python package that makes it easy to integrate themable icons, documentation and gettext based translations in your python install and build tools, and it's basically an enhancement to python-distutils. [19:15] The project's page is at http://www.glatzor.de/projects/python-distutils-extra/ [19:15] * /!\ Note that this tool is Python-specific. I'm mentioning it here because we're going to be talking of a practical example with Python. If your application were a C application you'd probably use autotool rules to achieve the same result [19:15] [19:15] The three above technologies (gettext, intltool, python-distutils-extra) are transparently used by Quickly, so we won't get into much more detail for now. [19:16] I just want you to get an idea of what we're talking about [19:16] There are also more aspects involved in internationalizing applications, such as font rendering, input methods, etc., but this should get you started for now. [19:16] [19:16] Quickly [19:16] ------- [19:16] I'll be very brief here and let you figure out more on quickly as we go along [19:16] For now, it will suffice give you a teaser and tell you that it is the tool which brings back the fun in writing applications! 
;-) [19:17] [19:17] Finally, a tool that is not strictly needed for internationalization (or the shorter form: i18n), but that can help you build an active translation community around your project [19:18] which will be the next step after your project adds i18n support [19:18] [19:18] Launchpad Translations [19:18] ---------------------- [19:18] Launchpad Translations (https://translations.launchpad.net/) is the collaborative online tool which allows translation communities to be brought together and translate applications online through its web UI. [19:18] Apart from the very polished UI to provide translations, it has other nice features such as message sharing across project series (translate one message in a series and it instantly propagates to all other shared series), [19:19] global suggestions (suggestions of translations across _all_ projects in Launchpad), automatic imports of translations and automatic commits to bzr branches, several levels of permissions, and a huge translator base. [19:20] On the right hand side of the URL I gave you you can see that there are quite a lot of projects using Launchpad to make translations easy both for developers and translators. [19:20] Bear with me: we're nearly there - let's also quickly trow in and review a couple of concepts related to the gettext technology [19:20] [19:20] Gettext: MO files [19:20] ----------------- [19:21] The message catalog, or MO file (for Machine Object) is the binary file that is actually used to load translations in a running system. [19:21] It is created from a textual PO (more on that in a bit), which is used as the source, generally by a tool called msgfmt. [19:21] Message catalogs are used for performance reasons, as they are implemented as a binary hash table that is much more efficient to look up at runtime than textual PO files [19:22] * {i} .mo files are the binary files installed in the system where the application loads the translations from, using gettext [19:22] [19:22] Gettext: PO files [19:22] ----------------- [19:22] PO file stands for Portable Object file, and are the textual files translators work with to provide translations. They are plain text files with a special format: [19:22] msgid "English message" [19:22] msgstr "Traducció al català" <- Translated string [19:22] (message pairs containing the original messages from the application, and its corresponding translation) [19:23] You can see an example of a PO file here: [19:23] http://bazaar.launchpad.net/~synaptic-developers/synaptic/trunk/view/head:/po/zh_CN.po [19:23] * {i} Translators provide translations in PO files. If they use an online translation system they won't directly work with them, but your project sources will still contain them [19:23] * {i} In each application source tree there is generally a PO file per language, named after the language code. E.g. ca.po, de.po, zh_CN.po, etc. [19:23] [19:23] Gettext: POT files [19:23] ------------------ [19:23] Once your project has added i18n support, you'll need to give translators an updated list of translatable messages they can start translating. [19:23] You'll also need to update this list whenever you add new messages to your application. [19:24] You achieve that through POT files, or simply templates in l10n (localization) slang (Portable Object Template). [19:24] They are textual files with the same format as PO files, [19:24] but they are empty of translations and are used as a template or stencil to create PO files from. [19:24] * {i} There are special tools to update templates. 
Generally intltool is used, often called from a build system rule or a higher level tool such as python-distutils-extra [19:24] [19:24] Gettext: Translation domain [19:24] --------------------------- [19:25] The translation domain is a unique string identifier assigned by the programmer in the code (usually in the build system) [19:25] and used by the gettext functions to locate the message catalog where translations will be loaded from. [19:25] The general form to compute the catalog’s location is: [19:26] locale_dir/locale_code/LC_category/domain_name.mo [19:26] which in Ubuntu expand generally to /usr/share/locale/locale_code/LC_MESSAGES/domain_name.mo [19:26] The locale_code part refers to the language code for the particular language. As an example, when using Nautilus in a Catalan locale with Ubuntu, [19:26] the gettext functions will look for the message catalogue at: [19:26] /usr/share/locale-langpack/ca/LC_MESSAGES/nautilus.mo [19:27] That's where the translations for your application will be installed and searched for [19:27] Note that for your app this location might be slightly different: [19:27] /usr/share/locale/ca/LC_MESSAGES/myapp.mo [19:28] * {i}The corresponding translation template should have the same translation domain in its filename, e.g. nautilus.pot. [19:28] * {i} The translation domain must be unique across all applications and packages. I.e. something generic like messages.pot won’t work. [19:28] [19:29] Ok, done with the concepts, let's get down to work and to questions [19:29] Generic Steps to Internationalize an Application [19:29] ================================================ [19:29] * Integrate gettext into the application. Initialize gettext in your main function, most especially the translation domain [19:30] * Integrate gettext into the build system. There are generally gettext rules in the most common build systems. Use them. [19:30] * Mark translatable messages. Use the _() gettext call to mark all translatable messages in your application [19:31] * Care for your translation community. Not necessarily a step related to adding i18n support, but you'll want an active and healthy translation community around your project. Keep the templates with translatable messages up to date. Announce these updates and give translators time to do their work before a release. Be responsive to feedback. [19:31] [19:31] Hands-on: creating an internationalized app [19:31] =========================================== [19:31] Ok, enough theory, let's have a go at using quickly to create your first internationalized application [19:32] You can install Quickly on any recent Ubuntu release by simply firing up a terminal and executing: [19:32] sudo apt-get install quickly [19:33] Once you've done that, you can run quickly to create your first project: [19:34] quickly create ubuntu-application awesometranslations [19:34] (if you like, substitute 'awesometranslations' by your favourite project name) [19:34] We've just told Quickly to use the ubuntu-application template, and to call what is created "awesometranslations" [19:35] you should probably have an open dialog from your new app in front of you. [19:35] Quickly has created all that for you! [19:35] You can close the dialog to continue [19:35] What Quickly did was to copy over basically a sample application, and do some text switcheroos to customize the app [19:36] What you could see there was the ui containing some text that needs translation. [19:36] To start making change to your app, go to the directory where it's stored. 
Generally by simply running [19:36] cd awesometranslations [19:37] You can then edit your code with $ quickly edit, change the UI with $ quickly glade, and try your changes with $ quickly run [19:37] You can save your changes with $ quickly save [19:37] Finally, to package, share and release your apps so that others can use them, there are the following commands (not all are necessary): $ quickly package / $ quickly share / $ quickly release [19:37] As it stands now, the application has nearly all you need to make it translatable [19:38] which is the great thing about quickly [19:38] From now on, while I'll let you play and investigate the application you've created, we'll be looking at the one I created for the purpose of this session [19:39] and I'll show you the i18n bits [19:39] so that you can add them to your existing applications if you want [19:39] For new applications, I'd simply recommend you to use quickly [19:39] and start from there [19:40] which will set up everything for you, so that you can forget about it and concentrate on all those new cool functions your new app is going to provide :-) [19:40] So let's have a look at: [19:40] http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/files [19:41] Notice that we could just call the session finished at this point, as quickly did all the work for us :) [19:41] Also notice how easy it is. I've just created and pushed the application to Launchpad a few minutes ago [19:42] Remember we were talking about PO and POT files? [19:42] They are right there, under the po/ folder: [19:42] http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/files/head:/po/ [19:43] the po/ folder could be called something else, but it is customary to call it like that, as some tools rely on this convention [19:43] Notice the .pot file, named after your app [19:43] and an example translation file (ca.po) submitted by a translator [19:44] Now to the interesting bits: [19:44] Remember the generic steps for internationalization we were talking about earlier on? [19:44] * Initializing gettext: [19:44] http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/awesometranslations/__init__.py#L8 [19:45] so here we include the gettext module [19:45] and we define a function called simply _() [19:45] And finally we define the translation domain [19:46] which will be the name of the .mo file installed in the system and the name of the .pot file [19:47] * Integrating gettext in the build system: [19:47] http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/setup.py#L13 [19:47] Here the integration happens automagically by using python-distutils-extra [19:47] C programs using autotools might need a more complex integration [19:48] * Mark translatable messages: [19:48] http://bazaar.launchpad.net/~dpm/+junk/awesometranslations/view/head:/awesometranslations/__init__.py#L23 [19:49] For every message you want to expose for translation, you simply have to wrap it in the _() function, e.g. _("Translatable message here") [19:49] And that's basically it, really [19:50] easy, isn't it? [19:50] ok, so we're running out of time, let's see if there are questions! [19:52] There are 10 minutes remaining in the current session. [19:52] bdfhjk asked: What is the best way to translate a Qt application? Gettext or Qt Linguist? [19:52] bdfhjk asked: What is the best way to translate a Qt application? Gettext or Qt Linguist?
[19:52] Tough question :) [19:53] Both Gettext and Qt Linguist are excellent i18n frameworks [19:53] With similar functionality [19:53] But I would personally use gettext [19:54] Because it is framework-agnostic and used by the vast majority of Open Source projects [19:54] Not only that, but most online translation tools rely on gettext [19:54] KDE itself uses gettext, for example [19:54] bulldog98_konv asked: what’s the difference between GNOMEs and KDEs handling of translations in code? [19:55] Not much really [19:55] As I said, both KDE and GNOME use gettext [19:55] The majority of GNOME is written in C, and KDE in C++ [19:56] I gather that in KDE they wrap Qt Linguist calls through kdelibs to actually use gettext to load the actual translations [19:56] bdfhjk asked: Is Gettext working in windows? [19:57] There are 5 minutes remaining in the current session. [19:57] Yes, gettext works on any platform where glibc can run, including Windows [19:58] There is still time to answer one last question if you've got one [19:59] Ok, so I think I'll use the last minutes to thank everyone for their participation, and remind you that if you've got any questions on translations, feel free to ping me any time! [19:59] I usually hang out on #ubuntu-devel [20:00] Now time for KDE/Kubuntu rockstar apachelogger, who'll tell you about the secret art of writing plasma widgets [20:00] thank you dpm :) [20:01] the floor is yours! [20:01] salut, bonjour and welcome to an introduction to Widgetcraft oh my :) [20:01] ...also known as the art of creating Plasma Widgets. [20:01] my name is Harald Sitter and I am a developer of KDEish things [20:02] for this session you will need a couple of packages and any handy editor you like === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu App Developer Week - Current Session: Widgetcraft: The Art of Creating Plasma Widgets - Instructors: apachelogger [20:02] sudo apt-get install kdebase-workspace-bin kdebase-runtime plasma-dataengines-workspace [20:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session. [20:02] this command will make sure you get the packages necessary [20:02] if the editor can do syntax highlighting for javascript it would be good :) [20:02] meanwhile I am going to talk a bit about the technology we are going to work with [20:03] plasma is the technology most people refer to as "the KDE desktop" [20:03] or "the KDE workspace" if you will [20:03] Plasma is just about everything you see when you log into a newly installed KDE system (such as Kubuntu ;)) [20:04] it is the wallpaper, and the panel at the bottom, and every icon and element within that panel and so on [20:04] it comes in many amazing flavors and creating new ones is not all that difficult [20:04] by flavors I mean specific versions of plasma for different form factors (i.e. devices) [20:05] currently there are plasma versions for desktop systems, netbook systems, mobile devices (such as phones) and even tablets [20:05] although the latter 2 are actually more like tech previews and not terribly usable at this time [20:06] plasma widgets are widgets for plasma (surprise ;)) [20:06] they are also called plasmoids...
in particular plasma widget usually means a widget that can run in plasma [20:06] this includes apple dashboard widgets, google gadgets and native plasma widgets [20:07] those native widgets are the ones called plasmoids [20:08] plasmoids make the best use of plasma's abilities and can be written in javascript (including qml in KDE 4.6+), c++, ruby and python [20:08] however only javascript and c++ are builtin (thus always available) [20:08] so, usually you want to use one of those [20:09] personally I would even go as far as saying that javascript is the weapon of choice unless you have good reasons to choose another language [20:09] the reason for this is that javascript is of course easier to deploy (as it does not need compilation compared to c++) and is always available on every plasma system (unlike ruby and python) [20:10] we also use javascript in this session ;) [20:10] plasmoids are distributed as so called "plasmagik packages" (what a name!) [20:11] they essentially contain one metadata file to describe the plasmoid at hand as well as code, images and other magic files [20:11] for more information have a look at http://community.kde.org/Plasma/Package [20:11] QUESTION: can we use Qt/C++? [20:12] as explained, one can use C++, however there is no particular gain from this for the usual plasmoid [20:12] as the javascript API is very powerful [20:12] If there are no moar questions we can move on to hacking [20:13] a common step when creating a new plasmoid is setting up the folder structure, for this you can use this magic command sequence of mine: [20:13] NAME=dont-blink # Set a shell variable [20:13] mkdir -p $NAME/contents/code/ # Create everything up to the code dir. [20:13] touch $NAME/metadata.desktop # Create the metadata file, which contains name and description... [20:13] touch $NAME/contents/code/main.js # Create main code file of the plasmoid. [20:13] this will create a bare setup for a new plasmoid [20:13] in the folder dont-blink [20:14] QUESTION: can we create plasmoids in Ubuntu/Gnome and of course, test it on Kubuntu? [20:14] yes [20:14] testing can be done in gnome too [20:14] however unfortunately at this point there is no actual widget integration, so the plasmoids will only work in KDE with Plasma [20:15] moving on [20:15] first we will need to set up our metadata file [20:15] http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/metadata.desktop [20:15] is a good starting point [20:15] I believe the file is pretty easy to understand, it simply defines the general properties of our plasmoid [20:16] name, license, author, version etc. [20:16] usually you will want to change at least Name and X-KDE-PluginInfo-Name [20:16] now we can already get our hands dirty [20:17] Please open the contents/code/main.js code file in your editor. [20:17] let's start with a semi-helloworld thing :) [20:17] // First create a layout we can stuff things into [20:17] layout = new LinearLayout(plasmoid); [20:18] layouts are very handy as we can put just about anything in there and they will automagically figure out how to align stuff [20:18] (well, almost automagically ;)) [20:18] // Then create a label to display our text [20:18] label = new Label(plasmoid); [20:18] // Add the label to the layout [20:18] layout.addItem(label); [20:18] // Set the text of our Label [20:18] label.text = 'Don\'t Even Blink'; [20:18] // Done [20:18] not terribly difficult, right? :) [20:19] you can now run this using plasmoidviewer .
or plasmoidviewer PATHTOTHEPLASMOID (depending on where you are in a terminal right now). [20:19] this works on both KDE and GNOME [20:19] and XFCE and .... [20:20] plasmoidviewer is a very nice app to test plasmoids as you do not need to install the plasmoid to test it [20:20] Here is a trick. If you have KDE 4.5 (default on Kubuntu 10.10) you will have a new command called 'plasma-windowed'. Using this command you can run most Plasmoids just like any other application in a window, is that not brilliant? [20:20] for example you can try that on our new plasmoid [20:21] or if you have the facebook plasmoid installed, you can try it with that [20:21] very handy to run plasmoids as sort-of real applications [20:21] I hope everyone got our first code working by now [20:22] maybe let us continue with a button [20:22] buttons are cool [20:22] oh, in case you have not noticed, code lines are always indented by 4 characters for your reading pleasure [20:22] // Create a new button [20:22] button = new PushButton; [20:22] // Add the button to our layout [20:22] layout.addItem(button); [20:22] // Give the button some text [20:22] button.text = 'Do not EVER click me'; [20:23] if you try the plasmoid now you will notice quite the silliness [20:23] the layout placed the button next to the text [20:24] not so awesome :( [20:24] and apachelogger claimed layouts are awesome -.- [20:24] oh well [20:24] easily fixable [20:24] the problem is that the layout by default tries to place things next to each other rather than align them vertically [20:24] // Switch our layout to vertical alignment [20:24] layout.orientation = QtVertical; [20:24] now this should look *much* better [20:25] well [20:25] our button does not do anything yet [20:25] that is a bit boring I might say ... and useless [20:26] how about adding an image into the mix ? ;) [20:26] QUESTION: why is it QtVertical? and not just Vertical like other widgets? [20:27] QtVertical is actually coming from Qt and not from Plasma, as to avoid name clashes in the future I suppose it got prefixed with Qt ;) [20:27] generally speaking layout orientation in C++ Qt is also an enum in the Qt namespace, so it looks pretty much the same [20:27] but now for our image [20:28] if you still have the same terminal you created the bare folder structure in you can use the following: [20:28] mkdir -p $NAME/contents/images/ [20:28] wget -O $NAME/contents/images/troll.png http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/contents/images/troll.png [20:28] otherwise just navigate to your plasmoid folder, go to contents and create an images folder, then download http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/contents/images/troll.png into that folder [20:29] Now for the code... [20:29] // Labels can also contain images, so we will use a label again [20:29] troll = new Label(plasmoid); [20:29] // But this time we set an image.
The image path is constructed automatically by telling it which directory the image is in and what its name is [20:29] troll.image = plasmoid.file("images", "troll.png"); [20:29] // So that our image fits in we need to tell the label to consume as much space as possible and necessary [20:29] troll.sizePolicy = QSizePolicy(QSizePolicyMaximum, QSizePolicyMaximum); [20:29] // We only want to show the image after the user dared to press the button, so we set it not visible and also do not add it to our layout [20:29] troll.visible = false; [20:30] that will not actually do anything [20:30] as the troll is set to invisible by default [20:31] we only show it once the user has clicked on the button [20:32] so this leads us to a very interesting part of Plasma in particular and Qt in general [20:32] connecting a state change on one thing to an action [20:32] usually in Qt we call this the signal and slot system [20:33] in javascript plasmoids we have almost the same thing, in fact it is even simpler than in standard c++ [20:33] so [20:33] let us try that [20:33] // First add a function to handle clicking on the button [20:33] function onClick() [20:33] { [20:34] within that function we put our logic for changing the visibility of our troll [20:35] // Either the troll is shown or it is not... [20:35] // If it is visible -> hide it [20:35] if (troll.visible) { [20:35] ah, this is getting complicated [20:35] let's stop here for a bit [20:35] so, our troll can be visible and invisible and we change this via .visible and we read it via .visible [20:36] this might be a bit confusing for those of us who drive ourselves crazy with C++ ;) [20:37] however, it is really something very Qt [20:37] Qt adds property functionality to objects, which is really what we are seeing here [20:37] our label has a property visible [20:37] and this property has a "setter" and a "getter" [20:38] depending on the context we can therefore use .visible as getter or setter [20:38] very handy :D [20:38] (as we will have some QML sessions later this week ... this is also how QML elements work for the better part ;)) [20:38] now, moving on... [20:39] we were writing our onClick function, in particular the logic for when the troll is already visible [20:39] // Make it invisible [20:39] troll.visible = false; [20:39] // And remove it from the layout, so that it does not take up space [20:39] layout.removeItem(troll); [20:39] I think changing the visibility should be clear now ... but that removing there is a bit confusing [20:39] apachelogger apparently did not prepare very well :P [20:40] so let me show you the rest of the onClick function and explain this afterwards [20:40] } else { // If it is not visible -> show it [20:40] // Once our button gets clicked we want to show an image.
[20:40] troll.visible = true; [20:40] // Finally we add the new image to our layout, so it gets properly aligned [20:40] layout.addItem(troll); [20:40] } [20:40] } [20:40] so, depending on the state of visibility we simply do inverted actions [20:40] possibly you noticed earlier on that we did not add the troll to our layout [20:41] this was very intentional [20:41] as soon as you add something to your layout it will usually consume space [20:41] visible or not [20:41] so, whenever our troll is not visible it also must not be part of the layout [20:41] hence the logic in our onClick [20:42] if visible -> make invisible and remove from layout || if invisible -> make visible and add to layout [20:42] the daring programmer will now try this and complain that it is not working [20:42] oh my [20:43] we did not yet define that onClick should do something upon button click [20:43] // Now we just tell our button that once it was clicked it shall run our function [20:43] button.clicked.connect(onClick); [20:44] well then [20:44] for me it works \o/ [20:44] very useful plasmoid we created there :D [20:45] you can find a version of this I created earlier here: http://people.ubuntu.com/~apachelogger/uadw/04.11/dont-blink/ [20:46] it also contains additional magic that should trigger a notification on click and display your location as detected by gps or ip lookup ;) [20:46] now that we have a wonderful plasmoid we will need to package it properly [20:47] as mentioned earlier, plasmoids are distributed in super cool special packages [20:47] although.... [20:47] actually they are just zip files with .plasmoid as suffix [20:47] so let us create such a nice package from our plasmoid [20:48] if you are still in the same terminal we started off with, the following should do the job: [20:48] cd $NAME && [20:48] zip -r ../$NAME.plasmoid . && [20:48] cd .. [20:48] $NAME is simply the name of our plasmoid, so you can easily create the zip manually too :) [20:49] please note that plasma does expect the contents and metadata to be in the top level of the zip though, so you must not package the plasmoid directory (in our case dont-blink) but only the files [20:49] that is really what that fancy zip command there does [20:50] Once you have your plasmoid you can install it using the regular graphical ways on your plasma version or by using the command line tool plasmapkg. [20:50] plasmapkg -i $NAME.plasmoid [20:50] now the plasmoid should show up in your widget listing. [20:50] consequently you should be able to use plasmoidviewer $NAME and plasma-windowed $NAME to view the plasmoid without plasma [20:51] QUESTION: are there any particular style guidelines for writing code for plasmoids beyond what you have shown us? for those that are used to MVC kinda stuff [20:51] not really [20:51] if you are using C++ you can do just about anything ... in the future the javascript plasmoids will use QML quite a bit (you will hear about QML later this week) [20:52] There are 10 minutes remaining in the current session. [20:52] QML usually wants people to use Qt's Model/View system (which is pretty close to MVC) [20:52] especially if you are working with lists of course :) [20:52] QUESTION: will it be possible to compress the package with bzip2 in the future? [20:53] not planned in particular, but if you ask in #plasma I am sure someone could tell you whether that would be desirable [20:53] as plasmoids are usually atomic it does not make much of a difference though [20:53] any other questions?
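As a small aside on that layout requirement: the archive the zip command above produces can also be built from a short script, which makes it easy to see that metadata.desktop and contents/ must sit at the top level of the .plasmoid file. A rough Python sketch, purely illustrative (the dont-blink name is the example from this session; the shell command given above is all you actually need):

    # Sketch: build dont-blink.plasmoid with the plasmoid files at the
    # top level of the archive, not nested inside a dont-blink/ directory.
    import os
    import zipfile

    name = 'dont-blink'
    archive = zipfile.ZipFile(name + '.plasmoid', 'w', zipfile.ZIP_DEFLATED)
    for root, dirs, files in os.walk(name):
        for filename in files:
            path = os.path.join(root, filename)
            # Store each entry relative to the plasmoid directory itself,
            # so metadata.desktop and contents/ end up at the archive root.
            archive.write(path, os.path.relpath(path, name))
    archive.close()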
[20:54] if not, then let me give you some additional resources where you can find handy super nice things :) [20:54] Where the Plasma community collects its information: http://community.kde.org/Plasma [20:54] General tutorials on JavaScript Plasmoids: http://techbase.kde.org/Development/Tutorials/Plasma#Plasma_Programming_with_JavaScript [20:54] Plasma and KDE development examples: http://quickgit.kde.org/?p=kdeexamples.git&a=summary [20:54] Some general guidelines for Plasmoid programming: http://community.kde.org/Plasma/PlasmoidGuidelines [20:55] Information on Plasma packages: http://community.kde.org/Plasma/Package [20:55] AND [20:55] last, but not least [20:55] *super important* [20:55] the JavaScript API: http://techbase.kde.org/Development/Tutorials/Plasma/JavaScript/API [20:55] if you compare this API to what you can do in C++ you will notice that the JavaScript API is really sufficient for most things :) [20:56] On IRC you can get help in #plasma most of the time [20:56] Good luck with creating your brilliant Plasmoids :) [20:56] you can find me in just about every KDE and Kubuntu IRC channel after the sessions if you have any additional questions [20:57] There are 5 minutes remaining in the current session. [20:58] if you are interested in KDE software development I'd like to direct your attention to the KDE development session tomorrow, the various QML sessions and my talk on multimedia in Qt and KDE on friday :) [20:58] thanks everyone for joining and have a nice day [21:01] welcome to "rock solid python development with unittest/doctest". today i'm going to give a brief introduction to unit- and doc- testing your python applications, and hooking these into the debian packaging infrastructure. [21:01] raise your hand if you're already unashamedly obsessed with testing :) === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || Event: Ubuntu App Developer Week - Current Session: Rock solid Python development with unittest/doctest - Instructors: barry [21:02] since it would take way more than one hour, i'm not going to give a deep background on testing, or the python testing culture, or python testing tools. there are a ton of references out there. two resources i'll give right up front are the testing-in-python mailing list and the python testing tools taxonomy. python has a *very* rich testing culture, and i encourage you to explore it! [21:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html following the conclusion of the session. [21:03] sorry, i'm going to start over because of the classbot delay... [21:03] welcome to "rock solid python development with unittest/doctest". today i'm going to give a brief introduction to unit- and doc- testing your python applications, and hooking these into the debian packaging infrastructure. [21:03] raise your hand if you're already unashamedly obsessed with testing :) [21:03] since it would take way more than one hour, i'm not going to give a deep background on testing, or the python testing culture, or python testing tools. there are a ton of references out there. two resources i'll give right up front are the testing-in-python mailing list and the python testing tools taxonomy. python has a *very* rich testing culture, and i encourage you to explore it!
[21:03] there's a lot you can do right out of the box, and that's where we'll start. michael foord is hopefully here today too; he's the author of unittest2, a standalone version of all the whizzy new unittest stuff in python2.7 [21:03] for now, we'll keep things pretty simple, and the examples should run in python2.6 or python2.7 with nothing extra needed. [21:03] for those of you with bazaar, the example branch can be downloaded with this command: bzr branch lp:~barry/+junk/adw [21:04] if you can't check out the branch, you can view it online here: [21:04] http://bazaar.launchpad.net/~barry/+junk/adw/files [21:04] i'll pause for a few moments so that you can grab the branch or open up your browser [21:05] here's a quick overview of what we'll be looking at: a quick intro to unittesting, a quick intro to doctesting, hooking them together in a setup.py, hooking them into your debian packaging. [21:06] let's first look at a simple unittest. if you've downloaded the branch referenced above, you should open the file adw/tests/test_adding.py in your editor. [21:06] http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/tests/test_adding.py [21:06] the adw package is really stupid. it has one function which adds two integers together and returns the result. i won't talk much about test driven development (tdd) here, but i highly encourage you to read up on that and to practice tdd diligently! these tests were developed using tdd. [21:07] anyway, looking at test_adding.py, you can see one test class, called TestAdding. there is some other boilerplate stuff in test_adding.py that you can mostly ignore. the TestAdding class has one method, test_add_two_numbers(). this method is a unittest. you'll notice that it calls the add_two_numbers() function (called the "system under test" or sut), and asserts that the return value is equal to 20. pretty simple. [21:07] look below that at the test_suite() function. this is mostly boilerplate used to get your unittest to run. the function creates a TestSuite object and adds the TestAdding class to it. python's unittest infrastructure will automatically run all test_*() methods in the test classes in your suite. [21:08] let's run the tests. type this at your shell prompt: [21:08] $ python setup.py test [21:08] (without the $ of course) [21:08] the first time you do this, your package will get built, then you'll see a little bit of verbose output you can ignore, and finally you'll see that two tests were run. ignore the README.txt doctest for the moment. [21:09] if you want to see what a failing test looks like, uncomment the test_add_two_numbers_FAIL() method and run `python setup.py test` again. be sure to comment that back out afterward though! :) [21:09] everybody with me so far? i'll pause for a few minutes to see if there are any questions up to now [21:10] !q [21:11] so, obviously the more complicated your library is, the more test methods, test classes, and test_*.py files you'll have. i probably won't have time to talk about test coverage much, but there are excellent tools for reporting on how much of your code is covered by tests. you obviously want to aim for 100% coverage.
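For readers who cannot grab the branch, here is a rough sketch of what a test_adding.py along these lines might look like (reconstructed from the description above, so the import path and the exact numbers are illustrative; the real file in the branch may differ in its details):

    # adw/tests/test_adding.py -- illustrative sketch, not the file from the branch
    import unittest

    from adw import add_two_numbers   # assumed import path for the system under test


    class TestAdding(unittest.TestCase):
        """Exercise the add_two_numbers() function."""

        def test_add_two_numbers(self):
            # add_two_numbers() should return the sum of its two arguments.
            self.assertEqual(add_two_numbers(7, 13), 20)


    def test_suite():
        # Boilerplate so `python setup.py test` can collect this class.
        suite = unittest.TestSuite()
        suite.addTest(unittest.makeSuite(TestAdding))
        return suite


    if __name__ == '__main__':
        unittest.main()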
[21:11] okay, let's switch gears and look at a doctest now. go ahead and open adw/docs/README.txt [21:11] http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/docs/README.txt [21:12] doctests are *testable documentation*. the emphasis is on the documentation aspects of these files, and in fact there are excellent resources for turning doctests into actual documentation, e.g. http://packages.python.org/flufl.i18n/ [21:12] doctests are written using restructured text, which is a very nice plain text human readable format. the key thing for testing is to notice the last three lines of the file. see the two lines that start with >>> [21:12] (aside: sphinx is the tool to turn rest documentation into html, pdf, etc.) [21:13] and it's very well integrated with setup.py and the python package infrastructure [21:13] that's a python interpreter prompt, and doctests work by executing all the code in your file that starts with the prompt. any code that returns or prints some output is compared with text that follows the prompt. if the text is equivalent, then the test passes, otherwise it fails. [21:14] oh, i should mention. "doctests" can mean one of two things. you can actually have testable sections in your docstrings, or you can have separate file doctests. by personal preference, i always use the latter [21:16] btw, the use of doctests is somewhat controversial in the python world. i personally love them, others hate them, but i think everyone agrees they do not replace unittests; i think they are an excellent complement. anyway, if we have time at the end we can debate that :) [21:16] !q [21:17] in this example, add_two_numbers() is called with two integers, and it returns the sum. if you were to type the very same code at the python interpreter, you'd get 12 returned too. doctest knows this and compares the output [21:17] run `python setup.py test` again and look carefully at the output. you'll see that the README.txt doctest was run and passed. if you change that 12 to a 13, you'll see what a failure looks like (be sure to change it back afterward!) [21:17] i'll pause for a few minutes to let folks catch up [21:18] RawChid asked: So doctest is one way to do unittesting in Python? [21:19] RawChid: i'd say one way to do *testing*, which i'm personally a big fan of. i love writing documentation first because it ensures that i can explain what i'm doing. if i can't explain it, i probably don't understand it. but for really thorough testing, you must add unittests. e.g. you typically do not want to do corner cases and error cases in doctests. === JasonO_ is now known as JasonO [21:20] although the heretic in me says you should try :) [21:20] unfortunately, python's unittest framework does not run doctests automatically. if you look in the adw/tests directory, you'll see a file called test_documentation.py. you don't need to study this much right now, and you are free to copy this into your own projects. it's just a little boilerplate to hook up doctests with the rest of your test suite. it looks for files inside docs/ directories that end in .txt or .rst (the reST standard extension) and adds them to the test suite. once you have test_documentation.py, you never need to touch it. just add more .txt and .rst files to your docs directories, and it will work automatically. [21:21] http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/adw/tests/test_documentation.py [21:21] tronda asked: Is doctest somewhat similar to the BDD movement? [21:21] tronda: probably related. i don't know too much of the details but i think there are better tools for doing bdd in python [21:21] voidspace might know more about that
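To give a feel for the kind of file being described, a docs/README.txt doctest might look something like this (an illustrative sketch: the operands are made up, only the result of 12 is taken from the session, and the real README.txt in the branch is longer):

    Adding numbers
    ==============

    The adw package exports one function, add_two_numbers(), which returns
    the sum of its two integer arguments.

        >>> from adw import add_two_numbers
        >>> add_two_numbers(5, 7)
        12

A helper like test_documentation.py can then feed such files to the test runner, for example via the standard library's doctest.DocFileSuite, so that changing the 12 to anything else makes `python setup.py test` fail.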
[21:22] everybody with me so far? [21:22] time to switch gears a little. how do you hook up your unittests and doctests to your setup.py file so that you can also run `python setup.py test`? open up setup.py in your editor and we'll take a look [21:23] http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/setup.py [21:23] notice first that setup.py imports distribute_setup and then makes a function call. you'll see the file distribute_setup.py in the branch's top level directory. this means my package uses *distribute*, which is the successor to setuptools. i highly recommend it, but if you don't know what that is, you can just cargo cult this bit. [21:23] anyway, the setup.py is fairly simple. you'll just notice one thing in the second to last line. it sets `test_suite` to `adw.tests`. the value is a python package path and it references the directory adw/tests. this is how you hook up your test suite to setup.py. when you run `python setup.py test` it looks at this test_suite key, and runs all the files that look like test_*.py [21:24] voidspace mentions in #u-c-c that we're pretty sure this is a setuptools extension to the standard distutils setup.py. so it'll probably work for either setuptools or distribute [21:25] we have two of those of course! test_adding.py and test_documentation.py, and the testing infrastructure automatically finds these, and makes the appropriate calls to find what tests to run. so that little test_suite='adw.tests' line is all you need to hook your tests into setup.py [21:25] so far so good. now let's look at how to hook your python test suite into your debian packaging so that your tests always run when your package is built. if you're not into debian packaging you can ignore the next couple of minutes. [21:26] open up debian/rules in your editor. [21:26] http://bazaar.launchpad.net/~barry/+junk/adw/view/head:/debian/rules [21:26] first thing to notice is that my package uses dh_python2, which is the new goodness replacing python-central and python-support. i highly recommend it as it can make your debian packaging of python code really really easy. you can see there's not much to my rules file. [21:27] i won't go into dh_python2 much right now, but you can look at the debian wiki for more details http://wiki.debian.org/Python [21:27] for today's class, there are really three parts to take a look at. the first is the PYTHON2 line. what this does is ensure that your tests will be run against all supported versions of python (2.x) on your system, not just python2.6 or python2.7. the commented out line for PYTHON3 will do something similar for python3 [21:27] (e.g. line 3) [21:27] remember that ubuntu 11.04 supports both python2.6 and 2.7 [21:27] aside: it is possible to write your package for both python2 and python3, and to run all the tests against both. we won't have time to talk about that today though. [21:27] so the next thing to look at is the line that starts `test-python%`. this is a wildcard rule that is used to run the setup.py test suite with every supported version of python2 on your system. you'll notice the -vv which just increases the verbosity. [21:28] (e.g. line 9) [21:28] the override_dh_auto_test line then expands the PYTHON2 variable to include all the supported versions of python2, and it runs the test-python% wildcard rule for each of these. thus this hooks in the setup.py tests for all versions of python2. the override is currently needed because dh_auto_test doesn't know about `python setup.py test` yet.
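Looking back at the setup.py part of this, a minimal version wired up in the way described might look roughly like the following (a sketch that assumes distribute/setuptools; the name, version and other metadata are placeholders, and the real setup.py in the branch carries more than this):

    # setup.py -- minimal sketch of the test_suite hookup described above
    import distribute_setup
    distribute_setup.use_setuptools()   # bootstrap distribute if it is not installed

    from setuptools import setup, find_packages

    setup(
        name='adw',
        version='1.0',
        packages=find_packages(),
        # The line that matters for this session: point the test command at
        # the package that holds the test_*.py modules.
        test_suite='adw.tests',
        )

With that in place, `python setup.py test` builds the package and runs everything that test_adding.py and test_documentation.py collect.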
[21:28] i won't go into the specifics of package building right now, but i've done a build locally, and the results are available here: http://pastebin.ubuntu.com/592711/ [21:28] eolo999 asked: I'm very comfortable with nosetests; is there a particular reason why you left it out from the session? [21:28] scroll down to line 251 and you'll see the test suite getting run for python2.7. scroll down to line 268 and you'll see it getting run for python2.6. the nice thing about this is that if you get a failure in either test suite, your package build will also fail. this is a great way to ensure really high quality (i.e. rock solid :) python applications in ubuntu. [21:29] jderose asked: I got the impression that setuptools wasn't well maintained lately, wasn't regarded as the way forward, esp with Python3 - is that true, WWBWD? :) [21:30] eolo999: mostly just to keep things simple. voidspace in #u-c-c says that the main advantage of nosetests is the test runner, so you can basically use all the techniques here with nose [21:30] i'm pretty sure that the future plans voidspace has for unittest2 include integrating nose more as a plugin than as a separate tool [21:31] btw, that's about all the canned stuff i have prepared, so i welcome questions from here on out [21:31] just ask them in #ubuntu-classroom-chat and we'll post the answers here [21:32] jderose: i'd say that's correct, though setuptools does get occasional new releases. distribute is the maintained successor to setuptools, but for python3 distutils2 will be the way forward [21:32] i admit that it's all very confusing :) [21:33] but my recommendation would be: use distribute for python2 stuff, and for python3 stuff if you want the same code base to be compatible with 2.x and 3.x (i.e. via 2to3). this is a great way to support both versions of python [21:34] oh yes, distutils2 will be called 'packaging' in python 3.3 and it will come in the stdlib [21:35] from #u-c-c: [21:35] barry is correct, I have plans for unittest to become more extensible (plugins) that should allow nose to become much simpler and be implemented as plugins for unittest [21:35] at the moment nose is convoluted and painful to maintain because unittest itself is not easy to extend [21:35] also lvh mentions trial, which is twisted's test runner. for mailman3 i use zc.testing which is zope's test runner [21:36] so yeah, there are lots of testing tools out there :) [21:36] !q [21:37] QUESTION: so is it okay/encouraged to run your python tests in PPA builds, say for daily recipes and whatnot? [21:38] jderose: i don't recall a discussion about it one way or the other. personally, i would enable tests for all package builds, just to ensure that what you deploy has not regressed. [21:38] however, you do need to be careful that your test suite can *run* in a ppa environment [21:39] this may not always be the case. some test suites require resources that are not available on the buildds. those tests would obviously cause problems when your ppa was built [21:40] in those cases, it may be best to have more than one "layer" of tests: one that gives good coverage and high confidence against regressions, but requires no expensive or external resources, and a full test suite you can run locally with whatever crazy stuff you need [21:40] mocks might be a good answer to help with that
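One simple way to get that kind of layering with plain unittest (Python 2.7, or unittest2 on 2.6) is to gate the expensive tests on an environment variable, so the cheap layer runs everywhere, including on the buildds, and the full suite only runs where you ask for it. A hedged sketch; the ADW_HEAVY_TESTS variable name is made up for illustration:

    # Sketch: keep a cheap default layer and gate the expensive tests.
    import os
    import unittest

    # Only run the heavy layer when explicitly requested, e.g. locally.
    RUN_HEAVY = os.environ.get('ADW_HEAVY_TESTS') == '1'


    class TestExpensiveThings(unittest.TestCase):

        @unittest.skipUnless(RUN_HEAVY, 'set ADW_HEAVY_TESTS=1 to run the heavy layer')
        def test_talks_to_the_network(self):
            # Anything needing the network, large fixtures, or other resources
            # the buildds do not have belongs in this gated layer.
            self.assertTrue(True)

Locally you would run something like ADW_HEAVY_TESTS=1 python setup.py test, while debian/rules keeps running the plain command.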
[21:40] qwebirc57920: ppa == personal package archive [21:41] https://help.launchpad.net/Packaging/PPA [21:41] QUESTION: How do I know what resources are available on the buildds? [21:41] yeah, good question :) ask on #launchpad or launchpad-dev, or just try it and see what fails ;) there should be better documentation about that on help.l.net [21:42] QUESTION: so if `test` requires a lot more dependencies than `install`, should we just put those all in Build-Depends? when will we get Test-Depends? :) [21:42] jderose: excellent question. for now, i recommend build-depends [21:43] Question: In the Java space there's a lot of mocking tools/libraries. Any need for that in Python - if so - which are the recommended ones? [21:43] voidspace can tell you how many mock libraries are available in python! answer is *lots* [21:44] btw, please note that there are tools (such as pkgme and stdeb) that can debianize your setup.py based python project. they do a pretty good job, though i'm not sure they turn test-requires into build-depends. [21:45] QUESTION: you mentioned "layering" tests into light/heavy - what's a good way of doing that? [21:47] jderose: i think this depends on the test runner you use. python's stdlib for example uses the -u flag to specify additional resources to enable (e.g. largefile). most test runners have some way of specifying a subset of all tests to run, and what i would do is, in your debian/rules file, pass the right arguments to your test runner to run the tests you can or want to run [21:47] note that in my debian/rules file, i set it up to run 'python setup.py test -vv' but really, it can run any command with any set of options [21:47] QUESTION: do different doctests share a common environment / namespace? Can I make them explicitly separate / explicitly shared? [21:48] chadadavis: all the doctests in a single file or docstring share the same namespace. one of the criticisms of doctests is that it builds up state as it goes, so if a test later in the file fails it can sometimes be difficult to determine what earlier state caused the failure. [21:48] i think that just means you have to be careful, and also, keep your doctests focused [21:49] not too big [21:49] you really just have to understand when and where each tool (unittest or doctest) is appropriate [21:50] voidspace also points out that every line in a doctest gets executed, even if there are failures (though i *think* there's a flag to cause it to bail on the first failure) [21:50] i'll just say that that can be an advantage or disadvantage depending on what you like and what you're trying to do :) [21:51] looks like we have a few minutes left. are there any other questions? [21:52] There are 10 minutes remaining in the current session. [21:52] QUESTION - what's the status of 3to2? writing Python3 is so wonderful, i'd rather go that way than 2to3 [21:52] i'll just say again what an excellent resource the testing-in-python mailing list is. i highly recommend you join!
[21:53] voidspace answers this as well as i could: [21:53] jderose: packaging (distutils2) is now using 3to2 rather than 2to3 [21:53] jderose: so although I've not used it myself, it must be in a pretty good state [21:53] i've also not used 3to2 myself much [21:54] fwiw, if you look at my test_documentation.py file, you'll see how you can do setups and teardowns for doctests [21:54] it also does fun stuff like set __future__ flags for the doctest namespace [21:56] voidspace says in #u-c-c that sphinx has support for doctests through its doctest:: directive [21:57] There are 5 minutes remaining in the current session. [21:57] well, time is almost up, so let me thank you all for attending! i know there was a lot of material and i blew through it pretty fast [21:57] in closing, i'll say that while we can all debate this or that detail of testing, there's no debate that testing is awesome and we all should do more of it! [21:58] big thanks to my colleague voidspace for helping out! [22:02] Logs for this session will be available at http://irclogs.ubuntu.com/2011/04/11/%23ubuntu-classroom.html === ChanServ changed the topic of #ubuntu-classroom to: Welcome to the Ubuntu Classroom - https://wiki.ubuntu.com/Classroom || Support in #ubuntu || Upcoming Schedule: http://is.gd/8rtIi || Questions in #ubuntu-classroom-chat || === A is now known as Guest36811 === beni is now known as Guest18564