=== Logan_ is now known as Guest83048
=== Guest83048 is now known as Logan_
=== GridCube is now known as GridNet
[04:25] Good morning
[04:26] smoser: infinity reconfigured the VMs to use 4 GB of RAM, to avoid over-committing (that wreaked havoc, as VMs encountered kernel paging errors)
[04:26] smoser: three VM qemus still need to be restarted; it's not urgent any more as the total RAM is now sufficient, but I understood he wanted to do it at some point
[04:27] smoser: we couldn't do it on Tuesday with a build queue of 500 items or so (glibc + qt), but now things are quiet
=== doko_ is now known as doko
[07:19] jamespage, maven main alarm
=== dbarth_ is now known as dbarth
=== dupondje_ is now known as dupondje
=== Spads_ is now known as Spads
[09:55] pitti, simple question: will vivid have systemd by default? I don't see anything related online ATM
[10:00] LocutusOfBorg1, http://summit.ubuntu.com/uos-1411/meeting/22401/systemd-transition/
[10:00] UOS session today
[10:02] wow, thanks!
[10:03] seb128, quick question
[10:04] vivid universe package universe/net/boinc-client requires systemd migration (sysv=True, upstart=False, systemd=False)
[10:04] if you sync it from experimental you fix the issue :)
[10:05] LocutusOfBorg1, depending strictly on systemd before it's the default seems buggy
[10:05] or do you mean the experimental version resolves that?
[10:06] what's the question exactly? ;-)
[10:07] yep, I added the service file in the experimental upload
[10:08] I was looking at the blueprint link. See: http://people.canonical.com/~jhunt/systemd/packages-to-convert/ (updated daily).
[10:08] since I do not want to waste anybody's time, I already did the file and uploaded it to experimental
[10:09] LocutusOfBorg1, why not unstable?
[10:09] freeze :/
[10:10] isn't 'make it work with the default init system' a valid reason to get a freeze exception?
[10:10] don't know :)
[10:11] asking now on the release channel
[10:12] thanks
[10:13] it very much isn't.
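The .service file mentioned above (added to boinc-client in Debian experimental) isn't quoted in the log. As a hedged illustration of the "never more than 20 lines" point made later in the conversation, a minimal systemd unit for a long-running daemon typically looks like this — all names and paths here are hypothetical, not the actual boinc-client unit:

```ini
[Unit]
Description=Example daemon (illustrative only)
After=network.target

[Service]
# Run the daemon in the foreground so systemd can supervise it directly
ExecStart=/usr/bin/exampled --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```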
[10:13] k
[10:13] I can upload anyway, and revert if an RC steps up
[10:14] it is not a major release
[10:23] LocutusOfBorg1: right, aiming for it, but it depends on getting more hands on porting upstart-only packages (still ~200 to go)
[10:27] pitti, I understand: systemd handles the SysV scripts, but not the upstart ones?
[10:28] what about patching systemd then? :)
[10:29] LocutusOfBorg1: to do what? :-)
[10:34] to avoid creating 200 service files prior to the vivid release :)
[10:34] it will give us a "smoother" transition
[10:34] but maybe most of those 200 packages just need to be sync'd over from Debian, I guess
[10:34] LocutusOfBorg1: well, there was an alternative proposal to run upstart alongside systemd
[10:34] but nobody looked at that
[10:35] I guess you will avoid that ;)
[10:35] LocutusOfBorg1: nope; those 200 are mostly the ones where we have heavy Ubuntu development/deltas, mostly in the cloud/server/touch space
[10:35] oh... ok
[10:35] LocutusOfBorg1: conceptually, upstart and systemd are so far apart that it's pretty much impossible to "just run" upstart jobs under systemd
[10:36] anyway, creating a .service file is something easy for a person with enough knowledge, I guess; I never saw a service file more than 20 lines long
[10:36] right, every individual one isn't hard; it's just cumulatively hard due to the sheer number
[10:36] anyway, I guess you know what to do, who am I? :)
[10:36] yes of course, 200 is a high number
[10:37] I guess I'll try to help a little bit, but I'm really not a systemd person (at least not now)
[10:37] so if you want to help out with submitting missing systemd units (or even just init.d scripts), that'd be greatly appreciated
[10:38] you mean vivid/main packages, right? I see 191 left
[10:43] grub-common seems a false positive, hostname might just need a sync/revert, ifupdown needs a little merge?
[10:45] oh...
many of them seem to be sysv=True, so grep "sysv=False" |wc -l returns just 106 entries
[10:45] I hope I'm on the right list: http://people.canonical.com/~jhunt/systemd/packages-to-convert/2014-11-13.txt
[10:53] right
[10:54] jodh_: ^ I think we should really start with that -- if a package has an init.d script, it's fine for now
[10:54] then it doesn't block the switch and thus shouldn't be part of the transition
[10:54] yep, 106 seems a better number :)
[10:55] and of the 106, many are still false positives, like "upstart*"
[10:56] or "systemd*"
[10:56] I guess you need to start with sysv=False && systemd=False
[10:57] grep "sysv=False" |grep "systemd=False" |wc -l
[10:57] 91
[10:57] even better
[11:41] Laney, how do you feel about this patch for the transparency problem of gnome-terminal? https://bug695371.bugzilla-attachments.gnome.org/attachment.cgi?id=274727
[11:41] bug 1292282
[11:41] bug 1292282 in gnome-terminal (Ubuntu) "background transparency is not working on gnome terminal" [Low,Confirmed] https://launchpad.net/bugs/1292282
[11:41] LocutusOfBorg1: what transparency problem?
[11:41] -> #ubuntu-desktop maybe
[11:42] well yes
[11:46] LocutusOfBorg1: grub-common isn't entirely a false positive, I just haven't yet worked out how to do that "run as late as possible to approximate the system being fully booted" thing in systemd
[11:47] It's not urgent though, as the sysv file should do the job for the time being
[11:47] cjwatson, I meant "also a Debian issue" :)
[11:48] Sure
=== MacSlow is now known as MacSlow|lunch
[12:50] bdmurray, ev, pitti: ubuntu-rtm has Contents now, would be good to confirm that retraces can now work
[12:56] cjwatson: hooray, thanks!
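The filtering pipeline discussed at 10:56–10:57 can be sketched against a few made-up sample lines (the real input is the daily packages-to-convert list linked above; the package names below are invented for illustration):

```shell
# Hypothetical sample of the daily list; only the (sysv=..., systemd=...)
# flags matter for the filtering.
cat > /tmp/packages-to-convert.txt <<'EOF'
boinc-client (sysv=True, upstart=False, systemd=False)
foo (sysv=False, upstart=True, systemd=False)
bar (sysv=False, upstart=True, systemd=True)
EOF
# Packages with neither a SysV init script nor a systemd unit are the ones
# that actually block the switch to systemd:
grep "sysv=False" /tmp/packages-to-convert.txt | grep "systemd=False" | wc -l   # → 1
```

Packages with sysv=True still boot fine under systemd's SysV compatibility, which is why the count drops from 191 to 106 to 91 as the filters tighten.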
=== MacSlow|lunch is now known as MacSlow
[13:02] barry: FYI, autopkgtest 3.8 is now in vivid, that also contains the funny apt fix
[14:02] doko, ack
[14:02] doko, not much time today but will look as soon as I can
=== alai` is now known as alai
[14:22] pitti: awesome, thanks
=== davidcalle_ is now known as davidcalle
[14:40] pitti, stgraber: I've rebooted the host on wolfe
[14:40] and restarted your instances there.
[14:42] smoser: ack, thanks
[14:45] smoser: thanks
[14:48] smoser: is it just my imagination or did we get a memory downgrade in those VMs?
[14:48] might be that infinity reconfigured yours as well
[14:49] the intention (from my POV) was to just reduce mem from 8 to 4 GB for wolfe-02 to -09 (the autopkgtest ones)
[14:49] yeah, looks like we got downgraded from 8 GB to 4 GB, which may be a slight problem for us since we run all our builds in tmpfs, as I/Os were horribly slow on there (10 min on tmpfs vs 1h30 on disk last time I tested)
[14:51] stgraber: I think the wolfe host now has enough RAM left to run yours with 8
[14:52] rebooting the autopkgtest ones freed 12 GB, and we were under the "64 GB total" line before
[14:55] stgraber: What are you building that has such awful I/O characteristics?
[14:55] infinity: tons of debootstraps
[14:55] stgraber: In theory, wolfe should be no different than denneed, fisher, and postal, and they perform quite well.
[14:56] usually 3-4 of those in parallel
[14:56] stgraber: Ask smoser to start your guests with cache=unsafe
[14:56] stgraber: Then watch 'em fly.
[14:57] infinity: yeah, cache=unsafe on the data volume would probably help quite a bit
[14:57] infinity: do they all need to have the same size?
[14:58] infinity: because we certainly do have the RAM now
[14:58] pitti: No, they don't have to be. He can certainly have bigger ones.
[14:58] pitti: But "I like building in tmpfs" is also a lousy reason to eat all the RAM. ;)
[14:58] well, who doesn't :)
[14:58] Exactly.
[14:58] * pitti builds nowhere else at home
[14:59] also, if we can get the same cache=unsafe hack on "my" wolfes, I wouldn't complain loudly either (and it might even be easier, as all VMs use the same config)
[14:59] infinity: well, my concern was that our testsuite and image building scripts do a ton of IOPS, so tmpfs obviously makes things faster for us and also means we don't hammer the disks for everyone else on the box
[14:59] but cache=unsafe may very well do that too, just at a different layer :)
[15:00] can you just swap the stickers on the motherboard, so that we have 1 TB RAM and 64 GB disk? should be enough to fit the VM images :)
[15:00] (TGIF!)
[15:00] As long as no one cares about losing the occasional syslog or whatever, cache=unsafe isn't really all that unsafe on a VM where the base system doesn't change much.
[15:01] for my VMs, I don't care at all; if they break, I run my magic script to set up a fresh one
[15:01] (Okay, it's horribly unsafe, but if you don't like your data...)
[15:01] * pitti likes "fast" better than "safe" in this case
[15:05] infinity: well, that's why we've got two disks on those VMs, right? I sure don't want to lose data on the main volume, but the data volume I don't care about
[15:05] so setting cache=unsafe for /dev/vdb would be perfectly fine
[15:13] stgraber: Do you have a host I can kill and restart to test this?
[15:13] s/host/guest/
[15:16] stgraber: I hacked up smoser's guest-start script to start the ephemeral image with cache=unsafe, want to test that I did it right. :P
[15:16] arges: here are the debs: http://people.canonical.com/~lbouchard/makedumpfile-1.5.7-3/
[15:17] pitti: Or if you have one I can kill.
[15:17] pitti: (And are you using the same root/data split?)
[15:17] Looks like you probably are.
[15:17] infinity: kill 07 or 08 (or both)
[15:18] infinity: yes, I mount /dev/sdb1 as /data
[15:18] which has the containers and logs
[15:18] but as I said, nothing on these machines is precious
[15:21] infinity: you can kill 01. Just make sure that only sdb is affected; sda pretty much never changes and is a pain to set up, so I'd like that one to be properly saved to disk :)
[15:22] That looks like it worked (for wolfe-08)
[15:22] stgraber: Want to halt 01 for me, or shall I just kill it?
[15:23] pitti: Check wolfe-08 seems happy, that's the one I twiddled.
[15:23] infinity: shutting down now
[15:23] infinity: yep, already at it
[15:24] If this hack makes you guys happy, it's a simple diff to scott's guest-start that just applies cache=unsafe to all eph* images.
[15:25] stgraber: 01 should be back.
[15:26] smoser: http://paste.ubuntu.com/9007931/
[15:27] smoser: That's what I applied to wolfe.
[15:27] stgraber: If that's not responsive enough for you, we can give you more RAM too, but I suspect that should be pretty decent.
[15:27] infinity: doesn't really feel faster to me (doing another lxc-create), but at least it seems to work
[15:28] Well, I suppose it's also possible cache=unsafe is a noop for non-virtio...
[15:28] I have no idea.
[15:28] eek, this isn't virtio?
[15:28] qemu and I, we're not best buddies.
[15:29] pitti: No. virtio is buggy as heck on !x86.
[15:29] pitti: This is ibm-vscsi, which is pretty performant in my testing.
[15:29] root@wolfe-01:/var/lib/lxc# dd if=/dev/zero of=blah.img conv=fsync bs=4M count=1000
[15:29] 1000+0 records in
[15:29] But a bunch of VMs all contending for the same disk will hurt any I/O subsystem.
[15:29] 1000+0 records out
[15:29] now to test with a real load
[15:29] 4194304000 bytes (4.2 GB) copied, 3.09727 s, 1.4 GB/s
[15:30] stgraber: Well, the real test is whether unsafe is doing as advertised, and not passing syncs down to the host.
[15:30] stgraber: Since that's the slowdown for debootstrap/dpkg.
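The knob being toggled here is qemu's per-drive cache= option; the actual change is the guest-start diff pasted above, which is not reproduced in the log. As a rough, hypothetical sketch of the idea only (image names and the if=scsi interface are invented for illustration — wolfe's guests actually use ibm-vscsi):

```shell
# Build per-disk qemu -drive arguments: the root disk keeps a safe cache
# mode, while the scratch data disk gets cache=unsafe so guest fsync()s
# are never propagated to the host.
drive_opts() {
    printf -- '-drive file=%s,if=scsi,cache=%s' "$1" "$2"
}
root_drive=$(drive_opts root.img writeback)  # sda: precious, survives crashes
data_drive=$(drive_opts data.img unsafe)     # sdb: reformatted on reboot anyway
echo "$root_drive $data_drive"
```

The split matches what stgraber asks for at 15:21: only sdb (the /data volume) runs unsafe, while sda stays properly synced.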
[15:31] infinity: still get 1.5GB/s with conv=sync so it looks like it's ignoring syncs as expected
[15:32] the fs is also mounted with data=writeback for good measure too :)
[15:32] I'm running lxc-create under eatmydata, added force-unsafe-io to dpkg, but it still sucks :/ (there is a looong time during debootstrap when eatmydata isn't active)
[15:35] pitti: you could use the download template, that'd at least make that part faster
[15:35] stgraber: I tried that a while ago, and there was some reason why it wasn't practical, but I forgot; probably firewalled or so
[15:36] root@wolfe-01:/var/lib/lxc# grep proxy /etc/environment
[15:36] http_proxy=http://squid.internal:3128
[15:36] https_proxy=http://squid.internal:3128
[15:36] then it works fine
[15:36] it may also have been that back then there were no ppc64el images
[15:36] ah
[15:36] stgraber: or that
[15:38] hm, jenkins-slave fails to start now, WTH
[16:13] infinity: test build was about as fast as on tmpfs, so I'm fine with sticking to that
[16:13] stgraber: \o/
[16:13] stgraber: Did you have two guests that needed this treatment, or just that one?
[16:14] I just have the one
[16:14] Kay, cool.
[16:14] * infinity logs out of wolfe and wanders off.
[16:15] 16 min vs 14 min, and since that's still about 4x as fast as our armhf builder, we're good :)
[16:15] with the previous I/O performance I was getting on wolfe, it was slower than my armhf devboard (which arguably does SATA-3 and uses an SSD), which seemed a bit ridiculous :)
[16:16] stgraber: Yeah, I imagine that was just insane I/O contention with every guest on there syncing like crazy.
[16:16] stgraber: Since the machine has 9 guests, all of whom pretty much just run dpkg a whole lot.
[16:16] stgraber: And qemu/kvm performs very poorly in that scenario with default settings.
[16:16] clearly
[16:19] stgraber: The only thing to watch out for with cache=unsafe is that it really is very unsafe.
A poorly-timed crash could mean the filesystem needs reformatting on reboot.
[16:20] which is fine because that's what I do at every reboot anyway :)
[16:20] Handy.
[16:20] I never want crap sticking around in /var/lib/lxc
=== MacSlow is now known as MacSlow|errand
[17:06] willcooke, slangasek, shadeslayer: who is doing the developer track summary?
[17:06] i.e. the presentation at the end
[17:07] I guess me
[17:08] cool, thanks willcooke
[17:10] gaughen, and I guess you're presenting the summary for the cloud track?
[17:10] dpm, well actually I have a conflict. trying to see if I can re-arrange it.
[17:10] it's with a partner
[17:11] gaughen, thanks. Otherwise, perhaps another cloud track lead can do it? Just let us know
=== JanC_ is now known as JanC
=== MacSlow|errand is now known as MacSlow
[17:59] jodh_: ah, thanks for updating the systemd conversion lists!
[18:00] pitti: np - scripts are here FWIW: http://people.canonical.com/~jhunt/systemd/scripts/
=== Elbrus_try_again is now known as Elbrus
[19:02] wgrant: can we move the langpack schedule to produce RTM langpack exports on Tuesday evening or Wednesday early morning?
[19:02] ogra_: ^ so when exactly do you want the new packs in RTM? then we can calculate back from that
[19:03] about 1.5 h to prepare the sources, upload them, have them build, and propagate through -proposed, and about 6 hours for the export, plus a two-hour safety margin
[19:04] pitti, as per https://wiki.ubuntu.com/LandingTeam/MilestoneSchedule we are freezing Wed. morning (EU time) now ...
[19:05] ogra_: does the freeze include the langpacks, or do you want to freeze, then export/upload langpacks, to catch the last string changes before the freeze?
[19:06] pitti, well, we freeze to be able to do image rebuilds for single fixes ...
would be good if everything langpack-related were in place when the freeze starts, so it doesn't taint tested images that get re-rolled
[19:07] sure
[19:07] wgrant: so could we kick off RTM exports around Tue 21:00 UTC? then I'd build packs around Wed 05:00 UTC, and we would be ready to build images around 07:00 UTC
[19:07] ogra_: ^ does that fit?
[19:09] that sounds good, yeah
[19:09] ack; I'll adjust the cronjobs as soon as I get wgrant's OK
[19:10] pitti, slangasek: just fixed and re-ran http://people.canonical.com/~jhunt/systemd/scripts/ - the outstanding main packages-to-convert figure is now 77.
[19:10] jodh_: neato, thanks
[19:10] wgrant: if that collides with some other exports, I'm happy to move the other stuff around too
[19:10] jodh_: yay; can you fix it some more? :-)
[19:10] jodh_: btw, do we know if any of these are packages that just require no-change rebuilds with current debhelper?
[19:11] jodh_: ah, and there are things like "upstart" and "mountall" which don't need conversion
[19:11] you mean I don't get to write a mountall systemd unit?
[19:11] jodh_: could this grow a "known false positive" blacklist?
[19:11] slangasek: no - it's a very crude script atm :)
[19:12] slangasek: ... like it even shows that upstart and upstart-bin need conversion, so at least a few false positives there :)
[19:12] sure
[19:12] or ureadahead
[19:13] jodh_: oh, and there's no separation of user session jobs vs. system jobs
[19:13] or hostname
[19:13] (indicator-*)
[19:13] pitti: yes, it needs to.
[19:13] slangasek: right. still some whittling to do.
[19:13] jodh_: thanks for making this script! Let us tell you everything that's wrong with it
[19:13] oh right, all the indicator bits are session upstart
[19:13] ;)
[19:13] slangasek: yeah, I should be asking you for patches :)
[19:13] which is kind of fair, as we need to vet them for signals
[19:14] jodh_: is that based on xnox's scripts?
ISTR that they spat out quite a bit more
[19:15] * pitti waves good night
[19:15] /qui/quit
[19:15] ok, let's try that again :)
[19:37] pitti: no - it's a rewrite in Python that uses a local mirror for speed. It's now here: https://code.launchpad.net/~ubuntu-foundations-team/+junk/migration-to-systemd
[23:29] I'm looking at a vnc4 upload, rebasing on Debian to pull in arm64 support. I noticed that, though it has been rebased since, it is retaining some old Ubuntu changelog entries (<= hardy): http://paste.ubuntu.com/9014774/
[23:29] should I follow suit and keep the recent Ubuntu entries, or just add a "Merge, Remaining changes:" entry to the top?
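On the merge-changelog question: the convention the last message alludes to is a single top entry summarizing the retained delta. A hypothetical skeleton only — the version string, date, and uploader here are invented, not the real vnc4 upload:

```
vnc4 (4.1.1+X-1ubuntu1) vivid; urgency=medium

  * Merge from Debian unstable. Remaining changes:
    - (list each retained Ubuntu change here)

 -- Uploader Name <uploader@example.com>  Thu, 13 Nov 2014 23:29:00 +0000
```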