netlar | Hi all | 00:32 |
---|---|---|
netlar | I want to get involved in the Ubuntu community, but I'm not sure where to start. | 00:41 |
=== _salem is now known as salem_ | ||
=== salem_ is now known as _salem | ||
pitti | Good morning | 03:27 |
=== chihchun_afk is now known as chihchun | ||
=== no_mu is now known as Nothing_Much | ||
=== chihchun is now known as chihchun_afk | ||
dkessel | morning pitti | 08:10 |
pitti | hey dkessel | 08:10 |
=== chihchun_afk is now known as chihchun | ||
pitti | vila: trying to juju deploy swift-storage in lxc (juju-local) fails with http://paste.ubuntu.com/8221749/ here; it expects a /dev/sdb, which doesn't exist in a container; is that what you see as well? | 08:17 |
vila | pitti: never tried to deploy swift in a local container, but the /dev/sdb issue rings a bell. By default containers created by juju are not able to deal with block devices (we ran into this issue when the image builder was trying to use qemu-nbd to manipulate images) | 08:54 |
pitti | vila: yeah, I'm trying to figure out a solution for bug 1250965 | 08:54 |
ubot5 | bug 1250965 in juju-core "Loopback mounts do not work with local provider" [Medium,Triaged] https://launchpad.net/bugs/1250965 | 08:54 |
pitti | vila: I was able to change /dev/sdb with this: | 08:55 |
pitti | $ cat swift.cfg | 08:55 |
pitti | swift-storage-zone1: | 08:55 |
pitti | zone: 1 | 08:55 |
pitti | block-device: /etc/swift/storage.img|200M | 08:55 |
vila | pitti: let me find the related MPs | 08:55 |
pitti | and use that as --config | 08:55 |
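For readers following along: the snippet pitti pasted above is a juju deploy-config file. Assembled, it would look something like this (zone number and size are taken from the transcript; the file layout is standard juju per-service config):

```yaml
# swift.cfg -- per-service deploy configuration (sketch from the lines above)
swift-storage-zone1:
  zone: 1
  # "path|size" tells the charm to back storage with a loopback file
  # instead of a real block device such as /dev/sdb
  block-device: /etc/swift/storage.img|200M
```

It would then be passed on the command line, e.g. `juju deploy --config swift.cfg swift-storage swift-storage-zone1` (the exact service name is an assumption).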
pitti | vila: I have swift running in a local container just fine, just not with the charm (manual setup); so there definitely is a solution :) | 08:55 |
pitti | (http://people.canonical.com/~pitti/scripts/setup-swift.sh) | 08:57 |
=== chihchun is now known as chihchun_afk | ||
pitti | vila: ./ubuntu/uci-engine/gatekeeper/setup-swift.sh -- haha! | 09:00 |
pitti | vila: that's very familiar to me :) | 09:00 |
vila | pitti: https://code.launchpad.net/~vila/uci-engine/lxc-image-builder/ has some hints | 09:00 |
vila | pitti: yes ;) It's not unknown ;) | 09:00 |
pitti | vila: do I actually need to make the swift charm work for a local deployment, or does deploying uci-engine just take an ip/creds for an external swift (i. e. I could just point it to my existing container)? | 09:01 |
vila | pitti: but the uci engine is going towards using uuids for swift containers so for our needs we'll probably end up having to copy the test results from a swift container to somewhere else | 09:02 |
vila | pitti: yes, the usual setup is to use credentials for an external swift | 09:02 |
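Pointing a deployment at an external swift, as vila describes, usually comes down to exporting standard OpenStack credentials before running it. A sketch with entirely hypothetical values:

```shell
# Hypothetical credentials for an existing external swift (all values made up)
export OS_AUTH_URL=http://192.168.1.10:5000/v2.0   # keystone endpoint
export OS_USERNAME=swift-user
export OS_PASSWORD=secret
export OS_TENANT_NAME=services
export OS_REGION_NAME=RegionOne
```

With these in the environment, tools such as the `swift` CLI (python-swiftclient) can reach the existing swift instance without deploying the charm locally.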
pitti | vila: ah good, then I can stop fighting with the swift charm | 09:03 |
vila | pitti: yup | 09:03 |
pitti | I followed up on bug 1250965 to improve that | 09:03 |
ubot5 | bug 1250965 in juju-core "Loopback mounts do not work with local provider" [Medium,Triaged] https://launchpad.net/bugs/1250965 | 09:03 |
vila | pitti: and the other MP is: https://code.launchpad.net/~pwlars/uci-engine/doc-lxc-updates/+merge/226382 | 09:04 |
pitti | vila: yup, no problem on that side -- it's a two-command issue to set up a swift container | 09:05 |
pitti | but it's eternal pain with the swift juju charm | 09:05 |
pitti | still fighting with other juju bugs; I found workarounds for most, but one is impossible to work around :/ | 09:05 |
vila | pitti: right, just read your bug comment regarding swift. Good to know there is some way but so far it's not a blocking area for us. | 09:06 |
pitti | vila: right, I just wanted to leave a note there as it'd really be nice to make it work with local | 09:06 |
pitti | while swift-setup.sh is nice, it's not really how we intend to do things :) | 09:06 |
vila | agreed | 09:06 |
pitti | jibel: so I added some tests to https://code.launchpad.net/~pitti/ubuntu-test-cases/desktop-systemd/ | 10:39 |
pitti | jibel: they work in a local VM, but now I think it's a good time to exercise them in production UTAH and see whether "reboot" and everything else is working | 10:39 |
pitti | jibel: do you have some minutes to show me how I can run them in the DC? | 10:47 |
jibel | pitti, I never deployed UTAH jobs in the DC, only the CI team can do that. | 10:48 |
pitti | jibel: ah, so I'll talk to ev/plars? | 10:48 |
jibel | pitti, plars or psivaa usually deploy utah jobs | 10:50 |
=== roadmr is now known as roadmr_afk | ||
=== roadmr_afk is now known as roadmr | ||
=== davmor2_ is now known as davmor2 | ||
=== roadmr is now known as roadmr_afk | ||
=== roadmr_afk is now known as roadmr | ||
balloons | ping elopio | 20:03 |
elopio | balloons: pong | 20:03 |
balloons | elopio, so I'm going to manually merge your mp: https://code.launchpad.net/~canonical-platform-qa/reminders-app/workaround1363604-add_sleep/+merge/232912. We can't land it with jenkins because it won't run your version of the tests and will just lock up | 20:04 |
balloons | I'm not sure there's another way around it | 20:04 |
balloons | once merged, I'll push to the store and hopefully things will be good | 20:09 |
elopio | balloons: yes, I think it would be ok to merge it even with the jenkins error. | 20:10 |
elopio | it's only one line. If it doesn't work as a workaround, we can revert it. | 20:10 |
=== salem_ is now known as _salem | ||
=== knome_ is now known as knome | ||
Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!