[00:08] <blkperl> why are python-crypto and python-mako throwing apt errors, "cannot be authenticated", from the cloud archive
[00:09] <sarnold> blkperl: there were problems with at least us-east-1 ec2 mirror earlier today; can you try again?
[00:10] <blkperl> apt-get update says the key is not available....
[00:11] <blkperl> maybe it failed to install during kickstart
[00:11] <blkperl> sarnold: yeah seems to be fine now
[00:59] <swaT30> does anyone know when Openstack 2013.1.4 is planned to hit the Ubuntu Cloud Archive?
[01:30] <jrwren> swaT30: it's in there.
[01:30] <jrwren> swaT30: that is havana, right?
[04:32] <RobbyF> isn't there a way to watch ssh sessions, what they are typing
[04:32] <RobbyF> if you're admin.
[04:33] <sarnold> RobbyF: sure
[04:33] <RobbyF> Thanks.
[04:35] <sarnold> RobbyF: you could use pam_tty_audit (I haven't used it myself yet, so I'm not sure exactly what it does), or you could modify sshd to record input and output, or configure a pty service to interpose between sshd and the 'real' ptys, or launch 'screen' or 'tmux' immediately upon connect and use simple screen sharing
[04:36] <sarnold> RobbyF: or you could attach ptrace to the user's sshd and read syscalls that way (more or less keylogging and reading input/output)
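A minimal sketch of the "launch a recorder on connect" idea sarnold mentions, using script(1) from sshd's ForceCommand. The wrapper path, log directory, and the idea of limiting it to an "audited" group are illustrative assumptions, not from the channel; here the wrapper is written to a temp dir so the sketch can run anywhere.

```shell
# Hypothetical session-recording wrapper (sketch only).
logdir=$(mktemp -d)   # stand-in for e.g. /usr/local/bin + /var/log/sessions
cat > "$logdir/log-session" <<'EOF'
#!/bin/sh
# Record the entire interactive session (input and output) to a per-user
# typescript. /var/log/sessions is an assumed, pre-created directory.
exec /usr/bin/script -a -q "/var/log/sessions/$USER.$(date +%s).typescript"
EOF
chmod +x "$logdir/log-session"
# Then, in /etc/ssh/sshd_config (hypothetical group name):
#   Match Group audited
#       ForceCommand /usr/local/bin/log-session
echo "wrapper written to $logdir/log-session"
```

Users can still notice or evade this (it is not the kernel-level auditing pam_tty_audit provides), so treat it as the simplest of the options listed above.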
[04:57] <lickalott> hello all.
[04:58] <lickalott> having an issue with a bad partition.  After I upgraded to 13 the OS went all wonky. everything was read only and a lot of my processes weren't working (FTP, etc...).  So i googled a bit and found that people had luck with an fsck.  Not me....
[05:00] <lickalott> i can't even boot the disk now.  At this point I'm pretty sure it's either a grub or mbr issue and i don't want to deal with that.  So i figured i would just mount the partition with my data on it and get some stuff.  Then - fresh install.  Turns out my sda5 (linux LVM) doesn't have a partition table so it won't mount.  i've tried lvm2 and all the associated commands but it refuses to mount.
[05:00] <lickalott> does anyone know of something else I can do to get some stuff off of that partition?
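An LVM physical volume like sda5 has no partition table of its own; the usual route is to activate the volume group and mount the logical volume inside it. The sequence below is printed as a checklist rather than executed, since it needs the broken disk attached; VG/LV are placeholders for whatever names "sudo lvs" actually reports.

```shell
# Hedged checklist for pulling data off an LVM volume from a rescue boot.
checklist='
sudo pvscan                       # locate LVM physical volumes (e.g. /dev/sda5)
sudo vgscan                       # locate the volume groups on them
sudo vgchange -ay                 # activate every volume group found
sudo lvs                          # list logical volumes: note the VG/LV names
sudo mount -o ro /dev/VG/LV /mnt  # mount the data LV read-only and copy out
'
printf '%s\n' "$checklist"
```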
[05:57] <ripthejacker> Hi everyone, I am trying to set up an apache solr backend for search on an amazon ec2 instance 'A', which is accessed from another ec2 instance 'B' where the program which uses solr is hosted. Which is the better way to go: open port 8983 on instance 'A', or use a proxy on 'A' for port 8983?
[08:15] <makara> hi. i've installed ubuntu server 12.04, which includes squid-deb-proxy, and I installed squid-deb-proxy-client on my own machine
[08:16] <makara> but when I do `apt-get install gimp` I get this error: Failed to resolve service geriatrix.local 'Squid deb proxy on geriatrix' of type '_apt_proxy._tcp' in domain 'local': Timeout reached
[08:16] <makara> how can I debug this?
[08:18] <rbasak> makara: sounds like a Zeroconf/avahi-daemon issue, if that helps.
[08:19] <rbasak> makara: "getent hosts geriatrix.local" should resolve the server's IP on your client machine. That involves libnss-mdns on your client, and avahi-daemon on your server.
[08:20] <rbasak> makara: (independent of squid-deb-proxy at that stage)
[08:22] <makara> rbasak, you're saying my client isn't resolving the IP of the server?
[08:22] <rbasak> makara: that's what your error message indicates to me, yes.
[08:24] <makara> rbasak, `avahi-browse --all` on the client returns `+   eth0 IPv4 Squid deb proxy on geriatrix                  _apt_proxy._tcp      local`
[08:24] <makara> so it sees it
[08:25] <rbasak> makara: right, but can it resolve geriatrix.local itself?
[08:25] <makara> rbasak, I can ping geriatrix.local
[08:30] <makara> rbasak, `avahi-browse -a -r` fails to resolve geriatrix.local
[08:30] <makara> it resolves everything else though
[08:38] <makara> rbasak, it's working now
[08:39] <makara> i had to add our network CIDR to /etc/squid-deb-proxy/allowed-networks-src.acl
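The fix makara describes can be sketched as below. To stay runnable here it operates on a stand-in copy of the file; on the real server the file is /etc/squid-deb-proxy/allowed-networks-src.acl, the CIDR values are illustrative (use your own LAN's), and squid-deb-proxy needs a restart afterwards.

```shell
# Stand-in for /etc/squid-deb-proxy/allowed-networks-src.acl (sketch only).
acl=$(mktemp)
echo "10.0.0.0/8" > "$acl"        # illustrative existing entry
echo "192.168.1.0/24" >> "$acl"   # add your network's CIDR (example value)
# then, on the real server: sudo service squid-deb-proxy restart
cat "$acl"
```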
[08:40] <makara> i'm assuming squid-deb-proxy is dependent on squid
[08:42] <rbasak> makara: that's odd. I'd have expected a different, more relevant error in that case. Thanks for sharing - good to know for the future.
[08:49] <makara> i'm trying to get lxc-create to use a cache for debs
[08:50] <makara> but looks like it just uses wget
[08:50] <makara> and squid isn't integrated with avahi
[08:50] <makara> i really need to learn more about this avahi
[08:50] <makara> just found out about it today
[08:52] <makara> squid is such a beast to setup
[08:52] <makara> :(
[08:54] <vila> Hi there, seeking advice on how to track respawn upstart events in general. My specific use case is jenkins slaves that are dying occasionally in the ci lab for various reasons (none are properly understood yet)
[08:55] <vila> So the plan is to 1) add 'respawn' and 'respawn limit' in the jenkins-slave upstart job, 2) send a nagios alert when a respawn happens 3) collect whatever we can to better understand why they crash
[09:01] <rbasak> vila: you could try adding a post-stop stanza to your upstart job to perform cleanup, trigger an alert, etc. I'm not sure if that actually works in the case of your jenkins slaves dying, but you could try and see.
[09:09] <vila> rbasak: I don't have (yet) a test environment where I could try that :-/ And I was under the impression that post-stop is explicitly called by upstart when cleanly stopping a service but won't be called when the service dies unexpectedly... Taking note of the suggestion to test it once I have a proper test env though
[09:10] <rbasak> vila: it's a good question - I don't know the details of post-stop. Another thing that might work is to set up a second job that triggers on the "stopped your-service" event.
[09:11] <rbasak> vila: though again, I'm not sure if that event gets called in the event of a respawn.
[09:13] <vila> rbasak: wow, hold on, can I set that other job to trigger on "respawned jenkins-slave" ?
[09:15] <rbasak> vila: that would be ideal, but I'm not sure that there is such an event.
[09:16] <rbasak> vila: another option might be to use a pre-start stanza, which I presume definitely is called each time (including the first time though)
[09:16] <vila> rbasak: right, so I definitely need a test env for all those ideas
[09:16] <rbasak> vila: if you can't find any documentation on this, I think it would be worth filing a bug asking for the respawn handling details like this to be documented.
[09:17] <vila> rbasak: reading upstart-events(7)
[09:17] <rbasak> vila: check http://upstart.ubuntu.com/cookbook/ too
[09:17] <vila> rbasak: no 'respawned' there
[09:17] <vila> rbasak: no 'respawn' even :-/
[09:18] <rbasak> vila: init(5) defines respawn.
[09:18] <vila> rbasak: in the man page I meant, reading (re-reading) the cookbook
[09:18] <rbasak> But not in enough detail for me to understand this behaviour.
[09:18] <vila> rbasak: yup
[09:20] <jamespage> yolanda, I still think you could get much better unit test coverage in the heat charm
[09:20] <jamespage> specifically in heat_context
[09:21] <yolanda> jamespage, i tried with identity but i had a conflict with a log() call, it wasn't working for me although i patched it
[09:21] <yolanda> shouldn't it work by just adding log to the items to patch?
[09:22] <jamespage> yolanda, no - that won't work
[09:22] <vila> rbasak: ha ha, the 'stopping' event has a PROCESS env var set to 'respawn', denoting that the job exceeded its respawn limit - quite a good time for sending a nagios alert (another alert for each respawn would be good but not as important)
[09:22] <jamespage> yolanda, that only patches objects in heat_context
[09:23] <yolanda> jamespage, problem is with a log call inside a charmhelpers function
[09:23] <jamespage> yolanda, yes - but you need to isolate your tests to the heat charm
[09:23] <jamespage> let me dig out an example for this case
[09:23] <yolanda> so i don't have to call the charmhelpers method?
[09:24] <yolanda> the test should be too obvious then...
[09:25] <jamespage> yolanda, actually I would - but I'd patch out the bits around the charmhelper context I don't want to exercise
[09:26] <jamespage> yolanda, you can use a patch annotation for the specific unit test:
[09:26] <jamespage>  @patch('charmhelpers.contrib.openstack.context.log')
[09:26] <jamespage> the cinder charm does this in a few places
[09:26] <jamespage> you can also setup some fake relations
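The key point in jamespage's advice is that mock's patch() must target the name where it is looked up. The self-contained demo below builds a tiny stand-in module on the fly so it can run anywhere; "charmhelpers_like" and its functions are invented for illustration (the real target string in the charms is charmhelpers.contrib.openstack.context.log).

```shell
# Build a throwaway module that mimics a helper calling log() internally.
tmp=$(mktemp -d)
mkdir -p "$tmp/charmhelpers_like"
cat > "$tmp/charmhelpers_like/__init__.py" <<'EOF'
def log(msg):
    # Stands in for a juju log call that fails outside a real hook context.
    raise RuntimeError("not running inside a hook")

def build_context():
    log("building context")
    return {"ready": True}
EOF
PYTHONPATH="$tmp" python3 - <<'EOF'
from unittest.mock import patch
import charmhelpers_like

# Patch log in the module where build_context looks it up; patching a
# same-named log in the test's own module would have no effect.
with patch('charmhelpers_like.log') as mock_log:
    ctx = charmhelpers_like.build_context()
assert ctx == {"ready": True}
mock_log.assert_called_once_with("building context")
print("ok")
EOF
```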
[09:26] <yolanda> mm, i think i tried like that, maybe i did something wrong
[09:26] <yolanda> i'll take another look
[09:27] <yolanda> apart from that, i tested all the hooks, can you think of some more tests?
[09:28] <vila> rbasak: for reference, http://upstart.ubuntu.com/cookbook/#id187 says: With this stanza (respawn), whenever the main script/exec exits, without the goal of the job having been changed to stop, the job will be started again. This includes running pre-start, post-start and post-stop. Note that pre-stop will not be run.
[09:28] <vila> rbasak: I'm not sure I properly parse that but that's where I got the feeling I couldn't rely on pre/post-stop
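vila's plan might look like the sketch below. The job name matches the discussion, but the limit values and the alert command are assumptions (send_nagios_alert is a stand-in), and per the thread itself this needs verifying in a test environment; the quoted cookbook text does say post-stop runs on each respawn.

```
# Hypothetical additions to /etc/init/jenkins-slave.conf (sketch only):
respawn
respawn limit 5 60        # give up after 5 respawns within 60 seconds

post-stop script
    # send_nagios_alert is a placeholder for the real alerting command.
    # When the respawn limit is exceeded, the separate 'stopping' event
    # carries PROCESS=respawn, which another job could watch for instead.
    send_nagios_alert "jenkins-slave process exited"
end script
```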
[09:32] <jamespage> yolanda, I'd probably add tests for relations where the context is not complete, to ensure that the hooks don't try to write configs
[09:32] <jamespage> yolanda, but the main gap is in context testing
[09:32] <yolanda> jamespage, ok, i'll take another look
[09:34] <jamespage> zul, the nova-compute break is a packaging problem - the postinst for nova-compute was not renamed after the drop of libvirtd detection in d/rules
[09:34] <jamespage> so nova never gets added to the libvirtd group
[10:33] <makara> how can I get lxc to use deb instead of wget?
[10:34] <makara> to take advantage of squid-deb-proxy
[11:22] <yolanda> jamespage, pushed some more tests, finally i was able to solve the log problem
[11:23] <jamespage> yolanda, looking
[11:25] <jamespage> yolanda, could you do a make sync as well please - it will pull in the icehouse pocket support for the cloud-archive
[11:25] <jamespage> other than that I'm going to push it to the store.
[11:25] <jamespage> cheers
[11:25] <yolanda> nice!
[11:27] <yolanda> done
[11:30] <jamespage> zul, I fixed subunit harder
[11:30] <jamespage> it was dh_python3  causing the problems
[11:41] <yolanda> jamespage, any documentation about active-active rabbitmq? currently looking at http://www.rabbitmq.com/ha.html
[11:41] <jamespage> yolanda, openstack docs as well I think
[11:42] <yolanda> ok this one http://docs.openstack.org/high-availability-guide/content/ha-aa-rabbitmq.html
[11:43] <yolanda> i'll read about it
[11:43] <yolanda> btw, lots of pending points in that BP...
[12:25] <yolanda> jamespage, should I replace what is there now for rabbit HA?
[12:31] <krababbel> How can I change the NTP server ntpdate uses at boot? I assume ntpdate is generally invoked in a script when a network interface is brought up? I looked at the script but I can't see a server being specified.
[12:35] <mardraum> krababbel: /etc/default/ntpdate
[12:35] <mardraum> I encourage you to use ntp properly though, with ntpd running all the time
[12:36] <mardraum> so that defaults file references ntp.conf anyway, which you would be using for the real ntp service.
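For reference, the defaults file mardraum points at looks roughly like this (abridged from memory of the Ubuntu package; check your own /etc/default/ntpdate for the exact contents):

```
# /etc/default/ntpdate (abridged sketch)
NTPDATE_USE_NTP_CONF=yes      # take the server list from /etc/ntp.conf if set
NTPSERVERS="ntp.ubuntu.com"   # used only when the above yields nothing
NTPOPTIONS=""                 # extra options passed to ntpdate
```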
[12:36] <krababbel> mardraum: Thank you, and I do want to run ntpd as well. Will ntpdate still set the clock once at boot when ntpd is installed?
[12:37] <krababbel> This server will run in a cloud service, and will be shut down often.
[12:38] <mardraum> never tested that. ntpdate usually fails if the socket is in use by ntpd. it is likely there is some logic to avoid that though in the startup scripts
[12:38] <krababbel> mardraum: I hope so. :) I will try anyway.
[12:39] <mardraum> ntpd can handle large time jumps, provided it is configured to
[12:39] <mardraum> and being cloud based sounds like a VM which will usually be provided with a decent enough time from the host on boot
[12:40] <krababbel> OK, thanks for the hint, and yes, the host should give a good time at boot, but there is no time sync offered after boot I think. It is an EC2 instance. Can I create a copy of the config file in /etc/default? For example 'ntpdate.bak'?
[12:41] <krababbel> Basically, the Amazon people advise to use ntpd on instances.
[12:41] <mardraum> ntpd without ntpdate works just fine on ec2
[12:41] <krababbel> Will a copy of the config file in /etc/default brake?
[12:42] <mardraum> break?
[12:42] <krababbel> break, yes :)
[12:42] <krababbel> non native speaker here
[12:42] <mardraum> I have never used ntpdate, so I don't know
[12:43] <mardraum> I'd advise you as a native speaker to never rely on it either :p
[12:43] <rbasak> vila: looks to me that post-stop is OK then?
[12:45] <krababbel> mardraum: Thanks again, I wasn't thinking this through, ntpd should work, I see that. I had issues in Hyper V when the vm was saved instead of shut down, time would be frozen as well.
[12:46] <krababbel> That's why I asked.
[12:46] <mardraum> hyper v hey. *giggle*
[12:47] <krababbel> :) Well, it works fine on my laptop for server stuff, but somehow time sync on standby got broken I think.
[12:49] <mardraum> actually when I re-read yes, time gets saved when you save the vm state to disk
[12:49] <mardraum> the startup scripts for ntpdate won't run when you resume the VM though
[12:50] <mardraum> it just unfreezes, as such
[12:50] <mardraum> so afaik ntpdate won't help, you need to configure ntpd to handle this.
[12:51] <krababbel> I was sure the host had "fixed" the time when the vm was restored before. Maybe I didn't realize time was off, I am not sure.
[12:51] <ikonia> keep in mind if your drift is greater than 300 seconds, you'll need to manually sync
[12:51] <krababbel> I mean HyperV does have time sync for linux guests, unlike ec2
[12:51] <krababbel> I understand that ntpd wouldn't want that, ikonia, if that's what you mean. :)
[12:53] <mardraum> I don't think this is considered "drift"
[12:54] <krababbel> Well I am sure the host of the vm fixed it through virtualization drivers or something, I am sure I tested it.
[12:54] <krababbel> When the vm was restored I mean
[12:55] <mardraum> hey ntpd actually retired ntpdate by providing the -q option
[13:04] <andygraybeal> so i'm no good at this mail stuff with postfix.  but i wonder, should i follow the official documentation or the community documentation on how to install postfix?  urls: https://help.ubuntu.com/12.04/serverguide/postfix.html & https://help.ubuntu.com/community/Postfix
[13:11] <zul> jamespage: https://bugs.launchpad.net/ubuntu/+source/python-psutil/+bug/1259928
[14:26] <zul> smoser: ping, nova needs a newer version of boto than what we have in ubuntu or debian (>= 2.12) im just worried about breaking things like euca2ools
[14:27] <jrwren> yay! I get new boto! :)
[14:28] <smoser> i think that euca2ools maybe doesn't even depend on boto now.
[14:28] <smoser> i just did a sync request for it yesterday to remove our delta from debian
[14:29] <smoser> yeah, it did
[14:29] <smoser>   fbaa65b Stopped to use the Python libraries boto and m2crypto, and started to
[14:29] <smoser>           use lxml, requestbuilder, requests, setuptools and six.
[14:29] <zul> smoser: what about simplestreams?
[14:29] <smoser> simplestreams does not use boto
[14:30] <smoser> cloud-init does, but we can address that if it happens to fail. i don't think it will.
[14:30] <zul> smoser: i just did an apt-get rdepends python-boto and simplestreams came up
[14:31] <zul> smoser: boto is imported in simplestreams/objectstore/s3.py
[14:31] <smoser> ah. yeah. ok. it does for s3 storage. you are correct. i wouldn't worry about it.
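The check that settled the question above (grepping the source for the boto import) can be sketched as a runnable demo. The tree here is a stand-in created on the fly, with an import line invented for the demo; against the real package you would grep the unpacked simplestreams source instead.

```shell
# Stand-in source tree mimicking simplestreams/objectstore/s3.py.
src=$(mktemp -d)
mkdir -p "$src/simplestreams/objectstore"
printf 'import boto.connection\n' > "$src/simplestreams/objectstore/s3.py"
# Recursively grep for boto imports, printing file:line:match.
grep -rn "import boto" "$src"
```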
[14:31] <zul> ok cool
[14:31] <smoser> boto is generally sane.
[14:31] <zul> alright ill get this sucker packaged and uploaded
[14:31] <smoser> where is there a report of why something is "stuck in proposed" ?
[14:32] <zul> smoser: hold on
[14:32] <zul> http://people.canonical.com/~ubuntu-archive/proposed-migration/
[14:35] <smoser> zul, thanks.
[14:37] <smoser> ok. i'm being stupid.
[14:37] <smoser> https://launchpad.net/ubuntu/+source/euca2ools/3.0.2-1/+build/5321133
[14:38] <smoser> that is the build. its in dependency wait on python-requestbuilder
[14:38] <smoser> but python-requestbuilder is available
[14:38] <smoser> (in universe)
[14:39] <zul> smoser: needs a MIR then
[14:40] <smoser> but that shouldn't block the build i don't think
[14:40] <smoser> hm.. maybe it does.
[14:57] <Felipe_C> HI, Anyone could answer a couple of questions regarding JUJU - Manual provisioning?
[14:59] <cfhowlett> felipe_, out of my area, but I think there's a juju support channel
[15:00] <Felipe_C> Thanks cfhowlett!
[15:02] <cfhowlett> felipe_, no problem
[15:05] <jamespage> yolanda, https://jujucharms.com/fullscreen/search/precise/heat-0
[15:06] <yolanda> wohoo!
[15:22] <hallyn_> zul: not nagging, but just in case it's not what you expected, https://launchpad.net/~zulcss/+archive/libvirt-1.2.0 is empty...
[15:23] <foursixnine> Hi guys, we've been having problems with live-build at work... we're trying to build a custom ubuntu image, but when running lb-build we get an error saying that busybox is not available
[15:24] <zul> hallyn_:  argh gimme a sec
[15:24] <foursixnine> we're able to build only a debian image (over a debian host), but when trying to do it from an ubuntu host, it always fails with similar error messages... we basically need a custom installer environment which requires no user interaction...
[15:24] <foursixnine> any ideas?
[15:25] <hallyn_> foursixnine: well utlemming does our automated cloud image building.  in general preseeded installs work ok, if it's ok to run something accelerated like kvm
[15:31] <foursixnine> hallyn_: do you know if utlemming uses live-build? or whether his process is documented somewhere?
[15:31] <thurstylark> I want to mount a samba share using fstab and then prompt the user for the username and password for the share when it mounts. Is there a way to do this?
[15:31] <foursixnine> i see there's a modified version of live-build in his github repo
[15:32] <hallyn_> foursixnine: dunno.  he might use a modified vmbuilder.  he'll answer when he comes around.
[15:33] <foursixnine> Thanks hallyn_, i'll try to stay around
[15:33] <foursixnine> this has been killing us for like 3 months now :D
[15:36] <ivoks> sigh
[15:38] <zul> jamespage:  ping swift-bench made it into the archive do we want to do a MIR for it or just leave it as a suggests for swift
[15:38] <hallyn_> ivoks: ?
[15:39] <ivoks> i'd like to propose some changes to charms
[15:39] <ivoks> like... moving adding keys and sources out of charmhelpers' contrib domain
[15:39] <ivoks> these should really be part of base
[15:39] <ivoks> there's already add_source() and configure_sources() in fetch
[15:40] <ivoks> but they are suboptimal
[15:41] <ivoks> wrong place to bring this up? :)
[16:00] <zul> hallyn_/smb: uploads to ppas are timing out for me so: http://people.canonical.com/~chucks/libvirt
[16:02] <smb> zul, thanks will use that next time I am looking at it
[16:02] <d1n0> I have a maas node that fails the smoke/burn tests. On a older version of maas, I had no problem getting it to work.
[16:02] <zul> smb:  np
[16:31] <jamespage> zul, nah - leave it in universe
[16:31] <jamespage> suggests
[16:31] <zul> jamespage:  k
[16:33] <jamespage> zul, adam_g: if either of you are feeling brave
[16:33] <jamespage> https://code.launchpad.net/~james-page/neutron/ml2-ovs-cleanup-fixes/+merge/198546
[16:33] <jamespage> I'd like to get that into the trunk testing packaging so I can do the associated charm work
[16:50] <jamespage> zul, new boto?
[16:51] <zul> jamespage:  yeah tests were failing because of it (requirements.txt is asking for >=2.12.0)
[16:51] <jamespage> zul, yeah - I saw
[16:51] <zul> jamespage:  anyways nova builds fine now
[16:52] <jamespage> zul, gonna do the backport for 12.04?
[16:56] <zul> jamespage:  yeah do i run the boom script or the backport job (just making sure)
[16:57] <jamespage> zul, either
[16:57] <zul> jamespage:  ack
[17:07] <jamespage> zul, aside from the dashboard not being django 1.6 compat it all looks OK
[17:08] <zul> jamespage:  yay..
[17:08] <zul> jamespage:  just fixing trove so we can get it past -proposed and then will ubuntize it
[17:10] <jamespage> zul, thats wip upstream
[17:10] <zul> jamespage:  django 1.6?
[17:11] <jamespage> zul, yes
[17:11] <zul> jamespage:  ack
[17:12] <frojnd> Hi there.
[17:13] <zul> jamespage:  lovely..trove has got sqlalchemy problems
[17:13] <jamespage> \o/
[17:14] <jamespage> zul, adam_g: want to try to get together to discuss the nova-compute-* rejigs we've been avoiding for the last month?
[17:14] <jamespage> it would be good to get that out of the way
[17:14] <adam_g> jamespage, sure
[17:14] <zul> jamespage:  sure why not
[17:15] <jamespage> adam_g, zul: how about now? we can do via irc I think
[17:15] <zul> jamespage:  sure
[17:15] <jamespage> OK _ so here's my thoughts
[17:15] <jamespage> nova-compute is currently too libvirt-centric, so step one is to push out the libvirt bits and dependencies to the libvirt-specific hypervisor packages
[17:16] <zul> ok
[17:16] <jamespage> that will then allow us to support proxy based stuff a bit easier - think nova-compute-vmware for example
[17:16] <jamespage> adam_g, was that what you were thinking? I know you hit some issues during the charm work last cycle?
[17:16] <adam_g> one sec, branching our current packaging
[17:17] <jamespage> I think most of the deps for nova-compute need pushing out to hypervisor packages
[17:17] <adam_g> im thinking: create a 'nova-compute-libvirt' package that has all the current dependencies of the 'nova-compute' package and provides nova-compute-hypervisor
[17:18] <adam_g> make nova-compute-{kvm, lxc, etc} depend on nova-compute-libvirt and each adds hypervisor-specific deps
[17:18] <adam_g> then we can implement other nova-compute-$foo's alongside the nova-compute-libvirt
[17:19] <adam_g> the nova-compute package would be stripped of most/all of its current deps (iptables, kpartx, qemu-utils, etc)
[17:19] <adam_g> i think its only dependency would be on nova-compute-hypervisor
[17:19] <adam_g> thoughts?
[17:19] <jamespage> adam_g, that sounds about right
[17:20] <jamespage> I was wondering how much benefit having the nova-compute-libvirt package would actually give - but I guess it's a single place to add nova to the libvirtd group if nothing else
[17:20] <zul> no complaints from me
[17:20] <jamespage> OK - so this sounds like a plan - who's got some time/inclination to work on this?
[17:20] <adam_g> jamespage, well right now there are general libvirt requirements that we define as deps of 'nova-compute', around ~12 of them
[17:20] <zul> jamespage:  do you want to take care of this?
[17:21] <jamespage> adam_g, sure
[17:22] <adam_g> in addition to nova-compute-libvirt, we can add packages (albeit dependency placeholders that only install the proper nova-compute.conf for now) for every driver in nova-compute
[17:22] <jamespage> adam_g, but for deps we could just manage that using a substvar in the packaging
[17:22] <smoser> anyone have thoughts on $ dpkg-query --show "python-novaclient"
[17:22] <smoser> python-novaclient       1:2.15.0-0ubuntu1
[17:22] <smoser> $ dpkg-query --show python-keyring
[17:22] <smoser> python-keyring  3.3-1
[17:22] <smoser> oops
[17:22] <smoser> funny
[17:22] <smoser> https://bugs.launchpad.net/ubuntu/+source/python-keyring/+bug/1260017
[17:22] <smoser> what is the right way to address that?
[17:22] <adam_g> smoser, i think upstream's solution is to just uninstall keyring
[17:25] <smoser> adam_g, ?
[17:36] <hallyn_> zul: all right, building packages to test libvirt...
[17:44] <smoser> adam_g, well, i posted a work around there.
[17:49] <smoser> utlemming, if you want to open a bug on cloud-utils and say it should rewrite MBR to gpt on > 2TB, then please do so.
[17:49] <zul> jamespage:  neutron looks ok to me
[17:50] <smoser> it doesn't seem like a bad idea.
[17:50] <jamespage> zul, the neutron-ovs-cleanup stuff worries me a bit - it needs some testing
[17:50] <jamespage> but I'd like to do that pre-next release to archive via trunk testing
[17:50] <utlemming> smoser: ack
[17:51] <zul> jamespage:  yeah im not the best person to review neutron functionality changes either though
[17:52] <jamespage> zul, I potentially need to SRU that as well
[17:52] <jamespage> zul, I think it might have been the cause of an odd issue I saw during havana testing
[17:52] <zul> jamespage:  for the trove stuff im just going to get it building and uploaded I can de-debconf it tomorrow
[17:52] <jamespage> zul, ok
[17:52] <zul> jamespage:  why the SRU?
[17:53] <jamespage> zul, people are running the cleanup by hand right now
[17:53] <jamespage> which is sucky
[17:53] <zul> jamespage:  ah ok
[17:54] <jamespage> zul, I think our maas might be bust in the ci lab right now as well
[17:54] <jamespage> but I'll look at that next week
[17:56] <utlemming> smoser: I think there is a slight wrinkle in the idea of converting to GPT. 1) does the BIOS support GPT, or is it BIOS/MBR only; 2) since BIOS/GPT booting uses a BIOS boot partition (gdisk type EF02), where does that get made?; 3) growpart would need to run grub-install to populate the BIOS boot partition
[17:57] <smoser> well, it could convert it to the fully backwards compat gpt.
[17:57] <smoser> (i thought there was such a thing)
[17:57] <utlemming> smoser: there is, but it fragile
[17:57] <smoser> and since its growing, there is room at the end for the gpt footer
[17:57] <smoser> (or it wouldn't grow it)
[17:57] <utlemming> smoser: the gpt footer isn't the problem, per se. Consider the following:
[17:58] <utlemming> you boot with a 2TB disk. Growpart resizes everything. Then you switch to a 4TB disk. Growpart resizes everything.
[17:58] <utlemming> on the first resize, where do you put the EF02 partition?
[17:58] <utlemming> if you put it at the end, then you can't resize part 1
[17:59] <utlemming> if you use the hybrid MBR/GPT, then root may not be partition 1
[18:00] <utlemming> smoser: if this is a use case that is becoming common, I would almost rather we turn on UEFI images for 12.04 too. It seems a lot safer than trying a bunch of heuristics to figure out if there is enough space between the partition table and partition 1 to install an EF02 partition for grub.
[18:01] <utlemming> smoser: we could probably make cloud images that work, but in the end, those rolling their own images might encounter a lot of pain
[18:01] <smoser> utlemming, converting mbr to gpt is not converting mbr to uefi
[18:01] <utlemming> smoser: so I think that I want to retract my idea of converting to GPT on 2TB or bigger
[18:01] <utlemming> smoser: right, its not
[18:02] <utlemming> smoser: for BIOS/GPT you need a partition to install grub to, type EF02
[18:02] <utlemming> smoser: for UEFI, you need a partition to install the UEFI bits, type EF00
[18:03] <utlemming> smoser: your only choice is a hybrid MBR/GPT and that is fragile and reportedly unsupportable.
[18:03] <smoser> then i agree.
[18:03] <smoser> i didn't realize that mbr/gpt hybrid was unsupportable
[18:03] <utlemming> smoser: I made inquiries and was told in no uncertain terms to completely avoid it
[18:17] <hallyn_> zul: ok, so no good yet.  just doing apt-get dist-upgrade gave me http://paste.ubuntu.com/6557253/ , but also it didn't cause the new libvirt-python to try to install
[18:17] <hallyn_> (tested using reprepro)
[18:17] <zul> hallyn_:  ok ill fix that up
[18:18] <hallyn_> zul: (I assume you know this better than i do, but you need a python-libvirt package in libvirt-python, depending on the new pkg, to force the upgrade)
[18:18] <hallyn_> zul: oh, ok. thanks
[18:19] <hallyn_> i'll keep this instance running and hot :)
[18:19] <zul> hallyn_:  yeah i didnt add that yet
[18:20] <hallyn_> ok
[18:29] <smoser> adam_g, jamespage zul random useful thing, harlowja pointed me at
[18:29] <smoser> https://github.com/harlowja/gerrit_view/
[18:33] <zul> smoser: thats pretty cool
[18:42] <smoser> harlowja is super whiz bang cool.
[18:49] <d1n0> argh, lol ... No PXE template found in u'/etc/maas/templates/pxe'
[19:04] <swaT30> jamespage: any ETA on getting Grizzly 2013.1.4 into updates?
[19:39] <hggdh> zul: ping
[19:40] <zul> hggdh:  whats up
[20:13] <hallyn_> smoser: kirkland: zul: stgraber: opened bug 1260062
[20:14] <hallyn_> let's see who else i can piss off today
[20:14] <smoser> hallyn_, wooohoo
[20:14] <hallyn_> clearly a little war tangentially related to init systems is where i should stoke the flames
[20:15]  * zul hands hallyn_  some gasoline
[20:15] <hallyn_> alas amazon hasn't yet shipped my firestarter stone
[20:39] <w0rmie> i've installed saucy in an NFSBOOT folder to be run from node machines on my LAN, do i need a grub configuration to make them boot through the NIC?
[20:41] <kirkland> hallyn_: ;-)
[21:41] <hallyn_> kirkland: given as this appears to be your baby, do you mind uploading http://paste.ubuntu.com/6558177/ ?
[22:15] <kirkland> hallyn_: heh, that's my baby?  :-)
[22:16] <kirkland> hallyn_: I don't mind sponsoring for you, but it's been eons since I touched that
[22:16] <hallyn_> kirkland: eh, you did the last upload :)
[22:16] <hallyn_> it's been eons since anyone touched that
[22:18] <kirkland> hallyn_: done
[22:19] <hallyn_> kirkland: thanks!  1 down, 2 to go (before vmbuilder can be dropped)
[22:19] <kirkland> hallyn_: ;-)
[22:19] <kirkland> hallyn_: what else?
[22:19] <hallyn_> rbasak: bug 1242383, why is it not fixed in trusty?
[22:20] <hallyn_> kirkland: sandbox-upgrader and auto-upgrade-tester, which are more intricate
[22:20] <hallyn_> they need to either be dropped if not in use, or else switched to using rbasak's uvtool
[22:23] <kirkland> hallyn_: gotcha; can't help with those
[22:23] <hallyn_> kirkland: yup - thanks!  ttyl
[22:35] <rbasak> hallyn_: it's fixed in trunk. I just haven't done an upload recently.
[22:36] <hallyn_> ok.  planning one soon?
[22:36] <rbasak> hallyn_: I can do an upload tomorrow if you need it? Since the consumers so far have mainly been manual (PPA users) or on cloud images (the dependency is pulled in by cloud-init), I didn't think it affected many people today, so I just had it down as "will be fixed on next upload"
[22:37] <hallyn_> rbasak: yeah it's just i'm about to shift some vmbuilder users over to uvtool, probably :)
[22:37] <hallyn_> just testing manually right now, so i know what to do
[22:38] <rbasak> hallyn_: I should finish and upload some manpages too, then :-/
[22:38] <hallyn_> hm.  does it require the ability to mount filesystems?
[22:38] <hallyn_> that woudl rock :)
[22:38] <hallyn_> i'm flailing around like a fish out of water here
[22:38] <hallyn_> btw my typing is sucking bc my hands are SO COLD
[22:38] <rbasak> No. It can run as a normal user, provided you're in the libvirtd group for the libvirt bits.
[22:38] <sarnold> jump back in to the water!
[22:39] <hallyn_> rbasak: so long as all the fs magic is done inside kvm it's ok,
[22:39] <hallyn_> rbasak: but if uvt tries to mount anything then it won't run by default in the container i'm testing in
[22:39] <hallyn_> (i'm trying to figure out why uvt-kvm won't work for me)
[22:39] <hallyn_> (trusty container)
[22:42] <hallyn_> rbasak: and now i get RuntimeError: Multiple images found that match filters ['release=saucy'].
[22:42] <rbasak> hallyn_: wasn't there some issue when danwest tried to run it in a container, that I then asked you about? I forget what it was exactly.
[22:42] <rbasak> hallyn_: try "release=saucy arch=amd64". The latest PPA version might be more helpful. There you can do "uvt-simplestreams-libvirt query release=saucy" and it'll show you what you have so you can disambiguate.
[22:43] <rbasak> hallyn_: (and I'll upload the PPA version soon)
[22:43] <hallyn_> rbasak: that gets me back to the more familiar error: http://paste.ubuntu.com/6558409/
[22:44] <hallyn_> yeah lemme try the ppa, as long as that's going into the archive soon
[22:44] <hallyn_> whcih ppa?
[22:44] <rbasak> hallyn_: does that file exist?
[22:44] <rbasak> hallyn_: ppa:uvtool-dev/trunk
[22:45] <hallyn_> no the file doesn't exist
[22:45] <rbasak> hallyn_: sounds like there was a problem importing it.
[22:45] <hallyn_> also why does uvt-simplestreams-sync not autocomplete
[22:46] <rbasak> It uses argparse. I don't think we have an argparse autocompleter in the archive at all, do we?
[22:46] <rbasak> uvt-simplestreams-libvirt sync
[22:46] <hallyn_> no i just mean typing 'uvt-si<tab>' doesn't even work
[22:46] <rbasak> wfm
[22:46] <hallyn_> weird
[22:49] <hallyn_> rbasak: so i gather that during the sync i shouldn't be getting :  libvirt: Storage Driver error : Storage volume not found: no storage vol with matching name 'x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTMuMTA6YW1kNjQgMjAxMzEyMDQ='
[22:49] <zamadatix> Hello
[22:49] <rbasak> hallyn_: that error is a libvirt API issue I have yet to try and track down. It can be ignored.
[22:50] <hallyn_> ok.
[22:50] <rbasak> hallyn_: libvirt API has no "do you have volume X" AFAICT. It just has "give me volume X", and on failure prints to stderr instead of quietly telling the caller.
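The "x-uvt-b64-" volume name in the sync error above is just a base64-wrapped simplestreams product id plus a serial. A minimal sketch of the decoding (the helper function name is made up; only the prefix convention and the payload come from the error message):

```python
import base64

def decode_uvt_volume_name(name):
    """Decode a uvtool libvirt volume name of the form 'x-uvt-b64-<base64>'."""
    prefix = "x-uvt-b64-"
    if not name.startswith(prefix):
        raise ValueError("not a uvtool-encoded volume name")
    return base64.b64decode(name[len(prefix):]).decode()

# The volume name from the error above decodes to a simplestreams
# product id and a serial:
print(decode_uvt_volume_name(
    "x-uvt-b64-Y29tLnVidW50dS5jbG91ZDpzZXJ2ZXI6MTMuMTA6YW1kNjQgMjAxMzEyMDQ="))
# → com.ubuntu.cloud:server:13.10:amd64 20131204
```

This is why the stderr noise is harmless: uvtool is merely looking up whether it already has that image volume, and libvirt reports the miss loudly.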
[22:50] <hallyn_> so yay, purge+sync with ppa version got me further
[22:50] <hallyn_> now i get: libvirt.libvirtError: internal error: no supported architecture for os type 'hvm'
[22:50] <rbasak> That happens when libvirtd didn't see KVM support on startup.
[22:51] <hallyn_> oh i created the node but didn't chown it, my bad
[22:52] <hallyn_> rbasak: thanks, i can work with this.  so, do you have any interest in updating vmbuilder users to uvtool?  :)
[22:53] <rbasak> hallyn_: I guess it makes sense. I'd certainly like to update vmbuilder users to use our public cloud images and do what they want with those instead. If uvtool can help with that, then fair enough.
[22:53] <rbasak> hallyn_: I need to catch up with documentation and so on though, and particularly for the vmbuilder use cases. I'm not clear on exactly what those are.
[22:55] <hallyn_> rbasak: sandbox-upgrader and auto-upgrade-tester packages
[22:56] <zamadatix> Does anyone have experience setting up multiple VLANs on a single physical adapter?
[22:56] <rbasak> !anyone | zamadatix
[22:58] <zamadatix> Having issues after defining the interfaces in /etc/network/interfaces
[22:59] <zamadatix> There are about 30 different vlans defined, all IPs are static. I defined a gateway for each but they all seemed to use the gateway of the first defined vlan (1)
[23:00] <zamadatix> if I try to manually add the route for a subnet it says it's already there, but route says each adapter's gateway is * and * is the gateway for vlan 1
[23:01] <zamadatix> If I do ping -I vlanX I can ping anything in the layer 2, but obviously the traffic isn't being routed right so no other subnets can ping the server
[23:01] <rbasak> VLANs don't really have gateways. The "default gateway" is a per-system thing, not a per-VLAN thing.
[23:02] <Patrickdk> I have tons of vlans without gateways, normally only have 3 or so with gateways
[23:02] <rbasak> If you're originating outbound traffic, then you need to make sure that the traffic uses the correct source IP.
[23:02] <rbasak> You basically have the same problem as you would have if you had multiple interfaces.
[23:03] <rbasak> http://lartc.org/howto/ might have some relevant help for you here.
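The point rbasak is making can be sketched as an /etc/network/interfaces fragment. Addresses, VLAN ids, and interface names here are hypothetical; the key is that only one interface carries a gateway line, because the default route belongs to the host, not to each VLAN:

```
# Sketch only -- hypothetical addresses and VLAN ids.
auto eth0.10
iface eth0.10 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1      # the ONE default gateway for the whole system

auto eth0.20
iface eth0.20 inet static
    address 198.51.100.10
    netmask 255.255.255.0
    # no gateway line: replies to this subnet go out directly,
    # everything else follows the single default route above
```

If traffic originating from the non-default VLANs must leave via their own routers, that is the source-based ("ip rule" policy routing) setup the lartc howto covers.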
[23:03] <Patrickdk> multiple switches/nics
[23:03] <zamadatix> Thanks for the link
[23:03] <Patrickdk> unless you got a l3 switch, and told it to do routing
[23:03] <Patrickdk> not something I would do, but
[23:05] <zamadatix> There is a core router doing the layer 3 magic
[23:06] <zamadatix> Thanks, I'll have to read over that link some more
[23:06] <hallyn_> zul: libvirt-python package doesn't actually include the libvirt bindings...
[23:07] <hallyn_> while debian/tmp/usr/lib/python2.7 still exists - missing entry in .install?
[23:10] <hallyn_> testing with that .install
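When files land in debian/tmp during the build but never reach the binary package, the usual fix is the glob hallyn_ is alluding to: an entry in the package's dh_install file. A sketch (the actual .install filename and paths in the libvirt packaging may differ):

```
# debian/libvirt-python.install (sketch -- filename is an assumption)
usr/lib/python2.7/*
```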