[00:22] <mwhudson> hah
[00:22] <mwhudson> + wget --no-verbose -O - http://localhost/MAAS/api/1.0/files/?key=8a8b1d56-ae02-11e2-a65a-ac162d8ba8b8&op=get_by_key
[00:22] <mwhudson> in cloud-init-output.log
[00:22] <mwhudson> i can see two problems here: 1) i put localhost in the environment config
[00:22] <mwhudson> 2) this is an armhf node...
[00:23] <mwhudson> + tar xz -C /var/lib/juju/tools/1.10.0-precise-amd64
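An aside on the logged wget line above: as pasted, the unquoted `&` would not survive an interactive shell, which backgrounds `wget` at the `&` and parses `op=get_by_key` as a variable assignment (cloud-init's generated script presumably quotes it). A minimal sketch of the quoted form; the key and the `localhost` host are copied from the log, not working endpoints:

```shell
# the MAAS files-API URL as logged above; single quotes keep the
# '?' and '&' from being interpreted by the shell (unquoted, the
# shell would background wget at the '&' and treat the remainder
# as a variable assignment)
url='http://localhost/MAAS/api/1.0/files/?key=8a8b1d56-ae02-11e2-a65a-ac162d8ba8b8&op=get_by_key'
# the actual fetch would be:  wget --no-verbose -O - "$url"
# (commented out here since localhost/MAAS is an assumption)
printf '%s\n' "$url"
```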
[00:24] <ahasenack> mwhudson: how did the bootstrap go?
[00:24] <mwhudson> ahasenack: getting there
[00:24] <ahasenack> ah, it fetched the amd64 tarball?
[00:24] <mwhudson> well
[00:24] <mwhudson> it didn't
[00:25] <mwhudson> but for reasons that are pebkac
[00:25] <mwhudson> if i hadn't made that mistake, i don't think the next bit would have gone very well though
[00:25] <ahasenack> ok
[00:25] <mwhudson> thumper: oy, are you here?
[00:26] <sarnold> do the tools exist for armhf? I thought I saw in #ubuntu-release earlier that juju-core had incorrect architecture lines...
[00:26] <mwhudson> sarnold: not that i can tell
[00:27] <mwhudson> sync-tools just grabs stuff for amd64 and i386
[00:27] <mwhudson> so i guess i wasn't expecting the bootstrap to _work_
[00:27] <mwhudson> but i also wasn't expecting it to blithely try to grab the amd64 tools
[00:28] <mwhudson> i guess i should try pyju again, i hear this actually works with armhf...
[00:29] <thumper> mwhudson: oh hai
[00:30] <mwhudson> thumper: i'm having fun with juju-core on (well, deploying to) armhf
[00:30] <mwhudson> thumper: would you expect this to work?  or am i rubbing myself across the bleeding edge again?
[00:30] <thumper> mwhudson: hmm...
[00:30] <thumper> mwhudson: which version of go are you using?
[00:31] <mwhudson> thumper: no idea
[00:31] <mwhudson> i don't think i have go installed actually
[00:31] <mwhudson> hm no i do
[00:31] <thumper> golang is the package
[00:31] <mwhudson> mwhudson@lava-leg01:~$ dpkg -l golang
[00:31] <mwhudson> No packages found matching golang.
[00:32] <thumper> well, since we don't build for arm yet, I'd be very surprised if you got juju-core working on arm
[00:32] <mwhudson> ah
[00:32] <mwhudson> ii  golang-go                                     2:1-5                                         Go programming language compiler
[00:32] <thumper> hmm... mine is 2:1.0.2
[00:32] <mwhudson> this is precise fwiw
[00:32] <thumper> mwhudson: I've been told by davecheney that go prior to 1.1 is crackful on arm
[00:33] <mwhudson> why does this matter though?
[00:33] <thumper> so no, I wouldn't expect it to work
[00:33] <mwhudson> does stuff get compiled on the fly?
[00:33] <thumper> we don't build arm binaries
[00:33] <thumper> so how are you running it if not compiling yourself?
[00:33] <mwhudson> i'm running on an intel box
[00:33] <mwhudson> pointed at maas
[00:33] <mwhudson> that has a bunch of arm nodes enlisted
[00:36] <thumper> hmm...
[00:36] <thumper> there are no arm tools built
[00:36] <thumper> so no, I don't think it will work
[00:36] <mwhudson> ok
[00:36] <thumper> you'd have to build your own
[00:37] <thumper> which is a bit more work
[00:37] <mwhudson> it seems to try to fetch the amd64 tools on my node
[00:37] <mwhudson> which i guess is a bug?
[00:37] <thumper> yeah, that's kinda hard coded :(
[00:38] <mwhudson> ok
[00:38] <thumper> so, yes a bug, arm isn't supported yet
[00:38] <thumper> through the wonders of a compiled language
[00:38] <mwhudson> who can i talk to to find out if this is a priority?
[00:38] <thumper> mramm
[00:38] <mwhudson> ok
[00:39] <thumper> np
[07:07] <davecheney> can I get some help with the juju gui ?
[10:45] <gnuoy`> Hi, I'm trying to debug an issue I'm seeing between 2 charms I'm testing on lxc. I have a debug-hooks session running, and if I remove-relation between the charms the debug-hooks session immediately kicks in and gives me a session in the charm dir. However, add-relation does not; the debug-hooks session is silent, as if there is no attempt to run any hooks.
[10:45] <gnuoy`> http://paste.ubuntu.com/5604164/
[10:56] <gnuoy`> hmm, I'm going to redeploy and try and recreate.
[17:10] <marcoceppi> Should a charm ever need to "require" a juju-info interface? I thought that subordinates were supposed to provide it, but all "other" communication was to be done via other interfaces
[17:45] <SpamapS> marcoceppi: nothing provided is useful without at least one requirer
[19:52] <jcastro> hey marcoceppi
[19:52] <marcoceppi> hey jcastro
[19:52] <jcastro> now that your charm tools stuff got merged maybe you can post the status on the list?
[19:52] <jcastro> in my transition thread?
[19:53] <marcoceppi> jcastro: we still haven't figured out what to do about the PPA. Adding the juju/pkgs ppa will "break" go-juju installs
[19:53] <marcoceppi> I'd hate to say "Yeah, charm-tools update, break your juju to install"
[19:54] <jcastro> core is in backports, maybe we should put charm-tools there too?
[19:54] <marcoceppi> jcastro: probably
[19:54] <jcastro> non-PPA workflow for everything would be quite nice.
[19:54] <marcoceppi> I know we talked briefly about just putting charm-tools in its own PPA, but backporting would be nice
[19:55] <jcastro> marcoceppi: ok so that would be either Daviey, mgz, or jamespage, all in the UK and probably at the bar, can you sync up with them on Monday?
[19:55] <marcoceppi> jcastro: yeah
[20:09] <m_3> marcoceppi: I think juju/pkgs ppa won't break juju-core once 0.7 lands in the ppa
[20:09] <marcoceppi> m_3: ah, any idea when that'll happen? I know it's failed to build in there the last few times
[20:09]  * m_3 looks to see if that's done
[20:09] <m_3> ah that's why then
[20:09] <m_3> I was wondering what the deal was
[20:10] <m_3> marcoceppi: so, no, no clue
[20:10]  * marcoceppi nods
[20:10] <marcoceppi> looks like quantal built, but raring is still no-go
[20:11] <m_3> marcoceppi: precise still no
[20:11] <m_3> at least it doesn't show up on my updates
[20:12] <marcoceppi> unfortunate
[20:21] <ali1234> i have a dedicated server with 12.04. can i manage it using juju?
[20:42] <sarnold> ali1234: it's not exactly ideal for that; you can definitely use the 'local' lxc provider to deploy services into lxc containers on that one machine, but juju really shines at allocating machines and services from clouds, like from MAAS or EC2 or rackspace...
[20:45] <ali1234> why is there even a difference?
[20:49] <sarnold> ali1234: you -can- use the 'jitsu deploy-to' command to force a deployment on a specific machine, without forcing containers, but that's not exactly promoted with the python version of juju; the go version is supposed to do that 'natively', without using the same kinds of hacks, but I haven't followed that closely enough to know if it will work that way or not
[20:49] <ali1234> i want to use juju to manage virtual machines in a cloud
[20:49] <ali1234> the only caveat being i want the cloud to run on my server
[20:50] <ali1234> so i guess i need lxc for that
[20:50] <ali1234> but how is that really different than using AWS?
[20:51] <marcoceppi> ali1234: You'll need an underlying provider for juju to talk to. The difference between AWS and just running VMs on a machine is AWS offers a channel for juju to talk to. Juju doesn't actually do the heavy lifting of turning on and provisioning machines, it simply requests them. There are already tools out there (e.g. Amazon, OpenStack, MAAS) to manage machine instances
[20:51] <ali1234> ah yes, openstack
[20:51] <ali1234> why does it need a minimum of 7 machines?
[20:51] <marcoceppi> Juju is really more interested in setting up those instances and then orchestrating communication between the services and units deployed
[20:52] <marcoceppi> ali1234: why does what need a minimum of 7 machines?
[20:52] <ali1234> openstack
[20:52] <marcoceppi> Well, if you're using devstack, you only need one machine. 7 machines is the "recommended minimum hardware" for running openstack. You'll need "compute" nodes, servers for management, servers for authentication, storage, etc etc
[20:53] <ali1234> no, the recommended minimum is 10. 7 is the absolute minimum according to the document i read, and now cannot find
[20:53] <marcoceppi> It's not recommended (for HA, etc) to have everything for openstack on one machine. It doesn't scale very well
[20:54] <marcoceppi> If you're looking to play around with OpenStack you can do so on one machine: http://devstack.org/ as for reasons why OpenStack says this or that I'd give #openstack a go :)
[20:54] <ali1234> i'm not looking to play around
[20:54] <ali1234> this is a production server
[20:54] <ali1234> i want a better way to manage it
[20:55] <ali1234> preferably one that won't increase the cost by a factor of 10...
[20:55] <marcoceppi> OpenStack isn't really a tool to just manage a server, it's a tool to build cloud environments
[20:55] <sarnold> marcoceppi: wow, devstack needs to be better advertised :)
[20:56] <ali1234> the way i see it, a cloud environment just means running virtual machines
[20:56] <marcoceppi> ali1234: that's a really simplistic view of it
[20:56] <ali1234> perhaps
[20:57] <marcoceppi> Just off the top of my head, you have things like: storage management, networking management, images to build virtual machines, and actual "compute" nodes you deploy vms to
[20:57] <marcoceppi> All of that and more go into building a cloud environment. It all takes hardware to do so
[20:58] <marcoceppi> I'm no OpenStack expert, the people over in #openstack would be far better equipped to answer questions about how it may or may not help you
[20:58] <ali1234> and there is nothing else that could do it?
[20:59] <marcoceppi> ali1234: could do what?
[20:59] <ali1234> give me a private cloud on a single server
[20:59] <ali1234> which is suitable for production use
[21:02] <sarnold> ali1234: fiddle around with juju's local provider on your workstation for a little bit and see what you think. You could use it to manage lxc instances on your server; that'd probably be the least-overhead sort of setup for this, short of just running all the services on the one machine as we have for the last 30 years....
[21:03] <ali1234> i'm running all the services on the one machine now, the problem is they all conflict with each other
[21:04] <sarnold> aha :) darn
[21:04] <marcoceppi> ali1234: Someone was able to get the Juju local provider to work on a server and map public IPs to different machines: http://askubuntu.com/a/282415/41
[21:04] <marcoceppi> that may or may not work for what you're doing
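For context, the local provider discussed here is selected by an `environments.yaml` stanza. The sketch below writes a minimal pyjuju-0.7-era example; the field names are from memory, differ in juju-core, and every value shown is illustrative, so check against your installed version's documentation:

```shell
# sketch: write a minimal local-provider stanza for pyjuju-era juju;
# all keys and values below are assumptions, not a verified config
cat > environments.yaml <<'EOF'
environments:
  local:
    type: local
    data-dir: /var/tmp/juju-local    # illustrative path
    admin-secret: not-a-real-secret  # illustrative value
    default-series: precise
EOF
```

With `default-environment: local` set (or `juju -e local`), deploys then land in LXC containers on the one machine.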
[21:04] <ali1234> it's running 4 wordpress, 2 drupal, moinmoin, some random ecommerce site...
[21:05] <ali1234> a gitlab (which needs a load of unpackaged ruby gems), and several static pages
[21:05] <ali1234> oh and a git server, and a bitcoind
[21:06] <sarnold> no wonder you want something better :)
[21:06] <sarnold> marcoceppi: nice.
[21:07] <ali1234> it costs 60 euros/month... last time i checked that will get me 4 instances on amazon...
[21:07] <sarnold> hetzner? :)
[21:07] <ali1234> right
[21:07] <sarnold> hehe, I've thought about that too. ridiculous amounts of hardware for cheap cheap cheap.
[21:07] <ali1234> and the thing is we're only using about 10% of the resources it has
[21:08] <ali1234> but adding more services just makes it unmanageable
[21:08] <ali1234> so i'd like to be able to spin up VMs on it, please
[21:08] <marcoceppi> I don't know if I'd call LXC production vm/containerization, but it's certainly worth a shot
[21:09] <marcoceppi> But that might be splitting hairs at this point
[21:11] <sarnold> ali1234: I've heard the MAAS group uses libvirt VMs for their testing... it's not exactly the intentional usage of MAAS on a pile of VMs but it might also work.
[21:11] <sarnold> ali1234: at least on my laptop, I often run ten or so VMs simultaneously (lightly loaded..) and things mostly work fine, those hetzner machines are bonkers compared to my laptop...
[21:13] <marcoceppi> sarnold: the only thing I'd be worried about using MAAS out in the ether is it kind of gets network hungry (last I checked it) and tries to own everything
[21:14] <sarnold> marcoceppi: heh, true, hosts wouldn't be pleased to find that thing aimed outwards :)
[21:15] <ali1234> got a 10TB soft bandwidth limit...