[00:00] <firl> hatch: yeah I am comfortable with it. I am trying to see for my co workers what might be easier for them to work with for node expansion / deployment etc
[00:00] <hatch> ahh - well using the gui and bundles is definitely the way to go...later this week ;)
[00:00] <firl> Any way to track that status so I can try it later?
[00:01] <lazyPower> yeah hatch
[00:01] <lazyPower> whats with these new toys i haven't heard about yet?
[00:01] <lazyPower> you holdin out on us?
[00:02] <hatch> firl you can follow our blog http://blog.jujugui.org/ or follow me on twitter @fromanegg
[00:02] <hatch> lazyPower: we travel in different circles now ;)
[00:02] <lazyPower> thats low hatch :|
[00:02] <hatch> lol
[00:02] <firl> hatch: thanks, is there a dev version I can try now by any chance?
[00:03] <hatch> firl: there is, but deployment is a few steps because you have to run the dev version of the charm and of the actual gui source.
[00:04] <hatch> If you're up for some bzr/git fun I can outline the steps
[00:04] <firl> sure
[00:04] <firl> I am bzr and git compatible
[00:04] <hatch> haha ok give me a few
[00:04] <firl> do I need a custom juju source client also?
[00:05] <firl> or can i bootstrap
[00:05] <hatch> nope, you can use whatever version of juju you have installed
[00:05] <hatch> (assuming it's > 1.2)
[00:05] <firl> yeah
[00:08] <hatch> firl: https://gist.github.com/hatched/2dc93eddbcdc9a2c9974
[00:09] <firl> hatch: ty, do I need to do anything special to have the untethered bundle so to speak?
[00:10] <hatch> firl: so with this branch you'll be able to drag/import a bundle and it'll be 'uncommitted'
[00:10] <hatch> so you can play around with it
[00:10] <hatch> delete services etc
[00:10] <lazyPower> that, is awesome news
[00:10] <firl> sweet, can i choose the placement as well on the machines?
[00:10] <hatch> there are known bugs that we're squashing just FYI :)
[00:11] <firl> I completely understand, I work in software dev
[00:11] <hatch> firl: yeah you'll need to 'destroy' the unit from the placed unit, and then add another unit and place that one
[00:11] <firl> I think I understand enough
[00:11] <hatch> the UX for that is a little funky but we're working on it
[00:11] <firl> so I should do juju add-machine
[00:11] <firl> first?
[00:12] <firl> so I can “pre” place each service
[00:12] <hatch> nope the GUI will take care of all of that from the Machine View
[00:12] <firl> will it spawn the 17 machine states again?
[00:12] <hatch> nope - it won't do anything until you hit 'commit'
[00:12] <hatch> well.. 'commit' then 'confirm'
[00:12] <firl> ok I will have to putz around
[00:12] <hatch> yeah - I'll be doing some docs on how to do it for release
[00:12] <firl> yeah when I was trying the trusty bundle it was trying to provision 17 machines through my MaaS
[00:13] <hatch> yeah - it does that now as a default
[00:13] <hatch> which bundle?
[00:13] <firl> openstack-base
[00:13] <firl> https://demo.jujucharms.com/?deploy-target=bundle/openstack-base-34
[00:14] <firl> 17 items in the sandbox
[00:17] <hatch> firl: ok that bundle doesn't have machine placement details so when you drag it to the canvas, after a bit you'll see a bunch of blue bordered icons showing up
[00:17] <hatch> once it's done switch to the machine view where you'll see a list of unplaced units on the left
[00:17] <firl> ok
[00:17] <hatch> create the machines you want in the gui and place them as you like
[00:18] <firl> so I need to juju add-machine for each node then right?
[00:18] <hatch> nope when you create a machine in the GUI it'll handle doing all that in the background for you
[00:18] <firl> ah! nice ok
[00:19] <hatch> you shouldn't need to touch the CLI
[00:19] <firl> ok just for the bzr branch and deploying of gui
[00:19] <hatch> right
[00:19] <firl> understood, thanks for sharing the info I appreciate it
[00:19] <hatch> because you're using dev version of both :)
[00:20] <firl> apparently it is a large trunk hah
[00:21] <hatch> yeah it's huge
[00:21] <hatch> sorry I should have mentioned that
[00:21] <firl> haha
[00:25] <hatch> firl: I have to run but I'll check back in a bit later to see if you run into any issues
[01:32] <hatch> firl: having any luck?
[01:32] <firl> hatch: JUST got back
[01:32] <firl> about to deploy since bzr finished
[01:33] <hatch> great, well I'll be around now if you have any q's
[01:34] <firl> cool, “error: no service name specified” when I do “juju set juju-gui-source=develop"
[01:34] <firl> and in the order of your gist it’s after the deploy
[01:34] <hatch> woops
[01:34] <firl> ( not sure if it needs to be before )
[01:34] <hatch> juju set juju-gui juju-gui-source=develop
[01:34] <hatch> updated gist
[01:35] <firl> rgr
[01:35] <firl> then expose and run with it?
[01:35] <hatch> yep - it'll take a bit as it needs to download the source and build it
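The sequence being walked through here can be sketched as below. This is a hedged outline assembled from the conversation, not the gist itself: the service name juju-gui and the bare deploy/set/expose ordering are assumed, and the commands need a bootstrapped juju environment.

```shell
# Sketch of the dev-GUI steps discussed above (assumptions: a bootstrapped
# environment, and the charm deploying under the service name "juju-gui").
juju deploy juju-gui                         # deploy the GUI charm
juju set juju-gui juju-gui-source=develop    # have the charm build the GUI from the develop branch
juju expose juju-gui                         # make the GUI reachable
```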
[01:36] <firl> download it from my local juju CLI client right?
[01:37] <hatch> nope it actually downloads the latest source from github
[01:37] <hatch> then builds it
[01:37] <firl> so the local repo just holds the info of where to point to then?
[01:38] <hatch> the gui has a charm and the 'app'
[01:38] <hatch> we ship a built version of the app with the charm, but because you're using the bleeding edge you have to download the source and build it
[01:39] <hatch> shouldn't take more than 5min
[01:39] <hatch> most of that will be cloning the repo
[01:40] <firl> gotcha
[01:40] <firl> looks like it is rolling
[01:41] <hatch> do you know how to get the admin password for the GUI?
[01:41] <firl> nope
[01:41] <firl> but I did just notice that it is a nodejs app
[01:41] <firl> I assumed it would be the same as my environments.yml
[01:41] <hatch>  head ~/.juju/environments/<whatever your environment is>.yaml
[01:42] <firl> gotcha, after it’s built and going I take it
[01:42] <hatch> there will be a line in there called 'password' or the like
[01:42] <hatch> yep
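The password lookup hatch describes can be scripted. A self-contained sketch using a stub file (the real file lives at ~/.juju/environments/&lt;your environment&gt;.yaml; the "password" key name is per hatch's note above):

```shell
# Stub file standing in for ~/.juju/environments/<your environment>.yaml;
# the "password" key is the one hatch points at above.
cat > /tmp/demo-env.yaml <<'EOF'
user: admin
password: 53cr3t
EOF
# Pull out just the password value
grep '^password:' /tmp/demo-env.yaml | awk '{print $2}'   # prints: 53cr3t
```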
[01:43] <firl> yeah, it picked it up from the environment, or looks that way
[01:47] <hatch> firl: feel free to file any bugs you find https://bugs.launchpad.net/juju-gui :)
[01:49] <firl> thanks, not sure I would know what a bug is yet vs what I am just using wrong hah
[01:49] <firl> UI just came up
[01:50] <firl> So even though it says “This bundle will be deployed to your provider immediately” it won’t?
[01:50] <firl> hatch
[01:51] <rick_h_> firl: hah, good feedback. That text will have to be updated.
[01:51] <hatch> good catch
[01:51] <hatch> rick_h_: I'll add that to the list....lol
[01:51] <firl> lol i can open a bug i suppose
[01:51] <hatch> it's ok I got it :)
[01:51] <firl> :)
[01:52] <firl> nice they are all blue
[01:52] <hatch> great - now switch to the machine view
[01:52] <hatch> the Machine tab at the top
[01:52] <firl> yeah
[01:52] <hatch> and you'll have a list of all the units on the left
[01:52] <firl> so I should “add machine” from that view
[01:52] <hatch> yup
[01:52] <firl> and not worry about the containers on the right
[01:52] <firl> ?
[01:53] <hatch> once you create the machine you'll want to click on it
[01:53] <hatch> then drag any units onto the 'Bare Metal' container shown on the right
[01:53] <firl> just have it mimic my MaaS
[01:53] <hatch> you can also create lxc/kvm containers in that right column if you so choose
[01:53] <firl> ‘bare metal’ container on the right = ‘Root Container’ ?
[01:53] <hatch> oh...yeah
[01:54] <firl> rgr
[01:54] <hatch> forgot we renamed that :)
[01:54] <firl> yeah it threw me for a loop the first time
[01:54] <firl> when I tried deploying to a non KVM’able machine
[02:22] <firl> let's see how it rips
[02:34] <firl> hatch: well, it provisioned the machines, it did not add any of the services to the root container though
[02:34] <firl> and it lost all of its relations
[02:35] <rick_h_> firl: give it time, if it fails it should come back with an error
[02:35] <rick_h_> firl: otherwise it might take a bit to bring up the machines/get the charms/come up
[02:36] <firl> I can check the juju logs, but juju status looks like it’s done
[02:36] <rick_h_> firl: :/ ok
[02:37] <lazyPower> firl: w/ 17 machines, in the config that ships with the bundle it takes 15 minutes to come up
[02:37] <firl> ok
[02:37] <lazyPower> firl: check juju debug-log -x machine-0 (this filters out some of the stateserver noise)
[02:37] <lazyPower> the relationships come up after the agent state reaches started
[02:37] <lazyPower> and there's a ton of relationships in there.
[02:37] <firl> ( from the juju-gui node )?
[02:38] <lazyPower> you should be able to run that command from your workstation
[02:38] <lazyPower> all those commands route through the Juju-API
[02:38] <rick_h_> lazyPower: purdy (aside from the docker icon which is a fix that we applied in search results but missed here https://jujucharms.com/u/lazypower)
[02:38] <firl> yeah, it didn’t work that’s the only reason I asked
[02:38] <lazyPower> rick_h_: docker icon?
[02:38] <lazyPower> wat did i miss
[02:38] <rick_h_> yea, the docker icon wrecks things
[02:39] <lazyPower> is it docker or etcd?
[02:39] <lazyPower> i updated teh bundle - but the icon didn't udpate
[02:39] <lazyPower> argh fingerssss, cooperate
[02:39] <rick_h_> lazyPower: oh sorry, coreos I think
[02:39] <lazyPower> yeah, its etcd then
[02:39] <rick_h_> sorry yea etcd
[02:39] <rick_h_> anyway, aside from that quite a nice profile page. You need to setup a gravatar
[02:40] <lazyPower> ah i know what happened here - i'm pointing at hazmats old charm that needs a new icon
[02:40] <lazyPower>     charm: cs:~hazmat/trusty/etcd-6
[02:40] <lazyPower> i'll get a fix for that pushed shortly
[02:40] <rick_h_> ah, gotcha
[02:42] <firl> Ok, so every agent-state is “started”, all the “services” are installed but not placed on any of the machines, and there has been no log activity on any of the nodes for over 10 minutes
[02:42] <rick_h_> firl: hmm, it sounds like the services were added but not actually deployed. hatch ^
[02:42] <rick_h_> firl: we'll have to check into it and see if we can dupe it and if there's something off in the commands sent to juju with the new uncommitted work
[02:43] <firl> anything I can do to help / log a bug for it?
[02:43] <rick_h_> sorry it's not fully baked yet but hopefully shows where we're headed.
[02:43] <firl> grab the logs / recreate etc
[02:43] <firl> I'm not disappointed, it's awesome
[02:43] <rick_h_> we'd need to pull a ton of info. I think it'll be easier for us to just do a manual deployment that has manual location stuff in it.
[02:43] <rick_h_> it should duplicate, if we can't we'll be in touch :)
[02:44] <rick_h_> cool, glad you like it
[02:44] <firl> hah cool, I will mess around a bit with it, see if I can get a work around going
[02:44] <firl> since it installed them just not deployed
[02:44] <lazyPower> rick_h_: next ingest that should look a lot cleaner
[02:45] <rick_h_> firl: hmm, you can try to reload the gui, check machine view, and see if it thinks anything is on those machines
[02:45] <rick_h_> firl: it'll reload the data juju has
[02:45] <firl> Already did that, it will allow me to add new units
[02:45] <rick_h_> firl: and if nothing's there then it failed to tell juju to add those services to those machines
[02:45] <firl> but the new HW MaaS machines show up
[02:45] <rick_h_> firl: if it did not, then you can go to the scale up UX "Add units"
[02:45] <rick_h_> and put them on
[02:45] <rick_h_> the machines and commit a second round and hopefully that'll work out
[02:46] <firl> yeah I was thinking about wiping the environment, then committing the machines first
[02:46] <firl> waiting for them to be “started” then applying the bundle
[02:46] <rick_h_> can't do that
[02:46] <firl> oh ok
[02:46] <rick_h_> a bundle cannot reference machines in the environment already since there's no way for you to map your desire there
[02:46] <firl> gotcha
[02:46] <rick_h_> bundles are always just 'additive' vs a merge
[02:47] <firl> makes sense
[02:47] <rick_h_> best bet would be to wipe it and go to juju-quickstart with the bundle file you've got there.
[02:47] <firl> I figured with the new gui, I could just drag and drop to existing machines
[02:47] <rick_h_> firl: heh, not yet.
[02:47] <rick_h_> firl: you'd have to drag each thing into place and that's a complex UI to get right.
[02:47] <rick_h_> each service/unit
[02:48] <firl> I will try the “unit” additive
[02:48] <firl> I don’t understand it, but always fun to learn
[03:06] <firl> looks to be working
[03:15] <firl> rick_h_ after the units have started, should I see relations in the gui if juju status shows relations to clusters?
[03:26] <rick_h_> firl: yes, you should see relation lines if there are relations in the juju status
[03:27] <rick_h_> firl: if you don't please take a screenshot and the output of juju status (if you can share it, sanitize it) and file a bug please
[03:27] <firl> glad you said that, I was about to clear the buffer
[03:28] <rick_h_> firl: going to call it a night here but thanks for being a tester for us and checking things out. Really appreciate it.
[03:29] <firl> not a problem
[03:29] <firl> have a good night
[03:29] <rick_h_> firl: if you hit/need anything feel free to send me a note rick.harding@canonical.com
[03:29] <firl> thanks mate
[08:32] <thumper> stub: ping
[08:32] <stub> thumper: pong. Sorry I missed you yesterday - didn't notice my dead VPN
[08:32] <thumper> stub: that's fine
[08:33] <thumper> stub: the default trusty postgresql charm, what do I need to configure to get point in time backup/restore?
[08:34] <stub> thumper: wal_e_storage_uri and the appropriate credential config items (wabs_*, os_*, aws_* depending on your cloud)
[08:35] <thumper> hmm... ok
[08:35] <thumper> I should really get around to setting that up at some stage
[08:35] <thumper> I remembered one question I had
[08:35] <thumper> I grabbed a backup from my server
[08:35] <thumper> and wanted to restore it on my laptop version
[08:36] <thumper> it has different db owner and database name
[08:37] <stub> yes. that story is very poor. If you explicitly specify the database and roles options in the relation, recovery (or what you are doing) is easier but still sucks somewhat.
[08:38] <stub> One of many things I want to address in the great Leadership Refactoring/Rewrite.
[08:39] <stub> thumper: With what you have now, you need to manually go in and rename the database to the generated database name and fix ownership of tables etc. to match the generated username
[08:40] <stub> If you had explicitly specified the database name and ensured all your tables etc. had relevant permissions granted to the roles you specify, then this isn't a problem.
[08:41] <thumper> hmm.. fair enough
[08:43] <thumper> stub: where does the automated daily backup cron live?
[08:43] <stub> fair enough, but it is a horrible user experience. Security vs. usability as usual, but I'd like it to be better
[08:43] <stub> thumper: backup_dir config item
[08:43] <stub>  /var/lib/postgresql/backups by default
[08:43] <thumper> not the files, but the cron to run it
[08:44] <stub> oh, umm...
[08:44] <thumper> what I'm wanting to do is to take a backup now before I upgrade my app
[08:44] <thumper> just for peace of mind
[08:44] <thumper> the daily one is about 6 hours old
[08:44] <stub>  /etc/cron.d/postgresql
[08:45] <stub> There was someone working on actions for this, but it didn't go so well
[08:45] <thumper> hmm...
[08:45] <thumper> haha...
[08:45] <thumper> yeah
[08:46] <thumper> I did find a way to stream the file off...
[08:46] <thumper> juju run -e kawaw-prod --machine=0 'cd /var/lib/postgresql/backups && sudo tar -c kawaw-site.20150423.dump' | tar -x
[08:47] <stub> actions can't return a stream IIRC, just stuff the backups somewhere and return the location for the user to retrieve.
[08:47] <thumper> what is the param '7' to the backup script?
[08:47] <stub> retention probably
[08:47] <thumper> stub: exactly, what I plan on doing is having the action return the command to get the file off :)
[08:47] <thumper> ah
[08:48] <thumper> so with no args it backs up all databases?
[08:48] <stub> I'd like a stream. stuffing things into temporary files sucks when you are dealing with terabytes
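thumper's tar-over-juju-run command above streams the dump off the machine with no temporary copy. A local, self-contained demonstration of the same pipe pattern (both ends run locally here; in the real command the producing tar runs on machine 0 via `juju run`, and the file names are made up for the demo):

```shell
# Producer tars to stdout (-f -), consumer extracts from stdin - the dump
# never touches an intermediate file on the receiving side.
mkdir -p /tmp/stream-src /tmp/stream-dst
echo "dump-data" > /tmp/stream-src/site.dump
(cd /tmp/stream-src && tar -cf - site.dump) | (cd /tmp/stream-dst && tar -xf -)
cat /tmp/stream-dst/site.dump    # prints: dump-data
```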
[08:49] <stub> but I guess you use pitr for any of those systems anyway
[08:49]  * thumper nods
[08:49] <thumper> once we have storage working well, I'll talk to you about migrating my db off the local instance
[09:04] <stub> thumper: It works well with the storage broker setup if you want to do things that way. Complex, but seems solid enough.
[09:04] <thumper> yeah... wanted to skip that though
[09:10] <mattyw> thumper, got a moment for a minor distraction?
[09:10] <thumper> mattyw: depends
[09:10] <thumper> mattyw: I'm not working
[09:10] <mattyw> thumper, any idea if there's a simple way to get a units private address from the juju command?
[09:11] <mattyw> thumper, juju run unit-get private-address could work - but doesn't seem to
[09:11] <thumper> juju run would need to be run in the context of a unit
[09:11] <thumper> let me try that
[09:12] <thumper> $ juju run --unit postgresql/0 'unit-get private-address'
[09:12] <thumper> 10.218.169.141
[09:12] <thumper> mattyw: ^^
[09:14] <mattyw> thumper, perfect
[09:43] <apuimedo> jamespage: thanks for the reviews
[09:43] <apuimedo> Should I re-target the keystone patch then
[09:44] <apuimedo> Sorry that I didn't reply to that yet, I was a bit swamped in meetings
[09:44] <jamespage> apuimedo, I've sorted and merged the keystone changes
[09:44] <apuimedo> ah, cool :-)
[09:44] <apuimedo> thanks for that ;-)
[09:44] <jamespage> apuimedo, do you have a neutron-api branch up for review?
[09:44] <apuimedo> neutron-api and neutron-agents-midonet I want to re-check today
[09:45] <apuimedo> for some reason the sql connection data was not properly written
[09:45] <jamespage> apuimedo, hmm - what's neutron-agents-midonet?
[09:45] <apuimedo> badly rendered from the template
[09:45] <apuimedo> that's more or less like the quantum gateway plugin
[09:46] <apuimedo> that does the metadata agents and dhcp agents stuff
[09:46] <apuimedo> but as a subordinate charm to neutron-api
[09:46] <apuimedo> (and without gateway, since we need different things for that)
[09:51] <jamespage> apuimedo, so it provides:
[09:51] <jamespage> - nova-api-metadata
[09:51] <jamespage> - neutron-dhcp-agent
[09:51] <jamespage> - neutron-metadata-agent
[09:52] <jamespage> that's pretty much exactly what the 'nsx' config option does on the neutron-gateway charm
[09:52] <jamespage> apuimedo, pushing those functions onto neutron-api means it can't be deployed into a container
[09:52] <jamespage> whereas we categorically know that neutron-gateway can never be container deployed
[09:53] <apuimedo> doesn't 'nsx' config option also give you the gateway functionality?
[09:53] <jamespage> apuimedo, no
[09:54] <jamespage> just some 'network nodes services'
[09:54] <jamespage> apuimedo, that charm may be a little misnamed
[09:54] <apuimedo> 'a little' :P
[09:54] <apuimedo> I read quite a bit of it
[09:54] <jamespage> apuimedo, yes - plugin=nsx makes it do what you want
[09:54] <apuimedo> but I guess not enough
[09:55] <apuimedo> I have to remember why I ruled it out
[09:56] <apuimedo> but it is likely that I thought that it was hardcoded to provide gateway
[09:56] <jamespage> apuimedo, ack
[09:56] <apuimedo> which is something that we need to provide on a separate charm that uses midonet-host-agent as a subordinate
[09:57] <jamespage> apuimedo, so in this case you would use neutron-gateway with midonet-host-agent I think
[09:57] <jamespage> that's how we use it for NSX
[09:57] <jamespage> nsx-transport-node gets deployed with the neutron-gateway and nova-compute
[09:58] <apuimedo> nsx-transport-node then must be like our midonet-host-agent
[09:58] <jamespage> apuimedo, do you have a deployment bundle yet?
[09:58] <jamespage> apuimedo, I think so yes - for nsx it installs the right openvswitch version and then registers the edge back into the NSX controller
[09:58] <apuimedo> no, I have a file openstack.cfg with all the configs
[09:58] <apuimedo> and another with the relations
[09:58] <apuimedo> the problem for having a bundle
[09:58] <apuimedo> is that neutron-api requires at installation time a package which it does not have available
[09:59] <apuimedo> python-midonetclient
[09:59] <jamespage> apuimedo, this is a biggish problem
[09:59] <apuimedo> yes
[09:59] <apuimedo> I wish the neutron-api charm had an additional repos config option
[10:00] <jamespage> apuimedo, does it actually need it? we set the config option for plugin to midonet
[10:00] <jamespage> and then the charm should know what to do re extra repos
[10:00] <jamespage> that's acceptable
[10:01] <apuimedo> jamespage: you mean that I add a patch to neutron-api that when midonet is selected it installs the repo?
[10:01] <jamespage> yes
[10:01] <apuimedo> ok, let's look at that
[10:02] <apuimedo> https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet_stable_midonet_backport_v2
[10:02] <apuimedo> here was my last change
[10:02] <apuimedo> I'll check it out
[10:02] <apuimedo> and let's see where we could add it
[10:06] <jamespage> apuimedo, proposed that so we can see the diff
[10:06] <jamespage> https://code.launchpad.net/~celebdor/charms/trusty/neutron-api/midonet_stable_midonet_backport_v2/+merge/258978
[10:07] <jamespage> apuimedo, ok - so first comment
[10:07] <jamespage> apuimedo, midonet and neutron will both be users under the service admin tenant
[10:08] <apuimedo> yes, that's the case
[10:08] <jamespage> apuimedo, so you don't need to provide the midonet access credentials via config
[10:08] <jamespage> apuimedo, as neutron-api already has access
[10:08] <jamespage> apuimedo, and the ip and port should be done via a relation, not via config
[10:08] <jamespage> apuimedo, juju add-relation neutron-api midonet-api
[10:08] <apuimedo> that was my original plan
[10:09] <apuimedo> but I think somebody here discouraged me from doing so
[10:09] <apuimedo> I don't remember exactly the details
[10:09] <jamespage> apuimedo, hmm - not sure who - if you remember who it was I'll go berate them
[10:09] <apuimedo> which relation would it be providing?
[10:09] <jamespage> apuimedo, neutron-api consumes midonet-api right?
[10:09]  * jamespage looks
[10:10] <apuimedo> it does
[10:10] <jamespage> apuimedo, midonet-api
[10:10] <jamespage> provides:
[10:10] <jamespage>   midonet-api:
[10:10] <jamespage>     interface: midonet
[10:10] <jamespage> add a relation to neutron-api that consumes that
[10:10] <jamespage> requires:
[10:10] <jamespage>     midonet-api:
[10:10] <jamespage>       interface: midonet
[10:10] <jamespage> it's optional for when you deploy neutron with midonet - that's fine
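Collected from jamespage's messages above, the two metadata.yaml fragments would look like this (a sketch; which file each stanza lands in is assumed from context):

```yaml
# midonet-api charm, metadata.yaml fragment
provides:
  midonet-api:
    interface: midonet
```

```yaml
# neutron-api charm, metadata.yaml fragment (optional relation)
requires:
  midonet-api:
    interface: midonet
```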
[10:10] <apuimedo> jamespage: that's exactly what I was told I couldn't do for now :P
[10:10]  * jamespage sighs
[10:10] <apuimedo> I remember now
[10:11] <jamespage> apuimedo, this is exactly what juju is really great at doing
[10:11] <apuimedo> because it makes so much more sense
[10:11] <apuimedo> exactly, you saw that I even use relationships to change between upstream and downstream versions
[10:11] <apuimedo> I want relations to provide for everything
[10:11] <jamespage> apuimedo, I guess the only case is where midonet is pre-installed somewhere already
[10:11] <jamespage> apuimedo, yeah I'm not sold on midonet-repository
[10:11] <jamespage> it's config, not a charm ;-)
[10:12] <apuimedo> but it allows me to config once
[10:12] <jamespage> gnuoy, ^^ what that you?
[10:12] <apuimedo> change everywhere
[10:12] <jamespage> apuimedo, hrm are you sure?
[10:12] <jamespage> apuimedo, config can change
[10:12] <apuimedo> if you change the repo it should change for all the midonet relations
[10:13] <jamespage> apuimedo, I understand conceptually what its doing
[10:13] <jamespage> apuimedo, but I think it's a sledgehammer to crack a nut
[10:13] <apuimedo> you'd rather have the configuration be specified for each charm like you do with cloud-origin, is that right?
[10:13] <jamespage> apuimedo, setting the same config on two deployed services is not that hard
[10:14] <jamespage> apuimedo, yes
[10:14] <apuimedo> mmm
[10:14] <apuimedo> I'll think about it
[10:14] <jamespage> apuimedo, its even easier with overrides in a bundle
[10:14] <apuimedo> yes, in the bundle case it's much simpler
[10:14] <apuimedo> I agree with that
[10:14] <jamespage> apuimedo, ok - so how about this
[10:14] <jamespage> apuimedo, provide those configuration options on midonet-api
[10:14] <jamespage> apuimedo, and then pass them down to midonet-host-agent?
[10:15] <apuimedo> you mean for the repo stuff?
[10:15] <jamespage> that way you guarantee that all deployed units are using the same config
[10:15] <jamespage> apuimedo, yes
[10:15] <jamespage> there is a relation between midonet-api and midonet-host-agent right?
[10:15] <apuimedo> yes
[10:16] <apuimedo> it's used for setting the tunneling
[10:16] <jamespage> apuimedo, so make that data part of the 'midonet-host' interface type
[10:16] <jamespage> midonet-api -> midonet-host-agent
[10:16] <jamespage> apuimedo, does that make sense?
[10:17] <jamespage> it avoids the need for the extra charm, and still gives you the single point of control on setting source software config options
[10:17] <apuimedo> It makes sense
[10:18] <apuimedo> I want to fix neutron-api + quantum-gateway(with the change you proposed above) first
[10:18] <apuimedo> I'd like to have that today
[10:18] <jamespage> apuimedo, ok but target your changes at the neutron-gateway next branch
[10:18] <jamespage> apuimedo, we've renamed that charm
[10:19] <apuimedo> jamespage: thank God for that ;-)
[10:19] <jamespage> apuimedo, I can't guarantee re-review today
[10:19] <jamespage> have a lot to do for next week still
[10:19] <apuimedo> will neutron-gateway-next work well with the current stable neutron-api?
[10:20] <apuimedo> how stable are the neutron-api-next and neutron-gateway-next
[10:20] <apuimedo> cause I want to use that for deployments soonish
[10:20] <apuimedo> or should I target them and then do backports?
[10:23] <apuimedo> jamespage: ^^
[10:23] <jamespage> apuimedo, they are pretty stable
[10:23] <jamespage> apuimedo, we operate next first policy
[10:24] <jamespage> changes land there first, and can then be backported to stable
[10:24] <apuimedo> that means that I should test it first with keystone-next as well
[10:24] <apuimedo> then backport, right?
[10:28] <apuimedo> jamespage: https://code.launchpad.net/~landscape/charms/trusty/neutron-api-next/trunk and the others first, then?
[10:30] <jamespage> apuimedo, all of the branches are under ~openstack-charmers
[10:30] <jamespage> not sure where that one came from
[10:30] <apuimedo> that's the link from "view code" of the juju charm store
[10:32] <apuimedo> jamespage: which of these branches for the resync to neutron-api-next https://code.launchpad.net/~openstack-charmers/charm-helpers
[10:36] <jamespage> apuimedo, next branches don't appear in the charm store
[10:39] <apuimedo> jamespage: so I should move the patches to charm-helpers and neutron api to the next branches now
[10:39] <apuimedo> add the new relation
[10:39] <jamespage> yes
[10:40] <apuimedo> between neutron-api and midonet-api
[10:40] <apuimedo> then backport them
[10:56] <gnuoy> Mmike, if you're about I'd be grateful for any feedback on https://code.launchpad.net/~gnuoy/charm-helpers/no-create-if-none/+merge/258982
[10:57] <gnuoy> and
[10:57] <gnuoy>  https://code.launchpad.net/~gnuoy/charms/trusty/percona-cluster/1454317/+merge/258981
[10:57] <gnuoy> they relate to Bug #1454317
[10:57] <mup> Bug #1454317: sstpassword often set to wrong value in cluster and ha relations <percona-cluster (Juju Charms Collection):Confirmed for gnuoy> <https://launchpad.net/bugs/1454317>
[11:00] <Mmike> gnuoy: ack, will take a look in a moment
[11:00] <gnuoy> ta
[11:02] <Mmike> gnuoy: one more thing, prolly needs a bug to be opened - the debian.cnf file, which has the password for the debian-sys-maint user, differs across units (in a multi-unit deploy). It is not a super-issue as the percona-xtradb-server package doesn't use it, but sometimes maintenance tools use that account (for instance, mysqltuner)
[11:03] <Mmike> hm, this might actually be a percona-cluster-xtradb-server package bug, and not a charm bug
[11:03] <gnuoy> Mmike, ok, I'll let you raise the bug if you think it's valid.
[11:08] <Mmike> Ack - it is a percona bug, but we might create a workaround in charm... as I'm not sure how percona could fix this easily...
[11:57] <Mmike> gnuoy: these look ok for me, just waiting for the deploy on ctsstack to finish before +1
[12:47] <gnuoy> jamespage, do you think it's ok for me to review and merge lp:~le-charmers/charm-helpers/leadership-election ? I didn't actually contribute any code to that mp
[13:01] <lazyPower> rick_h_: hey man, new profile page really pops :) It also highlighted i pushed to the wrong branch last night *doh*
[13:02] <rick_h_> lazyPower: :)
[13:43] <apuimedo> jamespage: could I add the repository from a config('midonet-repo') in hooks/neutron_api_utils.py:determine_packages() ?
[13:44] <apuimedo> I'd do it just before the loop if config('neutron-plugin') == 'midonet'
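The change apuimedo is proposing, adding the MidoNet repo only when the plugin config selects midonet, sketched here as shell for illustration (the real change would live in the charm's Python hook code in hooks/neutron_api_utils.py; config-get is the juju hook tool, and the option name midonet-repo is hypothetical, taken from the discussion above):

```shell
# Hypothetical hook-side sketch; only runs inside a charm hook context,
# where the config-get tool is available.
if [ "$(config-get neutron-plugin)" = "midonet" ]; then
    # midonet-repo would be a new config option holding the extra archive
    add-apt-repository -y "$(config-get midonet-repo)"
    apt-get update
fi
```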
[14:04] <hatch> firl: hey did you get it working for you last night?
[14:04] <firl> hatch: I ran into a possible bug
[14:05] <firl> where once the units were associated with machines
[14:05] <firl> when I hit commit, the services got installed to juju, but the units didn’t get allocated and the relationships were not persisted to juju
[14:06] <hatch> hmm that's very odd, were there any errors in the browser console that you saw?
[14:08] <hatch> firl: from your bug it looks like they got units
[14:08] <firl> I manually re associated
[14:08] <hatch> ohh ok
[14:09] <firl> so I think the bug I made, specifically isn’t valid
[14:09] <firl> I think I was misreading the associations in the juju status
[14:09] <hatch> ahh ok ok
[14:09] <hatch> I'm quite interested in the issue where it didn't place the units
[14:11] <firl> want me to replicate and give you remote access so you can see it?
[14:11] <hatch> well the issues would be shown in the browser console if a notification wasn't generated
[14:12] <firl> I can give a vnc shell or a port forward and let you do what I did
[14:12] <hatch> so you could `juju set juju-gui juju-gui-console-enabled=true juju-gui-debug=true`
[14:12] <firl> sure
[14:12] <hatch> and then those errors 'should' be surfaced
[14:12] <firl> give me a sec
[14:12] <hatch> thanks!
[14:12] <firl> np
[14:13] <firl> takes about 15 minutes because of bare metal deploys
[14:14] <hatch> no rush whenever you have time
[14:14] <firl> also the changes in juju via console stopped being propagated into the juju gui like the stable release shows it
[14:18] <hatch> hmmm
[14:18] <hatch> I wonder if trying a smaller bundle wouldn't be a bad idea
[14:18] <hatch> something like the django bundle
[14:18] <lazyPower> ...or docker-core...
[14:18] <lazyPower> ;)
[14:18] <hatch> I've been using that one for testing here so I know it works
[14:19] <hatch> but requires the machine placements
[14:19] <lazyPower> nice mix of subordinate + services there, representations of all the major concepts
[14:19] <hatch> or that!
[14:19] <firl> so django or full on openstack?
[14:23] <hatch> firl: well I'm just thinkin that it might be better to try a smaller bundle to rule out any other issues
[14:23] <firl> sure
[14:24] <hatch> here is the docker one lazyPower was talking about https://jujucharms.com/u/lazypower/docker-core/4
[14:24] <firl> again, no “juju add-machine xyz.maas” first ?
[14:24] <firl> build it out all via the ui
[14:24] <hatch> nope
[14:24] <firl> ok
[14:24] <hatch> yup
[14:27] <firl> my MaaS is running on vivid as well, in case that makes any difference
[14:31] <hatch> hmm it shouldn't - at least as far as I know
[14:46] <firl> hatch: looks like the smaller bundle is working better than the large one
[14:47] <hatch> ok so that's good, placements appear to be working as expected?
[14:47] <firl> yeah it got further than the openstack one
[14:48] <firl> relations persisted etc
[14:48] <firl> let me try across 2 nodes
[14:51] <hatch> lazyPower: https://jujucharms.com/u/lazypower/docker-core/4 one of your code examples isn't highlighted as code fyi
[14:51] <lazyPower> hatch: bug plz <3
[14:52] <firl> after deleting the services, I had to restart the web ui to be able to add another bundle
[14:54] <hatch> hmm, what was it doing that requires you to do that?
[14:54] <firl> I dragged the bundle to the canvas, didn’t do it
[14:55] <firl> clicked add manually, didn’t do it
[14:55] <firl> checked console saw some errors, restarted, repeated manually adding and it worked
[14:55] <firl> I also recorded it
[14:55] <hatch> ahh you have the errors? great
[14:56] <firl> once docker is done deploying via 2 I will upload a video for ya
[14:56] <firl> see if it’s a user issue
[15:04] <firl> hatch: https://vimeo.com/user29582213/review/127726021/09e7d5b4cd
[15:08] <hatch> firl: just otp right now, will look in a few
[15:19] <hatch> firl: thanks for that video - it's definitely not your fault, there is a bug there somewhere
[15:19] <firl> cool
[15:20] <hatch> the bundles were able to deploy properly? The smaller ones that is
[15:20] <firl> yeah
[15:26] <firl> did you want me to open a bug for it?
[15:26] <hatch> it's ok I got it
[15:26] <firl> I just finished recording a longer video showing the full issue
[15:26] <hatch> oh that would be awesome
[15:26] <firl> of how to replicate
[15:26] <firl> kk
[15:26] <hatch> if you wanted to file the bug so you get notified that would be ok
[15:27] <firl> Either way, just trying to help
[15:27] <hatch> I suppose you have a better idea of how you created it :)
[15:29] <firl> sure
[15:29] <firl> I will try the openstack bundle again to see
[15:31] <firl> https://bugs.launchpad.net/juju-gui/+bug/1454750
[15:31] <mup> Bug #1454750: bundle to canvas sometimes doesn't work <juju-gui:New> <https://launchpad.net/bugs/1454750>
[15:31] <hatch> thanks!
[15:31] <firl> no problem
[15:37] <firl> well it might work this time
[15:37] <firl> don’t know if the code changed between yesterday night and today
[15:37] <hatch> it hasn't
[15:37] <hatch> at least, nothing beyond tests
[15:38] <hatch> if it works....then yay?
[15:38] <hatch> :)
[15:38] <firl> haha yeah, I will try one more time to see if it is anything I might have done differently
[15:39] <firl> is there a way to mass select and delete services?
[15:39] <hatch> unfortunately no
[15:40] <lazyPower> firl: if you have juju-deployer installed:  juju-deployer -T
[15:40] <lazyPower> that will reset you back down to just the state server, you will need to re-deploy the gui
[15:40] <firl> that’s awesome
[15:41] <lazyPower> man, actions + new status stuff in 1.24 is really nice
[15:41] <lazyPower> hattip @jujucore for this
[15:41] <hatch> lazyPower: yeah it's fun
[15:42] <hatch> I'm going to add the actions to the gui charm and to the ghost charm at some point here
[15:42] <hatch> surfacing those statuses will be so cool
[15:42] <lazyPower> yeah
[15:42] <lazyPower> i'm working on a benchmark suite for a charm i prototyped last night, this is silly nice
[15:43] <lazyPower> i see some instances where the status messages i set in config-changed are pretty bummer, and require extra logic to surface the proper message - but hey - this is 100% better than "installing", "started"
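The 1.24 status reporting being discussed can be sketched as a hook snippet. Note this is illustrative: `status-set` is a juju hook tool that only works inside a running hook, and the states and messages below are example values, not anything from the conversation.

```shell
# Sketch of extended workload status in a charm hook (juju 1.24+).
# status-set takes a workload state plus a free-form message, which is
# what replaces the old bare "installing"/"started" view in juju status.
report_status() {
    local state="$1" message="$2"
    # valid workload states: maintenance, blocked, waiting, active
    status-set "$state" "$message"
}

# e.g. in a config-changed hook:
#   report_status maintenance "installing benchmark suite"
#   report_status active "ready"
```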
[15:46] <hatch> oo a benchmarking suite as a subordinate would be awesome
[15:47] <hatch> firl: the first bug was closed (as discussed) and the second will be fixed before release
[15:47] <firl> very cool
[15:48] <firl> do you want me to just keep filing them? hah just found another where if a service on destroy has an issue, the UI leaves it “blue” but it shows up in juju status and I have to “re-destroy”
[15:48] <hatch> yes please :) That would be very helpful
[15:54] <firl> ok, replicated my issue
[15:54] <firl> before I refresh the GUI is there anything you want me to do?
[15:55] <firl> hatch
[15:55] <hatch> those console errors, it would be nice if you could expand them to get a stack trace
[15:55] <hatch> little red arrow to the left of the error message
[15:56] <firl> cool
[15:56] <firl> already did
[15:57] <firl> file a bug for this, or did you want to see the video first?
[15:58] <hatch> might as well file the bug
[15:58] <hatch> I'm sure it's a real bug :)
[15:58] <firl> hah
[15:59] <hatch> and even if it isn't - if you're able to cause the failure that's an issue that needs to be fixed
[15:59] <hatch> :)
[15:59] <hatch> (but I'm sure it's a bug)
[15:59] <firl> hah thanks
[16:03] <firl> Anything else you want me to do with the environment before I blow it away ( going to do a landscape install )
[16:06] <hatch> nope let it go!
[16:07] <firl> very cool
[16:26] <hatch> firl: just in case you didn't know there is a gui specific channel at #juju-gui :)
[16:26] <firl> hatch: thanks, yeah when I first joined I didn’t know which channel my issue was in, but you seemed to care more about the gui so I obliged
[16:27] <hatch> there is also #juju-dev for development of juju core
[17:51] <mattrae> using the latest version of juju-deployer, i appear to be hitting this bug.. juju deployer exits with 'watcher was stopped' at different points. https://bugs.launchpad.net/juju-deployer/+bug/1284690
[17:51] <mup> Bug #1284690: all watcher api regression <hs-arm64> <juju-deployer:Fix Released> <python-jujuclient:Fix Released> <python-jujuclient (Ubuntu):Fix Released> <python-jujuclient (Ubuntu Trusty):Fix Released> <https://launchpad.net/bugs/1284690>
[17:52] <mattrae> anyone know what may be causing the 'watcher was stopped' error with juju-deployer?
[17:52] <mattrae> the comments seem to indicate that this error happens when the state server loses its connection to mongo
[18:01] <bdx> charmers, openstack-charmers: From what I can gather the openstack service charms deployed on trusty using the "openstack-origin: cloud:trusty-kilo" param are not performing an "apt update" before the openstack services get installed and thus kilo packages are not installed until "apt update" and "apt upgrade" are run manually on each respective node for which a charm is deployed.
[18:02] <mattrae> here's the exact output i'm seeing http://pastebin.com/DCtJpwRY
[18:02] <bdx> charmers, openstack-charmers: Is this expected^^???
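A hedged sketch of the manual workaround bdx describes, fanned out with `juju run` instead of logging into each node. The service name and the exact apt sequence are assumptions, not confirmed fixes from the charmers.

```shell
# Refresh the package index and upgrade on every unit of a service so
# the cloud-archive (trusty-kilo) packages actually get installed.
# "nova-compute" below is only an example service name.
refresh_service_packages() {
    local service="$1"
    # juju run fans the command out to every unit of the service
    juju run --service "$service" \
        'sudo apt-get update && sudo apt-get -y upgrade'
}

# usage: refresh_service_packages nova-compute
```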
[19:27] <lm_> hi
[20:00] <mbruzek> Does anyone here know a charm that takes a payload URL and a hashsum as a configuration parameter?  (and also happens to be written in bash)?
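One way the charm mbruzek asks about could look in bash, as a hypothetical sketch: "payload-url" and "payload-sha256" are invented config option names, and `config-get`/`juju-log` are juju hook tools that only work inside a running hook.

```shell
#!/bin/bash
set -e

# Verify a downloaded file against an expected sha256 sum; 0 on match.
verify_payload() {
    local file="$1" expected="$2"
    local actual
    actual=$(sha256sum "$file" | awk '{print $1}')
    [ "$actual" = "$expected" ]
}

# Hypothetical install hook body: fetch the payload from the configured
# URL and refuse to install it if the checksum doesn't match.
install_payload() {
    local url sum
    url=$(config-get payload-url)       # juju hook tool
    sum=$(config-get payload-sha256)
    wget -q -O /tmp/payload.tgz "$url"
    if verify_payload /tmp/payload.tgz "$sum"; then
        tar -xzf /tmp/payload.tgz -C /opt
    else
        juju-log "payload checksum mismatch, refusing to install"
        exit 1
    fi
}
```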