[00:35] <hazmat> hmm bellini is gone
[00:36] <hazmat> bummer.. okay working on a workaround
[00:54] <hazmat> fwiw.. workaround added as comment to relevant bugs.. https://gist.github.com/kapilt/a61efcb4eaef9e685397
[06:49] <sebas5384> ERROR cannot assign unit "nova-compute/0" to machine 0: machine "0" cannot host units
[06:49] <sebas5384> bye bye
[06:49] <sebas5384> :(
[08:22] <AskUbuntu> What types are allowed in config.yaml in juju charms? | http://askubuntu.com/q/472802
[08:59] <nottrobin> ^ what AskUbuntu said - does anyone know what config types are allowed? Is there a "dictionary" type?
[10:01] <marcoceppi> nottrobin: click on the link, the question has been answered. The short answer is no dict type, int, float, string, boolean only at the moment
[10:01] <sarnold> lol
[10:02] <sarnold> marcoceppi: nottrobin is the questioner and answerer :)
[10:02]  * marcoceppi saunters away
[10:02] <nottrobin> marcoceppi: thanks =D
[10:03] <nottrobin> marcoceppi: I went and found the answer myself
[10:03] <marcoceppi> cool
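For reference, the four supported option types from marcoceppi's answer would appear in a charm's config.yaml roughly like this (the option names below are invented for illustration):

```yaml
# config.yaml sketch — only int, float, string, and boolean are supported;
# there is no dict/mapping type. All option names here are hypothetical.
options:
  port:
    type: int
    default: 8080
    description: Port the service listens on.
  timeout:
    type: float
    default: 2.5
    description: Request timeout in seconds.
  hostname:
    type: string
    default: "example.com"
    description: Public hostname to advertise.
  debug:
    type: boolean
    default: false
    description: Enable verbose logging.
```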
[10:08] <caribou> marcoceppi: is it possible to have a timing issue (i.e. a race condition) with the definition of relations when a 'relation-changed' hook kicks in ?
[10:09] <caribou> I'm debugging an openstack charm and if I print the result of a 'relation_get' at the beginning of the "-changed" hooks the value is present
[10:10] <marcoceppi> caribou: there's a chance the relation data won't be available at the time of the hooks execution
[10:10] <caribou> if I don't print it (for debugging purposes), then it fails because the value is not defined
[10:10] <marcoceppi> caribou: there's always a chance relation data won't be available during -changed relation, which is why you should build in verification that the values you need are present before continuing
[10:11] <caribou> hmm, how do I work around this? wait for it to appear?
[10:11] <caribou> since the hook will not get re-fired once it becomes available
[10:11] <marcoceppi> caribou: most charms do an idempotency guard and just exit 0 until it's available
[10:12] <marcoceppi> caribou: yes it will
[10:12] <marcoceppi> relation-changed will always fire at least once during a relation creation event, then will execute each time data on the wire changes
[10:12] <caribou> marcoceppi: ah, ok. I was tempted to add such a guard but didn't know if it would get refired
[10:12] <caribou> marcoceppi: thanks I'll do that
[10:13] <marcoceppi> most of the time this results in two relation-changed events being queued. one is the first one that will always happen (which may or may not have the data) then one for each subsequent run
[10:13] <marcoceppi> caribou: cheers
[10:13] <caribou> marcoceppi: this explains why I got the data when running within debug-hooks and not when running live
[10:13] <marcoceppi> caribou: right, there's enough latency that you don't need the second -changed event
[10:14] <caribou> marcoceppi: understood !
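The "exit 0 until it's available" guard marcoceppi describes can be sketched as a small relation-changed hook. relation-get is Juju's hook tool; it is stubbed out below (returning no data, as on a first -changed run) purely so the sketch runs outside a real unit, and the key name `database` is hypothetical:

```shell
# Idempotency-guard sketch for a relation-changed hook.
mkdir -p /tmp/guard-demo
cat > /tmp/guard-demo/relation-changed <<'EOF'
#!/bin/sh
# relation-get is Juju's hook tool (stubbed on PATH for this demo).
database=$(relation-get database)
if [ -z "$database" ]; then
    # Required value not on the wire yet; exit 0 and wait for the next
    # relation-changed, which fires whenever the remote data changes.
    echo "relation data not ready; deferring"
    exit 0
fi
echo "configuring service for $database"
EOF
chmod +x /tmp/guard-demo/relation-changed

# Stub relation-get that returns nothing, as on the first -changed run:
cat > /tmp/guard-demo/relation-get <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /tmp/guard-demo/relation-get

PATH=/tmp/guard-demo:$PATH /tmp/guard-demo/relation-changed
# prints: relation data not ready; deferring
```

In a real charm the stub would not exist; the hook simply exits 0 and relies on a later relation-changed firing once the remote unit sets the value.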
[10:24] <AskUbuntu> Destroy a juju service and also its associated machine | http://askubuntu.com/q/472849
[11:39] <ghartmann> is anyone else having issues with the juju local provider ?
[11:40] <ghartmann> I was very happy with juju but it became unusable just about two months ago
[11:41] <stub> ghartmann: Working for me
[11:41] <ghartmann> I can't get juju local to work anywhere, even with virtual machines. devel / stable
[11:42] <stub> I can't seem to send syslog messages to a related rsyslog service. Anyone got it working in their charm?
[12:36] <hazmat> ghartmann, what symptoms?
[12:36] <ghartmann> machine goes pending and it's not delivered
[12:36] <ghartmann> it seems to be related to the developer tools for saucy
[13:48] <cjohnston> Does charmhelpers CLOUD_ARCHIVE_POCKETS need to be updated for trusty?
[14:01] <niedbalski__> hey lazyPower , what does 'store error on...' mean on the review queue? (http://manage.jujucharms.com/tools/review-queue)
[14:01] <lazyPower> Thats a great question niedbalski__. marcoceppi can you shed some insight into that one?
[14:02] <marcoceppi> niedbalski__ lazyPower it means that "charm proof" fails for that charm
[14:02] <marcoceppi> err, the store error is something else
[14:03] <marcoceppi> it means that the charm exists in a promulgated branch but isn't actually in the charm store
[14:06] <niedbalski__> marcoceppi, is there a manual process for doing that?
[14:06] <lazyPower> marcoceppi: is that basically the branch was pushed but charm promulgate was not run against the charm?
[14:13] <jcastro> http://askubuntu.com/questions/tagged/juju?sort=newest&pageSize=15
[14:13] <jcastro> some new incoming questions!
[14:16] <jcastro> yo lazyPower
[14:16] <lazyPower> sup jcastro
[14:16] <jcastro> you're doing the review on elasticsearch?
[14:16] <jcastro> I don't remember if that was you or just in "the pile"
[14:16] <lazyPower> its in the pile atm.
[14:16] <lazyPower> i've touched it, so has mbruzek
[14:17]  * mbruzek waves
[14:17] <jcastro> sigh, whoops
[14:22] <whit> lazyPower, is the outstanding issue with ES the peer relation thing?
[14:23] <lazyPower> whit: That was the last known issue. hazmat has some activity against the charm stating its good in his use cases.
[14:23] <lazyPower> that and there's some missing offline environment installation logic for feature parity with the existing charm
[14:24] <lazyPower> otherwise i think its all thumbs up from here
[14:24] <hazmat> whit, which peer relation issue?
[14:24] <hazmat> whit, it doesn't have any that i'm aware of
[14:25] <whit> hazmat, we are talking about the ansible branch?
[14:25] <hazmat> whit, yeah.. the items of note i had were simple offline support and the website relation needing its port set.
[14:25] <hazmat> whit, yes
[14:25] <hazmat> whit, also some philosophical discussion about removing either cluster or rest named relation
[14:25] <hazmat> but that's legacy from the charms impl
[14:26] <hazmat> from the old impl
[14:26] <whit> hazmat, yeah, bw
[14:26] <hazmat> but its needless confusion imo
[14:26] <whit> yeah
[14:27] <lazyPower> hazmat: so the charm embedded relationship exchange works? We don't need to worry about the multicast workaround thats going on there?
[14:28] <hazmat> lazyPower, that stuff was garbage
[14:28] <hazmat> needed ec2 credentials or multi-cast support.. its stuff that never should have been in the charm
[14:29] <hazmat> lazyPower, relations hooks can configure the address set for peers automatically.. and the ansible version of the charm does just that, its much more universal and simpler.
[14:30] <lazyPower> welp that's duly noted then. All we are missing is offline support, and that could easily be added as an issue to be fixed post promulgation - but I don't know that we want to set the precedent of removing features to get promoted in the store.
[14:30] <lazyPower> ergo: what if someone is in an offline environment and has this deployed? we just broke it for them.
[14:30] <hazmat> also note its currently breaking the interface on 'http' for the website rel
[14:30] <hazmat> it needs to pass the port
[14:30] <hazmat> one liner
[14:32] <lazyPower> ah right, and that.
[14:32]  * lazyPower glosses over the obvious
[14:57] <mhall119> marcoceppi: ping
[14:57] <marcoceppi> mhall119: pong
[14:57] <mhall119> marcoceppi: hey, we have a cloud devops track for UDS/UOS June 10-12, can you be a track lead for it? jcastro is already one
[14:57] <marcoceppi> mhall119: uh, sure
[14:58] <mhall119> marcoceppi: jcastro: I'd also like a track lead from the community, any recommendations?
[14:58] <james_w`> is there a spec for the actions work?
[15:32] <cjohnston> is it possible in one juju deployer file to be using precise and trusty for different charms?
[15:33] <marcoceppi> cjohnston: yes
[15:34] <marcoceppi> cjohnston: remove the series: key, then put the series in all the charm: lines
[15:34] <cjohnston> ack
[15:34] <cjohnston> ta
[15:34] <marcoceppi> ie, charm: cs:trusty/mysql
[15:34] <hazmat> er.. charm_url: cs:trusty/mysql
[15:34] <hazmat> you can keep the series at the top level, and just override on the ones you need it as well
[15:35] <cjohnston> what if I'm not using cs charms? charm: graphite for example
[15:36] <hazmat> cjohnston, series: trusty
[15:36] <cjohnston> ack.. sweet
[15:36] <hazmat> cjohnston, or charm_url: local:trusty/mysql ..
[15:36] <hazmat> cjohnston, this might be post trusty version btw. (0.3.6) .. latest is 0.3.8 on pypi
[15:37] <cjohnston> ok
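Putting the thread together, a mixed-series deployer file might look roughly like the following sketch. Service names are illustrative, and note the key-name ambiguity above (charm: vs charm_url:); treat the exact schema as an assumption to check against the juju-deployer docs:

```yaml
# Hedged juju-deployer sketch mixing precise and trusty services.
# Per hazmat, a top-level series: can stay as a default and be
# overridden per service; cs: and local: URLs carry their own series.
my-stack:
  series: precise                  # default for services that don't override it
  services:
    mysql:
      charm: cs:trusty/mysql       # series taken from the charm URL
      num_units: 1
    graphite:
      charm: graphite              # non-store charm
      series: trusty               # explicit override, as hazmat suggests
    wordpress:
      charm: cs:precise/wordpress
```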
[15:54] <mhall119> marcoceppi: jcastro: so far you two are the only track leads I have picked for DevOps, it would be nice to have a third, can you recommend somebody?
[15:55] <jcastro> someone from gaughen's team
[15:55] <nottrobin> how do I give a juju charm instance a specific public address? - rather than just an IP address?
[15:55] <nottrobin> also, is there any way to run bundles directly without starting a "juju-gui" instance? (I'm following https://juju.ubuntu.com/docs/charms-bundles.html)
[15:56] <jcastro> juju-deployer -c bundle.yaml will do the bundle without the gui
[15:58] <mhall119> gaughen: ping
[15:58] <mhall119> jcastro: anybody from the community side?
[15:58] <gaughen> mhall119, pong
[15:58] <gaughen> let me read a bit of the history here
[15:58] <mhall119> gaughen: I'm looking for a 3rd track lead for Cloud DevOps in the upcoming UDS/UOS
[15:58] <gaughen> mhall119, I can do that
[15:58] <gaughen> count me in
[15:59] <mhall119> awesome, thanks, will send out an email with instructions
[16:01] <nottrobin> jcastro: thanks, that looks perfect
[16:01] <jcastro> mhall119, jose'
[16:01] <jcastro> s been doing a bunch of work lately if he's interested
[16:02] <mhall119> jcastro: can't, jose's already a lead for community track
[16:02] <jcastro> thief!
[16:02] <mhall119> he was ours first!
[16:05] <jose> mhall119: if it's possible I can help with both
[16:06] <mhall119> jose: ok, you're on both now
[16:06] <jose> mhall119: cool, thanks :)
[16:06] <mhall119> no, thank you :)
[16:07] <jose> :)
[16:46] <khuss> hello there..
[16:46] <khuss> i'm just getting started with juju and have some questions
[16:46] <khuss> i have a MAAS server which has 30 nodes.. (Dell PowerEdge)
[16:47] <khuss> i installed juju on the MAAS controller
[16:47] <khuss> when I try juju status it, just hangs
[16:48] <khuss> here is the strace output
[16:48] <khuss> setsockopt(4, SOL_TCP, TCP_NODELAY, [1], 4) = 0
[16:48] <khuss> read(4, 0xc200407000, 4096) = -1 EAGAIN (Resource temporarily unavailable)
[16:48] <khuss> write(4, "GET /MAAS/api/1.0/nodes/?agent_n"..., 494) = 494
[16:48] <khuss> epoll_wait(5, {}, 128, 0) = 0
[16:48] <khuss> futex(0x11a0d38, FUTEX_WAIT, 0, NULL) = 0
[16:48] <khuss> epoll_wait(5, {}, 128, 0) = 0
[16:48] <khuss> epoll_wait(5,
[16:51] <lazyPower> khuss: did juju return a routeable address for the bootstrap node?
[16:53] <khuss> LazyPower: that's the part I am confused with
[16:54] <khuss> there are 30 nodes but I want to install an app only on some nodes
[16:54] <khuss> when we do juju status, does it have to contact all nodes
[16:55] <khuss> here is the yaml file
[16:55] <khuss> environments:
[16:55] <khuss>   maas:
[16:55] <khuss>     type: maas
[16:55] <khuss>     maas-server: 'http://localhost:80/MAAS'
[16:55] <khuss>     maas-oauth: ''
[16:55] <khuss>     admin-secret: topsecret
[16:55] <khuss>     default-series: precise
[16:59] <khuss> here is the output from the deploy
[16:59] <khuss> juju deploy wordpress -v
[16:59] <khuss> verbose is deprecated with the current meaning, use show-log
[16:59] <khuss> 2014-05-27 16:59:04 INFO juju.state open.go:68 opening state; mongo addresses: ["node17.master:37017"]; entity ""
[17:00] <lazyPower> khuss: no, it should only contact the bootstrap node when running status.
[17:01] <khuss> is the bootstrap node different from the node where I installed juju
[17:01] <khuss> i'm installing juju on the same node where maas cluster and regional controllers are running
[17:02] <khuss> in other words, how does juju determine which node to use as a bootstrap node
[17:07] <lazyPower> khuss: When you defined your MAAS provider, if it has any units enlisted, it will spin one of those nodes up and return it as the bootstrap node.
[17:07] <khuss> how do I know which node has been configured as the bootstrap node
[17:12] <lazyPower> khuss: do you have more than one unit allocated to a user in your MAAS control panel?
[17:13] <lazyPower> khuss: in this instance, it would be difficult to discern the bootstrap node, and i'm assuming this is why you're asking - http://i.imgur.com/mWrBIbo.png
[17:14] <lazyPower> but if you look in your environment's .jenv file
[17:14] <lazyPower> in $HOME/.juju/environments/env-name.jenv
[17:14] <lazyPower> there is a line that states state-servers:
[17:15] <lazyPower> and it has an array of nodes defined as the state server. This is the unit that MAAS gave back to Juju as the bootstrap node during that provisioning request
[17:15] <khuss> yes.. there are many allocated to the user
[17:15] <lazyPower> http://paste.ubuntu.com/7530723/ <- it will look similar to that
[17:15] <lazyPower> it should have a defined hostname, and the public-ip of the unit
[17:16] <lazyPower> or maybe i've got it backwards and the hostname is the public address, and the ip listed is private - i'm not positive on the order - i do know that it returns more than a single address though.
[17:16] <khuss> user: ""
[17:16] <khuss> password: ""
[17:16] <khuss> state-servers: []
[17:16] <khuss> ca-cert: ""
[17:16] <lazyPower> if neither of those addresses are routeable, your juju status call will fail.
[17:16] <khuss> in my case, state-servers is set to []
[17:16] <lazyPower> Looks like it didn't finish provisioning the bootstrap node if there are no state servers defined. Was there any output from your bootstrap?
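For comparison, a successfully bootstrapped environment's .jenv would carry a populated state-servers list, along these lines (all values below are made up):

```yaml
# $HOME/.juju/environments/<env-name>.jenv sketch — hypothetical values,
# shown only to contrast with the empty state-servers: [] above.
user: admin
password: "0123456789abcdef"
state-servers:
  - node12.master:17070
  - 10.209.0.134:17070
ca-cert: |
  -----BEGIN CERTIFICATE-----
  ...
  -----END CERTIFICATE-----
```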
[17:17] <khuss> it just came out
[17:17] <khuss> if i run it now, it says the following:
[17:17] <khuss>  juju bootstrap ERROR environment is already bootstrapped
[17:17] <khuss> i think i need to take a step back
[17:17] <khuss> I've close to 42 servers on this Dell Rack
[17:18] <khuss> I added them on MAAS
[17:18] <khuss> and close to 20 of them are allocated to the admin user
[17:18] <khuss> in fact, I didn't allocate them but I just powered the nodes up and they got allocated to the admin user
[17:18] <lazyPower> khuss: it may be prudent to add another user to the system so you can discern which nodes are assigned to a 'manual' provisioning system and which are under the control of juju.
[17:18] <khuss> the names given to these nodes are not resolvable
[17:19] <lazyPower> s/system/maas/
[17:19] <khuss> you mean add a juju user in maas
[17:19] <lazyPower> correct
[17:19] <khuss> and then just allocate one node to that user?
[17:19] <khuss> and then do bootstrap again?
[17:20] <lazyPower> you shouldn't need to do that. the idea is the user's api credentials would handle the allocation, and become immediately identifiable in MAAS if juju is managing the machine.
[17:20] <khuss> how do I remove the current juju bootstrap
[17:20] <lazyPower> juju destroy-environment <env>
[17:20] <khuss> ok..
[17:20] <khuss> does the name given to the node need to be resolvable for the bootstrap to work
[17:21] <lazyPower> if it gets fussy with you, add --force
[17:21] <khuss> in other words, should I add them in the /etc/hosts file
[17:21] <lazyPower> yes. Juju reaches out over ssh to install the state server components
[17:21] <lazyPower> you should be able to add your MAAS region controller as a dns provider to your /etc/resolv.conf and those nodes should become resolvable
[17:21] <khuss> thats a good tip..
[17:22] <lazyPower> in my case, i added 10.0.10.2 as a nameserver, and set search maas, and everything maas produces is resolvable for me.
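lazyPower's resolver setup would look roughly like this (10.0.10.2 is his region controller's address; substitute wherever your MAAS region controller lives):

```
# /etc/resolv.conf — use the MAAS region controller as the nameserver
nameserver 10.0.10.2
search maas
```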
[17:22] <khuss> one more question though.. what does it mean when maas says a node is allocated to admin
[17:22] <khuss> it is not something that I did manually
[17:22] <lazyPower> it means that the admin user requested the unit's power to be on.
[17:22] <lazyPower> so therefore, nobody else can interact with that machine through maas; it's occupied
[17:23] <khuss> got it.. may be the problem is with the DNS
[17:23] <lazyPower> if you've tried bootstrapping a few times
[17:23] <khuss> i will destroy the bootstrap
[17:23] <lazyPower> and none of them have succeeded, the unit is technically allocated in maas, and won't be deallocated - reason being - juju didn't complete and it *should* have sent a destroy command to maas to return the machine to the pool.
[17:24] <lazyPower> if that's not the behavior you're seeing, it may be time to file a bug once we've sussed out why you can't resolve your nodes. The networking portion of maas can be tricky at first; it took me a day to figure out my networking for the VMAAS setup i have here.
[17:25] <khuss> i will try a few things..
[17:25] <khuss> first, I will fix the DNS
[17:25] <khuss> so that juju can resolve the names
[17:26] <khuss> juju is going to pick a node that is allocated to the user right?
[17:26] <khuss> may be there should be only one of such nodes..
[17:26] <khuss> could you print your /etc/resolv.conf
[17:32] <lazyPower> actually no
[17:33] <lazyPower> khuss: juju will make the request to allocate a node when you issue the bootstrap command
[17:34] <khuss> ok. that means none of them need to be in the allocated state
[17:34] <khuss> currently, I've close to 10 in the allocated state
[17:34] <khuss> is there a way to return them to the pool
[17:35] <lazyPower> power them down from the maas admin interface
[17:36] <lazyPower> http://i.imgur.com/26doez3.png
[17:37] <khuss> ok got it
[17:38] <khuss> one more thing about the DNS server
[17:38] <khuss> currently my region/cluster/juju - all in one node
[17:38] <khuss> this node is not a DNS server
[17:38] <khuss> it has /etc/resolv.conf pointing to 8.8.8.8
[17:39] <khuss> do I need to make this as a DNS server for juju to resolve nodes?
[17:39] <marcoceppi> khuss: maas will set up a DNS server for you
[17:40] <khuss> marcoceppi: i'm not sure if it did
[17:40] <khuss> how do I check it
[17:40] <marcoceppi> your nodes will need to point to it; the controller itself doesn't necessarily need to, though you may need to configure the DNS server to forward lookups to another server (like 8.8.8.8)
[17:40] <marcoceppi> khuss: is this on 14.04
[17:40] <khuss> 13.10
[17:41] <marcoceppi> khuss: hum, there's quite a big difference in the maas versions between 13.10 and 14.04
[17:41] <lazyPower> khuss: Since you're just getting started it would make a good turning point to go ahead and bump that up to an LTS if you have the option.
[17:41] <khuss> i could probably do that
[17:42] <khuss> i can probably do an upgrade.. right
[17:42] <marcoceppi> khuss: if you do ps -aef | grep bind you should see a running process
[17:43] <khuss> bind      1339     1  0 May21 ?        00:03:13 /usr/sbin/named -u bind
[17:43] <khuss> it is running
[17:43] <khuss> can i upgrade from 13.10 to 14.04
[17:43] <khuss> without doing a reinstall
[17:43] <marcoceppi> khuss: of course
[17:44] <khuss> just do the upgrade using apt-get right?
[17:44] <marcoceppi> khuss: sudo do-release-upgrade is the best path forward
[17:45] <khuss> i also had some weird issues with not powering on the nodes.. hopefully the new version will help
[17:45] <khuss> once I do the upgrade, do I need to rebuild the nodes
[17:46] <khuss> or can I leave them or do I need to add them again
[17:46] <khuss> brb
[17:47] <marcoceppi> khuss: they would probably still be in there
[18:14] <khuss> marcoceppi: i just updated to 14.04
[18:43] <sebas538_> marcoceppi: hey! o/
[18:44] <marcoceppi> hey sebas538_
[18:44] <marcoceppi> khuss: cool, is maas still running?
[18:44] <sebas538_> marcoceppi: do you know if constraints are working in local env type?
[18:44] <marcoceppi> sebas538_: which constraints?
[18:44] <sebas5384> hardware
[18:44] <sebas5384> like cpu, mem, etc..
[18:44] <marcoceppi> Most probably won't work (like arch, mem, cpu, etc) since it's LXC
[18:45] <sebas5384> :9
[18:45] <sebas5384> :(
[18:46] <lazyPower> sebas5384: since LXC uses shared resources, i don't think that would lend itself well to constraints.
[18:47] <sebas5384> but cgroups isn't for that
[18:48] <lazyPower> ah, good point. I don't know that we have support for them yet.
[18:49] <lazyPower> that may be worth asking in #juju-dev to see if that's on the roadmap, already there, et al.
[18:49] <sebas5384> http://www.mattfischer.com/blog/?p=399
[18:49] <sebas5384> yeah! i will ask about that
[18:49] <sebas5384> :)
[18:59] <sebas5384> https://bugs.launchpad.net/juju-core/+bug/1323446
[18:59] <_mup_> Bug #1323446: constraints on local provider <constraints> <local-provider> <lxc> <juju-core:Triaged> <https://launchpad.net/bugs/1323446>
[19:20] <khuss> marcoceppi: yes maas is still running. will try creating the juju bootstrap
[19:33] <tvansteenburgh> anyone know of a good example of a python charm that uses charmhelpers but doesn't symlink hooks back to a single python source file?
[19:36] <khuss> anybody have experience on installing Maas on a Dell PowerEdge
[19:37] <khuss> I see a problem where the server is not getting powered on when I change the status to "commissioned" from "declared"
[19:37] <khuss> there is no error message in any of the log files
[19:50] <Mosibi> khuss: this is a juju channel... but okay... you have entered the ipmi credentials?
[19:58] <hazmat> tvansteenburgh, not really
[19:58] <hazmat> tvansteenburgh, there's a few but they're not great examples
[19:58] <tvansteenburgh> hazmat: thanks
[20:00] <hazmat> sebas5384, so.. i have a jury rigged local provider you could do constraints on..
[20:00] <hazmat> sebas5384, its actually manual provider, where i create the containers
[20:03] <hazmat> its not very user friendly.. but its what i use day to day.. https://github.com/kapilt/juju-lxc
[20:10] <sebas5384> hmmm nice hazmat
[20:12] <sebas5384> hey hazmat nice work man!
[20:15] <hazmat> sebas5384, most of the tricks in there have been added to the local provider..
[20:15] <sebas5384> yeah! nice the lxc-clone for example?
[20:15] <hazmat> sebas5384, the only real feature delta there is it's easy to mod the lxc creation with cgroup constraints.. i've got it modded for apparmor profile selection (nested containers)
[20:15] <hazmat> sebas5384, yup
[20:16] <sebas5384> well, hazmat thanks :)
[20:16] <hazmat> sebas5384, aufs is off by default due to compatibility issues, but you can enable it via an environments.yaml flag.. btrfs is automatically used if found on /var/lib/lxc
[20:16] <sebas5384> hazmat++
[20:16] <sebas5384> hehe
[20:16] <hazmat> last line re local provider in core
[21:46] <khuss> i'm having problems in doing juju bootstrap. Could anybody help
[21:46] <khuss> the bootstrap is failing with the following message
[21:47] <khuss> juju bootstrap
[21:47] <khuss> Launching instance
[21:47] <khuss> WARNING picked arbitrary tools &{"1.18.3-precise-amd64" "https://streams.canonical.com/juju/tools/releases/juju-1.18.3-precise-amd64.tgz" "c7dee5df130e242c436c43c21278a3f24997d23ca48ee02b93b8126d2f415cd7" %!q(int64=5359855)}
[21:47] <khuss>  - /MAAS/api/1.0/nodes/node-ce3ce5d8-e1ba-11e3-85bd-b8ca3a5bc3f8/
[21:47] <khuss> Waiting for address
[21:47] <khuss> Attempting to connect to node12.master:22
[21:47] <khuss> Attempting to connect to 10.209.0.134:22
[21:47] <khuss> ERROR bootst
[22:30] <cory_fu> Hrm.  I'm having trouble coming up with a good reason to ever do anything in the start hook in a charm.
[22:31] <cory_fu> Even if you don't have any dependencies on a relation, you're likely going to be re/starting the service in the config-changed hook, so why bother doing anything in start as well?
[22:42] <marcoceppi> cory_fu: I like to put all the start/restart logic in the start hook
[22:43] <marcoceppi> cory_fu: then in all my other hooks just call "hooks/start"
[22:43] <cory_fu> Hrm.  Fair enough
[22:43] <marcoceppi> it's there to model the event; it may also be called when a unit "wakes up" (in the event of a suspend)
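marcoceppi's pattern — keep start/restart logic in one place and have the other hooks call hooks/start — can be sketched with a throwaway demo charm. Paths and messages below are invented; a real start hook would restart the actual service rather than echo:

```shell
# Demo of the "other hooks call hooks/start" pattern.
mkdir -p /tmp/demo-charm/hooks

# Single home for start/restart logic:
cat > /tmp/demo-charm/hooks/start <<'EOF'
#!/bin/sh
# In a real charm: service myapp restart (or equivalent) goes here.
echo "restarting service"
EOF
chmod +x /tmp/demo-charm/hooks/start

# Any other hook applies its changes, then delegates to start:
cat > /tmp/demo-charm/hooks/config-changed <<'EOF'
#!/bin/sh
# ...write config files here, then hand off to the shared start hook:
exec "$(dirname "$0")/start"
EOF
chmod +x /tmp/demo-charm/hooks/config-changed

/tmp/demo-charm/hooks/config-changed
# prints: restarting service
```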
[22:44] <cory_fu> Though, a well written charm should set up the services such that a suspend or restart of the instance should bring the service back up, no?
[22:50] <marcoceppi> cory_fu: depends
[22:50] <cory_fu> On?