#juju 2013-08-26
<MACscr> ok, so as long as im not trying to provision anything on the actual host, i should be able to run juju within an lxc guest, correct?
<MACscr> why am i getting 'error: net: no such interface' when i try to deploy juju bootrap? I definitely already have lxc working on this host and a couple containers going
<davecheney> MACscr: is this using the juju client ?
<MACscr> davecheney: lol, my last name is chaney =P
<MACscr> close, but different
<MACscr> anyway, yes, its the juju client i guess
<davecheney> MACscr: what is the command you are using ?
<MACscr> i installed the juju-core, etc, on it in hopes of deploying the juju-gui within an lxc instance
<MACscr> 'deploy juju boostrap'
<MACscr> im following this tutorial (the answer) http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<davecheney> MACscr: 'deploy juju bootstrap' isn't a command I know
<MACscr> well 'juju bootstrap' gives the same response
 * davecheney reads http://askubuntu.com/a/65360
<davecheney> MACscr: the instructions say 'sudo juju bootstrap'
<sarnold> sudo o_O
<sarnold> ah.
<MACscr> my apologies, i was using sudo, same result
<davecheney> sarnold: you have to use sudo for the local provider as lxc requires root to create and destroy containers
<davecheney> this is the only time it is required
<davecheney> it is a limitation of lxc
<davecheney> MACscr: can you do it again, pass -v and paste the output
<davecheney> pastebinit, or paste.ubuntu.com
<MACscr> http://paste.ubuntu.com/6027630/
<kurt_> is it possible to run more than 1 charm on a node?
<MACscr> why is it trying to use that bridge when im using br0 for LXC as im not using nat, im getting dhcp from the actual router
<kurt_> for example to consolidate several openstack services on a single node?
<MACscr> kurt_: yes. using LXC
<sarnold> kurt_: investigate deploy-to -- I'm not sure of its current status..
<kurt_> yes, I remember that was around, but I was unsure of its current status too
 * kurt_ thinks its time to start RTFMing about LXC
<davecheney> MACscr: looks like you either don't have lxc installed, or it isn't installed correctly
<davecheney> you have no lxc bridge address
<sarnold> kurt_: I'm not sure lxc would be a good fit for deploying parts of openstack environment -- juju couldn't manage lxc containers on some hosts and maas or kvm on other hosts...
<davecheney> MACscr: I will log a bug for this, juju could be clearer in what has gone wrong
<davecheney> sarnold: --to is available in 1.12 and later
<MACscr> davecheney: sure i do, i just dont have one setup for nat. I currently have 2 LXC instances installed and working great
<davecheney> (i think, it is certainly available in the 1.13 devel series)
<sarnold> davecheney: AWESOME :D
<kurt_> davecheney: thanks
<davecheney> MACscr: all I can say is the way your LXC is setup is not how juju expects it to be
<davecheney> and it can't handle it
<MACscr> well thats obviously a mistake on its part to assume im using lxcbr0 =P
<kurt_> davecheney: will deploy to stay in the code?  I think it was there before, then removed
<MACscr> is that configurable?
<kurt_> now added back again :D
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1216775
<_mup_> Bug #1216775: cmd/juju: local provider doesn't give a clear explanation when lxc is not configured correctly <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1216775>
<davecheney> kurt_: s/--deploy-to/--to/
<davecheney> MACscr: it is not configurable, it is a hard requirement at this time
<kurt_> ahh
<davecheney> kurt_: the name of the option has changed
<davecheney> it used to be a jitsu(?) plugin
<davecheney> but we integrated it into the cli and it's an option on the deploy subcommand
<kurt_> but jitsu has been deprecated, right?
<davecheney> kurt_: yes, it's dead, like the parrot
<kurt_> lol
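The `--to` option discussed above can be sketched like this (a hedged example against a live juju 1.12+ environment; the charm names and machine number are illustrative, not from this conversation):

```shell
# Deploy a first charm; juju provisions a fresh machine for it.
juju deploy mysql

# Find the machine number it landed on (say, machine 1).
juju status

# Co-locate a second charm on that same machine with --to.
juju deploy wordpress --to 1
```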
<kurt_> I'm currently on .7.  do I have these options?
<davecheney> kurt_: no, you should upgrade to 1.12
<davecheney> full warning: there is no upgrade path from 0.7 to 1.12
<davecheney> but there wasn't one from 0.6 to 0.7 either
<davecheney> so we haven't really made it much worse in that respect
<kurt_> lol
<davecheney> there is an upgrade path once youre on juju 1.x
<davecheney> but you still have to bite the bullet and move away from your old pyjuju environments
<kurt_> davecheney: would I basically be starting from scratch?
<sarnold> davecheney: no upgrade path? I had the impression it mostly just worked?
<kurt_> ie. have to destroy the environment
<davecheney> kurt_: yes, just like 0.6 -> 0.7, you will need to make a fresh environment
 * kurt_ winces
<davecheney> sarnold: 1.x and up has an upgrade path
<davecheney> there is no upgrade path from 0.x to 1.x
<davecheney> sorry
<kurt_> well, I guess if it allows me to deploy multiple charms and thus services on fewer nodes, its probably worth it
<kurt_> is 1.12 stable?
<davecheney> kurt_: yes
<davecheney> that is why it lives in the stable ppa :)
<MACscr> davecheney: does lxcbr0 have to do nat? if not, can it use dhcp?
<kurt_> ok, I'm not totally familiar with bzr / ppa and all that yet - learning
<MACscr> i guess i could rename things
<MACscr> or do i maybe need to create a second network
<davecheney> MACscr: lxcbr0 is a layer 2 bridge mode interface
<MACscr> well technically its just the name of an interface. I was never aware it had to be used in a certain fashion
<davecheney> MACscr: i'm not really sure how you have configured your lxc networking
<MACscr> so pretty much i cant use juju with lxc unless i want to use nat? which is obviously pretty useless without doing some sort of port forwarding
<davecheney> juju pretty much expects to be able to control it
<davecheney> and that the setup that arrives out of the box is the setup that is present when you try to use the local provider
<davecheney> MACscr: i don't know if this is related to nat
<davecheney> it's probably simply that you've changed the default LXC config from what juju is expecting
<davecheney> the local provider is still classed as experimental
<davecheney> or (insert word != production)
<MACscr> davecheney: out of the box, all of the lxc guests are not accessible outside the host because they are using nat
<davecheney> MACscr: using the bridge mode interface we expect they DHCP from the same source as the host
<MACscr> davecheney: i apologize for my ignorance, but both my host and my lxc instances are all getting their ip addresses from the same dhcp server on my router
<davecheney> MACscr: s'ok
<davecheney> no need to be sorry
<davecheney> lxc is super complex
<sarnold> :)
<davecheney> and has loads of knobs to twiddle
<MACscr> davecheney: not really, it can be complex, but it doesnt have to be in its most basic form =P
<MACscr> davecheney: so from the sounds of it, there is no way to use juju with lxc and still have the lxc instances accessible on the LAN?
<davecheney> MACscr: i don't think that is correct
<davecheney> that is exactly how the juju lxc provider expects things to work
<davecheney> we bridge the lxc containers onto whatever your default ethernet interface is
<davecheney> so they get DHCP addresses
<davecheney> i think
<davecheney> let me check
<MACscr> thats not how lxc works by default
<davecheney> MACscr: you probably know more about it than me
<MACscr> well im only a few days into lxc
<davecheney> MACscr: i'm waiting for confirmation
<davecheney> you may find it easier to ask this question in #juju-dev
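For comparison with the custom br0 setup above, these are the stock Ubuntu lxc-net settings that the local provider of this era assumed (a sketch of the distro defaults, not verified against this particular host):

```shell
# /etc/default/lxc -- distro-default lxc-net configuration (NAT on lxcbr0).
# juju's local provider expects this out-of-the-box bridge to exist.
USE_LXC_BRIDGE="true"
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```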
<tokern3> i installed juju but when i used it it gave me this error: "error: No environments configured. Please edit: /root/.juju/environments.yaml". it seems it's a bug. what should i do?
<sarnold> tokern3: did you mean to run juju deploy as root?
<tokern3> sarnold:  yes
<tokern3> sarnold:  http://fpaste.org/34727/13775025/
<sarnold> tokern3: did you configure your environments.yaml properly?
<tokern3> no i just installed it today sarnold. i did nothing on it
<tokern3> http://fpaste.org/34729/75027261/  sarnold
<sarnold> tokern3: ah. you must tell juju how to contact your cloud, with which credentials, for it to be useful. :)
<davecheney> tokern3: really, please don't run juju as root
<sarnold> tokern3: there are links for configuring juju for aws, hp cloud, and openstack: https://juju.ubuntu.com/docs/
<davecheney> there are only two cases where you need to use sudo, and they are using the local/lxc provider
<sarnold> bed time :) have fun!
<tokern3> sarnold:  wait a min
<davecheney> tokern3: i don't think juju deploy is going to do what you think there
<tokern3> i wanted to install  jenkins
<tokern3> http://webcache.googleusercontent.com/search?q=cache:34CJESdYOe8J:https://wiki.jenkins-ci.org/display/JENKINS/Installing%2BJenkins%2Bon%2BUbuntu+&cd=2&hl=en&ct=clnk&gl=us
<tokern3> i'm doing every command in this page .
<tokern3> is the cloud part necessary ?
<sarnold> tokern3: but -where- did you want to install jenkins to? your amazon cloud? your hp cloud? your openstack cloud? some lxc instances?
<sarnold> tokern3: the cloud is the whole point of juju. :)
<davecheney> tokern3: ok, you need to configure juju first
<davecheney> to tell it where to deploy jenkins to
<davecheney> https://juju.ubuntu.com/docs/getting-started.html#install
<davecheney> ^ start at configure
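A minimal `~/.juju/environments.yaml` of the kind the getting-started docs walk through might look like this (a sketch for juju-core 1.x on EC2; every value here is a placeholder, not a real credential):

```yaml
# Sketch only: replace all values with your own before bootstrapping.
default: amazon
environments:
  amazon:
    type: ec2
    region: us-east-1
    control-bucket: some-globally-unique-bucket-name
    admin-secret: some-long-random-string
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
```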
<sarnold> tokern3: if you just want to install software on one machine for production use, and you're _not_ using any cloud computing infrastructure, maybe you're _really_ looking for "apt-get install jenkins".
<sarnold> tokern3: I suggest reading this page before going much further: http://blog.labix.org/2013/06/25/the-heart-of-juju
<sarnold> you might be in the wrong place depending upon what you want to get done. :)
<sarnold> good luck, and have fun. :D
<tokern3> sarnold:  i installed jenkins with apt-get install and set port 8080 for it but i didn't set up juju at all. my goal is to integrate autotest into jenkins. thank you for help
<rick_h> jcastro: if you guys want to hit up some node and ruby folks as well as java/etc. check out adding codemash.org to your list of conferences to hit up
 * jcastro nods
<jcastro> rick_h: do you typically go?
<rick_h> jcastro: I've not gone the last two years, but went the 3 before that
<jcastro> man I am 90% sure we're sprinting that week
<rick_h> jcastro: a little bit of an outside-the-normal crowd
<rick_h> which I think is good tbh
<jcastro> want to submit a talk? :)
<rick_h> jcastro: heh, was thinking about it. I tried to bring some python love there, but it's a small chunk of people. JS/Ruby would be larger communities.
<rick_h> will think on it, but wanted to put it on your radar
<jcastro> yeah
<jcastro> maybe marco or mims can join you
<rick_h> jrwren: do you recall numbers from there? I know they stopped growing to keep from outgrowing the space, but can't recall how many it was.
<jrwren> rick_h: what?
<rick_h> jrwren: how big is codemash? 600ish people?
<jrwren> oh the kalahri expanded. last year was 1200
<jrwren> likely to be 1200 this year too
<rick_h> ok, I wanted to think over 1k but seemed too big and knew that was around OLF. Thought I had confused them.
<jrwren> you gonna submit juju talks?
<jcastro> jrwren: you should do one!
<rick_h> jrwren: I'm thinking about it. I want to charm up bookie now that lxc containers work I can debug the charm nicely
<jcastro> our node charm should be finished by then
<jcastro> or do the rack charm, it's ninja
<rick_h> jrwren: but I know there's a big node/ruby/windows community so might be cool to hit up off the beaten path
<jcastro> Do a node app .... on Azure.
<jrwren> jcastro: ok, i'll submit one.
<jrwren> rick_h: yes. there is a small python group there too.
<rick_h> jrwren: yea, I've tried to rep python there, but not made it the last couple of years
<jrwren> IMO juju transcends platform, well, it doesn't help windows.
<jrwren> rick_h: i'll try to rep python more there.
<rick_h> jrwren: yep, was thinking it'd be good to get some visibility in some circles that don't read us that often
<jcastro> I can get a box of shirts, and swag, etc.
<jcastro> let me add it to my events target sheet
<jcastro> jrwren: are you entering the charm contest?
<jrwren> jcastro: not afaik.
<jrwren> jcastro: if I use juju much, it will be at work on proprietary codes. public charms will be of no use.
<jcastro> charms can deploy anything
<jrwren> other than that, I feel all the cool charms are written.
<jcastro> even blobs!
<jrwren> wordpress and discourse charms exist. The world is complete.
<rick_h> lol
<jcastro> when do CFPs open for codemash?
<jrwren> I think they are open.
<jcastro> I was actually whining to people to stop demoing wordpress
<jcastro> it's like man, show something complicated
<jrwren> the point is that it is not supposed to be complicated.
<jcastro> yeah so if you wanna do node that charm should be rewritten by then
<jcastro> jrwren: well, installing PHP stuff is relatively simple to do vs say .... hadoop
<jrwren> hadoop? never heard of it. wait. is that some jvm thing? :)
<jrwren> discourse is complex IMO.
<jcastro> yeah
<jcastro> it's not in the store yet though
<jcastro> soon
<jrwren> i looked at that charm. its interesting. it uses tcmalloc in ruby. nice complex stuff
<kurt_> jcastro: you were asking me if I had installed a later version of juju?
<kurt_> I am going to install 1.12
<jcastro> yeah
<jcastro> ok
<kurt_> starting over
<jcastro> that's the latest stable.
<jcastro> so no recovery after the vmware crash?
<jcastro> just boom?
<kurt_> what do you need me to test since I'm about to destroy everything
<kurt_> it's all good
<kurt_> I still have some learning to do w/r network architecture
<kurt_> w/r openstack
<kurt_> I got everything successfully installed incl dashboard
<kurt_> i just couldn't get the networks to work or a vm to spin up within openstack
<kurt_> I believe later versions of juju will give me more flexibility in my deployment strategy
<kurt_> allow me to deploy more than one charm/service on a node
<jcastro> yeah
<jcastro> we do --to now so you can specify a machine
<kurt_> that's really nice
<kurt_> will that be supported within the gui?
<jcastro> currently a WIP in the gui
<jcastro> the feature itself is newish so there's some skew there
<kurt_> eh..huh?
<kurt_> WIP?
<jcastro> work in progress, sorry!
<kurt_> lol
<jcastro> http://www.jorgecastro.org/2013/07/31/deploying-wordpress-to-the-cloud-with-juju/
<jcastro> has --to examples
<kurt_> nice
<kurt_> while I have your attention.  do you have a reference network architecture visio or document?  I have seen jamespage's excellent write up, but it stops short with defining the actual network architecture used.
<kurt_> I mean for openstack
<kurt_> he references ensuring services are on particular interfaces without illustrating the reference
<kurt_> I found some good folsom-based examples that would likely work on the openstack site
<jcastro> hmmm
<kurt_> but it would be nice to match up the work he did in his reference architecture
<kurt_> for HA
<jcastro> that's a jamespage or adam_g question I think
<kurt_> yeah, I think they must be super busy or I've been pestering them too much for info :D
<jcastro> maybe there's something upstream on a doc page
<jcastro> SpamapS: ^^^ any idea?
<marcoceppi> jcastro: your daily reminder that it's WordPress
<jcastro> indeed
<marcoceppi> <3
<kurt_> If you guys have ideas or manage to speak with Jamespage, I would love to get my hands on that info
<SpamapS> sorry wha?
<SpamapS> kurt_: oh you want a recommendation for the physical network to deploy Neutron on top of?
<SpamapS> kurt_: http://docs.openstack.org/grizzly/openstack-network/admin/content/ has some things I think
<kurt_> SpamapS: that would be awesome
<kurt_> is that what jamespage used as a model for his HA openstack doc?
<kurt_> and yes I have been looking at that.
<SpamapS> kurt_: http://docs.openstack.org/trunk/openstack-ops/content/ may too
<kurt_> my complexity comes from: I am on vmware and I am using MAAS
<SpamapS> http://docs.openstack.org/trunk/openstack-ops/content/network_design.html
<jcastro> ah nuts, I don't think --to works on MAAS yet
<kurt_> oh seriously??
<kurt_> NOOOO :)
<SpamapS> jcastro: why wouldn't it? it worked with jitsu deploy-to
<jamespage> kurt_, jcastro: --to worked OK for me
<sarnold> it requires provider support?
<jcastro> SpamapS: it's maas provider specific
<jamespage> with MAAS
<kurt_> there is jamespage!
<SpamapS> I mean, I don't know that anybody has ever assembled the 28 actual machines to do a full HA deploy without --to ;)
<jamespage> SpamapS, :-0
<jcastro> it's a high priority bug that the core team is well aware of. :)
<jamespage> jcastro, is that a new bug in 1.13.2?
 * kurt_ thinks RATS!
<jamespage> I'm sure it was working OK when I tried it the other day with 1.12.0
<jcastro> jamespage: Oh really?
<jamespage> jcastro, I'm pretty certain yes
<jcastro> oh good then!
<kurt_> well, I can test it for you guys :)
<jamespage> kurt_, what was your question re network architecture?
<jcastro> yeah let me know
 * SpamapS stuffs a sock in his mouth and backs away slowly
<marcoceppi> jcastro: --to is provider agnostic iirc
<jcastro> SpamapS: just trying to reel you back in sir!
<kurt_> jamespage: I was looking at your HA openstack reference architecture
<kurt_> guide, etc
<jamespage> kurt_, ah - my epic
<kurt_> can you please provide a supporting network map? :D
<kurt_> I have run in to all kinds of issue with the quantum stuff and being able to allocate a network/floating IP etc
<kurt_> and even spin up a vm
<jcastro> jamespage: Aha, nevermind, I am thinking of --constraints not working on maas, not --to
<marcoceppi> jcastro: last I talked with dfc about it. --to is really dumb and just does whatever you tell it to
<jamespage> kurt_, so the charms implement this - http://docs.openstack.org/grizzly/openstack-network/admin/content/use_cases_tenant_router.html
<kurt_> and I think its my own lack of understanding holding me back and having the underlying network not really set up correctly
<jcastro> marcoceppi: yeah I was thinking of constraints
<marcoceppi> jcastro: ahh yeah. we need more constraints for MAAS
<jamespage> kurt_, each tenant must create/have created for them a private network and a router that connects it to the outside world
<jamespage> that ends up on the quantum-gateway unit btw
<jamespage> then you can float ip addresses on your instances for public access.
<jamespage> kurt_, the quantum-gateway charm currently requires a dedicated network port connected to the 'public' network to support this feature
<kurt_> public = internet network right?
<jamespage> public can be another connection to the private network in test deployments
<jamespage> kurt_, yep
<jamespage> kurt_, look - I'm time limited right now - hopefully that points you in the right direction
<kurt_> Ok, I have been using NAT on my MAAS region controller - I think I am maybe breaking routing with that
<jamespage> take a look at the quantum-tenant-net and quantum-ext-net commands that get installed on the nova-cloud-controller node
<jamespage> they have help :-)
<kurt_> jamespage: yes, its a good start.
<kurt_> jamespage: maybe I can put together a visio with my stuff and email to you? you could give it a quick once-over?
<jcastro> kurt_: make it nice so we can steal it and put it in the documentation!
<kurt_> lol
<kurt_> as long as you credit me as a contributor :D
<jcastro> always dude!
<kurt_> maybe I'll earn that t-shirt then :)
<jcastro> I was going to bring one for ya anyway
<kurt_> jcastro: what version of juju do you want me to test?
<kurt_> 1.12?
<jcastro> yes please
<kurt_> sure
<jcastro> 1.13.2 is the new dev
<jcastro> and I am interested in you trying that too but -stable ftw for now
<kurt_> ok, give me some time today.  I need to thread in my "normal" work
<kurt_> jcastro: do you work with Maarten Ectors?
<jcastro> yeah
<kurt_> ok, he reached out to me for some feedback
<kurt_> I'm guessing that's fairly normal for mailing list contributors
<jcastro> I think it depends on what you posted
<jcastro> you posted that grizzly question?
<web-brandon> juju/pkgs for the latest charm-tools right?
<marcoceppi> web-brandon: ATM yes. though that may change soon
<kurt_> jcastro: I think it was...
<X-warrior> If I want to use juju with aws, is S3 mandatory?
<web-brandon> marcoceppi: are you moving it to its own PPA? I noticed one but it had some failed builds.
<marcoceppi> web-brandon: it'll eventually be going in the juju/stable ppa
<marcoceppi> X-warrior: yes
<marcoceppi> web-brandon: with a separate daily builds ppa
<X-warrior> is it possible to choose the aws machine type? for example
<X-warrior> micro instance?
<X-warrior> I found it, --constraints. ty
<marcoceppi> X-warrior: you'll need to set cpu-power=0 mem=128 cpu-cores=0 with constraints in order to get micro with juju. we really don't recommend micros though
<X-warrior> marcoceppi: oh really? ok, thanks for the information
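Putting marcoceppi's constraint values above into a command, a sketch might look like this (it assumes an EC2 environment is already configured; juju has no provider-specific instance types here, so you describe something only a t1.micro satisfies):

```shell
# Ask for the smallest possible machine; on EC2 this resolves to a t1.micro.
# Constraint values are as given in the discussion above.
juju bootstrap --constraints="cpu-power=0 mem=128 cpu-cores=0"
```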
<X-warrior> So I'm trying to bootstrap to sa-east-1 but I'm getting this error: "error: provider storage is not writable"... when I try us-east-1 for example it created the machine. S3 says there is no region specification so I think it is ok. The only difference I could find is, ec2 from sa-east-1 has 2 machines (not created with juju) and us-east-1 has no machine. What could be wrong?
<sarnold> X-warrior: juju currently operates entirely within one availability zone in one region. there's no region-spanning intelligence..
<X-warrior> sarnold: Yeap, but does it need a clean region?
<sarnold> X-warrior: no idea, but seems unlikely
<X-warrior> So I have no idea what is going on, I can bootstrap to us-east-1 easily... but when I try to bootstrap to sa-east-1 I get that error
<X-warrior> :S
<X-warrior> sarnold: I found the problem, the problem was the s3 name. Just updated it and it worked. :D
<sarnold> X-warrior: woo! :)
<X-warrior> Is bootstrap a valid instance to deploy services?
<X-warrior> for example, juju deploy postgresql --to 0
<marcoceppi> X-warrior: yes, you can deploy --to the bootstrap node
<X-warrior> so I already ran juju bootstrap, but when I try to use deploy --to 0, it returns me 'error: no instance found', checking the amazon ec2 I see the juju-amazon machine
<_mup_> Bug #1217011 was filed: error connecting to rapi ws from chrome dev channel (30) <juju:New> <juju-gui:Triaged> <https://launchpad.net/bugs/1217011>
<X-warrior> 2013-08-26 19:18:04 ERROR juju supercommand.go:235 command failed: cannot start bootstrap instance: cannot set up groups: cannot revoke security group: Source group ID missing. (MissingParameter) error: cannot start bootstrap instance: cannot set up groups: cannot revoke security group: Source group ID missing. (MissingParameter)
<vds> Hello, I'm trying to deploy postgresql with the support for permanent storage, after changing the config file in this way http://paste.ubuntu.com/6030130/ I get this error, which is not very clear to me: http://paste.ubuntu.com/6030125/
<vds> any suggestion?
<sidnei> vds: did you get an error and then retry perhaps?
<vds> sidnei, nope
<sidnei> vds: uhm, try debug-hooks and then resolved --retry, see what 'config-get --format=json' returns once in the hook context.
 * vds tries
<kentb> anyone know when this will trickle down into juj-core, preferably the stable ppa?: https://bugs.launchpad.net/juju-core/+bug/1210328
<_mup_> Bug #1210328: cloudinit: switch apt-add-repository to use ppa:juju/stable <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1210328>
<sidnei> kentb: it's in the devel ppa atm, stable will only be updated when 1.14 is released.
<kentb> ok. thank you sidnei
<vds> sidnei, config-get is not installed on postgresql/0
<sidnei> !
<sidnei> vds: are you running from debug-hooks in the context of a hook?
<sidnei> vds: if you run it in the default shell when debug-hooks starts up it won't work
<sidnei> vds: you need the 2nd shell that pops up after you do resolved --retry
<vds> sidnei, where should I run resolved --retry?
<sidnei> vds: from where you ran juju debug-hooks (which I assume is your local machine), on a separate terminal window of course
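The two-terminal workflow sidnei describes, sketched end to end (the unit name is illustrative):

```shell
# Terminal 1: attach to the unit; juju opens a tmux session there.
juju debug-hooks postgresql/0

# Terminal 2 (your local machine): re-fire the failed hook so it runs
# under the debug session.
juju resolved --retry postgresql/0

# Back in terminal 1, a new tmux window opens in the hook context,
# where the hook tools are on PATH:
config-get --format=json
```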
<vds> sidnei, config-get in the context of the hook returns {}
<sidnei> vds: cool. i wonder if it's a bug? the code expects it to return the default value that's set in config.yaml apparently, and afaict that has always worked.
<sidnei> thumper: ^
<sidnei> vds: which version of juju you're using?
 * thumper looks up
<vds> sidnei, thumper 1.13.2-raring-amd64
<thumper> vds: which provider (shouldn't matter just gathering info)
<thumper> vds: and which hook were you debugging?
<vds> sidnei, thumper could it be the config change I did? I'm not sure that's the way to pass a dict http://paste.ubuntu.com/6030130/ that's just what I changed
<vds> thedac, openstack
<vds> thumper, openstack
<vds> thumper, from the debug log it looks like install
<sidnei> vds: that looks like it should work
#juju 2013-08-27
<MACscr> is there a static config i can use to set a default for amazon instance sizes? I only want to use tiny instances to start
<davecheney> MACscr: use bootstrap constraints
<davecheney> or deploy constriants
<MACscr> also, just to verify, if i want to have 4 amazon instances in 4 regions, i have to have a controller in each region?
<davecheney> MACscr: not at this time
<davecheney> what we call 'provider specific' constraints are being worked on at this time
<davecheney> there is currently no way to say 'this unit must be in a specific availability zone'
<davecheney> we know it's a problem
<davecheney> we're working on fixing it
<sarnold> I think to have four regions under control at once, you have four separate environments in your environments.yaml -- and none of them know anything about any of the others, right?
<davecheney> sarnold: regions are easy
<sarnold> https://bugs.launchpad.net/juju-core/+bug/1160667
<_mup_> Bug #1160667: Expose regions and availability zones to users <juju-core:Triaged> <https://launchpad.net/bugs/1160667>
<davecheney> availability zones within a region are harder
<MACscr> well to be honest, i dont really need them to be aware of each other as they are just going to be dns servers with a little ping monitoring on them as well (smokeping)
<MACscr> juju would just be used to deploy ubuntu on them. I guess i could manually install mysql on them
<sarnold> MACscr: you either use --to to deploy a mysql charm onto them, or write a subordinate charm that installs mysql alongside the dns servers..
<MACscr> ok, so you and dave seemed to either have conflicting views or i just got a bit confused. Is having multiple regions easy or not? Basically I need to set up an instance in Asia, Europe and two in the US
<MACscr> and the two in the US are going to be in two different amazon regions (east coast and west coast)
<sarnold> MACscr: I think we both said the same thing from different angles :)
<sarnold> MACscr: you can do multiple environments and then juju -e asia deploy dns, juju -e europe deploy dns, etc... but they can't do relations between asia and europe
<MACscr> sarnold: understandable. This might be an option in the future though? cross region relations?
<MACscr> also, how do i setup multiple regions/environments for a single cloud? comma separate list?
<sarnold> MACscr: different environment declarations in your environments.yaml
<marcoceppi> MACscr: there is talk on eventually having cross environment relations, not sure where on the roadmap that is ATM
<marcoceppi> MACscr: you'll need to create a new environment for each region in your environments.yaml
<marcoceppi> Just name them uniquely, and make sure each has a unique bucket-name. They can share the same credentials
<MACscr> so do i just copy and paste the amazon group of configs and then just change the region? Im not sure how to setup multiple environments for the same provider
<davecheney> MACscr: yes, we call these cross environment relations
<davecheney> basically relations between services in different enironments
<davecheney> it's on the roadmap
<davecheney> i can't give you a solid idea when it will happen
<MACscr> so not next monday?
<MACscr> jk
<marcoceppi> MACscr: here's an example
<marcoceppi> of multiple juju environments
 * marcoceppi scrubs his environments yaml for MACscr
<marcoceppi> MACscr: http://paste.ubuntu.com/6030850/
<marcoceppi> You just need to change the environment's name (us-east, blah-blah, whatever you want), the region, and the control bucket
<marcoceppi> Everything else can be the same
<marcoceppi> MACscr: and the region names, http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-regions
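marcoceppi's recipe above (shared credentials, unique name and control bucket per region) is mechanical enough to script. A sketch in Python; the entry layout mirrors a juju-core 1.x ec2 environment, and all names and keys are placeholders:

```python
# Generate an environments.yaml body with one ec2 environment per region.
# Each entry gets a unique name and control bucket; credentials are shared.
REGIONS = ["us-east-1", "us-west-1", "eu-west-1", "ap-southeast-1"]

ENTRY = """\
  {name}:
    type: ec2
    region: {region}
    control-bucket: juju-{name}-bucket
    admin-secret: {admin_secret}
    access-key: {access_key}
    secret-key: {secret_key}
"""

def render_environments(regions, access_key, secret_key, admin_secret):
    """Return the text of an environments.yaml covering every region."""
    body = "".join(
        ENTRY.format(name=r, region=r, access_key=access_key,
                     secret_key=secret_key, admin_secret=admin_secret)
        for r in regions
    )
    return "environments:\n" + body

if __name__ == "__main__":
    print(render_environments(REGIONS, "AKIA-PLACEHOLDER",
                              "SECRET-PLACEHOLDER", "choose-a-long-secret"))
```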
<davecheney> MACscr: i have an environment in my environments.yaml for every ec2 region
<davecheney> so I can do things like
<davecheney> juju bootstrap -e ap-southeast-1
<MACscr> davecheney: thanks!
<MACscr> ok, so i know juju-ui can be used for creating relations, etc, but what about managing them all after they are deployed? id like a single panel for all systems, vm or physical
<MACscr> landscape pricing is just too high imho
<x-warrior> what are the parameters to create a micro instance? sorry to ask this again
<x-warrior> :/
<davecheney> x-warrior: for the ec2 provider, you'd use constraints
 * davecheney does a test before recommending anything
<x-warrior> yeap it is constraints
<x-warrior> but I cant remember which ones
<x-warrior> :(
<x-warrior> I asked this early today but I couldn't find a channel log :/
<davecheney> x-warrior: i'm just doing a demo now
<davecheney> two secs
<x-warrior> ty
<x-warrior> I tried with cpu-power=0
<x-warrior> but I get this error: "ERROR juju supercommand.go:235 command failed: cannot start bootstrap instance: cannot set up groups: cannot revoke security group: Source group ID missing. (MissingParameter)
<x-warrior> "
<davecheney> x-warrior: ewww
<davecheney> nice error
<davecheney> will try to reproduce that one
<davecheney> still working on your first question
<x-warrior> I think that came from, starting a regular bootstrap... deleting s3, deleting ec2 machine and trying to create it again with constraints...
<x-warrior> maybe it lost the ids from groups and stuff like that to 'revoke' and 'recreate' or something
<davecheney> x-warrior: hmm, if you are rough with juju it probably won't respect you in the morning
<x-warrior> :(
<x-warrior> removing the juju-amazon groups
<x-warrior> seems to solve the problem
<davecheney> x-warrior: I think it would be
<davecheney> juju bootstrap  --constraints="arch=i386 mem=640M"
<davecheney> but I haven't been able to check yet
<davecheney> having some other problems at the moment
<davecheney> you can't say type=t1.micro
<davecheney> because we don't support what we call `provider specific constraints`
<x-warrior> using 'juju -v bootstrap --constraints="cpu-power=0"'
<davecheney> so you need to describe something that looks like a t1 micro
<x-warrior> created the micro instance
<davecheney> yup, because there is nothing that has cpu-power=0
<davecheney> so the next largest is a t1.micro
<davecheney> same thing
<davecheney> we just did it a different way
<x-warrior> sweet
<x-warrior> now I see the instance running on my panel, but when I check juju status it gives me 'no instance running'
<x-warrior> :s
<davecheney> x-warrior: i can only recommend juju destroy-environment
<davecheney> because you've probably damaged some invariants that juju was expecting
<x-warrior> any other information I could provide? I'm able to connect to instance via ssh
<davecheney> juju status -v
<x-warrior> I can see '15088 ?        Ssl    0:00 /var/lib/juju/tools/machine-0/jujud machine --log-file /var/log/juju/machine-0.log --data-dir /var/lib/juju --machine-'  on server
<x-warrior> on instance*
<davecheney> what has probably happened is the control bucket provider-state file does not match the instance id of your bootstrap node
<x-warrior> 2013-08-27 05:30:37 INFO juju ec2.go:128 environs/ec2: opening environment "amazon", 2013-08-27 05:30:43 ERROR juju supercommand.go:235 command failed: no instances found
<davecheney> pop open the s3 console and get the contents of the provider-state file from your control bucket
<davecheney> i suspect it is missing or empty
<x-warrior> should i paste it to pastebin?
<x-warrior> http://pastebin.com/NvTmTBDi
<davecheney> right, so does that instance number match the machine that is running ?
<x-warrior> yes it does
<x-warrior> rebooting the instance and checking juju status again, gives me the same result
<davecheney> x-warrior: can you paste the output of juju status -v
<davecheney> i suspect it will error very early
<x-warrior> http://pastebin.com/4fDUyAmA
<x-warrior> that is it
<x-warrior> too short I guess
<davecheney> x-warrior: so juju looks in the control bucket, gets the instance id of the machine
<davecheney> converts it to an ip address
<davecheney> uses that ip to talk to mongodb running on the bootstrap node
<davecheney> for whatever reason that instance id, or the yaml is invalid
<davecheney> so that is all she wrote
<davecheney> x-warrior: two secs, checking something
<x-warrior> I'm not sure that I'm following, but ok
<x-warrior> :D
<x-warrior> I'm trying to start without constraint option, to check if it is something related to micro instance
<x-warrior> or something
<davecheney> x-warrior: short version is
<davecheney> that environment is broken
<davecheney> you will probably have to delete it via the aws console and start again
<x-warrior> delete ec2, security groups and s3 is enough?
<x-warrior> or it writes to some other place?
<MACscr> ok, a controller is needed for each region/environment, right?  If so, is there any type of panel to keep track of everything as a whole?
<davecheney> delete the control bucket
<davecheney> and the instances you have lying around
<davecheney> MACscr: you need a bootstrap node per environment
<davecheney> and an environment can only cover one provider
<davecheney> so in effect you need one bootstrap node per ec2 region
<x-warrior> what do you mean by control bucket? bootstrap instance?
<davecheney> control bucket is listed in your ~/.juju/environments.yaml
<davecheney> it is where juju records persistent state
<davecheney> bootstrap instance is that machine that you have left running
<davecheney> it is the machine that juju spawns to host the mongodb
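For context, the control bucket and region come from the environment's stanza in ~/.juju/environments.yaml; a rough sketch for EC2 (the bucket name and secret here are hypothetical placeholders):

```yaml
environments:
  amazon:
    type: ec2
    region: us-east-1            # x-warrior's failures below were with sa-east-1
    control-bucket: juju-0123456789abcdef   # hypothetical; where provider-state lives
    admin-secret: not-a-real-secret         # hypothetical
```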
<MACscr> davecheney: right, a bootstrap, make sense. So while juju helps get things deployed and related, is there not a tool for managing things after the fact?
<davecheney> MACscr: you'd have to define after the fact
<davecheney> we have the gui where you can view your juju environment
<x-warrior> there is juju-gui
<davecheney> and commands like add-unit/remove-unit help you scale up and scale down the number of units of a particular service
<davecheney> but juju does not compete in the nagios/zabbix/xenoss space as a process monitoring tool
<x-warrior> and you can deploy more than one service to the same machine now
<davecheney> x-warrior: --to should be used with care
<davecheney> really
<davecheney> it takes all the safety guards off
<MACscr> ok, but can juju-gui work with more than one environment? its restricted to a single one just like how a bootstrap is needed for each one. Correct?
<davecheney> MACscr: yes, correct
<davecheney> each environment is separate and unrelated
<davecheney> the juju client can switch between environments with the -e flag, or the juju switch command
<davecheney> but the juju-gui, being a charm itself, is deployed into an environment
<MACscr> right, so something is needed to manage everything
<davecheney> so only controls that environment
<MACscr> not just a single environment at a time
<davecheney> MACscr: we have no product for that at this time
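The per-environment client commands mentioned above, roughly (the environment name "amazon" is assumed):

```shell
juju status -e amazon    # run a single command against the "amazon" environment
juju switch amazon       # make "amazon" the default for subsequent commands
juju switch              # with no argument, report the current default
```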
<x-warrior> s3 cleaned up, instance terminated, .juju deleted... starting from scratch now
<MACscr> not when it comes to relational stuff, but general instance management, etc
<davecheney> MACscr: juju talks about services and units of services
<MACscr> davecheney: yeah, seems like landscape is the only option of yours and its ridiculously priced
<davecheney> the fact that it creates machines to host them is sort of co-incidental
<davecheney> MACscr: i cannot comment on the price, but you have certainly interpreted our marketing message correctly
<MACscr> hmm, well foreman with puppet can manage everything, but im trying not to have a bunch of overlap with tools
<MACscr> and i really havent figured out puppet yet =P
<x-warrior> after bootstrapping should I wait a while, for zookeeper and stuff like that to go 'up'?
<x-warrior> or get installed or something?
<davecheney> x-warrior: we don't use zookeeper anymore
<davecheney> we use mongodb
<davecheney> but the result is the same
<x-warrior> ah sweet
<davecheney> juju status will block until the bootstrap node is up and running
<davecheney> you can see that with
<davecheney> juju status -v
<davecheney> in fact, you should pass -v to everything that you do with juju
<davecheney> otherwise you'll have to rerun the command with -v anyway
<x-warrior> yeap I learned that
<x-warrior> x)
<davecheney> we are working on our logging
<davecheney> it needs fixing
<davecheney> we're not done yet
<x-warrior> no problem :D
<x-warrior> http://pastebin.com/N9eFC8ur
<x-warrior> that is all the outputs from a 'fresh' start... (deleted s3, groups, instance, .juju files)
<davecheney> x-warrior: something is very wrong with your setup
<davecheney> which juju
<davecheney> it smells like you have both 0.7 and 1.12 installed
<x-warrior> 1.12.0-raring-amd64
<davecheney> ok
<x-warrior> at least juju version gives me that
<x-warrior> and this is the first time I install juju (like 1 hour ago)
<davecheney> hmm, i'm a bit stumped
<x-warrior> (on this computer ofc, I was trying on a mac before, but I had the same issue... so I thought it was a mac os related problem... then I moved to ubuntu...)
<davecheney> can you confirm that i-2a4ed636 is running in the aws console
<x-warrior> yes I can see it
<x-warrior> I can connect to it as well
<davecheney> i cannot explain why this is not working for you
<davecheney> the logic is
<davecheney> get the instance id from the provider-state file in the control bucket
<davecheney> look up the ip that the instance points to
<x-warrior> is there a -vv option or something?
<davecheney> then connect to mongodb on that ip
<x-warrior> which gets more verbose?
<davecheney> x-warrior: there is --debug
<davecheney> but I don't think it will make it much more verbose
<davecheney> and it is failing at the first step
<davecheney> it's rejecting your provider-state file in the control bucket
<davecheney> i do not know why
<davecheney> i have not seen this failure mode before
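For reference, provider-state is a tiny YAML file in the control bucket naming the bootstrap instance; in juju-core of this era it looked roughly like the following (the key name is from memory, the instance id is the one from this session):

```yaml
state-instances:
  - i-2a4ed636
```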
<davecheney> x-warrior: just for shits and giggles
<x-warrior> 17070/37017 are the correct ports?
<davecheney> could you change the region: key in your environments.yaml to another region
<x-warrior> yes I can try that
<davecheney> (after deleting the current environment of course)
<davecheney> 37017 is the correct port
<davecheney> but you don't get that far
<x-warrior> uhmm
<x-warrior> so that seems very weird I guess
<x-warrior> x)
<davecheney> i have not seen that failure mode before
<x-warrior>  deploying to another region
<x-warrior> davecheney, changing the region 'fixes' it a little, it goes a little bit further
<davecheney> fix ?
<kurt_> davecheney: do you know this error? error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST
<davecheney> kurt_: i'm not a maas expert
<davecheney> i mainly do the public clouds
<davecheney> let me check
<kurt_> googling it now...
<davecheney> rough guess there is a permission problem creating or reading your control bucket
<kurt_> I believe it is this: https://bugs.launchpad.net/maas/+bug/1204507
<_mup_> Bug #1204507: MAAS rejects empty files <verification-done-precise> <verification-needed> <MAAS:Fix Committed by rvb> <MAAS 1.2:Fix Committed by rvb> <MAAS 1.3:Fix Committed by rvb> <maas (Ubuntu):Fix Released> <maas (Ubuntu Precise):Fix Committed> <maas (Ubuntu Quantal):Confirmed> <maas (Ubuntu Raring):Fix Committed> <https://launchpad.net/bugs/1204507>
<x-warrior> http://pastebin.com/MiXJ4y9Z
<kurt_> I think jcastro warned me about this this morning
<x-warrior> well it is not a 'bug fix' but it is going a little further... I have no idea what is different besides the region...
<kurt_> but some of the other folks thought it was fixed
<x-warrior> I was using the sa-east-1 region which is the São Paulo, BR region... maybe some inconsistency between regions? :S
<davecheney> kurt_: it's a known bug in maas
<davecheney> if you switch your maas install to the daily build
<davecheney> it is fixed there
<davecheney> i do not have a timeframe for when the fix will be available in general
<kurt_> ok, was just chatting with bigjools too
<kurt_> lol
<davecheney> x-warrior: yup, that is normal operation for ec2
<davecheney> it takes 3-5 mins for each instance to start up
<davecheney> once it is ready status will succeed, that is why it is retrying
<x-warrior> yeap
<x-warrior> now it listed the bootstrap machine
<x-warrior> should the destroy-environment option
<x-warrior> destroy the instance and do a correct cleanup?
<davecheney> x-warrior: which region did not work
<davecheney> and which region did work
<davecheney> destroy-environment will remove all regions and the control bucket
<davecheney> it might leave the security groups around
<davecheney> that is fine
<x-warrior> ok
<davecheney> they are not expected to be deleted and can cope with being reused
<x-warrior> so, now I'm using the default region option
<x-warrior> without setting it on environment.yaml file
<x-warrior> and using region: sa-east-1
<x-warrior> it does not work
<davecheney> ok, so there is something wrong with the sao paulo region atm
<davecheney> it happens
<davecheney> ap-southeast-2 was broken for several months for me
<davecheney> x-warrior: if you would care to, you should log a bug about this on juju-core
<davecheney> although ec2 will deny it
<davecheney> each region is subtly different
<x-warrior> where should I log it?
<x-warrior> launchpad?
<davecheney> launchpad.net/juju-core/
<x-warrior> ok, I will log that later today
<x-warrior> :D
<x-warrior> btw I will keep joining this channel
<x-warrior> if you guys need more help to trace it
<x-warrior> I will be glad to help you
<davecheney> x-warrior: thanks for the offer
<davecheney> pointing the finger at sa-east-1 is good enough for now
<x-warrior> davecheney, ok, I will do that when I wake up later... need to get some sleep now, almost 4am here
<x-warrior> thanks for all the help
<x-warrior> :D
<x-warrior> have a good one
<davecheney> x-warrior: ok, thanks
<davecheney> ttys
<vds> stub, hello, can I ask you how to use the persistent volume support of the postgres charm? That's how I changed the config http://paste.ubuntu.com/6030130/
<vds> the volume exists already, of course
<stub> vds: The bit of the charm I'm not familiar with :) I can try, or invoke our devops if needed.
<vds> stub, who's the one to blame? :)
<stub> vds: You are modifying config.yaml, instead of passing configuration parameters to the charm?
<vds> stub, yes
<stub> vds: I think you are supposed to use a config file looking like http://paste.ubuntu.com/5751886/, and then do 'juju deploy --config=myconfig.yaml cs:postgresql'
<gnuoy> hi vds, whats the issue ?
<vds> stub, thanks I'll try
<vds> gnuoy, when I try to deploy postgresql changing the config this way http://paste.ubuntu.com/6030130/
<vds> gnuoy, I get this error http://paste.ubuntu.com/6030125/
<mthaddon> er, you're changing the config directly in the charm rather than passing it as an option to the charm?
<gnuoy> looks like the version param is missing
<gnuoy> ?
<gnuoy> HOOK KeyError: 'version'
<vds> mthaddon, is that bad?
<mthaddon> vds: yes, you shouldn't change the charm itself before deploying
<vds> mthaddon, ok, thanks.
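The deploy-time config pattern stub points to, as a sketch: options go in a separate YAML file keyed by the service name, and the charm itself is never edited. The option name below is illustrative, not a statement of the postgresql charm's actual schema:

```yaml
# myconfig.yaml -- the top-level key must match the service name
postgresql:
  version: "9.1"    # illustrative; see the charm's config.yaml for real option names
```

deployed with `juju deploy --config=myconfig.yaml cs:postgresql`.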
<alejandro> hello, Spanish?
<alejandro> Spanish?
<alejandro> what is this for?
<varud> Anybody have thoughts on why an experimental 12.04 image I'm creating with juju has '    agent-state: down' after restart?
<varud> Found this old stackoverflow - http://askubuntu.com/questions/218645/juju-instances-in-agent-state-down-after-turning-them-off-and-back-on-on-ec2
<varud> but it's not relevant anymore (there's no juju-machine-agent.conf file)
<mthaddon> jcastro: I can't find any docs on the upgrade-charm hook any more - is that expected?
<marcoceppi> mthaddon: no, not expected. Let me see if I can find them. If not what's your question, I'd be happy to answer it
<mthaddon> marcoceppi: no question besides wondering where the docs were - thx
 * marcoceppi makes notes about having documentation for all hooks in the author docs
<jcastro> marcoceppi: can you add it to the doc sprint spreadsheet?
<jcastro> so we don't forget?
<jcastro> evilnickveitch: ^^^
<marcoceppi> jcastro: already did that
<jcastro> <3
<marcoceppi> E>
<m_3> what the heck is that?
<m_3> heart in a box?
<rick_h> that's 'marco'
<m_3> unhuh
<rick_h> we just smile and move on
<rick_h> :P
<jcastro> http://insights.ubuntu.com/news/juju-charm-championship-expands-with-more-categories-more-prizes/
<jcastro> help me spread the word folks!
<m_3> hmmm... getting intermittent 503's on manage.jujucharms.com again today
<jcastro> 5 minute warning on the first UDS session
<jcastro> http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/
<jcastro> we're starting with a charm policy review
<kurt_> jcastro: juju 1.12 did NOT work
<jcastro> ah, where did it fall over?
<kurt_> jcastro: I ran in to this bug: http://pastebin.ubuntu.com/6032955/
<kurt_> error: cannot create bootstrap state file: gomaasapi: got error back from server: 400 BAD REQUEST
<kurt_> on bootstrap
<jcastro> hrpmh
<kurt_> and, after talking with bigjools, there are no near term plans to put the fix in to maas in quantal
<kurt_> fix has already made its way to precise and raring (8/15)
<mattyw> marcoceppi, I live here :)
<marcoceppi> of course you do, just broadcasting for those who don't already live around here
<kurt_> jcastro: for what I am trying to get done, do you suggest I start over on precise?  Raring is not LTS, right?
<marcoceppi> mojo706: anything juju goes here, even "off-topic" juju, whatever that might be
<mojo706> thanks
<marcoceppi> Charm Policy review ongoing: http://summit.ubuntu.com/uds-1308/meeting/21897/servercloud-s-juju-charm-policy-review/
<mattyw> marcoceppi, have you and jcastro already started working towards the netflix cloud prize? or is it just planning at the moment?
<marcoceppi> mattyw: I'm not sure, I think m_3 was spearheading that, IIRC
<kurt_> jcastro: would raring be a more viable option as long as my maas nodes are precise?
<jcastro> I think going precise all the way is the way to go personally
<jcastro> sorry I am unresponsive, we're doing UDS today
<kurt_> no worries
<kurt_> another acronym I don't know lol
<kurt_> doesn't matter
<kurt_> OK, I can try that.
<marcoceppi> kurt_: Ubuntu Developer Summit, http://summit.ubuntu.com/
<kurt_> marcoceppi: ah thanks! awesome!
<marcoceppi> kurt_: actually, I think this gives more information http://uds.ubuntu.com/
<jcastro> kurt_: you might want to sit in on the openstack ones!
<kurt_> I was just looking for those
<m_3> mattyw: re netflix cloud-prize... how can I help?
<mattyw> m_3, just wondering if there's a way I can help with it?
<m_3> mattyw: I'm doing some netflixoss charms, but not for the cloud-prize
<m_3> mattyw: I'm disqualified for the netflix cloud-prize proper
<mattyw> m_3, how come?
<x-warrior> davecheney, just filed that bug report :D
<m_3> mattyw: there're reciprocal prizes/judging between canonical and netflix
<m_3> so canonical employees are excluded from both prizes
<m_3> mattyw: but there's lots to do... I'm getting recipes-rss working atm
<m_3> lots of subs to be created
<m_3> and I'm throwing around ideas about gradle/groovy hook impls
<marcoceppi> Starting in about 2 minutes, http://summit.ubuntu.com/uds-1308/meeting/21896/servercloud-s-eco-messaging
<jcastro> hey sinzui
<jcastro> http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/
<jcastro> wanna attend or send someone so we can talk charm review queue stuff?
<kurt_> silly question: is local provider support all of the lxc stuff?
<kurt_> or is it *any* method in which juju is getting deployed locally?
<jcastro> yeah
<jcastro> when we say local provider we mean LXC support
<jcastro> currently. :p
<kurt_> ok
<kurt_> that appears to be hot topic
<AskUbuntu> Checking my juju instance through Amazon's AWS console? | http://askubuntu.com/q/337987
<sidnei> hazmat: i think we need a new release of juju-deployer, and then to ping jamespage to upload to saucy. the one currently in saucy is missing some important fixes.
<jamespage> sidnei, hazmat: let me know when and what
<kurt_> jamespage: when you consolidate services on to fewer nodes (juju --to) , do you have a suggestion on which charms/services stack best together or some kind of dev blueprint you guys use?
<sinzui> jcastro, thank you for the reminder. I thought today was the 26th.
<jcastro> it's the 27th!
<jcastro> your name is Curtis and we have a session today.
<jcastro> :)
<kurt_> I was looking at this, but I don't believe this follows the tenant client you were suggesting I research https://github.com/mseknibilel/OpenStack-Folsom-Install-guide/blob/master/OpenStack_Folsom_Install_Guide_WebVersion.rst
<hazmat> sidnei, it is? i thought it got cut from the branch directly
<sidnei> hazmat: it's 0.2.1 :/
<hazmat> sidnei, its possible we have a packaging version divergence against the same source.
<hazmat> jamespage, so latest rev 114 / version 0.2.3 (just incremented to be sure) is stable
<sidnei> hazmat: it's missing the fix from r110 at least
<hazmat> jamespage, if you could upload that we should be good for saucy. there are other changes inbound but they can land into the ppa for now.
<marcoceppi> hazmat: could you put that version in the stable ppa as well?
<jcastro> arosales: ok details updated for the next session
<jcastro> so we should be able to just start on time this time, heh
<arosales> jcastro, sorry about the conflicts last time
<arosales> jcastro, I am leaving getting session started in your capable hands :-)
<jcastro> no worries, first day is always rough
<jcastro> that seems to have been our only problem today, I'll take it!
<arosales> jcastro, for sure
<jamespage> hazmat, I'm cutting from the tarballs on pypi
<marcoceppi> jcastro: link?
<jcastro> https://plus.google.com/hangouts/_/db564284829c94ffbbbd54b843fc04d071554fb5?authuser=0&hl=en
<jcastro> http://summit.ubuntu.com/uds-1308/meeting/21892/servercloud-s-juju-audit-charms/
<hazmat> jamespage, aha, thanks
<hazmat> jamespage, updated on pypi
<jamespage> hazmat, sidnei: uploaded to saucy
<jamespage> hazmat, is that compatible with juju-core 1.12.0 ?
<hazmat> jamespage, yes
<jamespage> kurt_, its probably easier to say what won't go together right now
<jamespage> nova-compute, quantum-gateway, nova-cloud-controller will all conflict with each other in config files
<jamespage> likewise ceph, cinder, glance and nova-compute (around /etc/ceph)
<kurt_> jamespage: thanks.  do you use a two or three node deployment for testing?  I would be curious to see what people typically cluster together on the same node
<jamespage> kurt_, if you just want compute (i.e. no cinder) then you can get away with three nodes
<kurt_> jcastro: is it normal I would have to "sync-tools" after fresh install of juju 1.12?
<jamespage> kurt_, right now its tricky and kinda unsupported because of the conflicts in the filesystem
<jamespage> kurt_, juju container support will help with that
<jamespage> a charm assuming it has control over the filesystem is not unreasonable
<kurt_> juju container support = lxc = local support you guys were just talking about?
<jamespage> kurt_, kinda
<jamespage> lxc is used in the local provider right now
<kurt_> ok
<jamespage> but a feature is being worked on to allow you to add LXC machines in other providers
<jamespage> so you can slice up a server using LXC for deploying servers into
<jamespage> servers/charms
<kurt_> right, but just forcing with --to is going to cause problems?
<kurt_> that was the path I was going down
<sarnold> I wouldn't expect --to to work with every possible combination of charms
<sarnold> but _many_ combinations might work fine
<kurt_> sarnold: right.  If someone could share their working blueprint for a successful deployment, that would be awesome
<kurt_> whether its 2, 3 or more nodes - I am just wondering what works
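A sketch of the --to placement under discussion, using service names from this conversation; per jamespage's warning, the conflicting charms (nova-compute, quantum-gateway, nova-cloud-controller, and the ceph/cinder/glance group) should not share a machine:

```shell
juju deploy nova-cloud-controller   # lands on its own machine, say machine 1
juju deploy --to 1 keystone         # co-locate keystone there
juju deploy --to 1 mysql            # and mysql; --to takes the safety guards off,
                                    # so keep conflicting charms on separate machines
```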
<kurt_> juju 1.12 -> YARGH! LOL http://pastebin.ubuntu.com/6033776/
<sarnold> lol
 * kurt_ beating fists, stomping feet, and rolling eyes
<marcoceppi> kurt_: destroying environment removes the bucket :\
<marcoceppi> davecheney: ^ Might want to change that.
<kurt_> marcoceppi: does syncing tools bootstrap too?
<kurt_> that doesn't seem logical
<marcoceppi> So, juju destroy-environment; juju sync-tools; juju bootstrap
<sarnold> marcoceppi: dunno. I could see wanting the billing to end when the environment is destroyed...
<marcoceppi> kurt_: you need to sync-tools prior to bootstrap but after destroy
<kurt_> marcoceppi: ok, the sync-tools is new for 1.12 for me
<kurt_> never had to do that before
<kurt_> marcoceppi: why does it say this then? "error: environment is already bootstrapped"
 * kurt_ confused
<marcoceppi> kurt_: because when you run juju bootstrap it creates a file in the bucket that says "this is bootstraped", even if an instance doesn't launch. It's to prevent two bootstraps from happening if the cloud provider takes a long time to start up the bootstrap node
<marcoceppi> So there's a bug in there, in that if no tools are matched, or if there is a general bootstrap error, it should clean up that file
<kurt_> marcoceppi: but the tools are there after download, is the error being generated prior to the tools downloading?  If I destroy my environment, there is no bootstrapped node, so that error seems misleading
<marcoceppi> kurt_: there's a juju bootstrap --upload-tools option, however people keep telling me that it's more for development versions of Juju and that sync-tools is the right way to go.
<marcoceppi> kurt_: I'm not sure of the nuances of sync-tools, I've not had the pleasure of using it much
<marcoceppi> hazmat: sync-tools and when to use it? ^
<kurt_> marcoceppi: I don't think one has a choice when bootstrapping. you must have the tools
<marcoceppi> kurt_: Yes, most public clouds have the tools already sync'd somewhere, so it's effortless
<marcoceppi> for private clouds and maas, I'm not sure of the proper procedure
<marcoceppi> I'm pretty sure you want to follow: juju destroy-environment; juju sync-tools; juju bootstrap
<kurt_> marcoceppi: I would hope it would function in nearly the same way. :)
<marcoceppi> I think the sync-tools is a pre-step to the process
<kurt_> that's exactly what I did and that error showed up
<marcoceppi> the bootstrap error?
<kurt_> yeah
<marcoceppi> kurt_: that's not right
<marcoceppi> what happens if you destroy-environment then bootstrap again?
<kurt_> and in looking at my maas, I have no bootstrapped node
<kurt_> same thing
<kurt_> let me try once more - I will paste to paste bin for you
<marcoceppi> so there's something else going on
<marcoceppi> kurt_:  use -v for both commands
<kurt_> doing that...
<kurt_> marcoceppi: ah… heh, in reviewing what I cut and pasted above, there was a slight error
<kurt_> marcoceppi: notice: juju bootstrapdestroy-environment; juju sync-tools; juju bootstrap
<marcoceppi> sarnold: I understand the wanting to kill the billing, possibly a juju destroy-environment --preserve-tools would resolve that
<kurt_> should be: juju destroy-environment;  juju sync-tools; juju bootstrap <- My bad for not picking that up
<marcoceppi> kurt_: np, give that a go
<kurt_> working :)
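The sequence that finally worked for kurt_, since destroy-environment removes the control bucket (and the tools in it):

```shell
juju destroy-environment   # tears down instances and removes the control bucket
juju sync-tools            # re-copies the release tools into the (new) bucket
juju bootstrap             # now finds the tools where it expects them
```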
<hazmat> marcoceppi, when you're in a private cloud, and don't have a compile env, ie just want to use juju
<hazmat> marcoceppi, it will copy the latest release from public ec2 bucket into private cloud object storage (for the env/user)
<marcoceppi> hazmat: so it's really for those who don't have go-land installed and don't want to compile the tools themselves? IE majority of users?
<hazmat> marcoceppi, yup majority of users in a private cloud.. public clouds should already have tools installed
<marcoceppi> hazmat: ack, gotchya
<marcoceppi> We'll need to update our docs to let people know about sync-tools and maas/private clouds
<hazmat> there's another critical /high bug for private cloud usage, re allowing invalid ssl certs that causes issues for juju-core
<hazmat> atm, requires updating the client's os level certs to accept the private cloud ssl cert ca as trusted.
<hazmat> bug 1202163 fwiw
<_mup_> Bug #1202163: openstack provider should have config option to ignore invalid certs <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1202163>
 * marcoceppi +1s
<kurt_> hazmat: yes I saw something about a cert error
<kurt_> kurt@maas-cntrl:~$ juju status
<kurt_> error: no CA certificate in environment configuration
<hazmat> that's a little different
<kurt_> rebooting the node, destroying, and restarting seems to have fixed that anyways
<hazmat> juju also internally uses ca certs to secure communications with mongodb and the api server
<hazmat> those certs are kept in the JUJU_HOME (default ~/.juju)
<hazmat> the certs referenced in the bug are the underlying iaas provider ssl certs
<jcastro> marcoceppi: kirkland has a bunch of bugs that your sentry interface autodocumenter should fix
<jcastro> where can he file bugs?
<kurt_> ah ok - but that's mostly for external provider scenarios, right?
<marcoceppi> jcastro: what kind of bugs?
<hazmat> kurt_, it also applies to openstack using ssl
<hazmat> in a private cloud
<marcoceppi> lp:amulet should suffice
<marcoceppi> jcastro: ^
<kurt_> hazmat: ok, thnx
<kirkland> marcoceppi: okay, thanks
<X-warrior`> so I'm creating a charm, and I've deployed it, but some stuff went wrong and I'm fixing it. how can I remove this one to try the new one? or is it possible to update?
<marcoceppi> X-warrior`: you can either use juju upgrade-charm to upgrade the charm in place, you can deploy the charm again under a different alias (IE: `juju deploy --upgrade --repository /path/to/charm/repo local:charm-name now-your-alias`), or you can destroy the environment and re-bootstrap
<kirkland> marcoceppi: thanks!  https://bugs.launchpad.net/amulet/+bug/1217540
<_mup_> Bug #1217540: Every interface defined by a charm should be documented with examples <Amulet:New> <https://launchpad.net/bugs/1217540>
<marcoceppi> kirkland: so I can do the first half of that, the documentation. The examples I might defer to another bug on another project. I'll let you know.
<marcoceppi> well, I can attempt to do the first part*
<kirkland> jcastro: marcoceppi: okay, and now I'd like to file a bug, requesting that the squid-reverseproxy charm add support for https_port -- where do I file that?  against launchpad.net/charms?
<jcastro> yeah
<jcastro> that would be against the charm itself
<web-brandon> If I include a folder in my charm with files.  Where is it placed on the service instance server?
<X-warrior`> marcoceppi, ty, will upgrade-charm update the already deployed instance?
<X-warrior`> or do I need to upgrade the charm and then call deploy with the --upgrade flag
<web-brandon> Or is it only on the juju bootstrap instance
<marcoceppi> X-warrior`: So, upgrade charm will update the charm contents, but that's it. If you want it to run hooks again (like run hooks/install or hooks/config-changed) you'll need to create a new hook in hooks/ called upgrade-charm and put that logic in there, for example: http://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/hooks/upgrade-charm
<marcoceppi> X-warrior`: nope, those are two different commands
<marcoceppi> one is just an upgrade charm, the other will deploy a fresh copy of the charm under a new service name, so it will follow the typical install and deploy as if it was freshly deployed
<X-warrior`> ok
<X-warrior`> and what about removing already deployed services?
<marcoceppi> X-warrior`: you can run juju destroy-service to remove deployed services, but you can't, IIRC, deploy a service again with the same name in an environment
<marcoceppi> So you'll need to use the `juju deploy --upgrade --repository ... local:charm <alias>` syntax
<marcoceppi> X-warrior`: So, in the event of mysql, if you've already run `juju deploy --repository ... local:mysql` and then you want to deploy fixes. You could run `juju upgrade-charm --repository ... mysql` OR if you wanted to start fresh, `juju destroy-service mysql` then run `juju deploy --upgrade --repository ... local:mysql db` that'll deploy mysql under the alias of "db" so you don't have it deployed again as "mysql"
<X-warrior`> yeap, but if I would like to deploy fixes, my charm must have an upgrade-charm
<X-warrior`> hook
<X-warrior`> right?
<marcoceppi> X-warrior`: It's not required, using upgrade-charm without an upgrade-charm hook works, a new version of the charm will be deployed
<marcoceppi> However, that's all juju will do, is unpack the new version. It won't run any other hooks
<X-warrior`> oh I get it
<marcoceppi> X-warrior`: So, you could write an upgrade-charm hook right now, and then run upgrade-charm. Juju will unpack the new version then execute the new hook
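A minimal hooks/upgrade-charm along the lines described here, re-running the charm's own install and config logic after juju unpacks the new version (a sketch; the wordpress charm linked above does something similar):

```shell
#!/bin/sh
# hooks/upgrade-charm -- invoked after juju unpacks the new charm version
set -e
juju-log "upgrade-charm: re-running install and config-changed logic"
hooks/install
hooks/config-changed
```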
<X-warrior`> should the destroy-service command destroy the instance as well?
<marcoceppi> But all hooks are considered "optional" for juju, if it doesn't exist juju just skips it and moves on
<X-warrior`> yeap
<marcoceppi> X-warrior`: no, instances will remain. It's juju's way of protecting data. You can remove them with `juju terminate-machine <machine-number>` if you're done with them
<X-warrior`> ok
<X-warrior`> let me try
<marcoceppi> cool
<X-warrior`> ERROR juju supercommand.go:235 command failed: no machines were destroyed: machine 1 has unit "test/0" assigned
<marcoceppi> X-warrior`: is test/0 in an error state?
<marcoceppi> X-warrior`: it probably says something like life: dying and agent-state is in error?
<X-warrior`> I guess test/0 is an error state since the install hook failed and on log I can see "awaiting error resolution for "install" hook"
<marcoceppi> X-warrior`: so, when a charm is in an error state it stops all other events and leaves them in an events queue
<marcoceppi> In this case, config-changed and a bunch of other hooks are all queued up, with the last event being the service destruction.
<X-warrior`> ok
<marcoceppi> X-warrior`: so you'll need to "resolved" the errors for the charm before it can get to those other events
<marcoceppi> X-warrior`: `juju resolved test/0` should suffice
<marcoceppi> X-warrior`: you may need to run that command multiple times if it hits any other errors
<X-warrior`> ok
<marcoceppi> X-warrior`: Also, for future reference, you can have juju retry a hook, using `juju resolved --retry test/0`
<X-warrior`> what does the 0 stand for? install?
<marcoceppi> X-warrior`: that's the unit number. So each service you deploy gets at least one unit. If you wanted to scale out you could run juju add-unit test and you'd get test/0 and test/1
<marcoceppi> if you notice in juju status they're listed under the "units" heading for the service
<X-warrior`> so if I send the --retry flag it will rerun the failed hook
<X-warrior`> so let's say in my case I had a problem with install hook, so it gets locked
<X-warrior`> if I use upgrade-charm, it will get enqueued
<X-warrior`> and if I use the --retry it will fail again
<marcoceppi> X-warrior`: yes, so you could, for instance, juju ssh test/0 (to ssh in to the node), switch to the root user, go to /var/lib/juju/agents/unit-test-0/charm/, edit hooks/install, fix whatever the problem was, log out and then run juju resolved --retry test/0 to try again
<X-warrior`> and then I need to manually update the hook on the server
<marcoceppi> Right, so at this point you're editing things on the server, it's just one possible workflow for writing charms. It's dangerous, because if you don't copy the fix to your local charm and destroy the service, you lose your changes
<X-warrior`> yeap
<marcoceppi> the alternative is to destroy the service, fix it locally, re deploy, or to destroy-environment, fix it locally, re-bootstrap, then deploy
<marcoceppi> It's up to whatever way works best for you as an author
<marcoceppi> each has their pros and cons
<X-warrior`> yeap
<web-brandon> If I include a folder in my charm with files.  Where is it placed? I cannot find it for the life of me
<X-warrior`> let me try it again
<X-warrior`> marcoceppi, I have to go now, will keep trying later, ty for your help
<X-warrior`> have a good one
<jcastro> FunnyLookinHat: is there a charm for beansbooks?
<marcoceppi> web-brandon: It's placed inside the $CHARM_DIR, typically /var/lib/juju/agents/unit-<service-name>*/charm
<web-brandon> marcoceppi: thank you so much.
<marcoceppi> web-brandon: each hook is executed at the $CHARM_DIR (and that variable is available to hooks)
<marcoceppi> We recommend not hardcoding paths when possible
<web-brandon> just ran through a test to spit out the path in to juju-log.  I am about to perform some 'cp' actions with that as the base dir.
<marcoceppi> web-brandon: if you want to be safe, you can just do $CHARM_DIR as the prefix to the path
<web-brandon> i understand. it can change from server to server
<marcoceppi> web-brandon: not only server to server, but version to version. However, all hooks will _always_ be executed from the $CHARM_DIR
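As a sketch of that advice, an install-hook fragment that copies a bundled `files/` directory out of the charm without hardcoding the charm's location (both the `files/` directory and the target path are hypothetical):

```shell
#!/bin/sh
# Sketch of an install-hook fragment: copy files shipped inside the charm.
# $CHARM_DIR is set by juju for every hook; files/ is a hypothetical
# directory bundled in the charm.
set -e

install_bundled_files() {
    target="$1"
    mkdir -p "$target"
    # Copy the *contents* of files/ into the target directory.
    cp -r "$CHARM_DIR/files/." "$target/"
}

# install_bundled_files /srv/myapp    # hypothetical target path
```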
<web-brandon> marcoceppi: good to know.
<web-brandon> gonna make a irc-bot charm
<_mup_> Bug #1217591 was filed: Had to manually add access to bootstrap node on 17070 and 37017 to the secgroup <juju:New> <https://launchpad.net/bugs/1217591>
#juju 2013-08-28
<freeflying> what is the appropriate name of the bootstrap node?
<davecheney> freeflying: that is what we call it
<freeflying> davecheney, bootstrap node?
<davecheney> yup
<davecheney> sometimes it's called the state server
<davecheney> but that isn't very accurate
<freeflying> davecheney, cool, no matter its python version or go version :0
<freeflying> :)
<davecheney> freeflying: we did keep _some_ things the same :)
<freeflying> davecheney, nice approach I' d say
<freeflying> davecheney, especially makes our lives easier when writing documents :D
<varud> Anybody have experience dealing with the following:        agent-state-info: 'hook failed: "config-changed"'
<varud> It's a chronic problem I've been experiencing after reboots on a local juju installation both on precise and raring
<marcoceppi> varud: yes, it means that a hook was executed but it failed during execution (exited with a status > 0)
<marcoceppi> varud: you can run `juju resolved --retry <unit>` to re-run the hook again. If it continues to error then you can either ignore it or re-deploy the service (ignore it with `juju resolved <unit>` (without --retry))
<varud> thanks, trying that out now
<rick_h> featured for flag bearer!
<rick_h> jcastro: ^
 * rick_h is catching up on the video
<marcoceppi> rick_h: thanks!
<rick_h> marcoceppi: yea, we've got the manual feature to mark as 'featured' and they're shown at the top of the gui. Great place to put flag bearers
<marcoceppi> rick_h: I think featured and flag bearer are slightly different
<marcoceppi> rick_h: in the end we decided not to display flagbearer in the gui
<marcoceppi> a flag bearer may or may not be featured
<jcastro> I will certainly feature any charm we flagbear
<rick_h> marcoceppi: yea, understood
<mattgriffin1> #join #ubuntu-uds-servercloud-2
<X-warrior> 'juju -v add-relation postgresql test-charm'... returns me 'error: no relations found'. My metadata.yaml has requires: database: interface: postgresql. What is wrong?
<mattgriffin1> jcastro: watching video for Hangout for Flag Bearer Charms. sorry i missed it… busy morning. re: percona xtrabackup.. i'm still trying to get internal resources to assist
<marcoceppi> X-warrior: can you pastebin your metadata.yaml file?
<X-warrior> yes I can
<X-warrior> just a sec
<marcoceppi> X-warrior: np
<X-warrior> http://pastebin.com/WUNCxJfx
<jcastro> mattgriffin: no worries, thanks for the follow up!
<marcoceppi> X-warrior: if you review the postgresql's metadata file, http://bazaar.launchpad.net/~charmers/charms/precise/postgresql/trunk/view/head:/metadata.yaml, it provides a pgsql interface. You'll need to make sure your interfaces match. So instead of "postgresql" as the interface, use pgsql
<marcoceppi> X-warrior: charms can provide/require multiple relations over multiple interfaces. Interfaces are the only thing* juju cares about when matching a relation
<marcoceppi> * unless there's an ambiguous interface match, in which case you'll need to provide the corresponding relation endpoint
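In metadata.yaml terms, the matching side looks something like this (a sketch using the `database` relation name from the pastebin; only the `interface:` value has to line up with what postgresql provides):

```yaml
# Consumer charm's metadata.yaml (sketch): the relation *name* is free-form,
# but the *interface* must match the providing charm's: pgsql, not postgresql.
requires:
  database:
    interface: pgsql
```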
<mattgriffin> jcastro: np
<X-warrior> are database and db the same? or should I change it on my charm to db instead of database?
<marcoceppi> X-warrior: that naming doesn't matter so much
<X-warrior> ok
<marcoceppi> X-warrior: you can have it be database, db, one-thousand-suns; it's only used to name hooks within your charm and to potentially remove ambiguity from relations
<X-warrior> and to get postgresql working, I MUST add a directory-path to it? I see the requires: persistent-storage: interface: directory-path
<stub> X-warrior: No, in this case requires is optional :-/ The naming there isn't the best.
<marcoceppi> X-warrior: no, requires and provides are a bit of a misnomer
<marcoceppi> X-warrior: all relations are inherently optional
<X-warrior> it would be nice to have a distinction between required and optional relations... you could have them separated, since some relations are probably mandatory to get the service working (example: mysql to wordpress)
<marcoceppi> X-warrior: the charm should be able to operate at any time without any or all relations added
<marcoceppi> any caveats need to be in the README
<X-warrior> ty
<X-warrior> :D
<X-warrior> How could I use a private git repository on install? I saw the vanilla example, but if the repository is closed there are some problems with keys and stuff.
<marcoceppi> X-warrior: you'd have to have config options to provide authentication methods for that repo
<marcoceppi> X-warrior: so either an SSH key that you can provide, or a user/pass combo for the repo, etc
<X-warrior> marcoceppi: is it possible to pass parameters on deploy? I mean, the install hook will need this user/pass information... but I don't want to hard code it on charm, so I would like to pass it as parameter. Since the service is not running yet, I can't use 'set' I guess
<X-warrior> I can see the deploy --config option, but with that I will need to hard code the user/password on config.yaml file.
<sidnei> X-warrior: this config.yaml you pass to deploy --config is not the config.yaml of the charm itself, but a local .yaml file with a different structure
<X-warrior> yeap
<mrsolo> hi how do i force destroy a machine?
<sidnei> mrsolo: juju terminate-machine, but it must have no services on it anymore
<mrsolo> hm destroy-unit won't do?
<X-warrior> sidnei: how could I access this config vars inside a hook? `juju get service name`?
<X-warrior> iirc destroy-unit is to destroy units created by add-unit
<sidnei> mrsolo: remove-unit removes a specific unit from a service, if that was the last unit in a machine you can then terminate-machine
<sidnei> X-warrior: config-get name
<sidnei> X-warrior: the variable *has* to be defined in the service's config.yaml, think of that as the 'schema' for your config, which defines a default value and the type of the config key
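Concretely, the charm's config.yaml acts as that schema. A sketch using the `git-key` option discussed here (the description text is invented):

```yaml
# Charm's config.yaml (sketch): every option a hook reads with config-get
# must be declared here, with a type and a (possibly empty) default.
options:
  git-key:
    type: string
    default: ""
    description: SSH deploy key for the private repository.
```

Inside a hook the value is then read with `git_key=$(config-get git-key)`.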
<mrsolo> sidnei, thanks..
<mrsolo> just did remove service, does it take a long time for the service to be removed?
<X-warrior> http://pastebin.com/2sqw4FL4
<X-warrior> like this?
<X-warrior> and then git-key=`config-get git-key`  ?
<sidnei> X-warrior: yup
<X-warrior> I will try
<X-warrior> ty
<marcoceppi> mrsolo: does it have an error in juju status?
<marcoceppi> it doesn't take that long
<mrsolo> MACscr, no error.. the machine is listed as dying  so i wonder if it got into the state that database entry remove is not possible
<mrsolo> https://juju.ubuntu.com/docs/troubleshooting.html#die <- i got into this state.. and that link is broken..hah
<mrsolo> http://nopaste.info/6faf012158.html
<marcoceppi> mrsolo: it's in an error state, agent-state-info: '(error: hook failed: "install")'
<mrsolo> yes how do i correct that.. that instance is totally gone
<mrsolo> i forced wipe it from ec2
<marcoceppi> mrsolo: run `juju resolved jenkins/0`
<marcoceppi> mrsolo: Oh, so you took the nuclear option. Not sure if you'll be able to remove it at this point.
<mrsolo> nice
<mrsolo> ya i did the nuclear option :-)
<marcoceppi> mrsolo: it doesn't hurt anyone/thing at this point, just muddies up the status output
<mrsolo> ya something to know.. so  if i want to wipe the entire lab.. do i need to generate environments.yaml?
<marcoceppi> for future reference, if it's in an error and you're trying to destroy, running `juju resolved <unit>` will move the hook execution along. When an error is incurred, juju stops and queues all future events (including the destroy events)
<marcoceppi> mrsolo: if you want to remove the environment, just `juju destroy-environment`
<marcoceppi> that will delete and tear down everything (including bootstrap)
<marcoceppi> but you won't have to change your environments.yaml
<marcoceppi> you can then run juju bootstrap and generate a clean environment to work with again
<mrsolo> okay neat
 * marcoceppi records that we need a troubleshooting guide for the docs, with how to destroy a service in error state
<X-warrior> config.yaml
<X-warrior> ops
<X-warrior> sorry
<X-warrior> Can I create my keys on config.yaml? I mean, I would like to add a git-key on it but when I'm using git-key: | option, it returns me an error
<X-warrior> and if I remove the | and use just a regular string I receive this output "error: unknown option "git-key""
<sidnei> X-warrior: unkown option means the charm's config.yaml doesn't define that key
<sidnei> X-warrior: it needs to know that it's a valid option and that its type is 'string'
<marcoceppi> X-warrior: so you'll have to have git-key in your config.yaml for the charm in the charm directory, but you don't have to give it a default value, you can leave it as an empty "" for the default. Then you can create a separate configuration file (maybe call it deployment.yaml) that you can keep outside of the charm and fill in the git-key configuration option
<marcoceppi> X-warrior: an example, is with the phpmyadmin charm, which requires you to set a password for the user http://bazaar.launchpad.net/~charmers/charms/precise/phpmyadmin/trunk/view/head:/config.yaml however, when I deploy it I have another yaml file in my home folder with the following:  http://paste.ubuntu.com/6037542/ that I call deploy.yaml. So I can do things like `juju deploy --config ~/deploy.yaml phpmyadmin` and it'll get those
<marcoceppi> three values set at deploy time
<marcoceppi> https://juju.ubuntu.com/docs/charms-config.html for additional reference
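The separate deployment-time file then just maps service name to option values (a sketch; the service name and key material are placeholders):

```yaml
# deploy.yaml (kept outside the charm, sketch): values set at deploy time
# with `juju deploy --config ~/deploy.yaml test-charm`.
test-charm:
  git-key: |
    -----BEGIN RSA PRIVATE KEY-----
    (placeholder)
    -----END RSA PRIVATE KEY-----
```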
<adam_g> wedgwood, sidnei any objections to merging darwin into lp:juju-deployer?
<wedgwood> adam_g: not from me. I've been using darwin exclusively for a while. you should ask mthaddon though
<adam_g> wedgwood, thanks
<adam_g> mthaddon, ^^
<wedgwood> adam_g: he's EoD, so an email might be best
<adam_g> k
<sidnei> adam_g: +1
<marcoceppi> adam_g: I'd love to see juju deployer in the stable ppa too, if you're going to be making the merge
 * marcoceppi throws 2¢ around
<jcastro> adam_g: yes please!
<jcastro> deployer in the the stable ppa!
<sidnei> c'mon, what's wrong with saucy? :)
<sidnei> jcastro, marcoceppi: which distros you care about precise only or all in between?
 * sidnei splashes some commas around
<sidnei> i think i *can* upload to the ppa, but i need someone to tell me if i *should* do it or not
<sidnei> maybe it should go into ppa:juju/pkgs with amulet and all that?
<sidnei> in fact, there's a version there, it's just that it's fairly old
<marcoceppi> sidnei: precise raring saucy
<marcoceppi> sidnei: no, it needs to go in to ppa:juju/stable
<marcoceppi> sidnei: per cross team discussion
<sidnei> marcoceppi: so it's agreed on already?
<marcoceppi> sidnei: correct
<sidnei> marcoceppi: ok, i'll trigger a backport from saucy then
<marcoceppi> sidnei: thanks
<sidnei> marcoceppi: no quantal?
<marcoceppi> sidnei: oh yeah, all current release please :)
<sidnei> marcoceppi: all pending build, starting soonish: https://launchpad.net/~juju/+archive/stable/+builds?build_text=&build_state=all
<marcoceppi> sidnei: thanks
<marcoceppi> sidnei: could you stick the python-jujuclient api thing in there too?
<marcoceppi> so deployer works
<sidnei> marcoceppi: i take it that you haven't looked at the url :)
<marcoceppi> sidnei: what, click on things? nah
<sidnei> in other words, yes, done
<marcoceppi> sidnei: <3 thanks
<sidnei> marcoceppi: all done
<sidnei> well, 'Binary packages awaiting publication'
<marcoceppi> sidnei: Thank you sir!
<weblife> can someone help me figure out why I am getting an error with a mongo shell script on the install hook: http://paste.ubuntu.com/6038371/
<weblife> when I ssh into it I can load the mongo shell
<weblife> the service is up and running
<weblife> I know the bash script works locally with the same version and install
<weblife> The error response is below the script also
<sarnold> weblife: sudo sudo sudo ... won't this thing run as root?
<weblife> sarnold: I figured it wouldn't hurt just in case to have it there.  That could be the issue you think?
<sarnold> weblife: probably not the issue, you -do- get a mongo attempt to connect after all..
<weblife> Do I actually need to open that port, you think?
<sarnold> weblife: I think I'd throw a netstat -alp in there before the mongo << EOF ... -- see if the socket is open yet?
<weblife> sarnold: I wouldn't think so due to it being local but I could be wrong.
<sarnold> weblife: 'service' is going to return nearly immediately, the service may not yet be running?
<weblife> sarnold: looks like were on the same page.  Will try.
#juju 2013-08-29
<weblife> sarnold: looks like you were right. It was the socket, it hadn't opened yet
<sarnold> weblife: I wonder if there's a good / lightweight way to wait until a socket is opened..
<sarnold> ideally, 'service' wouldn't return until the mongo server is actually running. but if it daemonizes, the child will return very nearly immediately, and the grandchild may not yet be ready..
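One lightweight way to wait for the socket is to poll it before running the mongo shell (a sketch; the timeout is arbitrary, and the `/dev/tcp` pseudo-device is a bash-ism, not POSIX sh):

```shell
#!/bin/bash
# Sketch: poll until a TCP port accepts connections, or give up.
# Relies on bash's /dev/tcp redirection; not portable to plain sh.
wait_for_port() {
    host="$1"; port="$2"; timeout="${3:-30}"
    for _ in $(seq "$timeout"); do
        # The subshell opens (and on exit closes) a probe connection.
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# service mongodb start
# wait_for_port localhost 27017 30 || exit 1   # then run: mongo <<EOF ...
```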
<weblife> I just separated it into its own function and threw it a little later in the install hook. I am sure there is something else I could do but I am good with throwing the function later
<weblife> This charm inits a mongodb instance of its own if there is no waiting instance.  Now i can focus on exporting the database if a mongodb instance is added.
<jrwren> juju destroy-environment is my favorite command :)  it reminds me of kill -9 1
<sarnold> hehe :)
<davecheney> gary_poster: thank you thank you thank you for juju-gui-74
<jrwren> trying this: https://juju.ubuntu.com/get-started/local/
<jrwren> juju is trying to connect to local mongo on 37017 but mongo is listening on 27017 where is the right place to change it?
<thumper> jrwren: which version are you using? 1.12? or 1.13.2?
<thumper> jrwren: also, do you have root-dir specified in environments.yaml for the local provider?  if you do, comment it out
<thumper> known bug fixed in the dev version
<thumper> sudo juju destroy-environment
<thumper> and try again
<jrwren> 1.12
<jrwren> no root-dir set
<jrwren> i just changed mongodb.conf to listen on 37017 now bootstrap just hangs at opening state mongo.
<jrwren> i don't get connection failed messages like I did before though :(
<jrwren> oh... juju-db-$USER-local service.
<jrwren> apparently I was impatient waiting for local mongo to start
<thumper> jrwren: I found on my SSD, mongodb starts up in about 2 seconds
<thumper> but on someone else's normal laptop, it took up to 30s
<thumper> I was killing the bootstrap thinking it was a bug and hung
<thumper> but no...
<sarnold> 30s! ouch.
<hazmat> thumper, this is with no prealloc option?
<thumper> hazmat: NFI
<davecheney> thumper: hazmat we found the same thing on azure
<davecheney> hazmat: we arelady use no-prealloc
<davecheney> maybe we're doing it wrong
 * davecheney raises an issue
 * davecheney lags
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1218176
<_mup_> Bug #1218176: cmd/jujud: bootstrap may not properly configured mongodb to avoid preallocation <juju-core:Triaged> <https://launchpad.net/bugs/1218176>
<kvt> davecheney azure has pretty slow disks.. possibly we also need --smallfiles
<davecheney> kvt: the idea is it shouldn't allocate anything
<davecheney> just move the file pointer to the end of the file, write a zero, and close the file
<davecheney> i'd be quite confident that this is (yet another) mongo bug
<kvt> davecheney possibly but i think it does need both
<kvt> davecheney prealloc is against db file stores (if the fs supports it, it will do sparse allocation)
<kvt> the smallfiles also adjusts the journal file size
<davecheney> kvt: it's your bug if you want it
<davecheney> i was going to flick it to my friends at 10gen and ask for advice
 * kvt makes some notes
<kvt> hmm.. from the bug it looks like we're already passing smallfiles
<davecheney> yes
<davecheney> i wonder if they are mutually exclusive
<melmoth__> anyone understand what the hacluster charm's corosync_pcm_ver is for?
<melmoth__> the install hook does not seem to change the way it installs things, just the way it starts them based on the value of this conf
<jaywink> hi, just installed stable juju-core, signed up to aws and bootstrapped. It didn't give any errors and I can see a running instance with AWS console. juju status however says nothing there. But new bootstrap attempt says there is an instance, see: http://pastebin.com/c8S7sfTD
<jaywink> any ideas?
<noodles775> jaywink: That sounds like the bootstrap instance hasn't gotten to the correct state. Does the AWS console tell you that everything is fine with the instance?
<noodles775> jaywink: either way, a more consistent error message there would be helpful. I'll create a bug for it, unless you're already doing so?
<jaywink> yeah afaik, first time with juju. same happened with local, bootstrap went fine, then said no instances.. unfortunately the verbose flag didn't bring any more errors than was in the pastebin
<jaywink> I ended up already resetting everything and upgrading to devel ppa and now everything works, locally at least
<jaywink> so not quite sure what to say in a bug :( should have initially bootstrapped the amazon instance with -v I guess
<noodles775> jaywink: heh, sorry - I meant that juju providing a more consistent error message would be helpful :)
<jaywink> noodles775, there is this, after I upgraded, before I did an rm -rf .juju and terminated the AWS instance, I ran juju stat: http://pastebin.com/eYWK9SqR .. should I file that as a bug?
<noodles775> jaywink: I wouldn't think so - it looks like there's confusion over what environment (ec2) you're bootstrapped with. The earlier issue you had in stable is more worrying to me. I'll install stable and see if I can reproduce.
<mgz> jaywink, noodles775: if bootstrap gives you an instance, but juju doesn't work, it's worth checking the console log/sshing in and looking at /var/log to find the underlying issue
<jaywink> mgz, sorry, terminated the instance already - was a bit hasty I know .. :(
<noodles775> mgz: here's what jaywink pasted before you joined: http://pastebin.com/c8S7sfTD
<noodles775> jaywink: actually, your last paste confuses me a bit. It looks like you'd first had the old (python version) of juju installed (0.7), which isn't what you should have had from stable?
<noodles775> jaywink: Now you should have the new version (juju-core), but I think you'll find that removing the dev PPA and just installing 'juju-core' with from stable should work (https://juju.ubuntu.com/docs/getting-started.html )
<mgz> thanks noodles775
<jaywink> hmm good point, it's possible I installed some version long time ago without trying it - is juju-core a new package after that?
<noodles775> jaywink: Yeah, you need juju-core instead of juju (confusing I know :/ ).
<jaywink> ok sorry guys for taking your time, should have cleaned up :) I wonder though when I activated juju/stable repo juju wasn't updated, but when I hit devel ppa it was updated
<jaywink> could have been me though, I followed the tutorial and ran update && install as described there .. I guess a dist-upgrade should have been done too
<noodles775> jaywink: which tutorial? It may need updating (if it asked you to do 'apt-get install juju' instead of 'apt-get install juju-core'.
<noodles775> jaywink: https://juju.ubuntu.com/docs/getting-started.html has the right info, afaict.
<jaywink> yes it was juju-core, but since I had an old juju installed I guess I should have done upgrade too. Unless installing juju-core should have automatically done that
<mthaddon> if I have two juju-core environments on the same provider, would/should they use the same public-bucket-url ?
<mthaddon> specifically the AUTH_* part
<mgz> mthaddon: potentially
<mthaddon> mgz: what's the implications of sharing that amongst juju envs?
<mgz> one of the main reasons for providing that config was so that someone else could upload the tools, then you could use those without remirroring from aws yourself
<mgz> we're trying to move towards everyone using simplestreams, and cloud providers putting the simplestreams link in their identity service,
<mthaddon> mgz: ah cool - so yeah, it sounds like in this case I do want to do that - as long as the control-bucket and admin-secret is unique per env things are okay, right?
<mgz> yup.
<mthaddon> thanks
<mthaddon> mgz: hmm, I get https://pastebin.canonical.com/96563/
<jaywink> sigh ... really stuck with bootstrapping successfully to local .. everything goes fine but services pending forever, over an hour. tried many times. juju debug-log just says "ssh: connect to host 10.0.3.1 port 22: Connection refused" :P
<mthaddon> mgz: adding --upload-tools seems to do the trick (doesn't error anyway, will check status shortly)
<mgz> mthaddon: geh, the list is breaking things
<mgz> so, you don't want --upload-tools
<mgz> you want `juju sync-tools` probably
<mthaddon> mgz: can I undo --upload-tools?
<mgz> unless you're really trying to use a locally built trunk version of juju for testing rather than a stable release
<mgz> mthaddon: you can just nuke the container
<mthaddon> mgz: we're not using a stable release because we need fixes from trunk - we're using 1.13.2-1~1670 (packaged)
<mgz> mthaddon: the other option here is to generate a simplestreams file in one container which references the tools, and point at that. I'm not sure we have good instructions on how to do this yet though.
<mgz> mthaddon: you may want `juju sync-tools --source DIR` then
<mthaddon> what's DIR?
<mgz> the directory where you have the 1.13.2-1~1670 binaries
<mthaddon> and will that overwrite what's there now, or do I need to nuke the container first?
<mgz> nuke it to be safe, `swift delete CONTAINER` should do it
<mthaddon> what's the problem with having done --upload-tools? just want to make sure I understand what's going on here
<mthaddon> (things seem to be working as expected in terms of the bootstrap node now responding to juju status okay)
<mgz> --upload-tools is a development hack
<mgz> what it does, mostly, is build the copy of juju in your local directory, and upload that
<mgz> but, confusingly, it also currently has a hack that searches path for a 'jujud' binary
<mthaddon> in my case /usr/lib/juju-1.13.2/bin/jujud
<mgz> which in this particular case, might do the same as using sync-tools there, but generally just breaks things in really confusing ways if you get in the habit of using it
<mthaddon> breaks what kind of things? we've been doing --upload-tools for our envs so far and have production services running in them, so would like to know if we have a booby trap waiting for us
<mgz> well, the most likely breakage is we just remove that bit of code, so then it starts complaining about not having the juju-core source and go compiler
<mgz> the breakage devs normally hit is having multiple versions of juju around, and getting the wrong one uploaded
<mthaddon> ok, so the --upload-tools command itself might break, but if it succeeds the env is okay
<mgz> only if you're on a clean machine with only one jujud around
<mgz> which should mostly be the case for you guys, but is always breaking us :)
<mthaddon> we deploy from one server which will only ever have one version of juju-core installed (as far as I can tell)
<mthaddon> thanks for the help
<mgz> `juju sync-tools --source `which juju`` is the sane-ish equivalent, if you can start using that instead
<mthaddon> mgz: error: unable to select source: specified source path is not a directory: /usr/lib/juju-1.13.2/bin/juju - dirname $(which juju) then says "no tools available"
<mthaddon> ls /usr/lib/juju-1.13.2/bin
<mthaddon> juju  jujud  juju-metadata
<mgz> er... yes, you're better at shell than I :)
<mthaddon> mgz: right, but now I'm getting "no tools available"
<mgz> I'll investigate what's up and get back to you
<mthaddon> cool, thanks
<jaywink> anyone know why juju debug-log (and juju ssh 0) give "ssh: connect to host 10.0.3.1 port 22: Connection refused" after a successful local bootstrap? already tried shutting down ufw ... any tips would be greatly welcome .. running juju-core stable on raring
<X-warrior> What is wrong with my configs file? http://pastebin.com/XeSKXzjb when I try to use, I get "command failed: unknown option "git-key"
<X-warrior> Oh, I just found that the config.yaml should have options: at the beginning of it. So I updated it to http://pastebin.com/1jFpW26H but still doesn't work :S
<X-warrior> what user does run the hooks?
<marcoceppi> X-warrior: root
<jasondotstar> noob here. I'm using ec2. is there a guide to ensuring that I'm using the free tier only?
<jasondotstar> afraid i might see some hidden charges
<jasondotstar> :-/
<marcoceppi> jasondotstar: by default you get small instances
<jasondotstar> marcoceppi as i understand, t1.micro is the free tier
<marcoceppi> if you're using Ubuntu as you desktop though you can try juju using the local provider
<marcoceppi> jasondotstar: you get 700 hours a month of free t1.micro on aws. however they are not recommended because they are severely underpowered. there is a way to use them though
<jasondotstar> marcoceppi ok. so if I simply want to develop charms and use juju to test them, i can use the local provider, which i assume means the charms deploy to my own nodes instead of a specific cloud provider?
<marcoceppi> jasondotstar: you can run `juju bootstrap --constraints "cpu-power=0 cpu-cores=0 mem=128"` to get micros with juju. however for developing you can use local provider which turns your machine into a cloud using LXC
<marcoceppi> and local provider is free :)
<jasondotstar> marcoceppi right. found this: http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<jasondotstar> marcoceppi seems like a decent place to start.
<X-warrior> I'm creating my hooks and there is some code that will be the same in more than one file, I would like to create a third script where I add this code. Should I add it to a subfolder inside hooks? Is there any specific name? Or can I just pick any one?
<marcoceppi> X-warrior: there's no real convention yet, we recommend putting it in a lib directory in the root of the charm (leaving just hooks to be in the hooks directory)
<marcoceppi> but having common code shared via a common location (lib, etc) is a great way and one we consider a best practice for charms
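A minimal sketch of that layout: `lib/common.sh` holds the shared helpers (the file and function names here are hypothetical), and every hook starts by sourcing it relative to `$CHARM_DIR`:

```shell
#!/bin/sh
# lib/common.sh (sketch) -- helpers shared by all hooks.
# Each hook would begin with:  . "$CHARM_DIR/lib/common.sh"

# Log through juju-log when running under juju; fall back to stderr so
# the helpers stay usable (and testable) outside a hook context.
charm_log() {
    if command -v juju-log >/dev/null 2>&1; then
        juju-log "$1"
    else
        echo "$1" >&2
    fi
}
```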
<kurt_> Hi All - anyone know why I would get "connection refused" from juju status, but can ssh to juju node direct just fine?
<kurt_> http://pastebin.ubuntu.com/6040584/
<kurt_> this is juju 1.12
<mhall119> smoser: arosales: can you guys start filling in your track summary highlights for today's closing session: http://pad.ubuntu.com/uds-1308-track-summaries
<arosales> mhall119, will do. what time does it need to be complete?
<smoser> arosales, its for http://summit.ubuntu.com/uds-1308/meeting/21888/track-summaries/
<smoser> (19:00 UTC)
<smoser> jamespage, if you want to help, that'd be good too.
<mhall119> arosales: the summary session is at 1900
 * smoser didn't think about this.
<arosales> mhall119, is the plan to have smoser and I present our respective tracks
<mhall119> also we'll need one of you to be on the hangout
<jamespage> smoser, I'll try to
<X-warrior`> marcoceppi: nice. so if I would like to execute it from hook, could I execute 'sh ../lib/file' or does the hook execute from another 'context'?
<arosales> smoser, jamespage: I'll coordinate with you on who does the presenting.
<arosales> mhall119, thanks for the link
<marcoceppi> X-warrior`: hooks are executed from $CHARM_DIR which is the root of the directory
<marcoceppi> so you'd just `sh lib/file`
<X-warrior`> oh sweet
<X-warrior`> :D
<arosales> kirkland, hazmat, gary_poster, would love to hear from you guys and really anyone on http://summit.ubuntu.com/uds-1308/meeting/21899/servercloud-s-juju-new-user-ux/
<arosales> If anyone has any feedback on getting started with Juju, from the web site to charm authoring please join the uds session: http://summit.ubuntu.com/uds-1308/meeting/21899/servercloud-s-juju-new-user-ux/
<arosales> we'll post the hangout url in #ubuntu-uds-servercloud-2 channel, hope to see some folks there
<gary_poster> arosales, how do I join hangout?
<arosales> gary_poster, be in  #ubuntu-uds-servercloud-2 and i'll post the hangout url shortly
<gary_poster> cool arosales thx, there now
<arosales> gary_poster, thank you
<X-warrior`> Is it normal for bootstrap logging to go up to 3.3gb of logs in less than a week?
<sarnold> X-warrior`: I do recall seeing something that said logs were never rotated and running out of disk space was a real problem. I don't know if that's been addressed yet.
<X-warrior`> sarnold: uhmm
<weblife> Is Juan Negron in here?
<jamespage> negronjl, ^^
<weblife> negronjl: I am not that good at reading python.  Have you made a option for mongorestore in the mongodb charm?
<weblife> jamespage:  thanks I just looked him up on launchpad
<mhall119> jamespage: smoser: arosales: which of you is going to give the cloud track summary today?
<weblife> negronjl: I am trying to see if I could send a mongodump on a joined relationship to the mongodb instance. My charm spins up its own mongodb service until one has joined. I also want to send mongodump files over periodically to my charm in case the mongodb instance crashes. All of this is to minimize instances for the sake of cash savings.
<arosales> mhall119, I am
<mhall119> thanks arosales
<arosales> mhall119, sure, np.
 * smoser goes to sendbeertoarosales.com
<arosales> smoser, +1 and I owe you a few too
<weblife> lol
<kurt_> That was a great session "Amazing First 30 min Juju Experience" - good discussion between you guys
<marcoceppi> weblife: he's in Japan ATM, might not reply right away
<weblife> marcoceppi> Thanks I will email him
<adam_g> how do i sync in juju-core tools into a firewalled MAAS cluster?
<jcastro> 2 new bounties on AU for those who can answer these questions
<jcastro> http://askubuntu.com/questions/335720/agent-state-info-hook-failed-config-changed-deploy-wordpress-using-juju
<jcastro> http://askubuntu.com/questions/337075/how-can-i-expose-icmp-ports-in-a-hook
<kurt_> Any idea why I cannot "juju ssh 0" in 1.12?  "juju status" also gives "connection refused" Time is in sync, but in UTC on charm node.  This worked previously.  verbose output: http://pastebin.ubuntu.com/6041608/
<kurt_> And yes, I should have the right mongodb installed
<weblife> kurt_: maybe you are missing '\': "juju ssh \0"
<weblife> thats what i have to do when I ssh into bootstrap
<weblife> or perhaps your bootstrap failed
<marcoceppi> weblife: if you can't get status to give you back information, juju ssh won't work (it has to query juju status to get the address for the 0 machine)
<marcoceppi> err kurt_ ^
<kurt_> marcoceppi: I was just putting this all in to ask ubuntu
<kurt_> do you have an idea already?
<marcoceppi> kurt_: nope, stick it in ask ubuntu and I can take a look at it later
<weblife> marcoceppi:  yeah bad help idea.  Read the last part of his message after I responded
<kurt_> Ok
<marcoceppi> kurt_: but it sounds like either you can't reach that ip address (can you ping it?), or mongodb didn't start up
<marcoceppi> kurt_: you can try to just `ssh ubuntu@172.16.118.12`
<marcoceppi> some more stuff to try to ask ubuntu
<kurt_> marcoceppi: that works fine
<kurt_> marcoceppi: are you aware of any bugs around mongodb not starting correctly on node reboot?
<marcoceppi> kurt_: can you check the two upstart jobs, they start with juju-, are running (`initctl list | grep juju`)
<marcoceppi> kurt_: it's possible
<kurt_> is this what you are referring to? juju-db stop/waiting
<marcoceppi> kurt_: yeah, start that
<marcoceppi> there should be another job, I think, that starts with juju
 * marcoceppi checks
<kurt_> uh oh
<kurt_> marcoceppi: Thu Aug 29 20:52:56 [initandlisten] ERROR: Insufficient free space for journal files
<marcoceppi> kurt_: df -h should help you out :)
<kurt_> yeah, just did that
<kurt_> how much space does the node need?? LOL
<marcoceppi> kurt_: out of disk space?
<kurt_> si
<marcoceppi> kurt_: this might be a problem with logs not rotating
<kurt_> '/dev/sda1        18G   17G     0 100% /'
<marcoceppi> if you track down the large files, and they're juju related, please open a bug
<marcoceppi> we had an issue like this back in juju 0.7 where it just would eat disk space via logs not being rotated
<marcoceppi> I figured this was fixed, but it might not be yet
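Tracking down the large files marcoceppi mentions can be done with a short pipeline; a sketch (any directory works in place of /var/log):

```shell
# list the ten largest files/directories under /var/log to find runaway logs
# (run with sudo on a real node to include files readable only by root)
du -ah /var/log 2>/dev/null | sort -rh | head -n 10
```

The `-h` flags keep `du` and `sort` in agreement on human-readable sizes, so the biggest offenders sort to the top.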
<marcoceppi> kurt_: to answer your question, yes nodes are designed to survive reboot
<kurt_> marcoceppi: thanks.
<kurt_> -rw-r-----  1 syslog adm  13047238656 Aug 29 20:55 all-machines.log
<kurt_> :D
<kurt_> that's 12 gigs of space
<kurt_> wtf
<kurt_> ROFL
<jcastro> stay classy juju logger!
<kurt_> sorry sir
<jcastro> this feels like a bug
<jcastro> it should never do that crap
<kurt_> I will log it
<kurt_> file a bug on it
<kurt_> #2 today for me
<kurt_> I'm on a roll
<jcastro> \o/
<jcastro> keep em coming
<kurt_> Dave is going to get to know my name
<jcastro> mail me your tshirt size and address, it's about time we send you something! jorge@ubuntu.com
<weblife> lol. Stay classy
<kurt_> Ok, Bug #1218616 filed for your viewing pleasure.
<_mup_> Bug #1218616: all-machines.log is oversized on juju node <juju-core> <juju-core:New> <https://launchpad.net/bugs/1218616>
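Until the bug is fixed, one stopgap is a logrotate stanza for the runaway log. This is a hypothetical fragment, not anything juju ships; the path is assumed from kurt_'s `ls` output (the file is owned by syslog/adm, so it likely lives under /var/log):

```
# /etc/logrotate.d/juju-all-machines  (hypothetical stopgap, path assumed)
/var/log/juju/all-machines.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}
```

`copytruncate` matters here: the writer keeps its file handle open, so rotating by rename alone would not free the space.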
<weblife> jcastro: I would like to see a charms school that focuses more on relationships. Just FYI.
<weblife> jcastro: Of course I just started watching the Best Practices video.  It sounds like this might be covered.
<kurt_> I was just thinking of what web life was thinking about in a different way
<kurt_> juju-gui should have a way to show how its deployed
<kurt_> with juju-1.x - since I can deploy charms/services on same node, the gui should show me in logical form all services running on a node and how they are related
<kurt_> or at least give me the ability to arbitrarily draw a circle or box around particular charms
<kurt_> Is the solution to send logging to /dev/null like the user suggests in https://bugs.launchpad.net/juju-core/+bug/1218199 ? That appears to spike the cpu to 100%
<_mup_> Bug #1218199: [MAAS] Deploy a charm into machine 0 = log loop <debug-log> <maas> <merge> <juju-core:New> <https://launchpad.net/bugs/1218199>
<weblife> if I am running "mongodb-10gen" (the mongodb recommended deb package), will this be a problem for juju-local?
<thumper> maybe
<thumper> juju needs an ssl enabled mongodb
<thumper> it will fail to connect if this isn't enabled
<jcastro> weblife: for sure we can do that.
#juju 2013-08-30
<davecheney> weblife: pretty sure you need the mongo from raring
<davecheney> or one from our backports ppa
<davecheney> to the best of my knowledge 10gen do not ship an ssl enabled version
<weblife> davecheney: Correct!  I just spent the last half hour figuring this out, wasn't documented too well.  But apparently no packages are, except for the enterprise version (bye bye mongodb 2.6).  Although, I do think I see how I could, but that will be for a later date.
<davecheney> weblife: mongodb 2.4 + ssl is available in ppa:juju/stable if you are running < raring
<weblife> davecheney: I just removed the 10gen package. The juju-local package pulls from there(I believe)
<davecheney> weblife: i cannot confirm that
<davecheney> it would only do so if there was a conflict with the name of the package
<weblife> arg
<davecheney> weblife: i have no experience with the 10gen provided package, so much so that I didn't even know it existed
<weblife> getting error on 'sudo juju bootstrap': error: net: no such interface.
<davecheney> weblife: have you renamed lxcbr0 to something else ?
<weblife> davecheney:  :)  I find it funny mongodb recommends that package over the apt-get package even though its not the current stable
<davecheney> weblife: Sod's law says that 90% of documentation is out of date
<davecheney> weblife: 10gen might change their mind when 14.04 is out
<weblife> davecheney: no... Think it was because of my initial install conflicts with the 10gen package
<davecheney> 2.4 hasn't been backported to 12.04
<weblife> that makes sense then why they do
<weblife> darn.  tried un-installing all the packages and re-installing. but still same error.  Need a break from this madness.  Maybe I will reboot and everything will work like magic. :) hahahaha (like this is windows)
<davecheney> if the error is about interfaces
<davecheney> it is unrelated to mongodb
<davecheney> 19:46 < weblife> getting error on 'sudo juju bootstrap': error: net: no such interface.
<weblife> maybe because I'm on Saucy
<davecheney> 19:46 < davecheney> weblife: have you renamed lxcbr0 to something else ?
<weblife> davecheney: No never touched it.  Just let juju-local get it going
<davecheney> ifconfig -a
<davecheney> do you have lxcbr0 ?
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1206959
<_mup_> Bug #1206959: error from no installed lxc is obscure <juju-core:Fix Committed by axwalk> <https://launchpad.net/bugs/1206959>
<weblife> Oh yeah thats strange.
<weblife> I run lxc and it says it's not installed.  Then I try to install it and it says it's already the newest version
<davecheney> lxc may not be a command
<davecheney> dpkg -L lxc
<weblife> juju-local installed it and started it: lxc start/running - lxc-net stop/pre-start, process 17104.  But it isn't showing in ifconfig
<davecheney> ifconfig -a
<sarnold> lxc-net should be started for the bridge to be created, right?
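The lxcbr0 check davecheney asks for can be done without parsing ifconfig output, since Linux exposes one directory per interface under /sys/class/net. A sketch (lxcbr0 is the default bridge name the local provider expects; the lxc-net hint assumes the stock Ubuntu upstart job):

```shell
# report whether a network interface exists by looking in /sys/class/net
check_iface() {
    if [ -d "/sys/class/net/$1" ]; then
        echo "$1: present"
    else
        echo "$1: missing"
        return 1
    fi
}

check_iface lo                  # loopback, always present on Linux
check_iface lxcbr0 || echo "hint: try 'sudo service lxc-net restart'"
```

This mirrors what `ifconfig -a` shows, but is easy to script a pass/fail check around.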
<weblife> yeah its acting funky on me.  I am going to switch kernels real quick.  I am using a special version to fix my network interface because I am on a surface pro.
<davecheney> weblife: mate, i think you're doing things which are too advanced for juju. We know there are some limitations on the environment the local provider expects to run in, and I think you're probably going to trip over those.
<weblife> Hey hey.  It was my kernel
<sarnold> how much memory does that surface pro have? how much memory will mongo take?
<weblife> That sucks; it makes connecting to the web a pain without it.
<kurt_> davecheney: is 1.14 making its way to precise?
<sarnold> but you can connect to irc fine? :)
<davecheney> kurt_: it will be in ppa:juju/stable
<weblife> I have the 128 GB only 30GB reserved for my ubuntu but 60 GB SD for most storage
<davecheney> getting things into the precise main archives is a very arduous process
<sarnold> juju probably more than most, converting from python to go doesn't easily fit in the SRU guidelines :) hehe
<kurt_> davecheney: sorry, I'm not fully aware of the release process yet.
<davecheney> kurt_: me too
<kurt_> I really only care it makes its way to precise
<davecheney> the most expedient way is ppa:juju/stable for well, stable
<davecheney> and ppa:juju/devel for devel
<davecheney> kurt_: i recommend you address your precise question to the mailing list
<davecheney> i honestly dont have the answers for you on that one
<weblife> sarnold: yeah once I connect its no problem for the most part.  Just have to deal with disconnects / reconnects too often.  Which does actually affect irc a little, especially if someone sends me a msg in that small period
<weblife> and multiple net interfaces that I need to shut down
<sarnold> weblife: aha :)
<kurt_> davecheney: ok.  I'm trying hard to wrangle all the bits I need to successfully deploy openstack with gui with juju, etc.  It's been a major challenge
<davecheney> weblife: also, https://bugs.launchpad.net/juju-core/+bug/1216775
<_mup_> Bug #1216775: cmd/juju: local provider doesn't give a clear explanation when lxc is not configured correctly <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1216775>
<davecheney> kurt_: i'm sorry to hear that
<davecheney> obviously we don't intend it to be that hard
<davecheney> i'd like to hear more about the background of your problem
<davecheney> especially to find out where our documentation led you astray
<kurt_> davecheney: it is what it is.  It's like a snapshot in time with the half-life of strontium 238
<weblife> davecheney: when I figure out the exact reason I will submit a bug report.  Just glad its working now.
<weblife> this is awesome though.  No more long waits on aws to setup
<davecheney> kurt_: we're going to cut a new stable 1.14 soon (ie, less than a week, hopefully less than 48 hours)
<davecheney> which we recommend everyone upgrade to
<davecheney> it'll be 1.14.0
<kurt_> davecheney: ok. hopefully that's on precise. :D
<davecheney> kurt_: can you please say that again
<kurt_> davecheney: I'm looking forward to getting the debug-log back.
<davecheney> kurt_: me too
<kurt_> davecheney: I opened another bug too
<kurt_> actually 3 today
<davecheney> kurt_: yup
<davecheney> thank you
<davecheney> we can't fix problems we don't know about
<davecheney> i appreciate you eating our dogfood
<zradmin> hey guys, I am trying to deploy an openstack HA with the juju guide posted on the ubuntu website. I am up to deploying the mysql instances, but the virtual IP never comes online and when i run a juju debug-log i get this line about corosync not configuring:
<zradmin> 2013-08-29 15:20:02,364 unit:mysql-hacluster/3: unit.hook.api INFO: Unable to configure corosync right now, bailing
<zradmin> does anyone have any ideas?
<thumper> hmm, which guide is saying to use juju 0.7?
<zradmin> the ubuntu HA
<zradmin> just a sec, I'll grab the url
<zradmin> https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<zradmin> or have the charms moved on to the go implementation of juju now?
<kurt_> I think it comes through from jamespage's opus
<kurt_> zradmin: you can use any IP address.  Try deploying openstack in a non-HA scenario
<kurt_> the vip will be ignored
<kurt_> zradmin: try using juju 1.12
<zradmin> ok
<zradmin> thanks
<kurt_> zradmin: np
<adam_g> hazmat, fyi hitting some issues with darwin + juju-core + MAAS today.  gonna look closer at it tomorrow, but i periodically fail to reset the environment. in case you have any ideas in the meantime: http://paste.ubuntu.com/6042664/
<hazmat> adam_g, hmm.. that looks like a maas or api binding issue wrt to oauth token
<hazmat> what's even more strange is that its coming back on the cli vs in the provisioner
<adam_g> wonder if its the same issue as https://bugs.launchpad.net/juju-core/+bug/1215670
<_mup_> Bug #1215670: After exhausing OpenStack quotas, juju status output polluted <juju-core:Triaged> <https://launchpad.net/bugs/1215670>
<hazmat> adam_g, not exactly; that's an error recorded asynchronously into state by the provisioning agent. deploy initiates the async command and records to state; provider output recorded to state shouldn't be on the cli till the provisioning agent has a chance to process the state change.
<hazmat> hmm.. it could be some sort of provider credential validation early that's failing.
<hazmat> adam_g, how reproducible is it?
<hazmat> adam_g, there's some sensitivity with the token and clock skew between the client and the server/maas afaicr
<adam_g> hazmat, seems to be 1/2 of the deployments i kick off. FWIW, MAAS has always been a bit unreliable
<adam_g> its just that the previous deployer / darwin's py env is more tolerant
<hazmat> adam_g, fair enough, but really we should be filing bugs on these rather than papering them over
<hazmat> but yeah.. we can do something similar here if necessary, although ideally via a flag, cause i'd like to get them fixed if they occur.
<adam_g> hazmat, yeah, agreed.
<raywang> jamespage, ping
<jamespage> raywang, hello
<raywang> hi jamespage, I saw you wrote the ceph charms, just want to make sure: does the OSD daemon go on the system disk, e.g. /dev/sda?
<jamespage> raywang, nope
<jamespage> the ceph-mon daemons will run on the system disk
<jamespage> currently the OSD's required a dedicated disk each
<jamespage> raywang, I've just proposed some changes to allow you to run OSD's on the system disk - but they are still pending review
<jamespage> raywang, https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
<jamespage> and
<jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
<jamespage> you can provide configuration such as "osd-devices: /mnt/osd-local" for example
<jamespage> and the charm will use that for the OSD filesystem.
<jamespage> raywang, specifying osd-devices: /dev/sda will just not do anything - the charm will recognise that this disk is in use and ignore it
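The whitelist behaviour jamespage describes means the config can safely list more devices than a given node actually has. A hypothetical config fragment (the `osd-devices` key is from the charm as discussed above; the device names are made up):

```
ceph:
  # whitelist: devices that are missing or already in use are silently ignored,
  # so this same config works across nodes with different disk layouts
  osd-devices: /dev/sdb /dev/sdc
```

The flip side, as raywang found, is that a node with none of the listed devices comes up as a MON-only unit with no storage and no hook failure.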
<raywang> jamespage, weird, i only have one disk (/dev/sda) and i set osd-devices: /dev/sdb; it fails, but it still shows as started successfully
<raywang> unit.hook.api@INFO: Path /dev/sdb does not exist - bailing
<jamespage> yeah - osd-devices is a whitelist
<raywang> unitworkflowstate: transition configure (started -> started
<jamespage> if any of the devices is already in use it just ignores it - same for if the device is not found
<jamespage> raywang, so you probably have a ceph cluster right now with no actual storage - just ceph MON's :-)
<raywang> jamespage, it won't report error without actual storage?
<jamespage> raywang, no
<jamespage> raywang, running the ceph charm without storage does actually make sense if you are using dedicated servers for MON's - but only for large clusters!
<raywang> jamespage, is there any problem with no actual storage ?
<jamespage> raywang, if you login to one of the nodes and do "sudo ceph -s" it should tell you if the cluster is running
<raywang> ok
<jamespage> raywang, well it won't store anything until you have added some nodes with storage
<jamespage> raywang, the ceph-osd charm can do that as well
<raywang> 2013-08-30 06:15:13.030434 7f16ac7d1780 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
<raywang> 2013-08-30 06:15:13.030476 7f16ac7d1780 -1 ceph_tool_common_init failed.
<jamespage> raywang, I suspect that you are not bootstrapped - how many service units are you running?
<raywang> jamespage, for mon x 3,  osd x 3
<jamespage> hmm
<jamespage> raywang, can you check for running ceph-mon daemons as well please
<raywang> jamespage, well, I run "sudo service ceph-all start", and only two processes are running
<raywang> 3793 ?        Ssl    0:08 /usr/bin/python -m juju.agents.unit --nodaemon --logfile /var/lib/juju/units/ceph-1/charm.log --session-file /var/run/juju/unit-ceph-1-agent.zksession
<raywang>  7064 ?        S      0:00 /usr/bin/python /var/lib/juju/units/ceph-1/charm/hooks/mon-relation-joined
<jamespage> raywang, which ones
<jamespage> ah - right - so I see mon-relation-joined still running
<jamespage> thats the peer relation between ceph units that performs the bootstrap
<raywang> jamespage, but from juju status, everything is started
<jamespage> any indication as to what its doing
<jamespage> ?
<raywang> jamespage, well, how do i know? :)
<jamespage> raywang, look at the debug-log
<jamespage> juju debug-log
<raywang> ok
<jamespage> or look at the unit log on the service unit directly - it's somewhere under /var/lib/juju (for the version of juju you are using)
<jamespage> raywang, the ceph charms have been fully tested under the new version btw
<raywang> jamespage, i don't think there is any output about ceph now
<jamespage> raywang, can you pastebin me "ps -aef" please
<raywang> ok
<raywang> jamespage, http://pastebin.ubuntu.com/6043572/
<jamespage> raywang, hmm - i suspect the charm hook is blocking waiting for the ceph-mons to startup
<jamespage> raywang,  can you pastebin /etc/ceph/ceph.conf and check in /var/log/ceph for errors please
<jamespage> something has stopped ceph-mon from starting up correctly  I suspect
<raywang> jamespage, but from the juju status, everything is started,
<raywang> jamespage, is it a fake started?
<jamespage> raywang, no - its just means that no hooks have failed
<jamespage> raywang, some are still trying to run!
<raywang> ok
<jamespage> I should probably include a timeout on the poll for bootstrap - right now the charm will wait for ever
<raywang> jamespage, ceph.conf -> http://pastebin.ubuntu.com/6043577/
<raywang> hah
<raywang> jamespage, but what's weird is, i add-relation ceph glance, and I can glance image-create without real storage...
<jamespage> ceph.conf looks OK
<jamespage> raywang, yeah - remember hooks can fire at any point in time
<jamespage> raywang, as ceph is not bootstrapped yet it won't give out information to glance
<jamespage> so glance is still using local storage right now
<raywang> jamespage, there is just one line in /var/log/ceph/ceph-mon.OS01-07.log
<jamespage> what does it say?
<raywang> 2013-08-30 03:33:29.453655 7ff412f31780  0 store(/var/lib/ceph/mon/ceph-OS01-07) created monfs at /var/lib/ceph/mon/ceph-OS01-07 for OS01-07
<jamespage> hmm
<jamespage> check in /var/log/upstart as well
<raywang> jamespage, if glance doesn't get information from ceph, it will use local storage as the fallback option?
<jamespage> raywang, yes
<raywang> jamespage, that's great :)
<jamespage> and beware - it won't transfer images when ceph does get related!
<raywang> jamespage, you mean even if I register the image in glance, I still cannot boot VMs?
<jamespage> I mean glance won't transfer images that got dumped on local storage into ceph once the relation is fully configured
<raywang> jamespage,  sudo zgrep ceph /var/log/upstart/ return nothing
<jamespage> hmm
<raywang> ok, got it
<jamespage> raywang, which ceph version are you using?
<raywang> but as long as glance gets related to ceph, it will pass the image to ceph later on?
<raywang> jamespage, from grizzly cloud archive
<jamespage> raywang, if you upload images to glance after its been configured to use ceph - yes
<raywang> 0.56.6-0ubuntu1~cloud0
<jamespage> raywang, should be OK - I was testing that yesterday
<raywang> jamespage, might ceph not be starting because i only have one disk?
<jamespage> raywang, no - the ceph-mon's should startup without extra disks
<jamespage> something is blocking it
<raywang> but i use the local.yaml from openstackHA, and it tells ceph to start the osd daemon too.
<jamespage> raywang, how much memory does your service unit have?
<jamespage> just trying to think what is blocking this
<raywang> jamespage, 196G
<jamespage> blimey
<jamespage> should be OK then
<raywang> and E7 CPU
<jamespage> can you pastebin me the relevant bits from your local.yaml file please
<jamespage> I'll try to reproduce
<raywang> ok
<jamespage> thanks
<raywang> jamespage, well thank you -> http://pastebin.ubuntu.com/6043604/
<raywang> jamespage, the reason why I have only one disk is that the HP DL580 G7 must have RAID for disks; i only have two disks, so there is only one disk available to the system
<jamespage> raywang, yeah - thats OK for MON's
<jamespage> raywang, upstream best practice is to have a dedicated device for each OSD
<raywang> jamespage, so only disk is not working with ceph-osd charm, right?
<jamespage> raywang, not right now - you would need to use the branches I linked to above
<raywang> s/only/one/
<jamespage> raywang, general rule is 1 core/1GB/1OSD/1Disk
<jamespage> roughly
<raywang> ok
<raywang> jamespage, i'm sorry which branch you linked above?  I can't find it
<jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph-osd/dir-support/+merge/182608
<jamespage> https://code.launchpad.net/~james-page/charms/precise/ceph/dir-support/+merge/182607
<raywang> ah i see
<jamespage> raywang, odd
<jamespage> first line of my ceph-mon.*.log is
<jamespage> 2013-08-30 10:46:14.333465 7fb27b326780  0 ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c), process ceph-mon, pid 17563
<jamespage> second line is:
<jamespage> 2013-08-30 10:46:14.336048 7fb27b326780  0 store(/var/lib/ceph/mon/ceph-juju-serverstack-machine-1) created monfs at /var/lib/ceph/mon/ceph-juju-serverstack-machine-1 for juju-serverstack-machine-1
<raywang> anything wrong?
<jamespage> raywang, no - started OK on my test rig
<raywang> jamespage, maybe I need to add an extra disk to re-deploy ceph and ceph-osd?
<jamespage> raywang, I'm just wondering whether its something to do with the name of your host
<jamespage> server is OS01-07 right?
<raywang> jamespage, OS01-07 is one of the ceph-mon
<jamespage> sidnei, ah - I just bumped into your backports of juju-deployer for the stable PPA
<mthaddon> hi folks, is this expected? http://paste.ubuntu.com/6043878/ (i.e. that setting a config value to "" doesn't work)
<marcoceppi> mthaddon: which version of juju?
<mthaddon> marcoceppi: 1.13.2-4~1703~raring1
 * mthaddon has to grab some food, will bbiab
<marcoceppi> mthaddon: So, I think there was a change committed for this, I thought it was in 1.13, let me find the mailing list
<mthaddon> marcoceppi: cool, thanks
<rick_h> marcoceppi: isn't that the change that setting "" actually sets "" vs the default?
<marcoceppi> rick_h: yeah, I can't find the mailing list post about it
<rick_h> I know I saw talk about it in #juju-gui, but don't recall seeing a mailing list post. Maybe I missed it
<marcoceppi> rick_h: somewhere, someone discussed it
<marcoceppi> mthaddon: I think the end result is it's a known issue and should be fixed soon
<marcoceppi> not sure what to do about it until then
<mthaddon> marcoceppi: I'll file a bug so we can track it at least - thanks
<mthaddon> https://bugs.launchpad.net/juju-core/+bug/1218877
<_mup_> Bug #1218877: Can't set config options to empty values <canonical-webops> <juju-core:New> <https://launchpad.net/bugs/1218877>
<TheMue> mthaddon, rick_h, marcoceppi: we already have https://bugs.launchpad.net/juju-core/+bug/1194945 for this missing feature
<_mup_> Bug #1194945: juju set is overloaded <juju-core:In Progress by themue> <https://launchpad.net/bugs/1194945>
<marcoceppi> TheMue: thanks, I couldn't quite find that bug
<rick_h> ah, thanks TheMue
<mthaddon> cool, thx, will merge the two
<TheMue> a new unset command has already been merged and will now be released
<TheMue> next step will be the setting of string options to empty strings. but we have to take care of the API, which still unsets the option
<mthaddon> TheMue: I've observed the same behaviour when passing a yaml file as the config - will your work cover that too?
<TheMue> mthaddon: have to discuss with fwereade, but imho it should
<mthaddon> TheMue: do you have an idea of timeframe for that, as it's affecting a service we want to bring online in production and I'd like to know whether to try and work around it another way or not
<fwereade> TheMue: AIUI your charm package change does do that, right? the problem is that it changes public api behaviour and *that* needs to be worked around
<TheMue> fwereade: exactly
<TheMue> mthaddon: I think it's in next week
<mthaddon> ok, thanks
<natefinch> evilnickveitch: I'm adding a page to the Get Started section at the bottom of the getting started page, but I don't know what class to give the link... I can't find a pattern; for example, local is page-item-20 and openstack is page-item-3596
<natefinch> (at the very bottom of https://juju.ubuntu.com/docs/getting-started.html)
<marcoceppi> natefinch: where?
<natefinch> this stuff: http://pastebin.ubuntu.com/6044074/
<marcoceppi> natefinch: OH, in the footer. If you're going to edit the
<marcoceppi> natefinch: don't edit the footer directly in the boiler plate
<marcoceppi> also, that's just auto-generated wordpress crap
<natefinch> oh
<marcoceppi> yeah, we sync the header and footer with the wordpress install using a build script that replaces the boilerplate in all the pages
<natefinch> ahh, well that's good. Certainly better than copy & paste by hand :)
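The sync step marcoceppi describes can be sketched in a few lines of shell. The file names and the placeholder marker here are made up for illustration; the real build script presumably differs:

```shell
# hypothetical sketch of a boilerplate sync: splice the footer template
# into a page in place of a placeholder marker
mkdir -p /tmp/docs-sync && cd /tmp/docs-sync
printf '<footer>synced footer</footer>\n' > footer.tpl
printf '<p>page content</p>\n<!--FOOTER-->\n' > page.html

# GNU sed: 'r' reads the template in after the marker line, 'd' drops the marker
sed -i -e '/<!--FOOTER-->/{r footer.tpl' -e 'd}' page.html
cat page.html
```

Run over every page in a loop, this keeps the footer markup in exactly one versioned place instead of copy-pasted into each doc.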
<natefinch> how do I add a section to that list, then?
<marcoceppi> natefinch: you need WP admin access
<marcoceppi> we can add it in later
<natefinch> marcoceppi: that's cool
<marcoceppi> natefinch: just add it to the doc sprint spreadsheet and I'll make sure it gets done
<natefinch> marcoceppi: will do
<mgz> can we not version the boilerplate? is having a separate build step that painful?
<marcoceppi> mgz: boilerplate is versioned
<mgz> sorry, as in, can we please not include the boilerplate .tpl in all the doc files we actually care about?
<marcoceppi> mgz: template/ directory has the boilerplate templates
<marcoceppi> mgz: eventually,  yes, but for now it's not a focus of this sprint IIRC
<jcastro> morning!
<jcastro> evilnickveitch: where's the hangout?
<marcoceppi> mgz: we really _really_ need content
<jcastro> or should I just pick a todo item?
<marcoceppi> jcastro: the hangout was about an hour ago ;)
<mgz> marcoceppi: I know, I know :)
<jcastro> oh I thought we would just be hanging out all day and people coming in and out?
<marcoceppi> jcastro: oh, I'm game for that
<mgz> jcastro: that too, I guess we just use the one in the calendar?
 * marcoceppi sits in the hangout
<jcastro> got it
<jcastro> https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847?authuser=1
<bloodearnest> heya folks
<bloodearnest> am having some trouble with juju-deployer hanging after deploying services
<bloodearnest> using an lxc env
<bloodearnest> running with -v, I get a string of  "Delta unit: u1-psearch-fe/0 change:installed" type messages, and then hang
<bloodearnest> if I ctrl-c, the traceback looks like: http://paste.ubuntu.com/6044247/
<bloodearnest> have tried this on 2 different hosts with 2 different configs
<marcoceppi> bloodearnest: can you paste what juju deployer command / flags you're using?
<bloodearnest> marcoceppi, sure
<gary_poster> arosales, jcastro, hi. fwiw I'm going to be proposing to mramm (next week probably?) that the problem-driven services journey that I advocated in yesterday's "awesome first 30 min" vUDS session be hanging off of jujucharms.com--something like discourse.jujucharms.com and openstack.jujucharms.com and so on.  Maybe I'm crazy, but I wanted to let you know in case you or anyone else wanted to try to stop or redirect me off
<gary_poster> early. :-)  I don't think that should stop any plans you guys are doing, if plans are already being made.  We can move content easily enough, if we all end up having enough of a consensus to do so.
<gary_poster> Also, I'm hoping Makyo will be available to start on doc sprint in an hour or two, after his timezone comes around.
<bloodearnest> marcoceppi, juju-deployer -v -s 5 -W -L -c $DEFAULT -c $CONFIG -c $SECRETS $TARGET
<kurt_> Can't destroy my environment
<kurt_> error: gomaasapi: got error back from server: 409 CONFLICT
<kurt_> even after deleting node from maas
<kurt_> we need a hammer mode for destroy-environment. ie. -f switch :D
<arosales> gary_poster, if you put together a doc I would be happy to take a review
<gary_poster> cool arosales
<arosales> gary_poster, +1 on creative thinking on any idea to improve the first 30 min ux
<gary_poster> arosales, heh, ack
<arosales> gary_poster, also good to hear about Makyo and docs sprint
<Makyo> One more coffee and I'll be good.
<gary_poster> :-) cool
<arosales> evilnickveitch, https://code.launchpad.net/~a.rosales/juju-core/correct-exposing-link/+merge/183186
<jcastro> https://code.launchpad.net/~jorge/juju-core/add-to/+merge/183190
<jcastro> blam!
<evilnickveitch> jcastro, arosales yay! you guys are the best with all your exciting additions
 * arosales starting with the low hanging fruit first
<fwereade_> evilnickveitch, ISTM that there's so much overlap between authors-charm-anatomy and authors-charms-in-action that they should basically be merged, would you concur?
<fwereade_> jcastro, marcoceppi: ^?
<evilnickveitch> fwereade, there is a fair amount yes...
<fwereade_> evilnickveitch, jcastro, marcoceppi: possibly into anatomy (the files in the charm, and what hooks are run when) and hook-environment (the hook tools and env vars and other relevant bits)?
<marcoceppi> fwereade_: I feel like that might be a lot of content. Hook environment might be strong enough to stand on its own
<marcoceppi> but to the former, yes I think files of a charm and hook execution plan make sense together
<evilnickveitch> fwereade_, yeah, I am restructuring some of the docs anyway. I think the anatomy should contain the descriptive bits, and we should have additional pages for further hook-related bits
 * marcoceppi 's opinion
<fwereade_> marcoceppi, sorry, yeah, there was an implied leading "and then broken down again: " that I somehow forgot to type
<fwereade_> evilnickveitch, yeah, maybe just a listing of the valid ones in the first doc (along with the rest of the anatomy) and a link to the more detailed treatment
<fwereade_> marcoceppi, evilnickveitch: ok, I'll set about breaking those two into three, thanks
<evilnickveitch> ok, cool
<marcoceppi> evilnickveitch: for some reason this didn't find its way into the footer template
<marcoceppi> evilnickveitch: for jorge's page, add <script src="https://google-code-prettify.googlecode.com/svn/loader/run_prettify.js?skin=sunburst"></script> to the bottom of the page right above </footer>
<marcoceppi> evilnickveitch: going to keep digging in to the style issues
<marcoceppi> evilnickveitch: fix for funky css colors. https://code.launchpad.net/~marcoceppi/juju-core/fix-css-prettyprint/+merge/183220
<Makyo> evilnickveitch, https://code.launchpad.net/~makyo/juju-core/enable-viewboxing/+merge/180354 re: saving icon SVGs with viewbox attr.
<jcastro> hey TheMue
<jcastro> I am writing a nodejs example app page for the docs
<jcastro> and I need to link to the local config page in the docs that is currently in progress
<evilnickveitch> Makyo, hi - i thought that got merged. let me check
<jcastro> what's the filename of the html you will use so I can link it?
<TheMue> jcastro: it is config-local.html
<jcastro> thanks
<TheMue> jcastro: yw
<weblife> morning
<hazmat> adam_g, did you end up filing a bug on the maas unauthorized thing?
<hazmat> adam_g, didn't see it .. i filed bug 1218997
<_mup_> Bug #1218997: maas throws unauthorized sometimes for no reason <MAAS:New> <https://launchpad.net/bugs/1218997>
<jcastro> https://code.launchpad.net/~jorge/juju-core/nodeapp-example/+merge/183230
<jcastro> BAM!
<jcastro> bcsaller: hey I hear you're the one who volunteered to test docs?
<jcastro> utlemming: ^^^
<jcastro> sorry, mixed up my Bens
<utlemming> jcastro: ack, looking
<jcastro> http://pastebin.ubuntu.com/6044806/
<jcastro> has a rendered version so you don't have to html yourself
<jcastro> m_3: which branch should I MP on for the rack charm?
<natefinch> can someone help me get my stupid apache working so I can test my docs?
<natefinch> evilnickveitch, jcastro, marcoceppi: little help?  I tried symlinking to the docs directory from /var/www and that didn't work, so then I used marco's apache config... but I'm still just getting 403s when I try to view the docs
<utlemming> does LXC on precise not work for the local provider?
<utlemming> it's stuck in pending and hasn't created the container for unit 1
<marcoceppi> natefinch: every directory between "/" and "htmldocs" needs to have r/x permissions for world
<evilnickveitch> natefinch, you can't symlink, you need to configure apache. Hopefully
<evilnickveitch> natefinch, does the log have anything useful?
<marcoceppi> utlemming: it does work, but I don't know if it's been tested thoroughly. We used it at a charm school using 1.13.1 on precise without much incident
<hazmat> utlemming, might want to check the machine logs inside of JUJU_HOME
<natefinch> marcoceppi: well, we've upgraded from a 403 to an internal server error :)
<marcoceppi> natefinch: what's apache2 error log look like?
<utlemming> hazmat: thanks
<natefinch> marcoceppi:  /home/nate/docs/htmldocs/.htaccess: Invalid command 'RewriteEngine', perhaps misspelled or defined by a module not included in the server configuration
<hazmat> natefinch, nginx ftw ;-)
<marcoceppi> natefinch: ah, this easy. `sudo a2enmod rewrite`
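marcoceppi's fix, spelled out as a sketch assuming a stock Ubuntu apache2 layout; the final grep is just a quick way to confirm the module actually loaded:

```shell
# Enable mod_rewrite so the "RewriteEngine" directive in .htaccess is
# understood, then restart apache2 to pick up the change.
sudo a2enmod rewrite
sudo service apache2 restart
# Confirm the module is now loaded:
apache2ctl -M | grep rewrite    # should list rewrite_module
```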
<natefinch> hazmat: heh... I'm sure that has its own problems ;)
<marcoceppi> well, I second that. nginx ftw.
<marcoceppi> but IS uses apache, got to replicate production :)
<weblife> jcastro: Nice. If Mims approves my merge, you will also be able to run a default (it's a working app) or load the most current verified stable version or PPA.
<hazmat> marcoceppi, why is there a rewrite rule?
<marcoceppi> hazmat: from old docs to new docs
<natefinch> marcoceppi: brilliant, it's working.  Thanks a lot.
<hazmat> marcoceppi, gotcha
<hazmat> natefinch, ok even simpler.. cd htmldocs && python -m SimpleHTTPServer
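hazmat's one-liner, spelled out; note that SimpleHTTPServer is the Python 2 module name, and on Python 3 the same thing moved to http.server:

```shell
# Serve the rendered docs on http://localhost:8000 with no apache at all.
cd htmldocs
python -m SimpleHTTPServer 8000    # Python 2
# python3 -m http.server 8000      # Python 3 equivalent
```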
<jcastro> weblife: which charm?
<natefinch> hazmat: actually, that's brilliant, especially for people like me that are just working on the docs temporarily
<marcoceppi> hazmat: +1000
<weblife> jcastro: sorry, was responding to your doc.  Node-app
<jcastro> ah, awesome!
<natefinch> marcoceppi: there should be a doc on how to contribute to the docs, and include that as an easy way to test them
<marcoceppi> natefinch: there is a doc for contributing to the docs, feel free to add that in there :D
<marcoceppi> natefinch: https://juju.ubuntu.com/docs/contributing.html
 * marcoceppi spies a lot of things that need to be fixed
<natefinch> marcoceppi: oh, sweet.  EvilNick should have mentioned that ;)
<weblife> https://code.launchpad.net/~web-brandon/charms/precise/node-app/install-fix
<weblife> Pretty sure were gonna be in Syria shortly. Assad won't sit at a table for talks. Naive Obama.
<hazmat> if we're doing politics we should be working on the drupal charm since its running most of the usg agency sites
<hazmat> minus the plone ones like fbi/cia
<weblife> lol
 * sarnold idly wonders who's brave enough to run on joomla!
<hazmat> sarnold, most of them are behind cdns with WAF appliances.
<sarnold> hazmat: I hope fairly draconian..
<hazmat> sarnold, i haven't seen any joomla, but drupal has become the defacto cms around the usg.. doe, whitehouse, nasa, etc.
<marcoceppi> hazmat: a lot of DC runs on Drupal, sadly
<marcoceppi> or fortunately, depending on your skill set
<hazmat> the revenge of php
<m_3> jcastro: renamed
<jcastro> <3
<m_3> jcastro: I'll send curtis mail asking to flush web caches
<m_3> lp:charms/rails is lp:~charmers/charms/precise/rails/trunk
<m_3> and neither lp:charms/rack nor lp:~charmers/charms/precise/rack/trunk exist anymore
<m_3> old rails is held at lp:~mark-mims/charms/precise/rails/trunk
<marcoceppi> jcastro: lp:~marcoceppi/charm-tools/python-port
<marcoceppi> python charmtools/getall.py /path/to/where/you/want/them/all
<natefinch> not sure who's looking at doc merge proposals right now, but here's mine: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
<utlemming> is there any way to configure which bridge the local provider uses?
<sarnold> utlemming: I think that's been a matter of discussion in here the last few days, I think the conclusion was the bridge name is more or less fixed as an assumption somewhere.
<utlemming> sarnold: that is unfortunate, as lxc lets you define the bridge in /etc/default/lxc and /etc/lxc/lxc.conf
<sarnold> utlemming: yeah, and I could easily see wanting to use hand-managed lxc instances for one set of tasks and juju-managed instances for another set of tasks...
<utlemming> sarnold: my use case is the Juju Cloud Image for developers. I was going to create a bridge, bind eth0 to that bridge and then use that bridge as the LXC bridge and then use host-networking for Virtualbox. It would effectively allow people a developer environment and let them interface with services from the host.
<dalek49> I'm an undergrad looking to include juju in my senior thesis.  I was wondering if anyone here knew of a place I could get a small number of servers for testing purposes
<natefinch> dalek49: If you're on linux, you can use the local provider to set up services on your local machine using lxc containers.  Otherwise... no?  If you can cobble together some cheap/free computers off of craigslist, you can install MaaS on them, which treats them just like a cloud, which juju can then use.
<natefinch> dalek49: http://maas.ubuntu.com/
<dalek49> Yeah, I've been working on local mode
<dalek49> I wasn't aware that I could put maas on my own cluster
<utlemming> any idea what's going on with the local provider here: http://paste.ubuntu.com/6045349/
<utlemming> I can't "juju ssh" or get a juju debug-log, but I can do deployments
<marcoceppi> utlemming: you can't ssh to a machine number in local, this is a known issue
<utlemming> marcoceppi: ah, thank you
<marcoceppi> utlemming: you can't debug-log either, because there is no bootstrap node
<marcoceppi> utlemming: you can however `tail -f ~/.juju/local/log/unit*.log`
<marcoceppi> utlemming: all local provider logs are stored in the ~/.juju directory to make them easily accessible
<marcoceppi> utlemming: juju ssh juju-gui/0 instead*
<utlemming> sweet, thanks for the help
<marcoceppi> evilnickveitch: charm-tools documentation https://code.launchpad.net/~marcoceppi/juju-core/charm-tools/+merge/183265
<weblife> I haven't looked into it yet but I was wondering if it would be possible to pass a tar file in the deployment line? If not, I could see some usefulness in it, fyi...
<jcastro> hazmat: how's deployer docs coming along?
<marcoceppi> weblife: not at the moment
<hazmat> jcastro, stuck in a meeting
<hazmat> jcastro, already 11m over and then back into docs
 * jcastro nods
<jcastro> TheMue: how'd you do today? Anything to land?
<marcoceppi> weblife: but it's a really interesting idea
<natefinch> evilnickveitch: https://code.launchpad.net/~natefinch/juju-core/win-getting-started/+merge/183246
<TheMue> jcastro: just tested if my doc is correct
<jcastro> TheMue: rock and roll!
<TheMue> jcastro: now adding it to the menu as it is a new document and then pushing it
<TheMue> jcastro: yeah
<jcastro> ok, a bunch of us landed stuff, nice!
<jcastro> I'll go ahead and mark your section as DONE then
<jcastro> \o/
<jcastro> https://code.launchpad.net/~charmers/juju-core/docs/+merge/183188
<jcastro> ^^ this should be deleted right?
<weblife> stupid connection
<weblife> off to see cafe tauba
<TheMue> jcastro: so, proposed
<TheMue> so guys, i'll be stepping out. liked this sprint. have a nice weekend
<jcastro> marcoceppi: can you repaste me the hangout URL?
<marcoceppi> jcastro: https://plus.google.com/hangouts/_/552b1b180f1908b8a7ba7f65402ab654f5b73847
<hazmat> 4 branches in the queue, 15 merges today
<hazmat> go docs
<dalek49> why does juju depend on mongo?
<rick_h> dalek49: juju stores state into mongo
<dalek49> is there some documentation on that?
<rick_h> dalek49: http://blog.labix.org/2013/06/25/the-heart-of-juju has some of the basic design ideas. The 'state server' is mongo backed.
<rick_h> dalek49: I'm not sure if there's real docs on the state server in the full docs https://juju.ubuntu.com as it's more user docs vs technical 'how it works'
<marcoceppi> dalek49: we don't really cover it too much, as it could change in the future (we've already moved from originally using ZooKeeper)
<arosales> evilnickveitch,
<arosales> https://code.launchpad.net/~a.rosales/juju-core/docs-update-scaling/+merge/183284
<marcoceppi> \o/
<marcoceppi> more merges!
<dalek49> I'm getting ssh: connection refused when I run debug-log.  Anyone run into this?
<marcoceppi> dalek49: local provider?
<dalek49> marcoceppi: yes. It's local
<marcoceppi> dalek49: debug-log doesn't work on local.
<marcoceppi> dalek49: however, all the logs are stored in ~/.juju/local/log/
<marcoceppi> dalek49: where "local" is the name of your local environment
<dalek49> marcoceppi: many thanks
<marcoceppi> np, if you want the debug-log experience of having a bunch of text rush at you at once, you can run `tail -f ~/.juju/local/log/unit-*.log`
<adam_g> hazmat, new issue https://bugs.launchpad.net/juju-core/+bug/1219116
<_mup_> Bug #1219116: juju-deployer fails against juju-core: dial tcp 127.0.0.1:17070: connection refused <juju-core:New> <https://launchpad.net/bugs/1219116>
#juju 2013-08-31
<hazmat> adam_g, hmm
<hazmat> it looks like the apiclient barfs and then the worker restarts it
<hazmat> rogpeppe any insight on https://launchpad.net/bugs/1219116
<_mup_> Bug #1219116: juju-deployer fails against juju-core: dial tcp 127.0.0.1:17070: connection refused <juju-core:New> <https://launchpad.net/bugs/1219116>
<hazmat> adam_g, the issue looks like its internal to juju-core
<adam_g> hazmat, yeah..
<adam_g> i retry
<adam_g> er
<adam_g> i have it retrying on failure, looks like when api goes down its down for a while
<clong> Hi Juju people
<clong> I have a question about MAAS & Juju on ubuntu 13.04 vs 12.04. Which version of Ubuntu does it actually install on with the least amount of issues?
<sarnold> clong: I'd expect 12.04 LTS to have better success, as it is more commonly used in deployments.
<clong> Yeh I think I might try that today.
<clong> I've got 31 nodes on my 13.04 MAAS server. I'll delete a few off that and try setting up a 12.04 server I think.
<clong> sarnold have you ever set up maas and juju?
<sarnold> clong: I've never done maas before
<sarnold> clong: one of the downsides of 13.04 is that there are probably fewer juju charms available
<clong> sarnold, that's correct for the juju nodes rolled out as 13.04 but the server doesn't discriminate if you're rolling out 12.04 from a 13.04 server
<sarnold> clong: oh cool :)
<sarnold> I guess that's only to be expected, but I haven't tried myself.
<clong> well time to experiment again today I spose :)
<sarnold> cool :)
<clong> Hi all. I'm new to this chan and just back into irc after a 15 year break. Hi
<clong> I'm currently commissioning a 32 node blade with maas and juju and having a few issues that I'm slowly working through and bit by bit I'm progressing.
<clong> I'm hoping that there might be a few on here who had already successfully deployed a similar solution that would be able to help answer questions with facts and not just opinions.
<lifeless> clong: they may, but you'll have to ask a question first :)
<clong> what is the difference maas vs local juju environment
<marcoceppi> clong: so MaaS is metal as a service. In the case of Juju, you're using MaaS as your cloud provider, deploying on either bare metal or vMAAS (virtualized servers, kvm, xen, etc, driven by maas)
<marcoceppi> local provider will create a "cloud" environment on your machine (if it's Ubuntu) using LXC
<marcoceppi> clong: local is designed to be a way to test charms without having to pay for a cloud service, it works reasonably well but without a lot of extra fiddling it's really limited, accessibility-wise, to your host machine
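For reference, a minimal local-provider stanza in ~/.juju/environments.yaml would look roughly like this; the data-dir path is an assumption for illustration, not something from the discussion:

```yaml
environments:
  local:
    type: local
    # where the local provider keeps container state and logs
    # (hypothetical path; adjust to taste)
    data-dir: /home/ubuntu/.juju/local
    default-series: precise
```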
<marcoceppi> clong: welcome back to irc o/
<marcoceppi> clong: typically the room is "busier" during US business hours/business days. So if you don't get your question answered feel free to post it on http://askubuntu.com using the "juju" tag or mail the list at juju@lists.ubuntu.com
<clong> so I could install juju-gui in the local env but it would not allow me to deploy charms to my maas nodes?
<marcoceppi> clong: right, the juju-gui only interacts with the currently deployed env. So if you deploy juju-gui to local juju environment, you can then deploy and manage services (in addition to the cli) using juju gui for _that_ local environment. If you juju deploy juju-gui to your MaaS environment, you can use that instance of the juju-gui to drive your maas environment
<clong> I'm more likely on irc during my evening/night/early morning australia time (US day time) researching than I am otherwise :)
<marcoceppi> clong: only interacts with your current environment, at this time*
<clong> I thought so. So that's where I'm stuck.. I seem to be able to get the local env bootstrap working and deploying within the local env but my maas bootstrap is not completing correctly.
<marcoceppi> clong: maas and juju still has a few friction points, I'd be happy to help where I can. First, what version of juju are you using?
<marcoceppi> clong: either `juju version` or `juju --version`
<clong> When I juju bootstrap -e maas, my node is "deployed" and installs a new instance of linux. that node reboots on completion of install and seems to run the juju bootstrap script but the it fails to connect to server.
<clong> juju version 1.13.2-raring-amd64 and maas version 1.4.5
<marcoceppi> clong: Okay, cool, so when you run juju status -e maas you don't get a response?
<clong> it takes a while to get an errored response
<marcoceppi> clong: can you run the rest of your commands going forward with the `-v` and `--debug` flags? This should give us enough information to potentially diagnose issues, so run juju status again with those flags and paste the results to paste.ubuntu.com
<clong> I'll bootstrap my maas env now and post the responses if you like
<marcoceppi> clong: yeah, use the above flags as well ^ that'd be great
<clong> it's bare metal rather than vm so it'll be a little while. I'm running on a 32 node HP blade
<marcoceppi> clong: no problem, I should be around most of the day, and I'm sure others will be around too
<clong> would you like me to post all of the debug info or just the erros?
<marcoceppi> clong: might as well just post it all, makes it easier to follow the output
<marcoceppi> clong: as far as networking goes, when the bootstrap node comes up, are you able to access it directly from the host machine you're running the juju commands from? Either via ping or ssh?
<clong> yes. I can ssh into it from standard cli but not 'juju ssh'
<marcoceppi> clong: right, gotchya.
<marcoceppi> If you can't get juju status back without an error then most other juju commands will fail as they all do lookups against the bootstrap node first
<clong> yep that's what I see
<clong> here's my env for maas from the environment.yaml
<clong>   maas:
<clong>     type: maas
<clong>     # Change this to where your MAAS server lives.  It must specify the base path.
<clong>     maas-server: 'http://10.200.0.100/MAAS/'
<clong>     maas-oauth: '[big long string]'
<clong>     admin-secret: 'nothing'
<clong>     default-series: precise
<clong>     #default-series: raring
<clong>     authorized-keys-path: ~/.ssh/authorized_keys # or any file you want.
<clong>     #authorized-keys-path: ~/.ssh/id_rsa.pub
<clong> I'm not sure about the authorized-keys-path: , I've tried both of the above
<marcoceppi> clong: if you can ssh directly to the node then the authorized-keys-path is set correctly
<marcoceppi> clong: you can typically, at least for most other providers, omit it and it'll just use your id_rsa.pub file
<clong> as I thought
 * marcoceppi recommends a pastebin going forward, as the output for most of these commands will be more than one or two lines long, http://paste.ubuntu.com
<clong> how does paste bin work?
<marcoceppi> clong: you just copy and paste output in to it and it gives you back a link
<clong> ah cool.
<marcoceppi> there's also a command on ubuntu called `pastebinit` that you can install from the archives. So you can do things like `juju status -e maas -v --debug | pastebinit` or if you wanted to watch the output, `juju status -e maas -v --debug | tee status-output; cat status-output | pastebinit`
<marcoceppi> if you're lazy like me and don't want to use  your mouse to copy and paste :)
<clong> ok then... here's the initial bootstrap debug info: http://paste.ubuntu.com/6047680/
<clong> cool. I'm a 9-month linux vet so I'm loving the tips for lazy fingers
<marcoceppi> clong: cool, looks sane so far. It's defaulting to 1.12.0 (latest stable release of juju-core) despite you using 1.13.2; there's a way to get the latest tools but for the purposes of this initial bootstrapping, 1.12.0 and 1.13.2 play together just fine
<marcoceppi> davecheney: How did we get bootstrap to use the latest tools and not just the latest stable tools?
<clong> ok. I've added all of the ppa's for stable, dev etc that's probably why the weirdness
<marcoceppi> clong: so actually, this whole tool fetching process is something outside the realm of ppas. There are the client tools (the ones in the ppa), then there are the server tools which are kept in a public bucket numbered by releases. The tools on the server are installed independently from the client tools
<marcoceppi> clong: in older releases of juju (juju < 1.0) all the tools were installed by ppas and packages, so if you had a long running deployment and deployed a new service, if there was a newer version of juju then that node would get it but none of the others would get the update. So it led to some funkiness. This way the tools are cached on a per environment basis and can be upgraded all at once using a juju command
<clong> sounds like a much better way to go. Almost googlesque but all good to me :)
<clong> so are you one of the dev's too?
<marcoceppi> clong: no, I'm not a dev working on the juju tool itself per se, I'm a charmer. Creating charms, reviewing submitted charms, building tools to make charming easier, etc
<clong> I'm looking to start writing some charms for entermedia media asset manager (open-edit) once I get the env working
<marcoceppi> clong: awesome, I'm a lot more helpful when it comes to charming :)
<clong> that's cool.  I'm sort of a jack of all trades. I was an animator turned Windows sysadmin turned manager turned back to sysadmin/devops now but in linux land :)
<marcoceppi> clong: very cool
<clong> ahhh looks like the node is restarting to init the juju stuff
<marcoceppi> clong: awesome, once it comes back up you can actually watch what it's doing by logging in to the node and tailing a file. So in order to set itself up juju uses cloud-init to drive the configuration of the machine
<clong> unfortunately I can't copy and paste from the ilo window
<marcoceppi> clong: so there's a file, I think it's in /var/log/cloud-init* which covers what's going on
<marcoceppi> clong: that's fine, just for your reference, if you wanted to see where the process was
<clong> is that on the node or the maas box. I can't see it on the maas server tho
<marcoceppi> clong: it's going to be on the bootstrap node that just rebooted
<marcoceppi> clong: also, it's not a requirement, just spouting out information
<clong> ok.. root@ce-maas:~/.juju# juju -v status --debug
<clong> 2013-08-31 12:54:11 DEBUG juju.provider.maas environprovider.go:27 opening environment "maas".
<clong> 2013-08-31 12:54:11 DEBUG juju state.go:158 waiting for DNS name(s) of state server instances [/MAAS/api/1.0/nodes/node-2a415564-05aa-11e3-9f95-0017a4770000/]
<clong> 2013-08-31 12:54:11 INFO juju.state open.go:68 opening state; mongo addresses: ["cloud05b.cuttingedge.com.au:37017"]; entity ""
<marcoceppi> clong: from your machine, can you resolve an IP address for cloud05b.cuttingedge.com.au ?
<clong> I think the dns isn't doing it's thing properly cos I can't ping
<clong> ping cloud05b.cuttingedge.com.au
<clong> ping: unknown host cloud05b.cuttingedge.com.au
<marcoceppi> clong: that's the problem!
<marcoceppi> clong: Okay, so, maas has its own DHCP and DNS server
<clong> it was working before I did the latest upgrade
<clong> yep I know. I installed it all
<marcoceppi> clong: you might be able to fudge your system settings to include the MAAS dns server in your lookups
<marcoceppi> which would allow you to query the IP addresses for it
<clong> so I wonder why it's not working
<marcoceppi> clong: try to dig @10.200.0.100 cloud05b.cuttingedge.com.au
<marcoceppi> clong: try to `dig @10.200.0.100 cloud05b.cuttingedge.com.au`
<marcoceppi> do you get an IP?
<clong> lxcbr0 is up. I think that might be overiding it
<marcoceppi> clong: local provider usually runs on 10.0.3.0 ip span, but it's possible
<clong> yeh. i'd previously setup a range starting @ 10.200.1.1 and it was working nicely till i installed lxc
<marcoceppi> clong: gotchya, is the local provider still bootstrapped? Did the above dig command work?
<clong> I destroyed the local environment too which I would have thought would have killed lxc
<clong> sry I missed the dig command.. yep it worked. I've found the address now
<marcoceppi> clong: you can get around this by editing /etc/resolv.conf on your local machine and adding nameserver 10.200.0.100 to the top of the file
<marcoceppi> that should add the maas server as a lookup
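What marcoceppi describes, sketched; 10.200.0.100 is the MAAS server from the discussion, the second nameserver is a placeholder for whatever upstream resolver was already configured. Note that on Ubuntu systems running resolvconf this file gets regenerated, so a hand edit may not survive a reboot:

```
# /etc/resolv.conf
nameserver 10.200.0.100   # MAAS region controller (bind9) first
nameserver 8.8.8.8        # placeholder for the existing upstream resolver
```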
<clong> it's given cloud05b the cname 10-200-1-12.cuttingedge.com.au
<clong> that is the only entry in my resolv.conf
<clong> weird
<marcoceppi> hum, interesting
<marcoceppi> it sounds like you've got juju setup correctly, since you're able to bootstrap and get a machine running, it's just a matter of working out the DNS resolution
<marcoceppi> Most people don't use a domain for their maas server, it's typically something.local or something.ext so they can properly route their local traffic for that TLD to the maas server
<marcoceppi> afaik
<clong> I'm using our fqdn
<marcoceppi> clong: right, this is the part where I start to get hazy on what to do next
<marcoceppi> I've not had to resolve this specific issue, but the problem you're seeing is juju has the MAAS domain for its record, and it can't resolve that to an actual machine
<clong> it's for some reason using hyphenated ip's as the hostname in the dhcp/dns. The hostname of the prepared node is correct funnily enough when I'm logged into it
<marcoceppi> clong: so if  you do a dig for 10-200-1-12.cuttingedge.com.au you should see it resolves to 10.200.1.12 and the hostname of the node is either going to be 10-200-1-12 or cloud05b, I forget which one maas uses
<clong> I'm wondering if it's got something to do with the dhcp or dns pre-prepping hostnames. I noticed in one of the conf files the other week that this seemed to be the case
<marcoceppi> clong: I was going to say you might get some help in #maas, but I think it's the same thing. They're mostly US timezones and work days
<clong> heres the dig: http://pastebin.ubuntu.com/6047772/
<marcoceppi> clong: and if you do a dig without the @10.200.0.100 ?
<clong> it's trying to resolve it from the dns server that I specified the maas server to use
<clong> http://pastebin.ubuntu.com/6047778/
<clong> therefore getting nothing back
<marcoceppi> clong: are you on the maas master right now?
<clong> yep
<marcoceppi> clong: hum, I wonder if that's why it's acting weird. You don't technically need to run juju from the maas master, if you were under that impression
<clong> I want to so I'm not burning my nodes. I've got to build a pretty decent render farm out of it
<clong> every node counts in render land
<marcoceppi> clong: I meant you can run it from a desktop, if you're using an ubuntu desktop, or a VM on a desktop. It's not restricted to the environment directly
<clong> oh ok. I think I get what you mean. In this instance it's just more convenient having it consolidated with the maas server
<marcoceppi> clong: regardless, I'm at a loss for how to get this dns to resolve properly
<marcoceppi> clong: right, so in the future you can just copy your environments.yaml and ssh keys to another machine and you should just be able to run juju status and get it to work, as an FYI
<clong> I'm just looking at the dhcp and dns stuff at the moment. To see if I can see how to resolve that stuff. It makes sense that if I can't see cloud05b from the maas cli and juju is looking for the hostname rather than the ip address that by fixing that issue it should fix the juju issue.
<clong> thats cool to know
<marcoceppi> clong: right, otherwise you've got juju and maas all set up from what I can tell
<clong> yep.. I've definitely advanced a lot in the last 2 days.. Finding the info on Launchpad was a good thing and getting on here too.
<marcoceppi> clong: oh, I had a better idea
<marcoceppi> clong: actually, nevermind
<marcoceppi> s/better/different
<clong> maas-dns uses bind9 doesn't it?
<marcoceppi> clong: pretty sure, yes
<clong> whats your s/better/different idea?
<marcoceppi> clong: I was going to say you can just add cloud05b...com.au to your /etc/hosts and map the IP manually, that will get you the juju commands working, but you won't be able to juju ssh to any other nodes as you'll need to add their information directly
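The /etc/hosts workaround marcoceppi mentions, sketched with the addresses from this conversation; it only covers the bootstrap node, so `juju ssh` to later machines would still fail until their entries are added too:

```
# /etc/hosts -- manual mapping for the bootstrap node only
10.200.1.12   cloud05b.cuttingedge.com.au   cloud05b
```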
<clong> /etc/bind/maas/zone.cuttingedge.com.au has the whole ip range in there formated as 10-200-1-36 IN A 10.200.1.36
<clong> etc
<clong> I wonder if I deleted those that maas would re-enter them based on the node names specified in the maas gui
<marcoceppi> clong: right, since you're using another DNS server to manage cuttingedge.com.au, I wonder if you created a subdomain for maas directly (say, cloud.cuttingedge.com.au) and had that DNS server 10.110.1.10 delegate NS to 10.200.0.100 if that would fix the loop. You'd need to re-configure maas to use <machine>.cloud...com.au but it might be a way to fix this DNS lookup issue
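In bind9 zone-file terms, the delegation marcoceppi sketches would look roughly like this in the parent zone; the record names are assumptions based on the discussion, not tested config (the A record is the glue that lets resolvers find the delegated server):

```
; in the parent zone for cuttingedge.com.au, on the existing DNS server
cloud.cuttingedge.com.au.      IN  NS  ns.cloud.cuttingedge.com.au.
ns.cloud.cuttingedge.com.au.   IN  A   10.200.0.100   ; glue record
```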
<clong> or at least I could change the 10.200.1.12 to be cloud05b in this instance and set the dhcp lease to never expire for each node as they come up
<marcoceppi> clong: that's another way, you'd have to edit the dhcp stuff, but it's a possibility. negronjl was telling me about how dhcp leases should expire with MaaS setups, but I can't remember why he was suggesting that
<clong> probably on the vm side where the mac addresses would be dynamic but bare metal should be fine
<marcoceppi> clong: there was something else about it, he does a lot of Juju + MAAS to deploy OpenStack, so it might have been specific to that
<clong> which is what I want to do so I should probably listen to that advice
<marcoceppi> clong: heh, I know it's a weekend, but we've got a few people around here who have done the MAAS + Juju for OpenStack deployment, they're just probably out enjoying the weekend :) It's something that's been on my todo list for a while just haven't found the time to sit down and set up maas
<clong> Hmm how to set up my windows dns to delegate ns to 10.200.0.100 .... I haven't done that in a very very long time.. need to crack open prof google
<clong> I always like to throw myself into the deepend
<marcoceppi> Sink or swim! :)
<clong> I'd never configured a hp blade before a week ago either.. that was fun, then getting ipmi to work with maas, that was even more fun.. Then juju and now back to the dns stuff. Today is the first time I've felt I've had to ask for help. I appreciate it
<clong> Yeh, I think I do have to setup this subdomain. God I hate windows dns
<marcoceppi> clong: fwiw, you seem to have gotten maas and juju cobbled together pretty well, so kudos to that (since I know our official maas + juju docs are a bit lacking)
<clong> yeh, they're a few generations old. I think that's probably where I could contribute a little. I'm always writing up install notes in our internal wiki
<clong> I'm a history hack and slasher :)
<marcoceppi> clong: we just had a document sprint, so any feedback you have from your experience with maas + juju would be greatly appreciated!
<clong> which group should I join to contribute to that in future?
<clong> I'm a launchpad member already
<marcoceppi> clong: The docs are in the bazaar branch lp:juju-core/docs; you don't need to join a group to contribute, but if you're interested in charming, ~charm-contributors is a group you could join
<clong> cool. I'll definitely do that
<marcoceppi> clong: I'd also join the juju mailing list, lots of conversations happen around there when irc is quiet
<marcoceppi> https://lists.ubuntu.com/mailman/listinfo/juju
<clong> now I've just got to change my maas dns to be authoritative and change it to cloud.cuttingedge.com.au  so that my windows dns resolves it
<clong> time to destroy my bootstrapped env methinks
<clong> well I've joined the  ~charm-contributors group just now
<clong> hopefully tonight I'll get this dns issue sorted then everything else will flow on from that
<marcoceppi> clong: I, personally, like to think charming is the real fun part. Though I may be a bit biased. We've got quite a bit of docs on how to deploy Openstack with Juju so that process should be pretty smooth
<clong> this is where i work btw: www.cuttingedge.com.au
<clong> I'm all for getting into the charming since we're using puppet at work at the moment
<marcoceppi> clong: while we don't have any good examples of puppet, you can probably use quite a few of your puppet scripts to drive any charms you create. We have an example of a charm using chef scripts to help drive configuration, just not one with puppet that I know of yet
<marcoceppi> So there at least won't be too much duplicated/lost effort when you start getting in to charms
<clong> I'm going to be using chef too with openstack, or do the charms sort of make chef and puppet redundant?
<marcoceppi> clong: so the whole point of the charms is to describe, in whatever language you like, how to set up a service. So most of what you get with configuration management tools is handled by charms. Charms just also expose the additional service orchestration layer that you don't necessarily get with chef and puppet. As a result, since charms can be written in any language you want, if you want to use chef or puppet scripts to drive the
<marcoceppi> configuration of a service and just link them with the orchestration bits that juju provides, you can do so
<marcoceppi> charms are designed to be super flexible
<marcoceppi> how to set up a service, and how that service talks to other services (eg charms)*
<marcoceppi> The whole chef and puppet in charms is a bit more advanced, and might not make it in to your first few cuts of charms, but it's definitely an available option
<clong> yep, I see I have a lot to learn :) I'm on the bottom rung of the ladder :)
<marcoceppi> The important takeaway, really, is charms do whatever you want them to
<clong> cool. Could be good for some of our production packages such as houdini, maya, vray etc
<clong> hey is this you? http://www.youtube.com/watch?v=OCnBy1I-IZs
<marcoceppi> clong: heh, yeah, that's an older video of me
<clong> yep, you are way way more advanced than I. I'm an old dog learning new tricks
<clong> this is me: http://www.linkedin.com/profile/view?id=49733998
<marcoceppi> clong: sweet
<clong> Yep. I think that getting my head into juju and charms and cloud is the best thing I could be doing right now. I can see that we'll be using them a lot more in our business.
<marcoceppi> clong: awesome, well, we're happy to help when we can!
<clong> Thanks marco... I'm going to attempt to fix this dns setup now. I'll let you know how I go.
<marcoceppi> clong: cool, good luck!
<clong> thanks
<marcoceppi> evilnick: https://code.launchpad.net/~marcoceppi/juju-core/lang-yaml-pretty-print-fixes-for-nick-lessthan-3/+merge/183327
<jcastro> clong: man that sounds awesome!
<jackweirdy> Hey all - I'm planning on entering an LDAP charm for the championships, and I'm finding most of what I'm doing is templating stuff. Can't find anything in the docs about this - is there a "Right Place (TM)" to store config file templates? or should I just make a top level directory in the charm for them?
<marcoceppi> jackweirdy: top level directory, just call it templates or something aptly :)
<jackweirdy> cool, thanks :)
<clong> hi guys. I was wondering if anyone could tell me if I, in a maas+juju setup, can use the juju machine 0 as anything other than the bootstrap machine, i.e. can I "juju deploy --to 0 juju-gui" without any detrimental consequences?
#juju 2013-09-01
<davecheney> clong: we don't recommend that
<davecheney> --to takes off all the safety guards
<davecheney> and allows you to deploy two units onto the same machine when they might conflict for resources
<davecheney> that is a situation that charm authors do not cater for
<davecheney> hence we don't recommend that
<davecheney> having said that, the juju-gui charm can be deployed --to 0 as we have verified it does not conflict
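For reference, the exception davecheney describes might look like this in practice (an untested sketch against the juju 1.x CLI, not taken from the log):

```shell
# Bootstrap, then co-locate the GUI on the bootstrap node (machine 0).
# This is safe for juju-gui specifically; --to skips the usual
# resource-conflict safety checks, so don't assume it for other charms.
juju bootstrap
juju deploy --to 0 juju-gui
juju expose juju-gui
juju status juju-gui    # wait for the unit to reach "started"
```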
<clong> haha.. thanks dave.
<clong> I'll restrict it to just juju-gui on the bootstrap machine then. That makes more efficient use of my hardware resources.
<marcoceppi> clong: I've used the bootstrap node for postgresql and mysql without much issue; mongodb is not bootstrap-safe though
<clong> Ahh ok. Good to know. Well it all seems like it's going well now I've fixed up the dns issues.  I created subdomain cloud.cuttingedge.com.au and it's doing what it should now
<clong> marcoceppi: Now I have juju-gui installed and I'm on the browser. How long does it take a charm to deploy and what feedback will juju-gui give me?
<marcoceppi> clong, you'll see the bat at the bottom of the charm. yellow is pending, red is error, green is good to go
<marcoceppi> bar*
<marcoceppi> you can also run juju status to watch from the command line
<clong> Ok this is cool. juju-gui is rolling stuff out for me. I have a couple of experiments going. Building a mysql server and mediawiki and have added a relationship between them. Once I'm successful with those I'm going to move onto getting the openstack implemented. I think that's going to be tricky, trying to manage openstack vm's alongside the baremetal.
<clong> marcoceppi: I'm liking the "watch juju -v status" in cli
<clong> It's a beautiful thing. You guys have done an awesome job
<marcoceppi> clong: you can drop the -v at this point. you can also get the status of just a service. so like juju status juju-gui will only show the GUI details
<clong> I'm looking forward to putting it to good use on my 4096 real cores x 16 virtual cores once it's openstacked :)
<marcoceppi> clong: rockin'!!
<clong> Hi Guys. I'm now able to roll out charms using juju-gui, yay. However, I have just tried to set up mysql, memcache and mediawiki with a relationship from mediawiki -> memcache and mediawiki -> mysql:db. mysql and memcache show up as good and started, however mediawiki has errored. Here's the juju status for mediawiki: http://pastebin.ubuntu.com/6050089/ . The error is: error: hook failed: "config-changed". The machine is up, the webserver is up, but there are 2 symbolic links pointing to non-existent files.
<clong> These are /etc/mediawiki/AdminSettings.php and /etc/mediawiki/LocalSettings.php. Any ideas??
<clong> Here is the log of where I think the hook error has occurred on the deployed mediawiki machine: http://pastebin.ubuntu.com/6050119/
<marcoceppi> clong: weird, it deployed okay for me a week ago.
<marcoceppi> clong: you can run `juju resolved --retry mediawiki/0` to see if it was a temporary thing. That will mark the unit as resolved and attempt to retry the hook
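A minimal sketch of that recovery loop (juju 1.x commands; mediawiki/0 is the failed unit from the paste above):

```shell
# Mark the failed unit resolved and re-run the failed hook.
juju resolved --retry mediawiki/0

# Then watch whether config-changed succeeds this time.
juju status mediawiki
juju debug-log    # streams unit logs, including any hook traceback
```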
<clong> marcoceppi: yep i'll try that
<clong> Hmm... When I go back to the juju-gui I've lost all of the icons on the canvas except the cache connector line .
<clong> Hmm I think I'll kill the env and try something different
<clong> Why because I can and it's juju :)
<clong> So when you destroy services in juju-gui, is that meant to also destroy the machine or do you have to do that in the CLI?
<clong> Hi All. I've just installed OpenStack following Matthew Scott's YouTube vid and I'm now wondering where I can get the default logins for the dashboard etc from?
<clong> The only thing that failed to install out of the stack, btw, was Cinder. I'll look into that later.
<melmoth> clong, the keystone charm creates an admin user. The password is defined in the config file you used to deploy keystone
<clong> I didn't know I had to define a config file for the keystone charm in juju-gui so I didn't. I'll check the details in the juju-gui for keystone settings.
<hazmat> marcoceppi, ping
#juju 2014-08-25
<tvansteenburgh> stub, cory_fu: would appreciate your thoughts on my latest comment here: https://code.launchpad.net/~tvansteenburgh/charm-helpers/config-implicit-save/+merge/224678
<stub> tvansteenburgh: I could live with it, but I still think it would be better to hook it into the @hook decorator and save on success.
<stub> tvansteenburgh: With implicit save, you won't see what has changed since the last hook but what has changed since a previous piece of code changed a value. It is less useful.
<tvansteenburgh> stub: true, i hadn't thought of that
<tvansteenburgh> stub: i'm coming around to the idea of saving in decorator. will just have to document that you don't get the auto-save if you're not using the decorators
<stub> tvansteenburgh: yes. I think that is better than magic (you could have both modes, with save-on-success being used if a @hook decorator has been entered and save-on-change being used otherwise, but that is probably the worst of all worlds ;) )
<tvansteenburgh> stub: yeah, i think covering the (extremely) common case is sufficient
<tvansteenburgh> stub: thanks for your input
<cory_fu> tvansteenburgh: Sorry, stepped away for a minute.  The problem with "save on success" is that not every charm uses the @hook decorator and so what constitutes "success" varies.  Also, since you can't change config values from within a charm, saving on __del__ is no different (and seems riskier) than saving immediately after reading the previous values.
<tvansteenburgh> cory_fu: i think you're missing the "write your own custom key/val data to Config" aspect
<tvansteenburgh> cory_fu: you can write whatever you want to the object and it'll be saved and available in the next hook
<cory_fu> tvansteenburgh: Hrm.  While that does seem useful, I worry that it implies that you can change the value of config options as visible outside the charm.  I wonder if "store arbitrary data" shouldn't be separated from "easily detect changes in config"
<tvansteenburgh> meh
<cory_fu> :)
<cory_fu> tvansteenburgh: stub's concern also raises issues with idempotency when re-executing config-changed hooks during `juju resolved --retry`.
<tvansteenburgh> cory_fu: be more specific?
<cory_fu> tvansteenburgh: Just that, in that corner case, running `resolved --retry` may not get you the same behavior because it will detect a different set of previous config values
<tvansteenburgh> cory_fu: yeah, i agree
<cory_fu> "save on success" would be awesome, if it could be consistent
<stub> cory_fu: maybe Config and ArbitraryData should be separate, but right now they are conflated.
<tvansteenburgh> cory_fu: but i already agreed to do the save() in @hook, so that covers that concern right?
<stub> cory_fu: It isn't too late to change that though - the benefit of hiding documentation in docstrings.
<tvansteenburgh> stub: i don't follow re docs?
<cory_fu> tvansteenburgh: What about people who don't use @hook and instead use something superior like the services framework?  ;)
<stub> tvansteenburgh: I don't think many people are using this feature, so we could get away with changing behaviour.
<tvansteenburgh> cory_fu: you can call save() explicitly, or work it into your framework
<stub> Or better yet, move your framework to charm-helpers so we can all benefit :-P
<tvansteenburgh> cory_fu: i'm not sure what you're arguing for at this point, is there a better place to do the save?
<cory_fu> stub: The docstrings are published online
<stub> doh
<cory_fu> stub: Also, the services framework is in charmhelpers now  =D
<cory_fu> tvansteenburgh: I can't really think of one, no.  I'm just pointing out that the concept of "on success" isn't universal.  But I guess covering some bases is better than covering none
<cory_fu> stub: Actually it's the second result for "charmhelpers" on Google, even.  http://pythonhosted.org//charmhelpers/
<stub> So save will need to be hooked into two places then, since both mechanisms are supported.
<cory_fu> Aww.  My doc improvements haven't been merged & published in those.  :(
<tvansteenburgh> cory_fu, is there a good place to hook it into the services framework?
<tvansteenburgh> cory_fu, have your new docs been merged yet? if so i can push a new docs release
<cory_fu> lazyPower: failed me
 * stub guesses reconfigure_services in base.py
<cory_fu> tvansteenburgh, stub: Yeah, or manage.
<stub> yer, manage
<cory_fu> tvansteenburgh: Do you have merge powers in charmhelpers?  I'm sad https://code.launchpad.net/~johnsca/charm-helpers/services-docs/+merge/231235 hasn't been merged yet, despite being (practically) nothing but a doc improvement
<tvansteenburgh> cory_fu: sorry, i'm not in Charm Helpers Maintainers
<cory_fu> tvansteenburgh: I guess I should bug marcoceppi instead, then.  :)
<cory_fu> I am sad that I don't get automatically notified of updates to MPs that I've commented on
<lazyPower> cory_fu: what are you talking about?
<lazyPower> i merged that days ago :P
 * lazyPower sweeps it under the rug
<cory_fu> "Charles Butler (lazypower) wrote 2 minutes ago"  ;)
<lazyPower> must have been stuck in a queue somewhere
<cory_fu> :p
<cory_fu> lazyPower: Any chance you also looked at https://code.launchpad.net/~johnsca/charm-helpers/docker/+merge/231226 ?
<cory_fu> That one's more substantial, though, so "no" is a fair answer.  :)
<lazyPower> cory_fu: i skimmed it, but not enough to merge it
<cory_fu> Ok
<arosales> marcoceppi: how did mongodb NY meetup go?
<arosales> any video by chance?
<lazyPower> as a third party comment re: marcoceppi at NY MONGODB - he came to standup with jazz hands and a talk about how to refine our presentation process if your audience is hipsters.
<lazyPower> but ill let him drop the details :)
<jcastro> JAZZ HANDS!
 * arosales anticipates the jazz hand update :-)
<cory_fu> tvansteenburgh: Since lazyPower apparently merged my doc changes "ages ago," can you update the published docs?
<lazyPower> haha
<tvansteenburgh> cory_fu: yeah gimme a min
<cory_fu> Thanks.  :)
<tvansteenburgh> cory_fu: updated
<cory_fu> Thanks
<marcoceppi> arosales: it went really well
 * marcoceppi jazz hands
<arosales> marcoceppi: any video of the meetup?
<marcoceppi> arosales: unfortunately, no
<arosales> marcoceppi: ah ok.
<arosales> marcoceppi: Good to hear it went well, and Jazz Hands were included.
<JoshStrobl> Doc error: https://juju.ubuntu.com/docs/config-vagrant.html has an example for vagrant box add JujuBox that has the URL outside the <pre> tag.
<lazyPower> JoshStrobl: the URL being outside of the code display correct?
<lazyPower> looks like how it was rendered, the source looks correct. thanks for pointing this out
<JoshStrobl> lazyPower, yea no problem.
<lazyPower> https://github.com/juju/docs/pull/143
<lazyPower> jcastro, mbruzek, marcoceppi  ^
<jcastro> dibs!
<lazyPower> JoshStrobl: and i stand corrected. vim lied to me. thanks again :)
<JoshStrobl> \o/
<lazyPower> jcastro: you gotta click faster than marco next time ^_^
<JoshStrobl> lazyPower, and that is why I don't use Vim :P
<JoshStrobl> Kidding, I just am too lazy to learn its keyboard shortcuts.
<lazyPower> :set hidden  was the secret sauce
<jcastro> atom!
<lazyPower> there was a carriage return hiding in there that rendered like the line length broke.
<JoshStrobl> also
<JoshStrobl> https://juju.ubuntu.com/docs/authors-charm-quality.html
<JoshStrobl> Upstream Friendly is in the list
<JoshStrobl> should be outside the list like "Scalable"
<lazyPower> We want that to be the case. Preferred if your charm delivers a mechanism for configurable upgrades that follows upstream best practices.
<lazyPower> OH
<lazyPower> you mean the bullet point
<JoshStrobl> yea
<JoshStrobl> much bullet points, such error
<JoshStrobl> Okay jcastro, here is your chance to be more click happy than marco :P get ready!
<lazyPower> https://github.com/juju/docs/pull/144 jcastro
<jcastro> <3
<JoshStrobl> \o/
<JoshStrobl> lazyPower, any idea why there is an unnecessary 4-space "tab" in the <code>juju add-unit -n5 mediawiki</code> section of https://juju.ubuntu.com/docs/charms-scaling.html . I know it is just spacing, just a bit of nitpicking I guess, since there are add-unit calls below that, that aren't spaced that way.
<lazyPower> JoshStrobl: probably due to the docs being in a quick rewrite when they were translated to markdown. It was done by a human so it may have some inconsistencies.  If you land a PR we'll review it.
<lazyPower> this is all rendered from src/en/*article-name*.md
<JoshStrobl> lazyPower, sure thing
<lazyPower> <3 thanks for the contribution/review JoshStrobl
<JoshStrobl> https://github.com/juju/docs/pull/145
<jcastro> man, we are actually making progress on the queue now
<jcastro> Soon, there will be an end to the dark ages
<JoshStrobl> lazyPower, I'll git pull my fork, if there are a bunch of changes, I'll make them locally, commit them as a group of changes and do a pull request from that so there won't be several pull requests unnecessarily.
<lazyPower> sounds good JoshStrobl
<JoshStrobl> needs a review: https://github.com/juju/docs/pull/146
<JoshStrobl> I went through every page real quick and checked for errors. Nothing too thorough since I'm a tad busy :P
#juju 2014-08-26
<veebers> When I `juju ssh <unit name>` after a couple of seconds I get a dropped connection (Write failed: Broken pipe ERROR subprocess encountered error code 255). Can anyone point me in the direction to debug this?
<veebers> this is with an lxc local container
<stokachu> trying to bootstrap into my private openstack cloud but juju is trying to communicate with the internal network rather than the floating ip
<stokachu> it starts the instance which is given a 10.0.4.x address along with an associated 10.0.3.x address
<stokachu> 10.0.3.x is the floating ip
<stokachu> ah there it goes
<stokachu> it switched to the floating ip this time
<stokachu> so where i can view debug output for bootstrapping into openstack? http://paste.ubuntu.com/8146668/
<stokachu> its just sitting at apt-get update
<gnuoy> marcoceppi, I, err, seem to have missed the emails about my charmers membership expiring. Is it possible to get my membership reinstated please ?
<jcastro> marcoceppi: I can do that
<jcastro> what's your lp id?
<gnuoy> jcastro, if that comment was aimed at me: Thanks ! It's 'gnuoy'
<jcastro> gnuoy: you're all set
<gnuoy> jcastro, fantastic, thank you
<jcastro> heya marcoceppi
<jcastro> do we have a plan for backporting charm-tools to trusty?
<jcastro> getting a bit stale over here
<marcoceppi> no, it needs a package not in archive
<marcoceppi> jcastro: I mean, I think it's possible? but it seems like it'll take a very very long time to do
<jcastro> which package?
<marcoceppi> python-charmworldlib
<stokachu> stupid question, wrt openstack private clouds, juju sync-tools does what juju metadata generate-* does?
<stokachu> apparently not
<stokachu> so for private openstack clouds you always have to do the juju metadata generate-* dance
<stokachu> jcastro, shouldnt the last two commands in http://askubuntu.com/questions/475348/juju-bootstrap-no-data-for-cloud be included in the https://juju.ubuntu.com/docs/config-openstack.html docs?
<jcastro> I didn't even know that command existed. :-/
<jcastro> so yeah, probably!
<stokachu> unless im missing some obvious step to get juju to bootstrap in openstack
<stokachu> jcastro, heres more info on the subject http://blog.felipe-alfaro.com/2014/04/29/bootstraping-juju-on-top-of-an-openstack-private-cloud/
<stokachu> talks about juju metadata generate-* at the bottom
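The metadata generate-* dance stokachu is referring to, sketched from memory of the juju 1.x tooling; every value in angle brackets is a placeholder for your cloud, so check `juju metadata generate-image --help` before trusting the flags:

```shell
# Generate simplestreams image and tools metadata for a private cloud.
mkdir -p ~/simplestreams
juju metadata generate-image -d ~/simplestreams \
    -i <glance-image-id> -s trusty \
    -r <region-name> -u <keystone-auth-url>
juju metadata generate-tools -d ~/simplestreams

# Publish ~/simplestreams somewhere instances can reach it, then point
# image-metadata-url / tools-metadata-url at it in environments.yaml
# before running juju bootstrap.
```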
<jcastro> that blog post looks more complete already than our docs.
<stokachu> yea b/c right now the openstack portion won't get you into a bootstrapped openstack env
<stokachu> jcastro, oh someone pointed me to https://juju.ubuntu.com/docs/howto-privatecloud.html
<stokachu> probably should link that in the openstack-config doc
<jcastro> got time today to do a PR on the docs? they're markdown, very easy to edit
<stokachu> yea ill update that
<stokachu> thats on GH right?
<jcastro> yeah, https://github.com/juju/docs
<jcastro> branch instructions in the readme
<stokachu> jcastro, ok thanks, https://github.com/juju/docs/pull/147
<jcastro> thanks man!
<jcastro> Loved the signed-off-by btw
<stokachu> jcastro, lol thanks :D
<ayr-ton> Hey there. If I make a change in a charm code, can I upgrade a unit in production deployed with the old version? Or, like, If I want to make some changes for add support to new relations or configs, can I update in production without lose my unit?
<mbruzek> Do we have any documenation that talks about immutable configuration?
<mbruzek> marcoceppi, lazyPower, or anyone else?
<lazyPower> mbruzek: not that i'm aware of
<mbruzek> One of the charms I am reviewing has immutable config and obviously didn't know about that
<lazyPower> mbruzek: its on the best practices document
<lazyPower> Has testing of changing all config options and verifying they get changed in the application (and applied, i.e. service reloaded if appropriate) been done?
<mbruzek> lazyPower, I didn't see that
<mbruzek> thanks lazyPower I will point them to this document.
<whit> jcastro, reviewing your bundle
<whit> jcastro, deploys, passes proof, looks good
<whit> jcastro, docs look good too
<whit> jcastro, my only nit to pick w/ the doc is wrt to the statement kibana's query language
<whit> jcastro, it's based on lucene.
<JoshStrobl> Is there a juju command to export a port for a particular service or charm, strictly for testing purposes.
<JoshStrobl> Nevermind, found the issue I was having :D
<JoshStrobl> oh apache2, how I hate the fact you need a2enmod rather than just intelligently detecting what modules to use
<mbruzek> JoshStrobl, yeah!
<JoshStrobl> mbruzek, if it wasn't for nginx not really having php-fpm configured out-of-the-box, I'd be more inclined to use it. Shouldn't be a problem though by mid next year, gonna switch all my PHP code to Go.
<JoshStrobl> and just use its built-in http package
<mbruzek> JoshStrobl, Apache2 changed versions between precise and trusty, and the a2en version in trusty insists that the configuration must end in .conf
<JoshStrobl> lovely
<mbruzek> JoshStrobl, so many of our charms that use apache2 do not work on trusty because of the config file change.
 * mbruzek is working to fix that.
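The trusty breakage mbruzek mentions comes from Apache 2.4 only loading sites-available files that end in .conf; a hypothetical charm-side fix (site name is illustrative):

```shell
# Precise-era charms wrote bare site filenames; trusty's apache2 (2.4)
# ignores them. Rename, re-enable, reload.
mv /etc/apache2/sites-available/mysite \
   /etc/apache2/sites-available/mysite.conf
a2ensite mysite          # 2.4's a2ensite resolves this to mysite.conf
service apache2 reload
```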
<JoshStrobl> well it beats having to dig around in nginx configuration just to get php-fpm to work
<JoshStrobl> but then again, I'm not really one for setting up web servers either
<marcoceppi> JoshStrobl: I started a php-website charm that you can deploy on nginx
<JoshStrobl> :o
<marcoceppi> it's not done by any stretch of the imagination, but the concept is to make nginx and apache2 framework charms
<marcoceppi> https://jujucharms.com/~marcoceppi/trusty/php-website-1/?text=php-website#code
<marcoceppi> this becomes a subordinate
<JoshStrobl> marcoceppi, I get the feeling your install of git-core will be used by something at some point?
<JoshStrobl> oh wait
<JoshStrobl> nvm
<JoshStrobl> config-changed uses it
<ayr-ton> Hey there. If I make a change in a charm code, can I upgrade a unit in production deployed with the old version? Or, like, If I want to make some changes for add support to new relations or configs, can I update in production without lose my unit?
<JoshStrobl> mbruzek, marcoceppi /\
<mbruzek> Hello ayr-ton.
<ayr-ton> mbruzek, Hi o/
<mbruzek> ayr-ton, You are asking about the "juju upgrade-charm" command.
<mbruzek> ayr-ton, have you read this yet: https://juju.ubuntu.com/docs/authors-charm-upgrades.html
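The upgrade path mbruzek is pointing at boils down to this (juju 1.x syntax; the service name and repository path are illustrative):

```shell
# Upgrade a deployed service to the newest store revision of its charm.
juju upgrade-charm mediawiki

# For a charm you've modified locally, point at your charm repository:
juju upgrade-charm --repository ~/charms mediawiki
```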
<ayr-ton> mbruzek, Awesome \o/ Every time I ask something about juju I start wondering if I will find something like juju deploy skynet soon.
<mbruzek> shh!
<mbruzek> Don't give away our future roadmap
<mbruzek> ayr-ton, Please let me know if you have specific questions about the upgrade-charm path.
<JoshStrobl> juju deploy skynet...is that like "juju deploy google-infrastructure"?
<ayr-ton> mbruzek, Okay. I will start learning this and I will be back to deploy any further question.
<ayr-ton> ahahaha
<JoshStrobl> hey mbruzek, I noticed the Joomla precise charm does a relation-set with the hostname as the public-address, meanwhile the wordpress precise charm uses the private-address. so...which one should I use?
<JoshStrobl> Since those deploying my charm need to rely on being able to hit the callback php file
<mbruzek> JoshStrobl, charm to charm communication should use the private address (unless there is some reason that contradicts that).
<marcoceppi> JoshStrobl: it depends, if it's for direct communication within the cloud, private, if it's some ajax like stuff, public-address
<mbruzek> JoshStrobl, charm to world communication use the public address.
<JoshStrobl> alright then I'll use public-address then.
<JoshStrobl> thought I'd ask, since I noticed the difference
<mbruzek> JoshStrobl, things like clusters or peer communication would be an example of using the private addresses.
<JoshStrobl> mbruzek, so for instance, MySQL would be appropriate using private-address, since charms like phpMyAdmin would be privately calling the exposed private address and the charm itself exposing a public-address that allows interfacing with the UI
<mbruzek> JoshStrobl, Yes I think so, marcoceppi is more the mysql expert.
<marcoceppi> exactly
<JoshStrobl> Yea, just trying to make sure I have a firm grasp on the private v.s. public address usage and in what situations one should be used v.s. the other.
<JoshStrobl> Yea, thought so.
<JoshStrobl> Coolness.
<marcoceppi> if communication doesn't have to go to the outside world, private-address
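In hook terms, the rule marcoceppi and mbruzek describe might look like this (a hypothetical db-relation-joined hook; relation-set and unit-get only work inside a hook context):

```shell
#!/bin/sh
# Charm-to-charm traffic stays inside the cloud: hand the peer our
# private address.
relation-set hostname=$(unit-get private-address)

# If the consumer is a browser (ajax callbacks etc.), advertise the
# public address instead:
#   relation-set hostname=$(unit-get public-address)
```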
<gQuigs> how can I see what versions of juju there are to upgrade to?
<marcoceppi> gQuigs: of a charm or the client?
<katco> gQuigs: you can try juju upgrade-juju --dry-run
<gQuigs> of a charm.. katco that will just do the latest right?
<marcoceppi> gQuigs: that will upgrade the juju client
<marcoceppi> the only way to know if a charm upgrade exists, is to deploy the gui
<gQuigs> marcoceppi: interesting.. ok, thanks!
<jcastro> lazyPower, hey, with that fix released for Wildfly
<jcastro> did you promulgate it yet?
<lazyPower> jcastro: beg pardon? the bundles whit linked were identical.
<lazyPower> let me triple check before i discover i'm talking out my bum
<jcastro> I'm just wondering if you promulgated it yet
<jcastro> so I can fix the readme real quick
<lazyPower> oh you bet
<lazyPower> the bundle was pushed back in March
<lazyPower> i didn't take any action aside from closing the bug
<jcastro> Oh!
<lazyPower> jcastro: https://launchpad.net/~charmers/charms/bundles/wildfly/bundle
<jcastro> oh, I'll fix it.
<lazyPower> sorry about the confusion.
<ahasenack> hi, does anyone know how i can checkout a specific charm store revision of a charm? Like, cs:trusty/swift-proxy-3
<ahasenack> current one is -4
<ahasenack> charm get only gets me trink
<ahasenack> trunk
<ahasenack> marcoceppi: hi, do you know?
<marcoceppi> ahasenack: yeah, charm get only does trunk
<marcoceppi> do you need source control tied to it?
<ahasenack> marcoceppi: I need to map a cs revision to a bzr revision
<ahasenack> knowing it's not necessarily 1-1
<marcoceppi> hah, good luck with that
<ahasenack> but cs:trusty/swift-proxy-3 certainly corresponds to a bzr revision
<marcoceppi> what I can do is get you a way to download the files for rev 3
<marcoceppi> you'd have to math out what bzr version that corresponds to
<ahasenack> marcoceppi: that helps
<ahasenack> marcoceppi: there was a page somewhere showing recent changes to charms
<ahasenack> marcoceppi: that does this mapping I'm talking about, and shows it as long as what I need is recent
<ahasenack> marcoceppi: but I lost the url :(
<marcoceppi> ahasenack: oh right, the charmworld does this
<marcoceppi> ahasenack: here you go: https://manage.jujucharms.com/api/3/charm/trusty/swift-proxy-3
<ahasenack> ahhh
<ahasenack> bookmark fail
<ahasenack> will do it more carefully this time
<ahasenack> marcoceppi: thanks!
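The lookup marcoceppi describes can be scripted; this assumes the charmworld API as it existed at the time, and the exact JSON field names are from memory, so treat the grep pattern as a guess:

```shell
# Fetch store metadata for a pinned revision; the response includes
# details of the bzr branch the store build came from.
curl -s https://manage.jujucharms.com/api/3/charm/trusty/swift-proxy-3 \
    | python -m json.tool | grep -i -E 'revision|digest|branch'
```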
<JoshStrobl> Someone mind giving me some pointers as to why I might be having issues? I set up the Vagrant Juju box and deploy my charm (https://code.launchpad.net/~truthfromlies/charms/trusty/metis/trunk) locally (I had it sitting in charms/trusty/metis) using juju deploy local:trusty/metis. It deploys successfully (http://paste.ubuntu.com/8152792/ and http://paste.ubuntu.com/8152787/), however even after apache2 is properly configured (with a site-enabled for the directory), including the test site, I still can't get it from 10.0.3.51:80/Metis/heartbeat.html.
<JoshStrobl> I destroyed the machine, service, tried again with a different IP and it obviously didn't help. Think it might be an issue with Vagrant interfacing with my local machine, apache2, something else?
<ahasenack> marcoceppi: https://manage.jujucharms.com/recently-changed
<ahasenack> but swift-proxy is gone already
<ahasenack> but no matter
<marcoceppi> lazyPower: you probably have the most vagrant experience of us all ^
 * lazyPower reads scrollback
<lazyPower> JoshStrobl: one sec, reading your pastes
<JoshStrobl> lazyPower, alright
<lazyPower> JoshStrobl: are you trying to reach your service from inside the vagrant box, or from your HOST thats running the vagrantbox?
<JoshStrobl> lazyPower, host
<lazyPower> did you use sshuttle to forward your traffic inside the vagrantbox?
<JoshStrobl> oh god...
<JoshStrobl> no
<lazyPower> by default, we can't forward all requests in there. It would create a black hole for traffic
<JoshStrobl> for hours...
<lazyPower> so using sshuttle as a VPN - you can achieve this
<JoshStrobl> I didn't even think about sshuttle
<lazyPower> a quick and dirty test would be to curl the url in the CLI of your vagrantbox
<JoshStrobl> And this, sir, is why I'm glad you guys are around.
<JoshStrobl> lazyPower, yea I think I'll try that
<lazyPower> there are details on the vagrant documentation page of the juju docs about getting moving with sshuttle.
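The sshuttle step lazyPower is suggesting, sketched with the stock Vagrant defaults (port 2222, user/password vagrant; adjust to your Vagrantfile):

```shell
# Route the LXC container subnet from the host into the vagrant box so
# the 10.0.3.x unit addresses are reachable from the host browser.
sudo sshuttle -r vagrant@127.0.0.1:2222 10.0.3.0/24
# After that, http://10.0.3.51/Metis/heartbeat.html should answer
# from the host.
```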
<JoshStrobl> yea I know, I just didn't even think about that
<JoshStrobl> thanks lazyPower, I'll set up sshuttle and try :)
<JoshStrobl> I feel like an idiot :D
<lazyPower> happens to the best of us :)
<lazyPower> the local provider experience is just a little *too* easy at times.
<JoshStrobl> lazyPower, well, I'm able to hit apache now. So that is a fantastic start!
<JoshStrobl> it works!
<JoshStrobl> Note to self: mbruzek, lazyPower and marcoceppi now are owed beers.
 * lazyPower doffs hat
<tiger7117> HI
<tiger7117> i installed JuJu-GUI and accessed it through Browser (URL), how can i remove that display message (The password is the admin-secret from the Juju environment found in … ) ?
<tiger7117> any one ?
<sarnold> good evening tiger7117, which message do you mean?
<tiger7117> The password is the admin-secret from the Juju environment. This can be found by looking in … path … jenv file searching for the password field. Note that using juju-quickstart (https://launchpad.net/juju-quickstart) can automate logging in, as well as other parts of
<sarnold> tiger7117: interesting.. I found the string here http://manage.jujucharms.com/~pmcgarry/quantal/juju-gui/config
<sarnold> tiger7117: .. but the default isn't included here: http://manage.jujucharms.com/charms/trusty/juju-gui/config
<sarnold> tiger7117: still, try setting the login-help property on the juju-gui charm
<tiger7117> login-help: i had put something in it.. but still it's showing that message
<sarnold> I wouldn't be surprised if you needed to set it before deploying .. or perhaps restart the webserver that runs it
<tiger7117> webserver, which ?
<tiger7117> i just installed juju and then now install its gui.
<sarnold> tiger7117: aha, from https://bazaar.launchpad.net/~juju-gui-charmers/charms/trusty/juju-gui/trunk/view/head:/HACKING.md
<sarnold> tiger7117: "service guiserver restart
<sarnold> "
<tiger7117> unrecognized service
<tiger7117> i think first have to install charm-tools ?
<sarnold> tiger7117: you've got to run that command in the container or virtual machine that runs the guiserver; it might take a 'juju ssh' to whatever unit is running the gui... check juju status output to find out which
<tiger7117> hmm.. yes, i tried to install GUI on another machine.
<tiger7117> there are three Servers, one itself for JuJu-Core, Second for making environment/bootstrap and third for JuJu-GUI
<sarnold> okay, try that service guiserver restart in the juju-gui unit
<tiger7117> aah, now it's working :) yes had to re-start service at 3rd Server
<sarnold> sweet
<tiger7117> well, i am new to JuJu, its my 2nd day :)
<sarnold> thanks tiger7117, this was fun for me to track down
<tiger7117> i have tiny confusion.
<tiger7117> Most Welcome !!
<tiger7117> hmm.. as i have 3 servers, 1 for core Ok, 3rd for GUI.. can i use this 3rd for MySQL + Wordpress and then second for other service ?
<tiger7117> one more thing that can i install services from this GUI to another machine (e.g 2nd) ?
<sarnold> tiger7117: the command line interface at least lets you specify --to  in order to co-locate services in a single unit..
<sarnold> tiger7117: .. and I think that same syntax can let you specify an lxc container on a specific hardware machine..
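That container placement syntax, sketched for juju 1.x (the machine numbers and charm names are illustrative):

```shell
# Put services in fresh LXC containers on existing machines.
juju deploy mysql --to lxc:1        # new container on machine 1
juju deploy wordpress --to lxc:2    # new container on machine 2
juju add-relation wordpress mysql
juju expose wordpress
```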
<tiger7117> i have 3 physical servers.. so how lxc container for it ?
<sarnold> tiger7117: some details here https://juju.ubuntu.com/docs/charms-deploying.html
<tiger7117> i think we can't deploy/expose services to different machines from GUI ?
<tiger7117> hmm.. i see
<tiger7117> yes, we can do but from command line
<sarnold> yeah, I don't know if the gui can do that yet
<lazyPower> not yet
<lazyPower> thats going to be part of the machine view work that has been going on in the GUI for a bit now. i hear its getting there but not quite ready for release.
<sarnold> nice
<tiger7117> hm..
<tiger7117> LXC, why JuJu run on LXC ?
<lazyPower> LXC is a good containerization framework
<lazyPower> but i can't speak as to why it was the choice when that was made many moons ago.
<tiger7117> it creates too many issues, if we want to run JuJu - Local Environment on Internet, means JuJu-GUI on LXC (10.0.3.233) and Eth0 is on 202.19.x.x .. how can then run this GUI in Live environment ?
<tiger7117> Reverse proxy also doesn't give a proper solution for it, what kind of configuration we can do for making it Live ?
<lazyPower> The networking side of LXC has still yet to be resolved - as its dependent on where its being deployed, and a few other factors.
<lazyPower> if you're talking about the local provider - its intended use case is for development
<tiger7117> i can understand that adding machine in JuJu pick IPs from DHCP range of LXC and Lxcbr0 control it ..?
<lazyPower> but if you're talking about deploy --to lxc:2 (for example) thats whats being worked on.
<tiger7117> hmm..
<tiger7117> JuJu-GUI support latest MySQL version ?
<sarnold> tiger7117: you could also set up e.g. openstack or maas and deploy to either virtual machines or to hardware .. neither probably makes much sense for three machines, but it's an option to consider
<lazyPower> sarnold: for 3 machines its almost more prudent to just use a cloud provider, or go with a manual environment.
<lazyPower> my rule of thumb: if less than 10 machines - manual is a great option. if > 10 machines will be managed, you're looking at wanting to setup MAAS/OpenStack
<tiger7117> sarnold; yes, i am just testing it, yes, a virtual environment (VPS) will be the proper way.
<lazyPower> tiger7117: i'm not sure what you're asking, JujuGUI doesn't have a dependency on MySQL. its a standalone web app that works with the juju api socket server.
<tiger7117> i heard about openstack, what is it ?
<lazyPower> if you mean does Juju Support the latest MySQL - i do believe its working with the 5.5 series, but i'd need to double check.
<sarnold> tiger7117: openstack is a pile of services that help you create something much like amazon's AWS yourself
<tiger7117> ahan .. our own Cloud Environment
<tiger7117> sarnold; what is the biggest advantage of this JuJu ?
<tiger7117> or JuJu-GUI
<sarnold> tiger7117: juju feels like our best hope to take us out of the dark ages of managing servers and services -- with juju you can write down, once, all the things that are important to your organization, and then re-deploy it as needed when needed
<sarnold> tiger7117: it's sort of like the little scrap of paper with all the magic written on it, but checked into source control, so you can easily make changes in one place. if the charms are properly written, you can quickly scale your services as your needs change
<sarnold> tiger7117: and you can use pre-written charms to deploy services you're not an expert in -- so you can easily put e.g. an haproxy in front of a dozen web servers without having to actually know how to run haproxy yourself
#juju 2014-08-27
<tiger7117> hmm.. yes, i noticed this magic thing on the GUI. otherwise command line deploying/configuration is kind of the same as those old-ages ways :)
<lazyPower> tiger7117: i'm working on a demo video to cover one of our stories with juju, which is getting moving with a big data stack (Hadoop to be specific) - which can take hours for an expert to manually setup, and scale. Using juju and bundles, you can have your hadoop installation ready to go and scale with your needs in literally 14 minutes.  Keep in mind that Service orchestration is a layer above config management.
<lazyPower> We're talking about services talking to one another out of the box in Service Oriented Architecture principles. Meaning when you deploy your web app that has a MySQL dependency, it exchanges the username, password, and database - transparently to you. It eases quite a few of your operational goals by intelligently automating away pain points.
<lazyPower> https://www.youtube.com/watch?v=CEfFy6tODrQ - this may help shed some light on what we're talking about
<tiger7117> ahan.. yes, its like making a diagram on paper and magically those mountains/rivers make themselves on the back of that page :)
<sarnold> I like that :)
<lazyPower> tiger7117: if you want a really good intro talk i gave - i can link you to that as well
<lazyPower> its about 40 minutes long
<lazyPower> tiger7117: but it doesn't just stop at database credential exchange. There is work being done to automate your DNS solution(s), Framework Charms (like Openstack, Rails, and CakePHP) that aim to be as generic as possible to support as many deployment scenarios as you will need
<tiger7117> talk :).. naa don't have too much spare time for talking . hav to setup Godzilla on papers :P
<lazyPower> and the best part about these is they are peer reviewed before they land in the recommended charm store for you to consume/use - which means you get the comfort of knowing you're not using some abandoned devops recipe from many months ago that will no longer work - there's a lot of work going into our end user scenario this cycle to bring automated testing into this
<lazyPower> When you asked sarnold what the best part of juju is, its the community.
<lazyPower> thats my 2 cents anyway.
<lazyPower> Hope that wall of text helped tiger7117. if you have any questions, dont hesitate to ping me directly.
<tiger7117> i appreciate you guys, its not easy or small thing, in future it can eat elephants in one bite :)
<lazyPower> we're dangerously close to eating buffalo in one bite, next step - elephants.
<sarnold> mmm buffalo...
<tiger7117> haha two buffalo = 1 elephant :)
<tiger7117> so u are near to your target.
<sarnold> yes, world domination feels quite close ;)
<lazyPower> every cycle we get closer.
<tiger7117> well, i also appreciate the one guy who made the plugin (JuDo), i have also tested it, it worked very well with the Digital Ocean cloud, DO is SSD based and very fast. will you guys build JuDo in, or it would be good to build one for RackSpace
<lazyPower> tiger7117: Juju DigitalOcean plugin is a pseudo provider built on top of the manual provider
<lazyPower> juju switch manual - and you can add machines/containers from any cloud provider you wish, and orchestrate them with juju
<sarnold> wow
<lazyPower> your enlistment phase will be manual, and you'll need to use the CLI to deploy services until machine view lands
<tiger7117> yesterday, i tested JuDO and added machines and things had configured in 5 mins
<tiger7117> lazyPower: does RackSpace also support Juju ?
<lazyPower> Not natively - this is why you'd be using the manual provider.
<tiger7117> ahan.
<sarnold> .. which I'll admit has some niceness to it, it won't surprise you by costing money :)
<lazyPower> there's a few requirements to land a provider that introduce some dependency issues - such as lack of object storage - but i've been following the mailing list and some of those are planning on being lifted in the near future
<tiger7117> because of this LXC thing, i tried Judo/Digital Ocean, there it is assigning direct Public IPs to Machines.
<lazyPower> so you might see a native rackspace provider land sooner rather than later - but manual provider will always be around, and always offer a method to build a juju orchestrated deployment
<tiger7117> so easy to access things publicly.
<lazyPower> yeah, networking is difficult when you boil it down to the specifics of a datacenter, and how you want your traffic routed - so its no surprise that density with containers isn't a fleshed out story yet.
<tiger7117> My Best Wishes are with you Guys.. :)
<lazyPower> Thanks tiger7117
<sarnold> yeah, I'm hoping hallyn and stgraber can work some magic there :)
<tiger7117> i dont like AWS (EC2 etc), its kind of insecure.
<tiger7117> well, so does Juju totally depend on Python, or something else as well ?
<sarnold> the juju core depends upon go, ssh, and shell (probably bash specifically) -- charms can be written in shell, or python, or ruby, or even in compiled languages, but you'd probably want to stick with a scripting language since that makes iteration on charm authoring a lot faster
<tiger7117> Sarnold: by the way, i can call to Dennis Ritchie /Ken Thompson, Linus Torvalds for this JuJu project :D
<tiger7117> hmm..
<tiger7117> Ruby didn't get too famous, like PHP is nowadays..
<tiger7117> well, my 3rd day with Juju is about to start :) have to configure a few more things in it, hope to see you guys again ..
<tiger7117> Sarnold, LazyPower: Nice to chat with you both, Good Luck !!
<tiger7117> Bye !!
<lazyPower> take care tiger7117
<veebers> what could cause a local lxc container created using 'juju deploy cs:ubuntu' to give this error when attempting to lxc-start it? "lxc-start: Executing '/sbin/init' with no configuration file may crash the host"
<veebers> if I create one just using lxc (sudo lxc-create -t ubuntu -n testing) and try to start that it works
<veebers> err, disregard that question. It was a user error :-\
<tvansteenburgh1> stub: thanks for the review on the Config stuff, appreciate that!
<stub> np
<PiKiM> Good morning, I have a problem with juju-gui, I installed it but in the web browser I get this error --> wss://jujugui.prueba.net/ws' failed: Unexpected response code: 400
<PiKiM> what could it be??
<frankban> PiKiM: a bad request for the wss connection is weird. So you have the gui at https://jujugui.prueba.net? what kind of provider are you using?
<PiKiM> I'm using it locally, I don't have it in the cloud
<frankban> PiKiM: local env? lxc?
<PiKiM> yes lxc
<frankban> PiKiM: so why is the browser trying to connect to wss://jujugui.prueba.net/ws?
<PiKiM> I do not know, I have the machine here and set up a virtual host to see the interface from any other machine, and when I go to the url https://juju.prueba.net.ws the logo keeps spinning, and in the console I see the error: wss://jujugui.prueba.net/ws' failed: Unexpected response code: 400
<frankban> PiKiM: so this seems related to an internal misconfiguration of the network. To exclude something client side, could you check if you see the same after clearing the browser cache (e.g. when browsing the GUI URL in incognito mode)?
<PiKiM> yes, every time I see the delete browser cache
<frankban> PiKiM: so, deleting the cache does not work, correct?
<PiKiM> yes, correct
<frankban> PiKiM: what URL are you using for the GUI?
<frankban> PiKiM: could you please paste the output of a get request to https://juju.prueba.net/gui-server-info ?
<PiKiM> frankban: https://jujugui.prueba.net
<frankban> PiKiM: could you please paste the output of a get request to https://juju.prueba.net/gui-server-info ?
<PiKiM> can this http://nginx.com/blog/websocket-nginx/ ??
<PiKiM> is this what you are asking for?
<PiKiM> curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
<PiKiM> error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
<PiKiM> More details here: http://curl.haxx.se/docs/sslcerts.html
<PiKiM> curl performs SSL certificate verification by default, using a "bundle"
<PiKiM>  of Certificate Authority (CA) public keys (CA certs). If the default
<PiKiM>  bundle file isn't adequate, you can specify an alternate file
<PiKiM>  using the --cacert option.
<PiKiM> If this HTTPS server uses a certificate signed by a CA represented in
<PiKiM>  the bundle, the certificate verification probably failed due to a
<PiKiM>  problem with the certificate (it might be expired, or the name might
<PiKiM>  not match the domain name in the URL).
<PiKiM> If you'd like to turn off curl's verification of the certificate, use
<PiKiM>  the -k (or --insecure) option.
<frankban> PiKiM: I was asking what you see if you go to https://juju.prueba.net/gui-server-info with your browser
<PiKiM> this
<PiKiM> {"uptime": 59872, "deployer": [], "apiversion": "go", "sandbox": false, "version": "0.4.0", "debug": false, "apiurl": "wss://10.0.3.1:17070"}
<frankban> PiKiM: that seems ok. So yeah the problem seems to be a misconfiguration of the internal network, and it's likely that the team that set up the infrastructure can help with the wss failure
<frankban> PiKiM: as it's not easy to understand what's going on from here
<frankban> PiKiM: perhaps they are using a proxy between the LXC machines and the network you are using, and that proxy is failing to properly redirect secure websocket connections
<frankban> PiKiM: last check, what happens if you visit https://jujugui.prueba.net/ws with your browser?
<PiKiM> it shows this: Can "Upgrade" only to "WebSocket".
<frankban> PiKiM: it doesn't ask for self-signed certs approval, correct?
<PiKiM> no, it doesnt
<PiKiM> like you said, we are using a virtualhost to redirect the requests to the LXC machines
<frankban> PiKiM: so, as I mentioned, if incognito mode does not work, the problem is likely to be on the network configuration. I am sorry I am not able to be more specific.
<nuclearbob> I'm having some problems with the juju local provider, I keep getting "connection reset by peer" when the API server is trying to serve RPCs
<nuclearbob> and my "juju ssh" sessions get broken pipe pretty quickly
<nuclearbob> okay, I figured it out, all of my containers created by the local provider are getting created with the same MAC address
<nuclearbob> I think veebers had that problem as well
<JoshStrobl> New juju/doc issue: Filed in relation to the Authors Testing doc - https://github.com/juju/docs/issues/149
<JoshStrobl> marcoceppi, when you get the time (aside from earlier ping regarding a bug closure), could you add change and/or enhancement labels to it?
<JoshStrobl> mbruzek, or marcoceppi: MP for doc to fix some config-vagrant formatting: https://github.com/juju/docs/pull/150
<mbruzek> JoshStrobl, Thanks for the contribution.
<mbruzek> JoshStrobl, That does look better.
<JoshStrobl> mbruzek, \o/
<JoshStrobl> mbruzek, looks like you closed it but never merged (either that or it isn't updating for me)
<JoshStrobl> there we go :D
<mbruzek> JoshStrobl, I fixed that just now.
<JoshStrobl> \o/
<JoshStrobl> since the docs use, from what I can tell, GFM, would it make more sense to use their table syntax for the keyboard shortcut stuff at https://juju.ubuntu.com/docs/authors-hook-debug.html. It'd mean we'd be able to reduce the vertical space necessary for the page. Though I'm not sure if the parser detects tables and can correctly convert it to HTML tables.
<tiger7117> Hi..
<tiger7117> how can we change the user and password of JuJu Gui user-admin ?
<tiger7117> where is my friend LazyPower :) ?
<lazyPower> tiger7117: its set at bootstrap. If you don't have an admin-secret: key defined for your environment in ~/.juju/environments.yaml its autogenerated, and located in ~/.juju/environments/<environment>.jenv
<lazyPower> i dont think it can be updated post deployment. rick_h_ - is there a config option to update the login for the GUI at present?
<JoshStrobl> Is anyone opposed to me using GFM's tables for the tmux shortcuts at https://juju.ubuntu.com/docs/authors-hook-debug.html. I already branched my fork and tested it, it properly parses into a table and looks better than the current doc (in my opinion, of course).
<lazyPower> JoshStrobl: screenshot?
<tiger7117> yes, i know where to get the user-admin password from, the file ~/.juju/environments/<environment>.jenv
<JoshStrobl> lazyPower, disclaimer: I can't get CSS fetching working with the python stuff, so I copied the HTML generated from my page into the authors-hook-debug running on the actual docs
<JoshStrobl> lazyPower, I'll get a screenshot
<tiger7117> another confusion is that .. the <environment>.jenv file has user: admin and a long Password: ……..  should it be user-admin or admin, and where is the user-admin username ?
<JoshStrobl> lazyPower, actually, just found a parsing issue :\
<lazyPower> booo
<JoshStrobl> lazyPower, gonna work on it a bit more
<lazyPower> ok
<lazyPower> tiger7117: not sure what you're asking.
<tiger7117> a second, maybe important, thing: the Juju-GUI login page doesn't load in the Safari browser. it only works perfectly in chrome.
<lazyPower> tiger7117: its been tested in Chrome and Firefox - but recommended to use chrome for the best experience.
<JoshStrobl> lazyPower, had to do with having a | inside the actual table (which separates columns with |) since apparently one of the tmux shortcuts requires |. I changed it in the markdown to &#124; and it worked fine.
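For reference, the escape JoshStrobl describes looks roughly like this in a GFM table (the rows here are illustrative, not the actual shortcut list from the doc):

```markdown
| Key           | Action                       |
| ------------- | ---------------------------- |
| Ctrl-b "      | split pane horizontally      |
| Ctrl-b &#124; | split pane with the pipe key |
```

A raw `|` inside a cell would be read as a column separator; `&#124;` renders as the pipe character instead (GFM also accepts a backslash escape, `\|`, inside cells).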
<tiger7117> i said above that .. yes, this file (~/.juju/environments/<environment>.jenv) has the auto generated long password for the Juju-GUI user-admin, but the first two lines confuse me: in this file the first line is User: admin and the second line is Password: …….. , so it looks like the username of the Juju-GUI is admin rather than user-admin.
<lazyPower> tiger7117: at present there is only a single user for the GUI, and it's the default populated value of user-admin, which corresponds to user: admin in the .jenv
<tiger7117> ahan.. if i manually change/edit the .jenv file for this password: then will it change as well or not ?
<bac> tiger7117: the GUI uses the 'admin-secret' to login.  if you set it in your environments.yaml then it'll be used.  if you don't set it, juju will generate one for you.
<bac> tiger7117: if you shutdown the environment and manually edit the .jenv file then the new admin-secret should be used when you bootstrap the new environment
<tiger7117> bac: how can i set the admin-secret login info in the environments.yaml file ?
<bac> tiger7117: just put it in the appropriate stanza: local, ec2, whatever
<bac> admin-secret:  shhhhh123
<tiger7117> bac: actually i am using the digital ocean plugin (JuDo).. so here is my environments.yaml file http://pastebin.com/XdBFN452
<tiger7117> should i just put admin-secret: mypassword at the end of this environments.yaml file ?
<bac> tiger7117: put it inside the section for the provider you are using.  in your case, inside the digitalocean section.  note you'll have to remove the corresponding .jenv file after editing environments.yaml so a new one can be regenerated.
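A sketch of what bac describes; the stanza name and values here are placeholders (JuDo sits on top of the manual provider per lazyPower above, but check your own environments.yaml for the actual section):

```yaml
# ~/.juju/environments.yaml (fragment)
environments:
  digitalocean:            # hypothetical stanza name; use your provider's
    type: manual           # JuDo is a pseudo provider on top of manual
    bootstrap-host: null
    admin-secret: shhhhh123   # placeholder; this becomes the GUI login
```

After editing, remove the matching ~/.juju/environments/<environment>.jenv so a new one is generated on the next bootstrap.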
<tiger7117> bac: i created two machines as well, one for juju-gui and one for another purpose, so after deleting the .jenv or editing this environments.yaml file, will it affect those as well, because this .jenv file has a lot of info regarding uuid/cert/state server IPs/bootstrap info etc?
<JoshStrobl> lazyPower, in theory, this is what it is supposed to look like. It'd be nice though if the Ubuntu CSS got rid of the disgusting thead color and made each initial column the same width, but nothing I can do about that. http://i.imgur.com/PssugDU.png
<bac> tiger7117: the other services will not use the admin-secret.  realize i don't know exactly what you're doing.  it may not be prudent to tear down your environment and bootstrap another.  i assume this is just a test system?
<tiger7117> yes, test system but i dont want to spend more time building machines and setting things up again .
<lazyPower> JoshStrobl: i think that looks fine
<lazyPower> s/fine/great
<JoshStrobl> lazyPower, cool, I'll merge my local branches, push to GitHub and do a MP then!
<tiger7117> Ok, thanks guys, Good Luck !!
<JoshStrobl> MP: https://github.com/juju/docs/pull/151
<JoshStrobl> gonna go get some food now
<Sh3rl0ck> Is there a way to do HA with rados gateway in the JUJU based OpenStack environment
<Sh3rl0ck> Seems like the ha-cluster charm does not have any rados gw support
<Sh3rl0ck> Has anyone used haproxy charm with rados gw?
<hatch> If I wanted to set up a url rewriting with the apache charm what's going to be the best approach? Fork it and deploy a custom version?
<lazyPower> hatch: it has base64 encoded vhost support if i recall correctly
<lazyPower> so you can write your url rewrites as a vhost, and ship it as a config option instead of forking the charm.
<hatch> ahh I missed that in the config
<hatch> that would work
<hatch> now the question is, will haproxy or apache be a better choice :)
<arosales> lazyPower: asanjar: well done on http://youtu.be/f9yTWK7Z9Wg
<hatch> lazyPower: great vid! jw why the text got all garbled towards the end....was that my stream or is that how the source is?
<jcastro> lazyPower, post that video to the mailing list as well!
<lazyPower> hatch: which text?
<lazyPower> thanks arosales :)
<lazyPower> jcastro: on it!
<hatch> lazyPower:  umm towards the end when you were in the console, it got very pixelated, like over processed or something
<lazyPower> hatch: there's some issues that stemmed after the youtube upload :(
<hatch> lazyPower:  ahh.... here http://youtu.be/f9yTWK7Z9Wg?t=8m50s is that pixelated for you?
<lazyPower> the source is good, the youtube publish seems to have ganked a few spots towards the end. i'm seeing what you're asking about, its reading the clipping when i cut from a video source to another.
<hatch> ahhh ok ok np I just wanted to point it out
<hatch> great vid
<lazyPower> yeah, it is a bit. and it replays the same few frames.
 * lazyPower curses youtube for mangling his cut
<lazyPower> well not bad for a first time pro-amateur cut. All filmed/cut/edited/compiled on ubuntu 14.04 using lightworks, simple screen recorder, audacity and Tracktion 5 for audio remastering.
<lazyPower> i bet my next one i nail it.
<hatch> lazyPower:  you should do a video, on how you make videos :D
<lazyPower> hatch: i'm still pending one on how i DJ on linux :P
<hatch> haha
<lazyPower> that one is pending me cleaning my office however.
<lazyPower> which let me tell ya is HIGH on my priority list </sarcasm>
<hatch> lol
<hatch> do you have any hardware?
<hatch> "2 turntables and a microphone"
<lazyPower> just a midi controller, hercules instinct
<lazyPower> the rest is all software based mixing, and my turtle beach headphones + mic combo
<lazyPower> I'm looking at investing in a mic arm, with a studio quality mic so i can get away from this headset whine that i have to remove in post
<lazyPower> not sure if its the cans, or if its pulse being silly
<hatch> ahh cool
<hatch> I have 2 tech 12's from days gone by :)
<lazyPower> http://i.ytimg.com/vi/K3sWl3QEzA4/sddefault.jpg
<lazyPower> i love this little guy. its portable and sturdy enough to backpack
<hatch> yeah that's how things are going nowadays
<hatch> have you seen any of the NI stuff?
<lazyPower> i dunno man, did you see the UK INFEST gear?
<lazyPower> everyone there was using CD/Vinyl based setups
<lazyPower> and routed through a laptop for recording
<lazyPower> not sure about NI stuff - i haven't gotten that deep into the tech. I've only been at it for ~ 3 months.
<lazyPower> i should do a video "How to setup a pirate radio station with Juju so you can DJ with your friends across the globe"
<hatch> haha that would be cool
<hatch> yeah I used to be pretty big into it but.....well...time :) priorities....and such
<Sh3rl0ck> There is no ha-proxy charm for trusty..only for precise. Is there a work around?
<lazyPower> Sh3rl0ck: it deploys on trusty - whats preventing it from making it to the trusty side of the charm store is automated tests (ie: amulet tests)
<lazyPower> Sh3rl0ck: in your local charm repository (presumably ~/charms) if you mkdir -p trusty && cd trusty && charm get cs:precise/haproxy
<lazyPower> then juju deploy local:trusty/haproxy
<lazyPower> you will get the trusty charm you seek - but be aware that doing it this way means you'll have to fetch all updates until the trusty version lands in the store. Deploying a local resource is effectively forking the charm.
<Sh3rl0ck> Oh I see...Thanks for the information. So this means I can use precise charm for trusty?
<lazyPower> Sh3rl0ck: in most cases yes.
<lazyPower> there are some charms that will fail due to outdated dependencies, PPA's, etc. - There's an effort at auditing the store to promote charms to trusty, but its mid-range in the priority queue at the moment. We're looking for community help and re-prioritization of the audit efforts.
<mwhudson> hi, is it possible to use juju's openstack provider with an openstack that doesn't have swift?
<aisrael> When a new unit is deployed, are the relation hooks re-fired?
<marcoceppi> aisrael: yes
<marcoceppi> relations are created between services, but they're executed on a per unit basis
<marcoceppi> mwhudson: Juju uses the object store for metadata about the environment, so it likely won't
<mwhudson> marcoceppi: fair enough
<mwhudson> frobware: ^
<frobware> mwhudson: right, thanks. so swift seems happily deployed for me today...
<thumper> mwhudson: not yet, although we are trying to get to the state where juju manages its own object storage
<aisrael> marcoceppi: thanks!
<frobware> is there a means for juju status to show what is the floating ip address for openstack launched images? I see 10.10.10.x, but would like to see the floating ip assigned. If I look at the horizon dashboard it is listed and I'm able to ssh using the public addr.
<marcoceppi> frobware: interesting, it should show in the public-address field. What version of juju are you using?
<frobware> marcoceppi: 1.20.5-trusty-arm64
<marcoceppi> frobware: interesting, that sounds like a bug
<frobware> ubuntu@mustang01:~$ nova list |grep 10.4
<frobware> | 0ca83fe4-3055-44d4-a3b2-c56e267a6b39 | juju-openstack-machine-1 | ACTIVE | -          | Running     | internal10=10.10.10.4, 192.168.2.2 |
#juju 2014-08-28
<melmoth> hola juju folks. Is there an official release date planned for version 1.20.6 ?
<marcoceppi> melmoth: not sure, it will either be 1.20.6 or 1.21.0
<marcoceppi> depends on the release manager
<melmoth> we need a release _date_ for a stable release of juju (i'm assuming 1.21.0 is a development release, right ?) to plan a customer redeployment
<melmoth> my understanding is the next stable release is 1.20.6
<melmoth> (but i must admit i'm confused by the releasing scheme and what is stable and what is devel)
<rcj> Would this decorator fit anywhere in the charmhelpers? http://paste.ubuntu.com/8169760/
<rcj> It needs a bit more testing in my charm, but I thought others might have use for a decorator to run a function with the effective uid/gid of a particular user (rather than recursive chroot)
<rcj> That is rather than recursive chroot after the fact.
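rcj's paste link has since expired; a minimal sketch of such a decorator (a hypothetical reconstruction, not his actual charmhelpers proposal; the name `run_as_user` is assumed):

```python
import os
import pwd
from functools import wraps


def run_as_user(username):
    """Run the wrapped function with the effective uid/gid of
    `username`, restoring the caller's effective ids afterwards."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            entry = pwd.getpwnam(username)
            saved_euid, saved_egid = os.geteuid(), os.getegid()
            # Drop the group first: once the euid is unprivileged,
            # changing the egid may no longer be permitted.
            os.setegid(entry.pw_gid)
            os.seteuid(entry.pw_uid)
            try:
                return fn(*args, **kwargs)
            finally:
                # Restore uid first so we regain the right to
                # restore the gid.
                os.seteuid(saved_euid)
                os.setegid(saved_egid)
        return wrapper
    return decorator
```

Note the saved/restore dance only touches the *effective* ids, so files created inside the function are owned by the target user, which is the point rcj makes about avoiding a recursive ownership fix after the fact.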
<marcoceppi> rcj: looks sweet! I could see someone needing to use this
<rcj> marcoceppi, charmhelpers.core.host ?
<marcoceppi> rcj: makes sense
<rcj> Okay, I'll open a bug with an MP
<jcastro> http://askubuntu.com/questions/517100/can-i-implement-a-saas-infrastructure-with-juju
<jcastro> feedback/edits appreciated here: ^^^^
<jcastro> hazmat, this is very hazmatish ^^
<jcastro> heya ceppi
 * hazmat pokes
<jcastro> so what's left blocking the review queue?
<jcastro> the new review queue I mean
<marcoceppi> jcastro: a few bits and bobs
<marcoceppi> that I'm working on now
<marcoceppi> mostly the cron to keep it updated
<jcastro> marcoceppi, the top 2 charm reviews (udd, and launchpad), I marked those as invalid but they're still in there.
<marcoceppi> jcastro: because it's not refreshing yet
<marcoceppi> jcastro: about to upload a new database
<jcastro> oh ok
<lazyPower> ping #juju - I'm having a serious dose of cranial flatulence. If i *need* a precise based bootstrap node, i set-env default-series=precise and pass --upload-tools to the bootstrap command correct?
<marcoceppi> jcastro: review-queue db updated
<marcoceppi> lazyPower: you shouldn't need to upload-tools
<marcoceppi> since tools are based on architecture not series
<lazyPower> ok so i was mixing series= with arch=
<lazyPower> got it
<jcastro> marcoceppi, man, we made a bunch of progress!
<marcoceppi> yeah, good job cleaning up. it's my review queue time so I'm going to hit the incoming queue hard and knock out more items
<marcoceppi> still missing feedback detection, so more things might pop up in the next update
<marcoceppi> should be going live later this afternoon
<aisrael> If your juju charm won't die, you might just have an open debug-hook ^_^
<marcoceppi> aisrael: haha, yup! We should add that to the documentation if it isn't already there
#juju 2014-08-29
 * arosales finally getting to some review time.
 * marcoceppi_ as well arosales
 * arosales waves to marcoceppi_
<arosales> mbruzek: are you working on a community doc for reviews?
<mbruzek> arosales, Yes I am in a hangout with JoshStrobl
<arosales> perhaps with JoshStrobl ?
<arosales> ah cool.
<JoshStrobl> o/
<arosales> mbruzek: JoshStrobl one idea I had was to make the charm authoring docs a little more robust on common things ~charmers catch in a review
<arosales> so I made https://docs.google.com/a/canonical.com/document/d/1El7011q-DlaouSlkMLCiMOtWOjWu0ZX9bMva6AuVfq8/edit#heading=h.fql3tpkqjkg
<arosales> so in addition to what can be caught in a review, one idea I wanted to pitch was being preemptive: designing issues out of a charm based on learnings from charm reviews
<arosales> JoshStrobl: o/ Hello
<arosales> JoshStrobl: mbruzek: just some food for thought. To JoshStrobl point I have been thinking about how to make the review time shorter in addition to getting more eyes on reviews
<mbruzek> arosales, This document looks different than what we were working on.
<mbruzek> https://docs.google.com/a/canonical.com/document/d/1QpKWUE4bVtwBg3bSxy0mdz7q2wMbXDBqjIfKs_TR8pY/edit
<arosales> you should be able to edit that doc
<arosales> mbruzek: understood
<JoshStrobl> arosales, if you want to join the hangout -> https://plus.google.com/hangouts/_/g7rb4acvbvhcmeb6jdix7djwiea
<mbruzek> We were taking the approach of the community charmer, what specific steps do they need to take to review a charm.
<JoshStrobl> no obligation ;)
<arosales> mbruzek: ya let me refill on coffee and join for a few
<JoshStrobl> arosales, first draft up on the Charm Review Process doc: https://github.com/JoshStrobl/docs/blob/master/src/en/charm-review-process.md
<arosales> JoshStrobl: good stuff. Thanks for working on it with mbruzek
<arosales> JoshStrobl: I'll be taking a look this afternoon
<lazypooper> lazyPower_:  greetings from the subway charm
<mattrae_> hi, is there a way to re-run a relation-joined hook? i had to resolve some hooks so queued upgrade-charm could happen, but now i missed out on those relation-joined hooks
<mattrae_> i thought juju run was originally meant to re-run hooks, can it do that?
<mbruzek> Hi mattrae_ You can run juju debug-hooks charm-name/0
<mbruzek> mattrae_, you can run juju resolved --retry name/0
<mbruzek> That will put you in a charm context where you can rerun the hook if you like.
<marcoceppi_> mattrae_: not really, no, if you've already passed the hook context. I opened a bug about this
<jose> hey aisrael, mind a quick PM? :)
<aisrael> Sure thing
<jose> anyone around? I'm having some troubles bootstrapping
#juju 2014-08-30
<d4rkn3t> hello guys, I've a question about MaaS . Is there a way to make the upgrade of nodes via MaaS or Juju? thanks
<d4rkn3t> anyone can tell me if it's possible or not?
<d4rkn3t> I've another question for you: after deploying all the services to build a cloud infrastructure using Openstack, I can not open the Horizon Dashboard (http://it.tinypic.com/view.php?pic=szel4n&s=8#.VAHbuWCSxNw) even though all the relations are in green status.
<d4rkn3t> anyone can answer me?
#juju 2014-08-31
<hazmat> i have some new juju sliced bread
<lazyPower> hazmat: what is this sliced awesomeness you speak of?
<hazmat> lazyPower, cross cloud container density using rudder
<lazyPower> ooo i saw that on the mailing list
<hazmat> lazyPower, http://bazaar.launchpad.net/~hazmat/charms/trusty/rudder/trunk/view/head:/readme.txt
<hazmat> yeah
<hazmat> its pretty fun.. ec2 is borked atm ... i was doing it with the digital ocean plugin
<hazmat> lazyPower, regular scale testing on the cheap and charm testing en-masse on the way :-)
<lazyPower> but you're adding lxc on top of  cloud provider right?
<lazyPower> so, you're doing density testing of containers on a given cloud host as i see it.
<hazmat> lazyPower, yeah.. its cross host container communication on any provider
<hazmat> using a software overlay network via udp
<hazmat> i was playing around with gre tunneling, when rudder got released.. and it basically automates all the boring bits
<hazmat> lazyPower, yeah.. lxc or docker on top of a cloud provider (charm only auto confs lxc atm)
<lazyPower> hmmmm
<lazyPower> sounds interesting
<lazyPower> is all this routed through an intermediary or is this creating a UDP mesh network?
<davecheney> lazyPower: mesh network
<davecheney> via userspace tun device
<hazmat> mesh wins
#juju 2015-08-24
<g3naro> how do i scp files to a juju box?
<g3naro> juju scp file 1:
<g3naro> or something like this?
<plars> g3naro: juju scp local_file unit_name/num:/remote/path
<plars> g3naro: ex: juju scp myfile.tgz myservice/0:/tmp
<jose> jcastro: ping
<jcastro> yo
<jose> jcastro: you pinged me a couple days ago - haven't been on IRC for around a week
<jose> you needed something?
<beisner> gnuoy, thedac - so afaict, that cluster fix resolves the cluster races i was seeing (with LE) re: bug 1486177
<mup> Bug #1486177: 3-node native rabbitmq cluster race <amulet> <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):Confirmed for thedac> <https://launchpad.net/bugs/1486177>
<thedac> beisner: great. I will be working on a fix for pre leadership election versions today
<gnuoy> thedac, beisner, tip top, thanks
<beisner> coreycb, can you review/land this?  https://code.launchpad.net/~1chb1n/charms/trusty/swift-storage/amulet-update-1508/+merge/268788
<beisner> coreycb, heads up too - swift-proxy, openstack-dashboard shortly behind that.
<coreycb> beisner, sure.  I need to get liberty stuff done but then I'll look.
<redelmann> anyone know about juju-gui?
<redelmann> im trying to debug an issue
<redelmann> on ec2 and maas juju-gui is logging" {"RequestId":5,"Error":"unit not found","ErrorCode":"not found","Response":{}}"
<redelmann> juju debug-log: error stopping *state.Multiwatcher resource: unit not found
<mhall119> help, trying to re-connect to my old canonistack environment after a long time of ignoring it, now juju gives me: WARNING unknown config field "tools-url"
<mhall119> and doesn't do anything
<g3naro> maybe try removing that option from your config ?
<thedac> beisner: if you have time can you independently test juju < 1.24 against lp:~thedac/charms/trusty/rabbitmq-server/native-cluster-race-fixes and also make sure it did not regress for >= 1.24. I'll be running similar tests as well.
<bbaqar> Which branch of charmhelpers should I propose my changes in if I want them in each of the openstack charms?
<beisner> thedac, thank you.  yes, i'll cycle both.
<mattrae> hi, i'd like to use the openstack provider.. is there an option to specify the object store endpoint?
<mattrae> i can't seem to find it
<jcastro> Juju office hours in 30 minutes!
<rick_h_> jcastro: is there a topic or general Q/A?
<jcastro> general office hours
<jcastro> so like if someone shows up with an agenda that becomes the agenda
<jcastro> rick_h_: we haven't had a UI guy in a while if you want to fill us all in
<rick_h_> jcastro: ok, debating showing up but I don't have an agenda. Just to cheer or such :)
<jcastro> well, jrwren shows up but he never knows what he's working on
<rick_h_> :P
<rick_h_> jcastro: k, linky me happy to jump in
<jcastro> rick_h_: I'll file up the hangout in about 15
<rick_h_> alexisb: what were we talking about the other day about getting notice about?
<jcastro> also if anyone from juju-core wants to hop in that'd be awesome
<jcastro> wwitzel3: ^^^
<jcastro> beisner: if you've got time for some openstack charm updates since you guys just had a release ...
<wwitzel3> jcastro: sure
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYd2-532QvR_YgYczuO1Np1AHT7LT9PBI5Hw-YeiJNflAe0_bQ
<jcastro> rick_h_: wwitzel3: cory_fu ^^^^
<jrwren> jcastro: i can't talk about what I'm working on :p
<cory_fu> kwmonroe: ^^
<rick_h_> linky: https://jujucharms.com/docs/devel/charms-bundles
<rick_h_> jcastro: linky: https://github.com/juju/charmstore/blob/v5-unstable/docs/bundles.md
<rick_h_> jcastro: https://jujucharms.com/docs/devel/wip-systems
<rick_h_> jcastro: https://jujucharms.com/docs/devel/wip-users
<kwmonroe> hey rick_h_, is "bundle" the right source branch name for bundles?  or "trunk", or will either work?
<rick_h_> kwmonroe: it's bundle I think.
<rick_h_> kwmonroe: trunk is for charms
<kwmonroe> cool
<kwmonroe> ack
<rick_h_> kwmonroe: I think the diff was done as part of 'telling what's what' but it's history and not sure tbh
<wwitzel3> workload devel branch: https://github.com/juju/juju/tree/feature-proc-mgmt
<wwitzel3> jcastro: ^
<kwmonroe> realtime syslog analytics bundle: https://jujucharms.com/u/bigdata-dev/realtime-syslog-analytics
<cory_fu> http://interfaces.juju.solutions/
<rick_h_> jcastro: https://jujucharms.com/q/db-admin
<rick_h_> jcastro: https://github.com/juju/charmstore/blob/v5-unstable/docs/API.md#search
<wwitzel3> jcastro: https://insights.ubuntu.com/event/juju-charmer-summit-2015/
<Mortin> cool walkthru thx for stream :)
<mhall119> jcastro: what does "agent-state: down" mean? Does it mean the instance is down, or just something with juju?
<jcastro> it means the juju agent itself is down
<mhall119> the controlling node?
<jcastro> is this on a new deployment?
<jcastro> no, the agent on that node
<mhall119> no, old canonistack one that I haven't touched in months
<marcoceppi> mhall119: it means that juju can't speak to the agent that machines is running on
<marcoceppi> either the agent crashed or the machine is no longer reachable on the network (taken offline, networking changed, etc)
<mhall119> ok, can I juju destroy-environment when it's like this? or might that leave orphaned instances
 * mhall119 things canonistack might have moved recently 
<mhall119> s/things/thinks/
<hazmat> mhall119: if you remove machine --force it should do the trick, if its the bootstrap node, yeah destroy env force should do the trick (sans orphans)
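The two recovery paths hazmat outlines can be sketched as juju 1.x commands. The machine number and environment name below are placeholders, and the commands are echoed rather than executed here:

```shell
# Sketch of the recovery hazmat describes (juju 1.x CLI syntax).
# Machine "1" and environment "canonistack" are placeholder values.
# For a stuck non-bootstrap machine:
FORCE_MACHINE="juju destroy-machine --force 1"
# If the bootstrap node itself is down, tear down the whole environment:
FORCE_ENV="juju destroy-environment --force canonistack"
echo "$FORCE_MACHINE"
echo "$FORCE_ENV"
```

As noted, a forced environment destroy may leave orphaned instances behind, so check the provider console afterwards.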
<hazmat> marcoceppi: if you're around on wednesday, i'm doing an ansible talk at the modev meetup .. its right off the silver line at mclean stop
<marcoceppi> hazmat: sounds sweet!
<marcoceppi> hazmat: I just RSVP'd thanks for the heads up
<jcastro> https://insights.ubuntu.com/2015/08/24/a-midsummer-nights-juju-office-hours/
<hazmat> rick_h_:  just read through new bundle thingy in jorge's link above, doesn't support containers as machines per description
<rick_h_> hazmat: nested lxc's were fixed in a pr I believe. It's not landed yet. Waiting on review?
<rick_h_> hazmat: I know we had to fix something with that for the OS bundle case and we're running a deployer fork atm for that to work.
<rick_h_> hazmat: if I'm misunderstanding let me know/have an example and we'll get it fixed up.
<arosales> marcoceppi, jcastro: thanks for hosting the most recent office hours and sending out highlights with minute markers
#juju 2015-08-25
<h0mer> anyone know what this issue is?  Trying to install openstack w/landscape (ubuntu)
<h0mer> 2015-08-25 11:39:26 ERROR juju.worker runner.go:223 exited "rsyslog": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "juju-generated CA for environment \"rsyslog\"")
<g3naro> maybe you need to add the self-signed certificate to trust store ?
<h0mer> its already there
<h0mer> man this landscape/juju openstack install is such a pita
<h0mer> it's failed more times than actually worked (got it working once like a month ago).
<g3naro> what about maas ?
<h0mer> maas works fine
<h0mer> its just the landscape/juju stuff
<g3naro> ahh
<h0mer> i usually just blow the juju bootstrapped node and try again
<h0mer> but it's been failing at the same place for the past 10 times ive tried this
<g3naro> hmm
<g3naro> so you're trying to install openstack on box with juju
<h0mer> im doing this
<h0mer> http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
<g3naro> interesting
<g3naro> i have no experience with this
<g3naro> anyone using elasticbeanstalk ?
<whit> so why can't we name charms with periods in them?
<rick_h_> whit: historical due to . being a reserved mongodb thingy I think
<whit> :/
 * whit cries
 * rick_h_ sends tissues
<whit> we are working on some examples of layered charms and need a naming convention that denote inheritance hierarchies
<whit> guess dashes it is
<rick_h_> :/
<rick_h_> so has this gone out to the bigger picture? I mean I know you're hacking stuff together but it seems like something that if we're going to push/promote/make standards it needs to go a bit wider?
<whit> rick_h_: right now we are trying to establish good patterns for charm compose and the reactive framework (relation stubs in the parlance of sabdfl)
<whit> so smoothing would be good
<rick_h_> whit: right, I saw the demo in the office hours and I'd love to chat through with some folks and figure out an end goal that's nice for users to work towards
<rick_h_> whit: maybe we can't get all the candy, but have some idea where we're heading vs being reactive to it.
<whit> rick_h_: yeah...
<rick_h_> whit: stuff like this seems like maybe we could work out some patterns that could require some charmstore updates, or maybe some othger tools, but to help make it smoother than 'using hyphens'
<rick_h_> because that doesn't jump out as sustainable but I'm not in as deep as you are there so /me tosses grains of salt/etc
<whit> in general, discovering you can't name your thing the thing you want to sort of blows
<rick_h_> to be fair nested . separated stuff to 'means inherited' seems a bit hidden though either way.
<rick_h_> I mean try / or something? :)
<rick_h_> it's really the same
<jrwren> try :: :)
 * whit tries to salve the caremad
<whit> as a *user* of juju in the midst of a hypothetical onramp experience
<whit> if I can't name my charm what I want to, I'm sort of already feeling not so psyched
<whit> we already have a confusing forced directory structure with series
<whit> naming a charm with a slash is not going to be helping there
<jrwren> i agree. i wanted to name my charm ðð but i couldn't :(
<whit> anyway, there is what would be better, and what I can do now
<whit> unicode baby ;)
<whit> how are we going to rule chinese private clouds if you can't name your charms in unicode!?
<whit> anyway, not really a huge deal, but ... warts
<whit> ur... "can't name"
<kwmonroe> cory_fu: learn me about Relation.unfiltered_data vs .filtered_data.  the comment in helpers.py indicates filtered gives me info for services that have all their required_keys.  why would i ever want unfiltered?
<cory_fu> Because it's used to implement filtered_data, for one
<kwmonroe> i'm guessing for cases where a service isn't ready yet, but you still need to know something?  like private_addr.
<kwmonroe> heh.. good answer.
<cory_fu> But yes, the example you gave is a decent one
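The distinction kwmonroe and cory_fu are discussing can be sketched in plain Python: unfiltered data holds every related unit's relation data, while filtered data keeps only units that have published all `required_keys`. The unit names, keys, and dict shapes below are illustrative, not charmhelpers' actual classes:

```python
# Illustrative sketch of filtered vs. unfiltered relation data.
# Unit names and keys are hypothetical examples.
required_keys = ['hostname', 'port']

unfiltered_data = {
    'db/0': {'hostname': '10.0.0.1', 'port': '5432'},   # fully ready
    'db/1': {'private-address': '10.0.0.2'},            # joined, not ready
}

# Filtered: only units that have set every required key.
filtered_data = {
    unit: data
    for unit, data in unfiltered_data.items()
    if all(key in data for key in required_keys)
}

print(sorted(filtered_data))  # ['db/0']
# Unfiltered still lets you read db/1's address before it is "ready",
# which is kwmonroe's private_addr case:
print(unfiltered_data['db/1']['private-address'])  # 10.0.0.2
```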
<skylerberg> After a charm goes through the review process is accepted, does it need to be re-reviewed in order to update the charm in the charm store?
<jcastro> yo beisner
<jcastro> http://askubuntu.com/questions/665795/openstack-juju-charms-test-suite-not-running
<jcastro> beisner: I tend to just ping you as the "openstack guy in my timezone", so if it gets annoying lmk
<beisner> yo
<beisner> hey jorge
<beisner> jcastro, i'll reply on that one
<jcastro> my man
<thomi> marcoceppi: Not sure if you're still planning on looking at juju-deployer any time soon, but https://bugs.launchpad.net/juju-deployer/+bug/1488667 would be really nice to fix (and probably not too hard either)
<mup> Bug #1488667: juju-deployer does not error on incorrect config option <juju-deployer:New> <https://launchpad.net/bugs/1488667>
<marcoceppi> thomi: thanks for the info, we'll triage this bug with feedback and generate a plan
<thomi> marcoceppi: thanks
<beisner> jcastro, replied.  thanks for bringing that to my attention.
<jcastro> beisner: if you hover over the "openstack" tag on the site you can subscribe to it sending you sets of questions over time
<beisner> jcastro, ah cool.  subscribed.
<lazyPower> jcastro: we need to go triage and beef up on the juju questions in Stack OVerflow as well
<lazyPower> there's quite a few tagged on the dashboard that are quite crusty at this point.
<lazyPower> it may be relevant to setup some kind of scraper for this data and address/move it as it comes in to AU vs letting it age and be seen as non-critical on SO
<jcastro> it'd be nice to use the SE API to just snag the top ones and throw them in the review queue
<catbus1> Hi, I have a juju and maas question, say if I have 100 machines managed by a single MAAS controller, and I would use say 3 servers for one customer, 5 for the other. But when I want to show customer A the services on juju-gui I deployed I don't want to show the services deployed for customer B. Is there a way to define a subset of the environment to be shown on juju-gui?
#juju 2015-08-26
<claw_> Not sure if this is the right place to ask, but hopefully it's a good start.  Is anyone able to help with problems I'm having with metadata services and shared_secret_keys not being passed around properly?  Or can point me somewhere that might be more appropriate to ask?
<lukasa> If I use juju set keystone admin-password=<some password> once my OpenStack deployment is up, should I expect Horizon to automatically pick this change up?
<lukasa> Because it doesn't seem to
<Walex> I am seeing in the Juju 'all-machines.log' several periods where there are lines like: "machine-0: message repeated 46502 times: [2015-08-25 09:49:53 WARNING juju.lease lease.go:301 A notification timed out after 1m0s.]" where should I start looking? That seems to be causing problems...
<jose> jcastro: hey, you submitting a talk for fossetcon?
<jcastro> no I have devops days I am submitting to
<jcastro> do you want to submit this one?
<beisner> gnuoy, can you review/land mongodb c-h sync re: Unsupported cloud: source option trusty-liberty/proposed?:  https://code.launchpad.net/~1chb1n/charms/trusty/mongodb/sync-fetch-helpers-liberty/+merge/268413
<gnuoy> beisner, done
<beisner> gnuoy, thanks.  fyi, pxc is in the same boat.  working on a mp now.
<stub> Is anyone else seeing odd failures with juju-deployer? It seems to sometimes 'forget' to deploy a service, and then fails when it tries to add relations to the service it neglected to deploy.
<stub> Maybe receiving out of date information when introspecting the environment?
<lazyPower> so lets say I got persnickety and turned off all of my EC2 vm's associated with my juju environment last night, when i powered them on this morning and fetched the new bootstrap node IP and updated my .jenv file with the proper bootstrap ip address, does anyone have suggestions as to why juju 1.24.4 cannot seem to get a response? the daemons are running on the state server
<lazyPower> juju just seems to not be able to reach the node, and i'm suspect that the provider state file in s3 is the culprit....
<lazyPower> stub: is this a local charm declaration or a charmstore service?
<g3naro> you can telnet on port ?
<lazyPower> g3naro: let me try that, 1 sec
<stub> lazyPower: These are local charms
<g3naro> if response, then maybe port is up but service is not responsive, try restart of agent on one box if possible
<lazyPower> g3naro: yeah service is up but not responding, and there's a relevant security group rule
<lazyPower> i'm calling shenanigans
<g3naro> hmm
<lazyPower> stub: I'm not sure why that would be the case. does juju deploy the service just fine manually when you tell it to deploy local:series/foo?
<stub> lazyPower: Its buried three miles deep in the test suite, so hard to tell. I think it might be seeing a dying service that is taking its time to disappear. I might have to wait harder.
<lazyPower> stub: it would be nice if it had the option to force destroy services eh?
<lazyPower> force destroy env, time-wait until service disappears - so we know its not an issue.
<stub> lazyPower: You can force destroy the machines, then you should always be able to destroy the service. But juju is async, so you still need to wait. I just didn't realize I needed to wait for that too.
<stub> (assuming my guess is right)
<lazyPower> right
<lazyPower> i can see that, getting in a scenario where the service is still present in the env but has no units backing it, so you get this weird transient no unit but have a service situation
<jose> jcastro: yeah, I'm checking my schedule and looks like I can go. I think I'll submit a talk for then
<jcastro> ack.
<jose> jcastro: also, is it fine if I book now? later today the agent will be EOD (belgium based)
<jcastro> sure, go for it!
<jcastro> jose: seb has visa issues so he won't make it. *sadface*
<jose> jcastro: are you sure? he said he could get one fine... I'll double check in Spanish with him, maybe he meant something else
<jose> I'll let you know how that goes
<jcastro> no there's like a 20 day minimum
<jcastro> next time we will book the hotel like 3 months in advance
<jose> it's more than 20 days and afaik they give you the appointment for the next business day
<jose> oh welp
<jose> btw, if you need me to bring something from peru lmk
<g3naro> bro i would love 2 hot peru girls
<jose> g3naro: don't think I can take those to the summit :P
<g3naro> hehehe worth a shot
<Walex> so I am getting on a master node of a Juju setup of a dozen nodes a few dozen thousand to a few million warnings saying "WARNING juju.lease lease.go:301 A notification timed out after 1m0s."
<Walex> per day... Sometimes thousands of those messages per second. And the Juju MongoDB db has perhaps relatedly grown to ~ 800 files and 200GB. Where to start looking?
<beisner> hi gnuoy, pxc ready for review/land.  cleaned up, ch-sync'd, passing.  https://code.launchpad.net/~1chb1n/charms/trusty/percona-cluster/liberty-prep/+merge/269210
<gnuoy> beisner, done
<beisner> gnuoy, ta!
<gnuoy> np
<beisner> gnuoy, coreycb - with today's mongodb and percona-cluster merges, that completes the Liberty uca ch-sync bit for all of the next charms in all of our tests.   Now we're completely clear to find other issues.  ;-)
<coreycb> beisner, nice, step by step..
<beisner> coreycb, yep.
<beisner> thanks again guys for the effort on all fronts.
<beisner> hi coreycb, in a spot where you can review/land this puppy?  https://code.launchpad.net/~ack/charms/trusty/keystone/pause-and-resume/+merge/267931
<coreycb> beisner, I took a quick glance but seeing as it's actions, james may want to take a scan
<beisner> coreycb, fyi it's parallel to what we've already landed in swift-storage (https://code.launchpad.net/~adam-collard/charms/trusty/swift-storage/add-pause-resume-actions/+merge/268233), so I think it's good to go.
<coreycb> beisner, ah, in that case...  I'll take a closer look
<coreycb> beisner, I'll look today, tied up in liberty packaging/mentoring atm
<beisner> coreycb, much appreciated
<mattyw> lazyPower, any update on https://github.com/juju/juju/issues/470?
<lazyPower> ah i haven't looked at that in ages O_O
<lazyPower> mattyw: I have no updates, but this looks to be something we just need to tune in the vagrant config?
<lazyPower> swap the port and adjust the python script thats setting up the GUI on boot
<lazyPower> aisrael: issue 470 linked above is in your fiefdom if you have a moment to take a look and weigh in
<lazyPower> aisrael: and i swear i'm not passing the buck, just want your opinion as the current liaison/maintainer of the vagrant workflow
<sparkiegeek> lazyPower: btw /topic could do with an update ("Office Hours, here July 30'th")
* lazyPower changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit Sept. 18-19 US Washington DC - Ask us about details || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
* lazyPower changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit Sept. 18-19 US Washington DC - http://ubunt.eu/KorUSN   || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<lazyPower> sparkiegeek: thanks :)
<aisrael> lazyPower: reading
<aisrael> lazyPower: I'm not sure if that error is still happening. Seeing if I can recreate it now.
<aisrael> mattyw: Have you re-tested lately?
<mattyw> aisrael, I haven't - I'm on call reviewer today so I'm just going through the issues on github and chasing things that might need to be chased
<aisrael> mattyw: Gotcha. I'll follow up soon, thanks!
<aisrael> mattyw: commented, but don't have perms to close. I'm unable to recreate. I believe this has been fixed within the vagrant image since the bug was opened.
<ddellav> coreycb,  upgrade action mp complete and all tests pass: https://code.launchpad.net/~ddellav/charms/trusty/cinder/upgrade-action/+merge/269247
<coreycb> ddellav, cool, I'll take a look tomorrow
<ddellav> :)
<jose> niedbalski: ping, mind a quick PM?
<jose> also, marcoceppi, http://review.juju.solutions/review/2196 needs clearing from the revq
#juju 2015-08-27
<beisner> thedac, wolsen, gnuoy, coreycb, dosaboy - fix for bug 1485722 proposed and tested.  i view this as critical as it is a deployment blocker for Vivid and later when nrpe is related to rmq.  please & thanks :-)
<mup> Bug #1485722: rmq + nrpe on >= Vivid No PID file found <amulet> <openstack> <uosci> <nrpe (Juju Charms Collection):Invalid> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1485722>
<wolsen> beisner, well sir let me go look
<wolsen> beisner - added a comment
<wolsen> beisner, please take a look - if I don't hear back I can fix it in the merge as the rest of the change looks fine
<beisner> wolsen, removed a cat :-)
<wolsen> beisner, awesome, those pesky felines getting in the way :)
<wolsen> cats, rabbits, meh
<wolsen> beisner, this is backport-potential yes?
<beisner> wolsen, definitely
<beisner> wolsen, affects next and stable rmq + nrpe
<beisner> well just the rmq charm but nrpe users will hurt
<wolsen> beisner, yuppers - though its slightly mitigated by not affecting an lts version - but ya
<beisner> wolsen, it'll be a MP blocker once those tests land ;-)   then it'll get heat if it doesn't already have any.
<wolsen> beisner, +1 on that
<wolsen> beisner, I was noticing earlier with a proposal that jillr put up - there's not a lot of test around some of the nrpe stuff - which is something we should mark for improvement somewhere
<beisner> wolsen, i added a comment on that fyi
<beisner> basically to hang tight while we iron out these crits
<beisner> then resubmit
<wolsen> beisner, ah ok - hadn't looked at it too closely, but was trying to help jillr take a look at it
<wolsen> beisner, did she retarget? I had asked her to retarget /next (honestly haven't looked at that)
<beisner> wolsen, yep.  but it's running the old tests, which as we know now, lead to releasing broken-ass charms.
<wolsen> beisner, ack
<beisner> wolsen, hence the hang-tight, then rebase & resubmit review.  no sense in not-testing any additional features.
<beisner> biab, thanks for the check-in wolsen
<wolsen> beisner, landed
<beisner> wolsen, thanks sir
<wolsen> beisner, np
<Walex> Hi I am trying to login to the Juju MongoDB instance and I am getting "not authorized" errors. I am using '-u admin' and '-p ....' from the '/var/lib/juju/agents/machine-0/agent.conf' file (API and state password are the same). The 'mongod' instance with started with '--keyFile ....' but there does not seem to be an equivalent option for the 'mongo' client. Suggestions welcome.
<Walex> also curiously all three members of the replica set have different passwords. How does one member log into the other members?
<lazyPower> Walex: MongoDB replicaset passwords are cluster specific, so typically you log in through a mongos gateway to reach your cluster nodes
<lazyPower> however i'm not certain how you would log in and poke around in the Juju DB - thats a good question. The core devs probably would have some insight here
<Walex> lazyPower: http://www.metaklass.org/how-to-recover-juju-from-a-lost-juju-openstack-provider/ has a suggestion which I tried...
<lazyPower> natefinch:  wwitzel3 - Any insight for Walex on how they can connect to the jujudb?
<Walex> lazyPower: maybe I am using the wrong terms here, I don't see any 'mongos' daemons running on the nodes. I meant perhaps the state servers.
<Walex> the 3 nodes can log into each other obviously (I see the port 37017 traffic)...
<Walex> but that's obviously done with the replica set keyfile.
<lazyPower> right. I'm not so familiar with how our juju stateserver is setup to be honest. i just know it exists and what function it serves
<lazyPower> the experience i speak of is from running a distributed monogdb cluster
<Walex> will wait or perhaps later send a mailing list message.
<wwitzel3> let me see if I can remember
<natefinch> you use the oldpassword field from the agent config on machine 0....
<natefinch> I forget the exact incantation
<Walex> natefinch: I'll try again
<Walex> natefinch: in http://www.metaklass.org/how-to-recover-juju-from-a-lost-juju-openstack-provider/
<Walex> there is a plausible looking line
<lazyPower> Thanks natefinch and good morning o/
<Walex> but I use the 'oldpassword' and "auth fails" not sure if 'admin' is the right user
<natefinch> Walex: that looks good to me. I usually just get the password the old fashioned way (copy and paste) but assuming the grep does the right thing, then yes
<Walex> ahhhhhhhhh I have just noticed my mistake: I was trying to log into the 'juju' database, not 'admin'. oops
<Walex> indeed with "/admin" that works, sorry...
<lazyPower> Walex: glad we could get you sorted!
<Walex> lazyPower: and I can connect directly to 'localhost:37017/juju' if I add '--authenticationDatabase admin' as an option to 'mongo'
<Walex> sorted!
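The working invocation Walex arrives at can be sketched as below. The password is the 'oldpassword' value from /var/lib/juju/agents/machine-0/agent.conf; 'secret' here is a placeholder, and the command is echoed rather than run:

```shell
# Sketch of the mongo login that worked: authenticate against the 'admin'
# database, then work in the 'juju' database. 'secret' is a placeholder for
# the oldpassword field from machine-0's agent.conf.
PASSWORD='secret'
CMD="mongo -u admin -p ${PASSWORD} --authenticationDatabase admin localhost:37017/juju"
echo "$CMD"
```

The key detail from the conversation is `--authenticationDatabase admin`: the admin credentials live in the `admin` database, not in `juju` itself.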
<lazyPower> cory_fu: so, if you've got a moment - we got this far yesterday - http://paste.ubuntu.com/12205843/
<lazyPower> thats our reactive/nginx.py, down to implementing a relationship stub and the super simple intro to reactive and layers is basically complete. we've mirrored what we use in charm schools to teach charming w/ docker
<cory_fu> lazyPower: Would it have killed you to select Python as the language to get syntax highlighting?  ;)
<lazyPower> cory_fu: i used pastebinit?
<cory_fu> Ah.  Fair enough
<cory_fu> I'm surprised pastebinit doesn't guess the format based on the file name
<lazyPower> papercutz
<cory_fu> :)
<lazyPower> cory_fu: thanks for the sync yesterday. that really got mbruzek and I moving. We're going to have this particluar charm wrapped today and ready to move on to extending the base layer(s) and writing docs before the week is up
<cory_fu> lazyPower: You have a bit of a bug in your config-changed handler.  It could potentially call stop_container and attempt to issue docker commands before docker is installed
<lazyPower> that was critical to resolving things we were doing exploratory dev for
<lazyPower> cory_fu: in our testing it was from install =>    and the entire chain ran before it hit a possible stop hook.
<lazyPower> what scenario would be exposed that leaves us vulnerable to calling stop before its present
<cory_fu> Oh, yeah, I suppose you're right.  docker.available will be set during the install hook, so it's a bit moot.  Though, if your docker base layer ever changes (say to require a repo URL config or something) that could potentially delay docker install until config, it could open you up.  *shrug*
<cory_fu> lazyPower: I was going to suggest creating an nginx.restart state that you could set
<cory_fu> Would be another potentially useful entrypoint for layers using this
<cory_fu> And would future-proof the code against the admittedly non-issue
<lazyPower> we thought about that, and i forget why exactly we refactored down to just dropping in a config-changed hook context vs using the state
<lazyPower> but it makes sense
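The `nginx.restart` state cory_fu suggests can be sketched without the real charms.reactive library: a hypothetical mini-dispatcher where a handler only fires once every state it waits on is set. State names follow the conversation; the dispatcher API here is illustrative, not charms.reactive's actual one:

```python
# Library-free sketch of reactive state gating. A handler registered with
# when(...) runs only when all of its states are active, which is how a
# 'nginx.restart' state would be guarded by 'docker.available'.
active_states = set()
handlers = []  # list of (required_states, fn)

def when(*states):
    """Register a handler that fires only when all `states` are active."""
    def register(fn):
        handlers.append((frozenset(states), fn))
        return fn
    return register

def set_state(state):
    active_states.add(state)

def dispatch():
    """Run every handler whose required states are all set; return names."""
    fired = []
    for required, fn in handlers:
        if required <= active_states:
            fn()
            fired.append(fn.__name__)
    return fired

@when('docker.available', 'nginx.restart')
def restart_container():
    pass  # would stop/start the container here

# Before docker is available, a config-changed restart request is a no-op:
set_state('nginx.restart')
print(dispatch())             # []
set_state('docker.available')
print(dispatch())             # ['restart_container']
```

This is the future-proofing cory_fu mentions: even if docker install moved out of the install hook, the restart handler could never run docker commands too early.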
 * mbruzek starts reading the scrollback
<kwmonroe> hey lazyPower, i just deployed cs:trusty/etcd and was met with: http://paste.ubuntu.com/12206396/  have you seen that before?
<kwmonroe> ^^ no relations or anything, just "juju deploy cs:trusty/etcd" got me there
<mbruzek> hey Kevin that looks awful
<mbruzek> kwmonroe: that looks like our overuse of path.py has come to bite us.
<mbruzek> kwmonroe: Can you paste the entire unit log?  Did the pip install path.py not work?
<mbruzek> in the install hook
 * marcoceppi wears a smug face
 * mbruzek waves and nods at marco
<kwmonroe> momento mbruzek, i tore that env down, but I'll fetch the logs again shortly
<mbruzek> kwmonroe: it looks to me that the install hook does not install pip
<Walex> I see that the Juju "command" node(s) don't run (necessarily) any daemon, and that the "state" nodes run 'mongod' from the 'juju-mongodb' package. Also I see that all nodes run 'jujud' from the unit 'tools' directories. I am about to update to 1.24.5. How do the 'jujud' binaries in each unit get updated? When? Are there ordering dependencies among the 'juju-*' packages for upgrade, and among the state servers? ...
 * Walex worries about details...
<kwmonroe> mbruzek: i betcha you gotta do "from path import Path" (cap P on the 2nd path): http://paste.ubuntu.com/12206611/
<mbruzek> kwmonroe: It looks like path.py was updated today!  It is possible that is not working.
<mbruzek> kwmonroe: we use lowercase path all over the place.  whit can you help with this problem?
<kwmonroe> mbruzek: we use cap P in our big data charms... now fight.
<lazyPower> kwmonroe: hmm interesting, let me check the charm code 1 sec kwmonroe
<kwmonroe> mbruzek: lazyPower.. i'm just gonna leave this here: https://pypi.python.org/pypi/path.py
<lazyPower> kwmonroe: i follow that with the bug that spawned this issue
<lazyPower> https://github.com/jaraco/path.py/issues/102
<lazyPower> kwmonroe: but thanks for the heads up on the issue. We'll cut a hotfix patch and get it queued up - as we are apparently broken in the store now
<kwmonroe> gracias lazyPower!
<lazyPower> kwmonroe: once we have a fix in place do you mind being the on-call reviewer for that MP? i'll stack it on what you're already reviewing so its applicable :D
<kwmonroe> sure lazyPower, i'll be your huckleberry
<lazyPower> aww yeee
<lazyPower> kwmonroe: broken rel of path.py was just pulled from pypi
<lazyPower> ready for you to re-test at your leisure
<lazyPower> whoa juju gui just removed its crosshatched background - https://github.com/juju/juju-gui/pull/799
<kwmonroe> confirmed lazyPower, latest deploy pulls path.py-7.7.1 and all is right with the world.  would you like a tracking bug requesting s/path/Path for the inevitable time when path does finally go away?
<rick_h_> lazyPower: quit watching us :P
<lazyPower> kwmonroe: we're going to pin package deps now, and prepare for the inevitable breakage when we have the bandwidth
<lazyPower> thats used in a lot of places
<lazyPower> and we have a lot of auditing to do
<kwmonroe> ack lazyPower.  thanks to you and whit for the nudge to get path-8 out of pypi :)
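The dependency pinning lazyPower mentions could look like the fragment below: pin path.py at the release the conversation confirms as working, so a future upload that drops the lowercase `path` alias can't break deploys again. A hypothetical requirements file, not the charm's actual one:

```
# requirements.txt (sketch): pin path.py to the last known-good release,
# since the 8.x line removed the lowercase `path` name the charms import.
path.py==7.7.1
```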
<Walex> I updated packages on master node, now 'juju upgrade-juju' tells me that "no more recent supported versions available"  how do I make a more recent version of the tools available to the nodes?
<lazyPower> Walex: when you juju upgrade-juju it should have published a newer version of the tools to your state server
<lazyPower> the nodes will slowly start to upgrade once your environment is upgraded if memory serves me correctly
<Walex> lazyPower: ahhhhhhh so in theory I just wait. I noticed somewhere a mention of a queue
<Walex> ah but just looked at my state servers and I don't see the 1.24.5 directories I'll investigate
<Walex> what's peculiar is that when I upgraded 'juju-core' it took many minutes and it is a fairly small package.
<wolsen> ackk, regarding our discussion for the keystone pause/resume
<ackk> wolsen, yes
<wolsen> ackk, so for the clustering support - if we had the support in hacluster charm to move the vip off a node (e.g. get a node in maintenance mode or paused), then I think what is in the keystone charm would be fine actually
<wolsen> ackk, the pause and such that is
<wolsen> ackk, I still have a concern that if a user were to simply issue the pause against keystone but they hadn't done the appropriate action on the hacluster charm that they could end up with a service disruption
<wolsen> which might not be great
<ackk> wolsen, right. there are other similar cases where there's more to do than "service foo start/stop". for instance ceph OSDs need to be set to "noout"
<wolsen> ackk, right - maybe we can address it with docs around the pause/resume action?
<ackk> wolsen, I see your point. I'm a bit worried about putting a lot of logic in a single action and having an action with the same name doing different things across charms
<ackk> wolsen, there are other cases where you'd definitely want separate steps, like for nova-compute
<wolsen> ackk, that's a fair point, but to me the action defines the semantics of what you want to happen and its up to the charm to define what needs to happen for that action to take place - which can add some complications
<wolsen> ackk, that being said lets try to keep it simple until we have to
<wolsen> do more
<wolsen> ackk, but if we do keep it simple, we still need to be able to inform the user what other actions need to take place
<ackk> wolsen, you mean documenting that you should do other stuff before stopping services/
<ackk> ?
<wolsen> ackk, yep
<wolsen> ackk, I'm thinking the action docs would say something about requirements in a clustered scenario, e.g. running the pause action there first
<ackk> wolsen, btw what's needed on the hacluster side to move the VIP?
<wolsen> ackk, if we could enforce that the action were run first, that'd be great, but that's kind of above and beyond...
<wolsen> ackk, for the hacluster - theres the option to move a resource - but the cluster may need to be in maintenance mode as well or the node marked as offline
<wolsen> ackk, i'd have to go through the specific details of how to do that (to refresh my memory)
<ackk> wolsen, I see. so basically we could add a pair of actions there so that you'd "juju action do pause hacluster-keystone/X; juju action do pause keystone/Y"
<ackk> (roughly)
<wolsen> ackk, yep
<ackk> wolsen, cool
<wolsen> ackk, so it'd still keep the building blocks you're looking to add (we can fancy it up in the future if needed)
<ackk> wolsen, +1
<wolsen> ackk, but the user needs to know that they have to do the multi-step process
<ackk> wolsen, agreed
<ackk> wolsen, could you sum that up in a comment on the MP?
<wolsen> ackk, doing so now
<ackk> wolsen, thanks
<ackk> wolsen, totally unrelated (but since we're on openstack charms topic), do you know any downside of not using the embedded webserver for ceph-radosgw?
<wolsen> ackk, and I think the other proposals which are similar (e.g. glance and percona-cluster) will likely fall into the same - though percona-cluster I think we should carefully think through that in some more depth (I'll try to give some more thought to it)
<wolsen> ackk, when not using the embedded server, it doesn't have the 100 continue support built-in to the apache service. Ceph devs used to provide an apache package which had it but they yanked it in favor of the embedded web server
<wolsen> ackk, the 100 continue support is necessary for some of the use cases (e.g. using it from the horizon dashboard)
<ackk> wolsen, I see
<wolsen> ackk, so the preferred way forward is the embedded server
<wolsen> ackk, but is there another use case that you have for not using it?
<ackk> wolsen, well, we've seen failures in autopilot deploys recently. I'm not sure it's related, but it might have happened since we switched to the embedded server
<wolsen> ackk, oh :(
<ackk> wolsen, as said it's just a guess, maybe it's an unlucky coincidence
<wolsen> ackk, logs and a bug would be great (if you haven't gotten one already)
<ackk> wolsen, https://bugs.launchpad.net/charms/+source/ceph-radosgw/+bug/1477225
<mup> Bug #1477225: ceph-radosgw died during deployment <cloud-install-failure> <cpec> <ceph-radosgw (Juju Charms Collection):New> <https://launchpad.net/bugs/1477225>
<wolsen> ackk, also wanted to say the MP looked really good in general and thanks for that contribution!
<ackk> wolsen, np! :)
<ackk> wolsen, wrt maintenance, we're also not sure yet of what needs to be done on neutron-gateway nodes (see notes in the doc)
<wolsen> ackk, bleh yeah that's tricky as it will almost certainly cause service disruption unless dvr is enabled I believe
<ackk> wolsen, specifically if removing/readding the l3router in neutron is needed, and how to properly cause a failover if
<ackk> wolsen, we deploy with l3ha
<wolsen> ackk, ok
<ackk> router-ha, that is
<ackk> wolsen, still, stopping services on the node is not enough to cause a failover
<wolsen> ackk, I'll have to dig into it (I don't have enough background on neutron gateway and ha to be honest)
<ackk> wolsen, ok, thanks for the info
<ackk> wolsen, and for the review :)
<wolsen> ackk, np :-) it was fun!
<ackk> heh
<kwmonroe> lazyPower: fwiw, i saw pypi went to path.py-8.1 and re-checked etcd.  you're still good.
<lazyPower> kwmonroe: above and beyond, thats awesome. Thanks!
<kwmonroe> np lazyPower, gives you time to work out which version you want to pin.
<whit> lazyPower: this path.py hiccup makes me think we should have an official juju python index
<marcoceppi> whit: that sounds like the opposite of what we need, why not just version lock your deps?
<whit> marcoceppi: accomplishes the same thing without having to edit all the places the dep is defined every time you need to update
<whit> marcoceppi: think of it as a hierarchy of control
<whit> the index is centralized, but under our control (unlike pypi)
<whit> then reqfiles and setup.pys become the more granular control
<marcoceppi> sounds like a lot of work for little pay off
<lazyPower> whit: that sounds like an extra maintenance burden and infrastructure for the sake of running infrastructure. It would yield some benefit, but i'm not certain thats enough to not just version lock deps.
<lazyPower> if we had packages constantly getting yanked from pypi, then yes, that sounds like the way to move forward
<lazyPower> so we can maintain the versions we depend on that are otherwise disappearing
<whit> the issue is the "default" version
<whit> which is always the most current in the index
<lazyPower> well, thats fair, but we also didnt define any of that in our requirements. in 2 places we had a blind 'pip install path.py' on the CLI
<whit> if you pin, you no longer will pull newer
<lazyPower> and in others we had no version data in the requirements.txt file
<marcoceppi> sure, so you should develop a charm, pip freeze, deliver, iterate, pip update, freeze again, release
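The pin-and-freeze workflow marcoceppi describes can be sketched roughly like this (a minimal sketch; the pinned versions below are illustrative, not the actual pins any charm used):

```shell
# Exact "==" pins stop a surprise upstream release (like the path.py 8.x
# bump discussed here) from changing what a deploy installs.
cat > requirements.txt <<'EOF'
path.py==7.7.1
EOF

# After developing against a known-good environment, freeze what is
# actually installed so the next deploy reproduces it exactly:
#   pip freeze > requirements.txt
# and on deploy:
#   pip install -r requirements.txt

grep '==' requirements.txt   # every dep should carry an exact pin
```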
<whit> I don't think you all are grokking how these things scale or the tooling available
 * marcoceppi shrugs
<whit> just take it as my advice until it makes sense ;)
<lazyPower> whit: thats possible
<lazyPower> whit: but do we adopt the same thing with companion technologies for other languages? Run our own gem host, npm index, et-al?
<whit> think of it as every deploy of any charm as your "product website"
<whit> you wouldn't deploy packages straight from pypi to prod
<whit> quality control folks
<lazyPower> i did :3
<whit> ur.. production
<lazyPower> and i hated the pain that introduced like today, but we pinned deps then too. we didn't spin up a gem host.
<marcoceppi> so, how does pip freeze not solve this?
<marcoceppi> I don't want us to be responsible for someone's charm not working because an index is down
<marcoceppi> or they're using a newer version than our index or vice versa
<whit> marcoceppi: vs. pypi or github being down which you can do nothing about?
<marcoceppi> yeah, but we're not responsible and they're all well established services that have a team dedicated to keeping those things running
<marcoceppi> no ones perfect but I don't want to run pager duty because we're running our own index
<whit> marcoceppi: yeah, but those being down == a crap experience for charm deployment
<marcoceppi> and our index being down?
<whit> this is an academic example in reliability and control
<whit> we can fix that
<whit> we can't fix externalities
<whit> that's the point of the example
<marcoceppi> this is the exact reason SAAS exists
<whit> marcoceppi: when your shit breaks because someone else's saas breaks, you still look like an ass
<marcoceppi> if you want to run this for a set of charms you maintain, sure, that sounds great, I don't think it sets us up for success anymore than what exists with pypi or other services
<whit> saas exists so you don't have to build it
<whit> but when aws goes down, netflix loses money
<marcoceppi> and when your shit breaks because you can't run a web service 24/7 due to staffing you look like a bigger ass
<whit> marcoceppi: that we can fix ;)
<whit> marcoceppi: my general point is that python libraries working are part of charms working and therefore part of a good charming experience
<whit> which is important to the success of juju
<lazyPower> whit: this sounds more like deps should be bundled with charms.
<lazyPower> not that we should run an indexer
<marcoceppi> running a proxy isn't a project concern
<marcoceppi> it's an operations concern
<marcoceppi> we run pypi proxies in our private environments
<whit> resources would help, but an index fixes the problem now without the developer issue of pin maintenance
<marcoceppi> that just work, leave this for ops people to run themselves, not us
<whit> https://pypi.python.org/pypi/devpi
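A hedged sketch of what whit is proposing: pip resolving through a devpi caching proxy instead of pypi directly. This assumes a devpi-server is already running; `localhost:3141` is devpi's default port and `root/pypi` its default pypi mirror index.

```shell
# Point pip at the local cache via a pip config file (path is arbitrary;
# PIP_CONFIG_FILE lets us avoid touching ~/.pip for this sketch).
cat > pip.conf <<'EOF'
[global]
index-url = http://localhost:3141/root/pypi/+simple/
EOF
export PIP_CONFIG_FILE=$PWD/pip.conf
# Subsequent "pip install" calls now resolve through the proxy, which can
# keep serving known-good copies even if pypi yanks or replaces a release.
```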
<marcoceppi> lunch is here
<whit> marcoceppi: yes it is an operations concern.  that we agree on.
<whit> whose operations concern it is, and why, we don't
<lazyPower> whit: actually the fact we were pulling from pypi and not some mirror helped us today... had we still had the 8.0 copy cached we would still be broken in the charm store right now
<rick_h_> whit: marcoceppi fyi from another team's perspective. We've recently discussed talking with IS on running a pypi index in prodstack for our services there and gating/curating. However, we currently have a matching "xxx-download-cache" project for each codebase and build it into the project's build steps.
<rick_h_> this allows for complete offline building of code/projects
<rick_h_> and completely verison locked w/o internet access (since prodstack has egress firewall locks)
<rick_h_> whit: marcoceppi so I guess some additional feedback: I can't be unable to redeploy my production because pypi or GH are having DDoS issues atm.
 * rick_h_ goes back to lurking
<jrwren> Every organization and project has different tolerances for acceptable risk. Some projects may be willing to accept the risk that goes with depending on github or pypi being up, others cannot.
<whit> lazyPower: we would have tested the new copy before updating the index
<whit> therefore no breakage?
<lazyPower> i guess
 * lazyPower shrugs
<lazyPower> i'm not interested in running a pypi mirror
<lazyPower> but if someone else is, i'm thumbs up to them doing it
<whit> rick_h_: that sounds good
<whit> ideally grabbing and freezing all necessary resources for a charm has lots of benefits
<whit> this is the general idea behind IBWFs, more or less
<whit> whether you are building on the fly or building resource blobs or some sort of image, controlling the source material has lots of benefits
<whit> image workflows do have the advantage of breaking before deploy (in the build stage) rather than during deploy
<jrwren> whit: how far do you take this? what makes archive.ubuntu and PPA different from pypi?
<beisner> Fwiw I use devpi caching mirror in uosci
<beisner> as doing a lot of iterations revealed pypi weaknesses and false test failures
<whit> jrwren: archive.ubuntu is better curated than pypi
<whit> ppa is effectively == to a self hosted index if you control the ppa
<whit> if you don't, you trust the maintainer, so it depends on the nature of the relationship
<jrwren> whit: ah, I thought you were referring to uptime. I used to deal with pypi being down often enough.
<whit> so if you run the index, you have a bit more control of the uptime
<whit> rather than depending on the packaging vols of pypi
<jrwren> yes, my point is, that to a 3rd party, there isn't much difference between pypi and a ppa.
<jrwren> Both are out of control systems which present risk.
<whit> jrwren: risk is contextual
<jrwren> what do you mean?
<whit> if i am trying out juju, and my charm fails due to pypi being down, I still will say juju sucks
<jrwren> definitely true.
<whit> if I'm using juju for a situation I'm invested in, it behooves me to run a debian index and a python index
<whit> and my own charm server
<jrwren> exactly.
<jrwren> or accept the risk of not doing so.
<whit> so from the context of eco (and anyone who cares about adoption), controlling the central vectors of potential failure is valuable
<whit> yep
<whit> a good first time experience is one that works
<whit> from our perspective, it's a tradeoff between investing in running, curating and monitoring our own index, vs. the less known cost of random failure
<jrwren> its a very good point.
<jrwren> it makes me wonder if charms shouldn't declare their external dependencies.
<jrwren> certainly with unit status, external deps could be handled in the charm and a status-set blocked used to clearly tell the end user that an external dep failed.
<beisner> +1 to charms declaring external dependencies, at minimum in the form of a README blip.
<beisner> That is so much better than having to figure it out via install hook failures when you're sitting behind firewalls and proxies.
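jrwren's status-set idea might look roughly like this inside an install hook. This is a hypothetical fragment, not from any existing charm: the reachability check, messages, and use of pypi as the external dep are all illustrative (status-set itself is a standard juju hook tool).

```shell
mkdir -p hooks
cat > hooks/install <<'EOF'
#!/bin/bash
set -e
# Hypothetical external-dependency check: if the index is unreachable,
# report it through unit status instead of dying with a pip traceback.
if ! curl -sf --max-time 10 https://pypi.python.org/simple/ >/dev/null; then
    status-set blocked "external dependency unreachable: pypi.python.org"
    exit 0
fi
pip install -r requirements.txt
status-set active ""
EOF
chmod +x hooks/install
```

Behind a firewall or proxy, that "blocked" message tells the operator immediately what failed instead of leaving them to dig through install hook errors.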
#juju 2015-08-28
<Walex> what does 'juju sync-tools' do? Is it related to 'juju upgrade-tools'?
<Walex> my issue is: I upgraded the Juju packages 1.23.3 to 1.24.5 on the "control" node but the '/var/lib/juju/tools/' directories on the state nodes are not being updated
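For reference, the usual sequence for Walex's situation can be sketched as a small script (written out rather than run here; "myenv" is a placeholder environment name, 1.24.5 is the version from the question):

```shell
cat > upgrade.sh <<'EOF'
#!/bin/sh
# Hedged sketch: after upgrading the juju client package, publish matching
# tools into the environment and trigger the upgrade.
juju sync-tools -e myenv        # copies tools from the public stream into
                                # the environment's storage (useful when
                                # state servers can't reach the stream)
juju upgrade-juju -e myenv --version 1.24.5
juju status                     # unit agents then upgrade gradually
EOF
chmod +x upgrade.sh
```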
<luqas> hi, I might be hitting https://bugs.launchpad.net/juju-core/+bug/1417875 on juju-core 1.24.5
<mup> Bug #1417875: ERROR juju.worker runner.go:219 exited "rsyslog": x509: certificate signed by unknown authority <canonical-bootstack> <logging> <regression> <juju-core:Fix
<mup> Released by wwitzel3> <juju-core 1.21:Fix Released by wwitzel3> <juju-core 1.22:Fix Released by wwitzel3> <https://launchpad.net/bugs/1417875>
<luqas> wwitzel3: I'm trying your manual workaround but I can't find the "good" certificate
<ssmoCoffee> We are installing Openstack per the single installer guide ubuntu-cloud-installer.readthedocs.org/en/latest/single-installer.guide.html.  Bootstrapping Juju fails with an error message about kvm-ok and cpu-checker packages needing to be installed.
<ssmoCoffee> The packages cpu-checker and qemu-kvm are installed on both the host server and the bootstrap container.
<ssmoCoffee> Anyone run into the same issue?
<ssmoCoffee> I reproduce the error attaching the bootstrap container and running juju --debug bootstrap -e local
<ssmoCoffee> ERROR juju.cmd supercommand.go:430 there was an issue examining the environment: failed verification of local provider prerequisites: kvm-ok is not installed. Please install the cpu-checker package.
<firl> anyone able to possibly help me diagnose why my landscape isn't installing via the openstack-install? I am looking in the Juju files and I can't seem to figure out which error to try to research.
<lazyPower> dpb1_: ping
<lazyPower> ssmoCoffee: thats... odd. Which version of ubuntu/openstack installer?
<lazyPower> Walex: Did you get an answer out of band?
<ssmoCoffee> lazyPower: OpenStack Installer v0.99.19
<lazyPower> firl: i'm pending a response from a landscape charmer, however - if you could send over the juju log from that landscape node we should be able to track down whats happening
<lazyPower> ssmoCoffee: is this on trusty?
<ssmoCoffee> lazyPower: running kvm-ok from the bootstrap container fails
<ssmoCoffee> vivid
<lazyPower> @ddellav - can you check on this? kvm-ok on a bootstrap node in vivid?
<lazyPower> ssmoCoffee: which version of juju?
<firl> lazyPower: sure will do. I tried on 2 different systems, and a VM with embedded. Do you know if I deploy landscape automatically ( not openstack-installer ) if I get the openstack-beta tab still?
<lazyPower> firl: i'm not positive, but i would *think* so
<sparkiegeek> firl: you do :)
<ssmoCoffee> 1.24.5-vivid-amd64
<lazyPower> perfect, latest stable. thanks ssmoCoffee
<firl> lazyPower: sparkiegeek: so the openstack-installer is just a "nice" wrapper so you don't have to create your own juju file and environment?
<lazyPower> 1 sec while i poke around and try to get some output.
<sparkiegeek> firl: that and setting up of MAAS
<firl> which can be done post install
<firl> kk
<sparkiegeek> (which is the hard part that the installer automates [some of] for you)
<firl> that makes me feel a little better
<sparkiegeek> so if you're comfortable getting MAAS setup, you can quite easily deploy Landscape using Juju
<firl> lazyPower: http://pastebin.com/U8TeqYAK
<sparkiegeek> https://help.landscape.canonical.com/LDS/JujuDeployment15.01
<firl> nice, yeah I will do that
<ssmoCoffee> lazyPower: thanks.  Should /proc/cpuinfo be empty on the bootstrap contianer?
<lazyPower> ssmoCoffee: not at all
<lazyPower> the /proc/cpuinfo should be filled out always due to procfs
<lazyPower> and thats part of ubuntu core
<firl> sparkiegeek: do you know if the openstack-beta installer has been upgraded to vivid/kilo?
<ssmoCoffee> lazyPower: on the host /proc/cpuinfo is populated and kvm-ok returns that KVM acceleration can be used
<sparkiegeek> firl: there's Work In Progress on trusty/kilo - we don't target the interim releases (vivid, wily ...)
<firl> kk
<firl> I just liked the idea of lxc containers that came with vivid
<sparkiegeek> what do you mean? LXD?
<ssmoCoffee> lazyPower: so the lxc bootstrap container isn't valid
<firl> ya LXD sorry
<firl> https://insights.ubuntu.com/2015/04/22/here-comes-kilo-15-05-containers-will-never-be-the-same-again/
<sparkiegeek> firl: ah, containers in your OpenStack - right we're excited by that too, but you'll have to wait for 16.04 (next LTS) before we integrate it
<sparkiegeek> (in the autopilot I mean)
<firl> I figured :)
<firl> are the openstack charms also only targetted for lts?
<sparkiegeek> there's some rounding out of nova-compute-lxd that still needs to happen - an LTS is the best place to do that :)
<sparkiegeek> no, the charms will support vivid/kilo
<sparkiegeek> s/will/do/
<firl> oh really?
<firl> I can deploy via the juju ui and get vivid/kilo right now?
<sparkiegeek> beisner: ^^ do you guys have a bundle somewhere for vivid/kilo?
<beisner> sparkiegeek, firl - while there isn't an official V:K bundle published in the charm store, we do have test bundles which exercise that.  it would need to be tailored to your desired topology and machine/service placement, however.
<sparkiegeek> beisner: sure, I expect firl could start with them as a drag/drop into Juju GUI and then tailor from there?
<sparkiegeek> machine view for placement + any other tweaks
<firl> beisner,sparkiegeek: yeah that doesn't scare me. I ran into an issue last month where I had to commit the ceph config before adding the units because of having to specify all the drives
<firl> I am using a custom deployed openstack bundle on my machines at home
<beisner> sparkiegeek, firl - i'm not as familiar with the gui.  i generally tune the yaml in advance, with the desired deployment completely described before even bootstrapping.
<beisner> but, i think you can do some pretty cool things like that in the gui
<firl> beisner: that would work great actually for the deployment I am going for
<sparkiegeek> beisner: sure
 * beisner fetches a ymmv bundle as a starting point for ya
<firl> :)
<beisner> firl - caveat - these aren't vetted for production, they are the topologies we test with in charm and package development.
<beisner> http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/baremetal/7-default.yaml
<beisner> juju-deployer --bootstrap -c default.yaml -d vivid-kilo
<beisner> rather
<beisner> juju-deployer --bootstrap -c 7-default.yaml -d trusty-juno
<beisner> ah bugger
<beisner> copy paste fail
<firl> haha
<sparkiegeek> 7-default.yaml, vivid-kilo :)
<beisner> juju-deployer --bootstrap -c 7-default.yaml -d vivid-kilo
<beisner> yagotit
<beisner> lol
<beisner> thankfully, it is friday
<firl> beisner: thanks! I will definitely be trying this later, there is a 40 node maas cluster I am trying to work with currently
<sparkiegeek> firl: so not sure how familiar you are with Juju + bundles, but that one is a v3 bundle format (contains multiple targets); the latest/greatest format that the GUI supports is v4, so you'll have to use juju-deployer to ... deploy it :)
<firl> sparkiegeek: I can still manage it post install via the juju-gui right?
<sparkiegeek> firl: sure
<firl> very cool
<beisner> firl, sparkiegeek - ++caveat:  there is a known issue with pxc (percona-cluster) on vivid.   you can substitute lp:charms/trusty/mysql
<beisner> bug 1481362
<mup> Bug #1481362: pxc server 5.6 on Vivid and Wily does not create /var/lib/mysql <amulet> <openstack> <uosci> <percona-xtradb-cluster-5.6 (Ubuntu):New> <percona-cluster (Juju Charms Collection):New> <https://launchpad.net/bugs/1481362>
<beisner> oh wait
<beisner> i already did that via inheritance in the bundle :-)
<beisner> http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/baremetal/7-default.yaml#L314
<firl> haha
<beisner> firl, ah cool 40 nodes.  happy deploying!
<firl> beisner: haha thanks should be fun
<beisner> o/ thanks sparkiegeek lazyPower
<firl> and whoever set the title of the room scared me, the dates seem to be 17-18
<sparkiegeek> firl: 10 points for attention to detail
<sparkiegeek> lazyPower: ^^ topic has wrong dates for the summit :)
<firl> haha, well I am about to book travel so I got scared
<firl> beisner: do you have a help article on how to get the new environment ready for juju as well? I had quite a pain prepping my environment, and that is one thing I liked that landscape did
<beisner> firl, you already have maas with machines enlisted and ready to go to work?
<firl> ya
<firl> I meant the vivid/kilo OS prepped for juju deployments
<sparkiegeek> ah, Juju on the inside?
<firl> yeah
<firl> thatâs one thing landscape did, deployed tools, and prepped the juju
<beisner> oh to use juju with the openstack provider on the deployed cloud
<firl> ya
<sparkiegeek> https://jujucharms.com/docs/stable/howto-privatecloud is the documented way
<firl> yeah â¦ I am very familar with that page, hope it works out better than last time haha
<beisner> also https://jujucharms.com/docs/devel/config-openstack  for the openstack-provider specific environment.yaml options
<firl> thanks, yeah Iâve been through it. I will try again, hopefully with the latest juju itâs a little easier now
<firl> 1.24 caused issues / frustrations
<beisner> firl, as reference, my environments.yaml looks like this, sanitized:  http://paste.ubuntu.com/12215832/
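A sanitized sketch of an environments.yaml stanza for the openstack provider, along the lines of what beisner is referring to (every name and value below is a placeholder; the option names come from the config-openstack doc linked above):

```shell
cat > environments.yaml <<'EOF'
environments:
  my-openstack:                 # placeholder environment name
    type: openstack
    auth-url: https://keystone.example.com:5000/v2.0   # keystone endpoint
    region: RegionOne
    tenant-name: my-tenant
    auth-mode: userpass
    username: my-user
    password: secret
EOF
```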
<firl> beisner: thanks, I appreciate it
<sparkiegeek> firl: FWIW in my experience you don't have to put tools in to your OpenStack - what the Autopilot does is "just" put things in Swift to tell Juju where to find the Ubuntu images (e.g. Glance)
<beisner> firl, you're welcome
<sparkiegeek> i.e. just the Image Metadata Contents section of that doc page
* lazyPower changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit Sept. 17-18 US Washington DC - http://ubunt.eu/KorUSN   || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<pmatulis> does 'enable-os-upgrade' affect just 'apt upgrade' or 'apt full-upgrade'?
<pmatulis> lazyPower, wallyworld, marcoceppi: ? â
<lazyPower> pmatulis: that stops the apt-get update && apt-get upgrade when spinning up a new instance iirc.
<pmatulis> lazyPower: ok, so just upgrade and not full-upgrade
<lazyPower> aiui, correct
<pmatulis> thanks
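In environments.yaml the setting pmatulis is asking about looks like this (a config sketch with placeholder names; per lazyPower above, it only governs the plain apt-get update/upgrade run when an instance is spun up):

```shell
cat > environments.yaml <<'EOF'
environments:
  myenv:                        # placeholder environment name
    type: local
    enable-os-upgrade: false    # skip apt-get update && apt-get upgrade
                                # on new instances (faster provisioning,
                                # slightly stale packages)
EOF
```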
<bdx> core, dev: Does anyone here know how to add custom cloud-config to maas provisioning....i.e. curtin_userdata preseeed or custom preseed??
#juju 2016-08-29
<PCdude> http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju
<kjackal> Hello Juju world
<magicaltrout> hey kjackal
<kjackal> Hi magicaltrout!
<PCdude> hey kjackal  and magicaltrout
<PCdude> I have a problem with JUJU and openstack
<PCdude> here is the question
<PCdude> http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju
<kjackal> Hi PCdude
<magicaltrout> blimey
<magicaltrout> that only took 15 hours
<magicaltrout> finally got maas 2.0 to pxe boot on bloody virtualbox
<PCdude> any idea on how to fix it?
<kjackal> PCdude: Unfortunately not, but I would suggest you try #openstack-charms
<magicaltrout> yeah i've never used openstack i'm afraid
<kjackal> the people at #openstack-charms might be able to help you
<kjackal> admcleod_: are you there?
<admcleod_> yes
<admcleod_> kjackal: hello
<kjackal> admcleod_: you might have an idea about http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju
<admcleod_> kjackal: PCdude unfortunately i dont know anything about conjure-up - but that error makes me think perhaps there is a lib mismatch or something.
<admcleod_> the first error anyway, the second look like a hardware problem
<admcleod_> ...although fd0 is probably floppy disk? so perhaps its just complaining because theres a virtual floppy device attache
<admcleod_> d
<admcleod_> *guessing*
<PCdude> admcleod_:  yeah, I thought about the floppy disk too, but the weird part is that there is no floppy disk turned on in the bios and also there is no floppy disk added in the VM of the node
<admcleod_> PCdude: weird. well, maybe that can be ignored - it looks like other people have had the first issue too
<PCdude> what u mean "other people", is this a know issue?
<admcleod_> PCdude: it looks like it, i googled the error and i see other people have reported it recently
<PCdude> admcleod_:  I have tried different fora and sent a bug report to Juju, so I think u are looking at my post :) maybe u can share the link?
<admcleod_> PCdude: e.g. http://askubuntu.com/questions/808372/ubuntu-server-16-04-openstack-mitaka-installation-juju-error/813395
<admcleod_> PCdude: perhaps try the suggestions at the bottom of that link? let us know if that helps
<PCdude> that is not me, but thanks for the link, I am gonna try that, I will let u know admcleod_
<admcleod_> magicaltrout: you at the new job yet? or are you still on earth
<admcleod_> PCdude: im hopeful..:)
<magicaltrout> admcleod_: hehe
<magicaltrout> i'm on the next ship out of here
<magicaltrout> although today i'm working on my presentation for MesosCon on Wednesday
<magicaltrout> learning MAAS in a hurry
<admcleod_> colorado?
<magicaltrout> Amsterdam
<magicaltrout> Amsterdam on Wednesday, Bluefin for a Pentaho thing on Thursday, back for a week, then Pasadena for Juju Summit & working the following week @ JPL
<admcleod_> oh yeah cool
<PCdude> magicaltrout:  , where are u from?
<magicaltrout> Good question PCdude
<admcleod_> ill be at pasadena - ill bring some cheese chocolate
<magicaltrout> I can tell you I live outside London currently
<magicaltrout> admcleod_: i look forward to not bumping into you then
<PCdude> magicaltrout: ah ok, cool! I live a little to the east of u then :)
<magicaltrout> where you based PCdude ?
<PCdude> not amsterdam, but close
<magicaltrout> been a few years since I was last in Holland, shame its a flying (no pun intended) visit
<admcleod_> magicaltrout: aww :)
<PCdude> how old are u magicaltrout ?
<magicaltrout> well yesterday I thought I was 34
<magicaltrout> the mrs had to correct me
<admcleod_> magicaltrout: so you're not 34, you're a messy idiot?
<admcleod_> :D
<PCdude> haha, 21 here
<magicaltrout> 21
<magicaltrout> makes me feel old
<PCdude> and admcleod_ how old are u?
<PCdude> u are old magicaltrout ;)
<admcleod_> 36 i think
<magicaltrout> you'd never guess with his bald head
<admcleod_> haha
<PCdude> I think?! haha
<admcleod_> yeah, i look about 50
<admcleod_> i stopped counting after 30 because i felt like id clocked the game
<admcleod_> now every year is just a step closer to death.
<admcleod_> ;]
<PCdude> woah, the best had yet to come :)
<magicaltrout> every day
<magicaltrout> not every year
<admcleod_> you younuns and your days
<admcleod_> younguns
<PCdude> well, I tried both options of the link admcleod_ , no luck so far...
<PCdude> stupid openstack
<admcleod_> PCdude: ok - if you stay in openstack-charms then someone who can help should be along soon
<magicaltrout> don't use it
<magicaltrout> its not the law ;)
<PCdude> admcleod_:  I will
<PCdude> magicaltrout:  , yeah well the whole landscape idea sounded really good to me, and what are the alternatives?
<magicaltrout> what you trying to acheive PCdude ?
<PCdude> I really wanna build a private cloud, with several servers. storage node, network node, provisioning
<PCdude> basically a private version of amazon aws
<PCdude> I know its a big plan, but I do not care :)
<magicaltrout> fair enough, go wild then! :)
<PCdude> but yeah, are there other options than openstack
<PCdude> ?
<PCdude> and I am still a student, so I dont have 1000 dollars a month laying around
<admcleod_> PCdude: can you pastebin your juju accounts.yaml? (without passwords)
<PCdude> admcleod_:  , sure one moment
<magicaltrout> well it still depends on what you're trying to do. Learn cloud computing? then use Openstack. Create a private cloud for development? Maybe there is a different way of doing stuff outside of a private cloud
<PCdude> admcleod_:  http://pastebin.com/auGcqBiM
<PCdude> with of course, the maas key and password included in the real file
<PCdude> magicaltrout:  , learning for sure, development as in for programming projects? or as in development of a private cloud?
<admcleod_> PCdude: so, i think thats missing some stuff ...
<admcleod_> PCdude: and im not sure what conjure-up is supposed to add
<PCdude> admcleod_:  ok, hit me, what should I add?
<PCdude> admcleod_:  conjure-up is what is supposed to be used when installing openstack on 16.04 and higher
<PCdude> before that, someone could issue "sudo openstack-install"
<admcleod_> PCdude: i mean, i dont know if conjure-up should be adding this extra configuration that should be there or not
<PCdude> conjure-up (according to some stackexchange users) should give more freedom
<admcleod_> PCdude: i would say, just to check that the juju install is ok, try https://jujucharms.com/docs/stable/getting-started
<PCdude> admcleod_:  uhm, good idea  I think
<PCdude> admcleod_:  , I fucked up this VM now anyway, so I think I need to start over to avoid some weird errors, I tried to fix the python error myself, without any luck
<admcleod_> PCdude: right ok so starting from scratch sounds like a good idea
<admcleod_> PCdude: verify juju, and then maybe someone will have a response to the specific error
<PCdude> admcleod_:  btw, a little offtopic, but do u know some simple program to make an unattended version of ubuntu 16.04 for install?
<PCdude> its just for some simple stuff, so I dont need a whole server with puppet or FIA
<admcleod_> PCdude: preseed?
<admcleod_> PCdude: https://help.ubuntu.com/lts/installation-guide/example-preseed.txt < something like that?
<PCdude> admcleod_:  I did find some old posts yesterday, but was not sure if that was still the way to go in 2016 with ubuntu 16.04
<PCdude> admcleod_:  I will look into preseed
<admcleod_> PCdude: that one is for xenial - it basically provides answers to the normal installation prompts
<PCdude> admcleod_:  and how do I add it to the install?
<PCdude> the iso I mean
<admcleod_> PCdude: you would need to mount the iso as an fs, add the file, rewrite the iso..
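A rough sketch of the remaster admcleod_ describes (paths, labels, and the preseed answers are illustrative; the mount and image-rebuild steps need root and are shown commented out):

```shell
mkdir -p iso-mount iso-custom

# 1. Unpack the original ISO (needs root):
# mount -o loop ubuntu-16.04-server-amd64.iso iso-mount
# cp -rT iso-mount iso-custom

# 2. Add a preseed file with answers to the installer prompts
#    (a minimal illustrative fragment; see example-preseed.txt above
#    for the full option list):
cat > iso-custom-preseed.cfg <<'EOF'
d-i passwd/user-fullname string Ubuntu User
d-i passwd/username string ubuntu
d-i pkgsel/update-policy select none
EOF
# cp iso-custom-preseed.cfg iso-custom/preseed.cfg

# 3. Rebuild a bootable image:
# genisoimage -D -r -V "Custom 16.04" -cache-inodes -J -l \
#   -b isolinux/isolinux.bin -c isolinux/boot.cat \
#   -no-emul-boot -boot-load-size 4 -boot-info-table \
#   -o custom.iso iso-custom/
```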
<magicaltrout> admcleod_: you'll know this, if I juju deploy onto maas, does it install onto the bare metal? or in a lxc container?
<admcleod_> PCdude: perhaps its better to ask that in #ubuntu though, maybe there is a better way
<PCdude> magicaltrout:  on bare metal
<admcleod_> magicaltrout: if you just 'juju deploy ubuntu' itll deploy it to bare metal, if you 'juju deploy ubuntu --to lxc:machine#' then lxc
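The same two placements expressed as a v3-style bundle fragment, with illustrative service names:

```shell
cat > placement.yaml <<'EOF'
services:
  ubuntu-metal:
    charm: cs:trusty/ubuntu
    num_units: 1          # no "to": MAAS hands over a bare-metal machine
  ubuntu-lxc:
    charm: cs:trusty/ubuntu
    num_units: 1
    to: ["lxc:0"]         # an LXC container on machine 0 instead
EOF
```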
<magicaltrout> cool I need the baremetal scenario just checkin
<magicaltrout> g
<PCdude> admcleod_:  yeah, will try that, I find a ton of options u have to go around this "problem" so maybe there are some simple GUI versions even I dont know
<rock_> Hi.  I did juju deploy openstack on LXD using  https://github.com/openstack-charmers/openstack-on-lxd.   By mistake I did  #juju logout --force. Now I am not able to see $juju status.
<rock_> How can I log back in? I tried #juju login, but it asks for a username and password and we lost the password. It shows nothing in accounts.yaml. How can I get back to my previous status?
<rock_> please help me in this
<rts-sander> juju logout --force by mistake?
<rock_> rts-sander
<rock_> rts-sander: Yes
<rock_> rts-sander: do you know how to solve? please tell me
<magicaltrout> cmars: ping
<cmars> magicaltrout, hi
<magicaltrout> hi cmars, I'm trying to ascertain the difference between your two nagios interfaces
<magicaltrout> can you shed any light?
<cmars> magicaltrout, local_monitors vs nrpe_external_master?
<magicaltrout> yeah. if my charm wanted to run an nrpe script, which do I pick? :)
<magicaltrout> my gut says local_monitors, but I'm just guessing ;)
<cmars> magicaltrout, well, they're basically the same content-wise. the local-monitors interface is useful if you want to deploy nagios into the same model and relate the nrpe subordinate to it
<magicaltrout> ah, now i see what you mean by external master
<cmars> magicaltrout, so i typically use this to test our nagios integration, for internal deploys at work
<cmars> magicaltrout, when we go to production, we use nrpe-external-master
<magicaltrout> cool, now it makes sense :)
<cmars> magicaltrout, i've also noticed a layer:nrpe written more recently, this might be useful as well. haven't tried it yet, but it looks interesting
<cmars> i think it may take a different approach
<stub> magicaltrout, cmars : I was putting together https://launchpad.net/nagios-layer so everyone could share the same boilerplate
<cmars> stub, awesome!
<jcastro_> marcoceppi: can you help out with this one? Kind of complex http://askubuntu.com/questions/817399/juju-charms-related-to-etsi-osm-mano-nfv-deployment/817449#817449
<beisner> hi rock, so to recap:  you did a deploy with the lxd provider, and that worked ok.  then after a juju logoff, you're wondering how to re-auth so you can interact with the model again.  is that right?
<beisner> hi rock_ - so you have a charm that you've authored, or are planning to author, and you're wondering how to get it into the charm store as a recommended (official) charm?  is that what you mean by certified?
<magicaltrout> lazyPower: ping
<lazyPower> magicaltrout pong
<magicaltrout> ah hello there
<magicaltrout> prepping for wednesday
<magicaltrout> quick question
<rock_> beisner : Yes.
<magicaltrout> is it possible to use layer-docker without installing docker?
<magicaltrout> I would like to provide docker.available
<magicaltrout> i guess the actual question is, is there an interface instead? : )
<beisner> hi rock_ - i'm not sure that is recoverable (or if so, how).  based on the output of `juju logout` without force, in an identical use case, i would believe that there is an autogenerated initial password that is expected to be changed by the user before logging out.  ex:  http://pastebin.ubuntu.com/23107425/
<lazyPower> magicaltrout https://github.com/juju-solutions/layer-docker/blob/master/layer.yaml
<lazyPower> in your inheriting layer, override skip-install to be true. you get docker.available without the delivery bits
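The override lazyPower describes lives in the consuming layer's layer.yaml; a sketch, assuming the skip-install option defined in the layer-docker layer.yaml he linked:

```yaml
# layer.yaml of the layer that includes layer:docker (illustrative sketch;
# option name taken from the layer-docker layer.yaml linked above)
includes:
  - 'layer:docker'
options:
  docker:
    skip-install: true   # sets docker.available without installing docker
```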
<magicaltrout> nice
<magicaltrout> i can just put the same config in my layer.yaml right?
<lazyPower> magicaltrout https://github.com/juju-solutions/layer-basic#layer-configuration
<lazyPower> err
<lazyPower> https://github.com/juju-solutions/layer-basic#layer-options
<lazyPower> magicaltrout ^
<magicaltrout> ta
<rts-sander> is there some kind of minimal example of a charm making a call to the juju API?
<marcoceppi> rts-sander: charms don't usually make calls to the juju API
<marcoceppi> rts-sander: outside of the hook tools
<marcoceppi> rts-sander: what kind of call are you looking to make?
<rts-sander> I want the charm to scale itself
<rts-sander> so add-unit in this case
<marcoceppi> rts-sander: ah, I see
<andrey-mp> marcoceppi: hi, I've asked you about mysql charm for 16.04 and I see that it's not ready yet. But what is the way to install OpenStack to 16.04 now?
<rock_> beisner: OK. But all the containers are still there in a persistent state. I ran #lxc list. Can we manage all these containers with a newly bootstrapped controller?
<shruthima> hi petevg/kevin , may i know ibm-http charm review process status ?
<petevg> kwmonroe ^^ it looks like our notes on it are blank. Is that the one where we were missing some files from the ftp site?
<magicaltrout> lazyPower: Unable to load tactics.docker.DockerWheelhouseTactic from /home/bugg/Projects/dcos-master
<lazyPower> magicaltrout - are you running the latest charm-tools?
<magicaltrout> dunno
<magicaltrout> 2.1.2
<rock_> andrey-mp: Hi. You can use  https://github.com/openstack-charmers/openstack-on-lxd. All in one deployment on VM.
<lazyPower> marcoceppi - can you confirm/deny if the apt packaged tooling has the wheelhouse fix?
<lazyPower> magicaltrout - give me 10 to do some investigation
<magicaltrout> sure
<andrey-mp> rock_: these bundles contain 'charm: cs:xenial/percona-cluster' for mysql )
<rts-sander> I know juju-gui makes calls to the API but it's not easy to extract a minimal example from it
<rts-sander> then there's the golang juju/juju/api package but it has minimal documentation and no examples
<andrey-mp> rock_: thank you. it's useful.
<lazyPower> magicaltrout - i've just built from layer-docker  in charmbox:latest (which is using 1.25.6 and tooling from apt) - so if you bump your version of charm-tools you should get a clean build. If that's not the case, i'll need the output of dpkg --list | grep charm  so we can poke around. An issue related to your build error was also filed here: https://github.com/juju-solutions/layer-docker/issues/67
<shruthima> petevg: if anything is missing from our end on ibm-http charm please let us know.
<lazyPower> magicaltrout - marco says "install the snap" alternatively, its also in the devel ppa. ppa:juju/devel
<lazyPower> rather ppa:juju/stable (we have no idea where we put things)
<petevg> shruthima: Sorry about the lack of comment on the PR. It looks like ibm-http is the one that was missing files. We couldn't find the resources mentioned in the README on the ftp site.
<petevg> shruthima: I'm in a meeting now, but after I'm out, I'll drop a comment on the PR w/ the specifics.
<shruthima> petevg: ok thanks
<petevg> you're welcome!
<magicaltrout> here's a good dcos reference implementation lazyPower : https://ibin.co/2tFN0mNz6mr5.png
<lazyPower> magicaltrout - which charm is missing the icon?
<magicaltrout> most of the code doesn't exist yet, but who cares ;)
<magicaltrout> I borrowed your docker-nginx charm and hacked it up to do dcos-nginx
<lazyPower> right on
<magicaltrout> so you have your dcos core services, logging and monitoring from logstash and nagios
<lazyPower> hence the desire for docker.available :D
<magicaltrout> then you can attach docker containers
<lazyPower> i see, i see.
<magicaltrout> yeah i didn't use that in the end =/
<lazyPower> no worries, options are there :)
<magicaltrout> I'd like to figure out some dcos/docker compatible interface
<magicaltrout> but that'll have to come later
<lazyPower> magicaltrout - last of the refactorings are landing in layer-docker. Give your layers a rebuild and let me know if you encounter any new failures. We circled back and triaged #69/#71 this morning after the reported failures.
<lazyPower> just waiting on this one: https://github.com/juju-solutions/layer-docker/pull/73
<cmars> i got an error when trying to build the charm-tools snap: https://paste.ubuntu.com/23108417/
<cmars> any ideas how to fix this?
<cmars> reason i ask, is i'd like to add the term publishing tools to the snap, but ran into this when i went to build it
 * lazyPower looks
<lazyPower> that looks like its pretty new cmars
<lazyPower> marcoceppi ^
<lazyPower> pyyaml version mismatch that might have been fine last week, but i'm not positive. calling in the author, but we're in standup
<cmars> lazyPower, marcoceppi thanks. i've opened a bug
<marcoceppi> cmars: weird, I'll push out an update
<cmars> marcoceppi, thanks!
<marcoceppi> cmars: oh, you're trying to build it
<cmars> marcoceppi, yeah, want to test out https://github.com/juju-solutions/charm-pkg/pull/1
<marcoceppi> cmars: change master to 2.1 in the charm-tools target
<marcoceppi> cmars: there seems to be no way to twiddle that without editing the file (which is lame)
<cmars> marcoceppi, yeah, it'd be cool if you could interpolate vars in there...
<marcoceppi> totally
<marcoceppi> cmars: either way, try that
<cmars> trying now..
<cmars> marcoceppi, still getting the same error. are you able to build the snap?
<marcoceppi> cmars: I was a few days ago ;)
<marcoceppi> cmars: let me try again
<cmars> marcoceppi, thanks
<marcoceppi> cmars: I replicated it, apparently something is installing 3.12 of PyYAML
<marcoceppi> ugh
<cmars> marcoceppi, ok, thanks for confirming
<marcoceppi> cmars: I just can't replicate it with a regular venv
<hatch> when a layer emits an 'event' how does that event get actually called? say 'nrpe.available' ?
<lazyPower> hatch - that's dependent on how the charm author has intended to handle it. If it's written for you, search for where a @when('nrpe.available') state is being consumed, or @when_any or even a @when_not
<marcoceppi> hatch: that's not an event, that's a state
<marcoceppi> nrpe.available is now, and forever, set until something removes that state
<lazyPower> if it's an open-ended event, meaning we're just saying that some base-level function to provide nrpe has been performed (such as a package being installed and default configuration rendered), it's now up to you as the next inheriting layer to decide what that means in the context of your deployment.
<hatch> I mean, when does the layers framework execute functions decorated with an @when?
<lazyPower> or as marco said in so many fewer words
<marcoceppi> hatch: it's an event loop
<hatch> so what triggers the event?
<hatch> config-changed?
<marcoceppi> hatch: so at the end of each loop, reactive evaluates prev states with current states
<marcoceppi> hatch: if there's a mismatch, it'll re-run the loop
<hatch> or do these @when's have nothing at all to do with the hook system?
<marcoceppi> hatch: so, nrpe.available is set in the nrpe interface layer, typically when the relations is completed
<marcoceppi> hatch: nothing
<hatch> ok and are the @when's executed every time per loop? or only when that state has changed?
<hatch> does the framework have a diffing mechanism?
<marcoceppi> hatch: every time the when criteria match
<magicaltrout> hatch: data_changed() for variable diffing
<marcoceppi> which is why you'll typically see one or more when and when_not statements around a method
<hatch> marcoceppi: so in this loop, which executes every Nms it's going to call the functions decorated with @when's
<marcoceppi> hatch: so long as all criteria is met, then yes
<hatch> I see
<marcoceppi> hooks are just probes to run the reactive framework which combines Juju hook with the set of states/flags set currently
<hatch> whats the duration of this event loop?
<marcoceppi> most reactive/*.py files don't respond to @hook because it doesn't matter that it's install or start
<marcoceppi> hatch: as long as it takes
<hatch> so, something that's listening on @when('nrpe.available') will also need a @when_not('nrpe.not-available') ?
<hatch> or...
<hatch> you know what I mean
<marcoceppi> hatch: no, because nrpe.available is only set when it's available
<hatch> but every loop of the event loop it's going to call that function
<marcoceppi> hatch: however, you might want something like @when('nrpe.available') and @when_not('app.nrpe-configured')
<marcoceppi> where app.nrpe-configured is named appropriately and is set after you do what you need to with nrpe
<marcoceppi> the state declarations are really just to help easily map dependencies of what needs to happen before this method is executed
<hatch> so how do you create singletons for events? if you only wanted to perform an action once when nrpe.available?
<marcoceppi> hatch: my above example would do just that
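The run-once pattern marcoceppi describes can be sketched without the real charms.reactive library. This toy dispatcher (all state and handler names are illustrative, not from a real charm) shows why the guard flag makes the handler fire only once even though the dispatch pass re-evaluates every handler:

```python
# Toy model of reactive dispatch -- NOT the real charms.reactive library.
# States are just a set of flag strings; a handler runs when all of its
# required states are set and none of its blocking states are.
states = set()
run_count = 0

def configure_nrpe():
    """Runs once: the guard flag it sets blocks later dispatch passes."""
    global run_count
    run_count += 1
    states.add('app.nrpe-configured')

# (required_states, blocking_states, handler) -- mirrors the stacked
# @when('nrpe.available') + @when_not('app.nrpe-configured') decorators
handlers = [({'nrpe.available'}, {'app.nrpe-configured'}, configure_nrpe)]

def dispatch():
    """One pass of the 'loop': run every handler whose gates match."""
    for required, blocked, fn in handlers:
        if required <= states and not (blocked & states):
            fn()

dispatch()                     # nrpe.available not set yet: nothing runs
states.add('nrpe.available')   # an interface layer would set this in a hook
dispatch()                     # gates match: handler runs, sets the guard
dispatch()                     # guard flag now blocks it: no second run
```

Without the `app.nrpe-configured` guard, the handler would run again on every pass where `nrpe.available` is still set, which is exactly the "gates, not events" point being made above.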
<hatch> ok so then @when's are really "gates"
<marcoceppi> hatch: what's the name of the layer
<marcoceppi> hatch: yes
<marcoceppi> they're conditionals
<hatch> I see
<hatch> so you really have to think about how to create a singleton
<marcoceppi> arbitrary names as well
<hatch> because I can't think of a time when you'd want to spam something every time the loop happens
<marcoceppi> hatch: http://paste.ubuntu.com/23108940/
<marcoceppi> hatch: there are a few instances
<marcoceppi> for example
<hatch> marcoceppi: so that seems like the default action people would want to do
<hatch> a 'once'
<marcoceppi> sure, it's one of the many possibilities
<marcoceppi> once, occasionally, everytime are a few other examples
<hatch> ok, but when would you ever want anything to happen on every event loop?
<hatch> that could be hundreds of times per second
<marcoceppi> update-status ?
<marcoceppi> well, not necessarily
<marcoceppi> event loop runs every time there's a hook until there are no state changes
<marcoceppi> could be one time, could be 100
<marcoceppi> but it's not like a node.js event loop or nginx
<hatch> ok so there IS a direct relation between hooks and events
<hatch> events are _not_ executed until a hook is triggered
<marcoceppi> I think we're overloading event
<hatch> ergo, not a loop
<marcoceppi> the reactive framework is executed on each hook run
<marcoceppi> > hooks are just probes to run the reactive framework which combines Juju hook with the set of states/flags set currently
<hatch> ok so let me put an example
<hatch> I have foo.available , that's 'fired' when my foo service has successfully installed and configured itself - this is a purely remote action, there is no hook fired on my charm. How does it know to execute that foo.available event?
<marcoceppi> stop thinking of states as events
<marcoceppi> step 1 ^ ;)
<hatch> ok they are states
<marcoceppi> the only way a state can be set is during a hook
<marcoceppi> well
<marcoceppi> 99.99999999% of times a state is set during a hook
<marcoceppi> technically you can set states outside of hooks, but no one does that I'm aware of
<lazyPower> despite actions having access to states... thats the other use case i can think of
<marcoceppi> actions are hooks
<lazyPower> but thats also an anonymous hook context
<hatch> ok so assuming that this remote install and configuration step is performed when a relation-joined happens, I'd have to block the relation-joined hook from continuing, probing the remote service to see if it's done
<marcoceppi> hatch: the relation interface should be the one that does the probing
<marcoceppi> and it should be the one to set nrpe.available once it's confirmed it
<hatch> sure, but that is still blocking the relation-joined hook from finishing
<marcoceppi> either you have all the data to complete configuration, or you don't
<marcoceppi> why?
<hatch> because it can't finish the hook until it knows that this remote service is setup so it can set 'available'
<marcoceppi> just don't set a state and the hook will complete and the next will run
<marcoceppi> let me back up, are you writing the nrpe interface layer or are you writing something that depends on it?
<hatch> I was just using that as an example, let me simplify this to abstract applications....How does AppA know that AppB is done doing something?
<hatch> assuming that AppA has to know when AppB is done before setting 'AppB.available'
<marcoceppi> the interface tells AppA it's available
<hatch> what's in this interface?
<marcoceppi> hatch: how about an example?
<hatch> what is the interface doing to know this
<hatch> is AppA blocking a relation hook to poll on AppB
<marcoceppi> hatch: https://github.com/johnsca/juju-relation-mysql/blob/master/requires.py
<marcoceppi> hatch: so you poll, and it fails, then you exit the hook or it succeeds and you set the state
<marcoceppi> or you don't do either, and assert that a completion of relation data required to make the connection is an indication of health for the remote application
<hatch> right this example you linked is an immediate "oh we connected, we're good" but that's not always the case
<marcoceppi> and without a sane way in juju to assert health of an application, we have to work off the assumption that a relation having sent all the required data to make the connection is good enough
<hatch> sometimes available is more than 'relation-joined'
<marcoceppi> hatch: yes, in this case, you sent me /all/ the data I need to make a connection
<marcoceppi> short of actually logging in which we /could/ do, there's not much else to be done
<hatch> and the only way to know that is to block a relation hook and poll the remote service
<marcoceppi> the handshake is complete, and it's available because the data has been made available
<marcoceppi> hatch: so poll the remote service
<marcoceppi> it'll either: fail - not ready or it'll succeed, ready
<marcoceppi> but if a service is sending over relation data and it's not ready, that's a problem
<hatch> right
<marcoceppi> that charm isn't of quality
<marcoceppi> so sure, you can go above and beyond, but if you expect to have another go at validating that relationship, without proper event modelling of health in juju there's no event that will be triggered again to re-evaluate that state
<hatch> that's fine, I'm just trying to understand the relationship between hooks and states
<marcoceppi> this is why we have relation-changed, used to update the relation wire; and if we withhold crucial information, like connection details, until the remote service is ready
<hatch> and without a hook there is no state
<marcoceppi> we can use that to key on it being "available"
<marcoceppi> states are set and evaluated inside of a hook context
<marcoceppi> but available might mean two different things here, available: connection details available and available: service is functioning
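The "connection details available" sense of available boils down to a completeness check over the relation data, which is what the linked mysql requires.py does in spirit. An illustrative sketch (field names are hypothetical, not from a real interface layer):

```python
# Illustrative availability check: a relation counts as "available" once
# the remote unit has published every field needed to make a connection.
# Field names here are hypothetical examples, not a real interface's.
REQUIRED_FIELDS = ('host', 'port', 'user', 'password', 'database')

def relation_available(relation_data, required=REQUIRED_FIELDS):
    """True when every required connection field is present and non-empty."""
    return all(relation_data.get(field) for field in required)

# A requires-side handler would set the '<relation>.available' flag only
# when this returns True, and simply exit the hook run otherwise; the next
# relation-changed hook re-evaluates it.
partial = {'host': '10.0.0.5', 'port': '5432'}
complete = dict(partial, user='juju_app', password='s3cret', database='app')

print(relation_available(partial))   # False: credentials not sent yet
print(relation_available(complete))  # True: handshake data is complete
```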
<hatch> right
<marcoceppi> the former is today, the latter, well, I'd like to have a discussion about at the charmer summit :)
<hatch> and simply by decorating a function with a state value doesn't mean you're going to be called with that immediately
<marcoceppi> and how to measure application health with juju
<hatch> a hook has to be executed to trigger this change
<marcoceppi> yes
<marcoceppi> fundamentally, all juju and charm things are boiled down to "a hook ran"
<marcoceppi> it's our entry point into the system
<hatch> and when a state is changed there is a loop which is triggered which is basically a while loop which checks "have any states changed since the last loop"
<hatch> it's not a never-ending stack
<marcoceppi> right
<marcoceppi> maybe one day? but not currently in today's design
<marcoceppi> since we do still fundamentally rely on juju hooks to determine method execution
<hatch> ok now this makes sense
<marcoceppi> cool, we're very much still open to input on different ways of doing things
<hatch> marcoceppi: the docs need to be updated then
<marcoceppi> hatch: link me to the mistakes
<hatch> "States are synthetic events that are defined by the layers author"
<hatch> https://jujucharms.com/docs/stable/developer-layers
<marcoceppi> "states are synthetic flags that are defined by the layers author and evaluated during hook execution" ?
<hatch> sure
<marcoceppi> would that shape that sentence better?
<hatch> just to me an event is something that reacts to changes
<marcoceppi> yeah, event is totally the wrong wording there
<marcoceppi> I think some more pictures too, describing this might be helpful
<marcoceppi> I'll update that for now, thanks for the feedback
<BradCrittenden> marcoceppi: why is 'synthetic' necessary?  it seems to just add confusion.
<marcoceppi> bac: not sure, it's meant to convey they are arbitrary iirc
<bac> marcoceppi: if only there was a word for 'arbitrary'.  :)
<marcoceppi> would arbitrary be a better word there for describing what a state is?
<bac> marcoceppi: i think so
<marcoceppi> I mean, they are synthetic, much in the way we promulgate charms ;)
<hatch> marcoceppi: fwiw layers make charms so much easier to understand
<hatch> I'm just being picky because I like to know the internals of things :D
<marcoceppi> hatch: it's good discourse, the more people poking reactive the better it'll become
<hatch> marcoceppi: there is also ". Hook decorators so the code can react to Juju events."
<hatch> on the same page
<marcoceppi> well, hooks are juju events, what wording would make more sense for you?
<hatch> Juju hooks
<hatch> :)
<hatch> no need to introduce extra terminology
<marcoceppi> psh, w/e <3
<hatch> lol
<hatch> marcoceppi: it's also very important to point out that it will execute those 'callbacks' every time there is a hook run
<hatch> and probably an example on how to prevent that
<hatch> as I'm assuming a singleton is the primary use case
<hatch> marcoceppi: I guess it DOES have a sentence about it
<hatch> just doesn't really say that it's going to happen all the time
<hatch> it sort of alludes to it
<hatch> but anyways...thanks for clearing up all my confusion :D
<marcoceppi> hatch bac https://github.com/juju/docs/pull/1314
<marcoceppi> cmars: I patched charm-tools to get around this
<hatch> marcoceppi: thanks looks good
<hatch> marcoceppi: this link you sent http://paste.ubuntu.com/23108940/ would be a really good example for implementing idempotency
<hatch> ya know, with a real example though :D
<marcoceppi> hatch: psh, "examples" ;)
<marcoceppi> cmars: https://github.com/juju/charm-tools/pull/250
<hatch> lol
<hatch> marcoceppi: maybe there needs to be a @when_once() ?
<hatch> it would really be a simple wrapper over a setstate
<hatch> but would make it much easier to grok for those not familiar with the execution flow of a layered charm
<marcoceppi> hatch: so that has some major implications (and discussion has been had around this)
<marcoceppi> but what does once mean?
<hatch> I'd say once it becomes truthy
<hatch> but I've only thought about it for....30m?
<hatch> lol
<marcoceppi> https://github.com/juju-solutions/charms.reactive/issues/22
<marcoceppi> hatch: so we have an only_once wrapper but it's problematic
<hatch> oh nice, I totally missed this issue somehow
<marcoceppi> odds are you want it once until something else changes
<marcoceppi> which the something else is rare
<hatch> hmmm
<hatch> interesting
<marcoceppi> hatch: yeah, the definition of once in a distributed system varies by the person defining what once means
<hatch> right, and accounting for failures, does that mean the 'once' is satisfied, or not
<bac> marcoceppi: gotta be quick on those doc reviews!
<hatch> this is a rabbit hole
<hatch> :)
<bac> marcoceppi: if you're poking at the documentation, could you look at https://github.com/juju/docs/issues/1313 ? I think the problem is the use of an absolute URL.
<marcoceppi> bac: yeah, good find. I've submitted #1315
<bac> great, thank you marcoceppi
<kwmonroe> lazyPower: i'm befuddled.. when charmbox builds went to ci.containers, i sadly assumed i'd lose the status from https://hub.docker.com/r/jujusolutions/charmbox/builds/, but those builds are green now.  is that status page pulling results from ci.containers?
<lazyPower> those greens were the last builds before we moved over as best i can tell
<lazyPower> i'd need exact timestamps to tell you in more detail
<bac> marcoceppi: what is the cause of: charmtools.build.tactics: Options set for undefined layer: apt
<bac> seems to be environmental, as makyo doesn't get it building the same charm
<marcoceppi> bac: you have set options for a layer you didn't include
<bac> hmm
<marcoceppi> bac: can I see you layer.yaml ?
<bac> marcoceppi: https://github.com/juju/bundleservice-charm/pull/2
<marcoceppi> bac: huh, that's weird. you have layer:apt right there
<marcoceppi> bac: what does `charm version` say?
<bac> charm 2.1.1-0ubuntu1
<bac> charm-tools 2.1.4
<marcoceppi> well that's bizarre
<marcoceppi> let me try
<bac> marcoceppi: https://pastebin.canonical.com/164230/ shows it getting the apt layer
<cmars> marcoceppi, thanks!
<marcoceppi> bac: can you run with -l DEBUG
<x58> Is there a reason to use the apt layer over just the basic layer with packages: in layer.yaml?
<bac> marcoceppi: sure
<x58> In my case I just want to make sure that my package is installed...
<marcoceppi> x58: there are reasons to use apt over basic
<marcoceppi> x58: apt layer provides states like `apt.installed.<pkg>` that you can respond to
<marcoceppi> whereas basic: packages: is more for "I need these packages before hook code can even run"
<marcoceppi> since that's executed during the bootstrap process
<marcoceppi> well, the bootstrap of the reactive framework
<x58> Ah, got it.
<marcoceppi> apt layer is really a more flexible encapsulation of things you can do with apt: custom repositories/keys, queuing packages dynamically, and loads of states
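The two mechanisms marcoceppi contrasts can be sketched side by side in a layer.yaml; the package names are hypothetical, and the option syntax is assumed from the layer-basic and layer-apt documentation:

```yaml
# layer.yaml -- two ways to get a package installed (illustrative sketch)
includes:
  - 'layer:basic'
  - 'layer:apt'
options:
  basic:
    packages: [python3-yaml]   # installed during reactive bootstrap,
                               # before any hook code can run
  apt:
    packages: [snmpd]          # installed by layer:apt, which then sets
                               # the apt.installed.snmpd state
```

A handler can then gate on the per-package state, e.g. `@when('apt.installed.snmpd')`, which the basic layer's `packages:` list does not offer.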
<bac> marcoceppi: https://pastebin.canonical.com/164232/
<bac> marcoceppi: i need to step out.  ping me here if you find something otherwise i'll keep poking at it later.
<marcoceppi> cory_fu: I'm getting this error too ^^
<marcoceppi> bac: it appears layer-apt doesn't include a layer.yaml in it
<marcoceppi> so this error is "right", and Makyo likely doesn't see it because he has an older version of charm-tools where the check isn't performed. I'll open a patch for layer-apt
<cory_fu> marcoceppi: That error message implies to me that the charm being built is trying to set layer options for the apt layer, and it doesn't define any.  I think it would be ok for it to not have a layer.yaml and the error is actually in the charm
<marcoceppi> cory_fu: but layer-apt does support an option
<marcoceppi> it's just not defined
<marcoceppi> and it seems 2.1.4 of charm-tools starts checking that
<cory_fu> Ah.  It should have failed prior to that anyway due to there being no schema to validate it against, but I guess it was lenient
<marcoceppi> cory_fu: is this commit maybe the one that fixes that: https://github.com/juju/charm-tools/commit/f651e144fe24879a0470c8e49ab792a9321e6b4b ?
<cory_fu> Doesn't seem like it
<cory_fu> marcoceppi: https://github.com/juju/charm-tools/commit/134e860d3b717f821ade4765d95e0bf38ecc1e1e
<cory_fu> That seems like it would have been in a previous version
<marcoceppi> but that's from april 4th
<marcoceppi> yeah
<cory_fu> Well, it clearly was the intended behavior that it be validated
<cory_fu> Perhaps the error message could be better
<marcoceppi> hah
<marcoceppi> stub: bac: this is what you'll need to address this: https://code.launchpad.net/~marcoceppi/layer-apt/+git/layer-apt/+merge/304308
<bdx> I think I just found a bug in the pgsql interface
<bdx> take this charm code for example, which requests two different databases using each their own interface -> https://gist.github.com/jamesbeedy/b02c701df4e34132ea622f78cb30df43
<bdx> what seems to be happening here is when the second database is requested, a new password is generated for the user 'juju_feed', but this changes the password for the user globally
<bdx> this is my application.yml file that is rendered with the configs cached in the kv from each respective database connection
<bdx> http://paste.ubuntu.com/23109426/
 * marcoceppi takes a look
<bdx> whats interesting here, is that only the password generated for the second database works on both
<bdx> :-(
<bdx> logging into feeddb with feeddb password  -> http://paste.ubuntu.com/23109431/
<marcoceppi> bac: this looks like a bug in the postgresql charm
<marcoceppi> bdx: ^^
<marcoceppi> I imagine it's not checking if the user it created already exists
<marcoceppi> and it's just recreating it with a new password :\
<bdx> logging into feeddb with feedreportdb password -> http://paste.ubuntu.com/23109434/
<bdx> totally
<marcoceppi> bdx: stub is your best bet for working through a fix, but he's in APAC timezone
<marcoceppi> bdx: let me see if I can get you a patched postgresql
<bdx> marcoceppi: thanks
<bdx> the charm *really* needs to create non-superusers that are owners of only their database
<bdx> this would be HUGE
<bdx> marcoceppi: I can just work with a temp modification to use the same user/password for both until this charm gets a little smarter
<bdx> no need to spend cycles just for me
<bdx> thanks though, you're such a gentleman
<marcoceppi> bdx: well, either way, it'd be good to file a bug on this, stub is awesome about his charms
<marcoceppi> bdx: https://bugs.launchpad.net/postgresql-charm
<bdx> excellent, on it
<x58> marcoceppi: is charm build broken with the apt layer at the moment? I can't see the pastebin from bac so I can't be sure if I am running into the same issue.
<marcoceppi> x58: probably
<marcoceppi> x58: you can patch the apt layer locally, or let me know the error so I can confirm
<x58> AttributeError: 'NoneType' object has no attribute 'combine'
<cholcombe> marcoceppi, now that my charm is promulgated how do i apply updates to that?  Does that go through review again?
<marcoceppi> x58: that's something different
<bac> x58: not what i see
<x58> marcoceppi: http://paste.ofcode.org/zNTqD6xMCXFB5BucvsjAK6
<lazyPower> cholcombe - yep, you should be able to push to the devel and unpublished channels
<lazyPower> but stable becomes ours, and requires a review
<cholcombe> lazyPower, i see
<bac> marcoceppi: ah, so makyo must have a cached version of the apt layer?
<lazyPower> by "ours" i mean, must be reviewed and curated.
<marcoceppi> bdx: or an older version of charm-tools
<marcoceppi> bac: ^^
<cholcombe> lazyPower, did review.juju.solutions url change?
<cholcombe> i'm getting 502
<bac> marcoceppi: no, he has the same charm
<marcoceppi> x58: can you run with -l DEBUG
<marcoceppi> bac: charm-tools*
<Makyo> marcoceppi: bac 2.1.2 for charmtools
<marcoceppi> ?
<marcoceppi> there we go
<bac> ah
<marcoceppi> Makyo: if you update it'll probably err for you
<bac> sweet.  will play with it tomorrow
<bac> Makyo: don't! it's a trap
<x58> marcoceppi: http://paste.ofcode.org/CXjMC8QivjDMHR8RVGemVs
<marcoceppi> x58: and finally, can you tell me the output of `charm version` ?
<x58> charm 2.1.1-0ubuntu1
<x58> charm-tools 2.1.2
<marcoceppi> x58: we just released 2.1.2, it's in ppa:juju/stable ; let me see if this was a bug fixed in that release, I don't think it was though
<marcoceppi> we just released 2.1.4**
<x58> I'm following juju/devel ... is that wrong?
<marcoceppi> x58: no, but you can add juju/stable in addition to devel. The devel version of charm-tools is, comically, out of date since 2.1 went stable
<marcoceppi> I don't see anything in 2.1 that would address this, is your SNMP layer somewhere I could poke at it?
<x58> Not yet. Still developing it. I will get it up on Gitlab in a minute. Give me a couple.
<marcoceppi> x58: cool, thanks. It'll be helpful to poke at it; hopefully I can patch charm-tools quickly if it is indeed a bug
<x58> marcoceppi: https://gitlab.com/bertjwregeer/juju_snmpd/tree/master
<x58> I don't think I did anything out of the ordinary...
<marcoceppi> x58: yeah, I hit the error too on 2.1.4, <insert canned you-should-still-upgrade message>, but I'll see if I can hunt down the reason for this error
<marcoceppi> x58: is there a reason you ignore the readme?
<x58> marcoceppi: I use a .rst README
<x58> not .MD
<x58> so I would end up with README.rst and README.md
<x58> which caused issues when rendering on the charmstore.
<marcoceppi> x58: that's a good reason, could you just delete README.md ?
<marcoceppi> bleh
<marcoceppi> of course
<x58> No, because charm build would pull it back in from the basic layer.
<marcoceppi> x58: I understand now, so that ignore line seems to be causing the problems, let me see why. I know we had a few fixes around ignore and exclude
<x58> Nevermind the fact that .rst rendering on the charmstore is currently broken (yes, I have a bug open for it)
<x58> marcoceppi: Works fine for: https://gitlab.com/bertjwregeer/juju_staticroutes
<x58> Which doesn't have the layer:apt in it though (only the basic layer)
<marcoceppi> x58: ignore is getting overhauled in charm-tools 2.2, fwiw.
<marcoceppi> what you really want is exclude
<x58> I used ignore because exclude is what I tried first but that didn't exclude.
<marcoceppi> as in "exclude this file from the final charm" and ignore is more for "don't include this file from my layer"
<x58> Also, the documentation is severely lacking in that area :-/
<marcoceppi> x58: exclude doesn't exist (yet)
<marcoceppi> which is why it didn't work
<x58> I saw examples using "exclude"
<marcoceppi> also, yeah, that's my bad. I've not been super strict on documentation
<x58> That makes sense. lol.
<marcoceppi> x58: you on xenial?
<x58> Yes.
<marcoceppi> x58: so, this https://github.com/juju/charm-tools/pull/235 is in master which will be in 2.2 of charm-tools
<marcoceppi> and I think it addresses your problems with exclude, which was the right idea
<marcoceppi> actually, I had that reversed
<marcoceppi> ignore is the right key, it's just not really working well in 2.1
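To make the exchange above concrete, here is a sketch of the kind of layer.yaml x58 describes, using the `ignore` key that charm-tools 2.1 supports (the `includes` entries and file names are taken from the discussion and are illustrative, not a verified copy of the charm):

```yaml
# layer.yaml for a layer shipping README.rst that wants to drop the
# README.md pulled in from layer:basic (names illustrative)
includes: ['layer:basic', 'layer:apt']
ignore: ['README.md']   # "exclude" does not exist yet; planned for 2.2
```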
<bdx> marcoceppi, bac: https://bugs.launchpad.net/postgresql-charm/+bug/1618248
<x58> This is the second charm that I am writing, and documentation is all over the place. I like the reactive framework, but documentation is hit or miss, mostly miss.
<x58> The charmhelpers stuff is also inadequately documented, and I've spent an awful lot of time looking at source code to try and reverse engineer why something works the way it does.
<bdx> oh, you prob can't see that huh
<marcoceppi> x58: even using https://pythonhosted.org/charmhelpers ?
<x58> Yes.
<marcoceppi> x58: I appreciate the feedback, our documentation is definitely lacking in a lot of areas, I'm sure there are numerous examples but if you had time (after all this charm build stuff) to just highlight a few examples we're eager to fix
<x58> Sure. I can put some stuff together.
<marcoceppi> x58: so this is definitely a bug, and I think it's because there's no layer.yaml in apt which is making the build tactic choke
<marcoceppi> x58: I'll file a bug and see if I can get a patch up for review
<x58> marcoceppi: Thanks. Is there a temporary fix I can use in the mean time to get this charm build completed?
<x58> Should I remove the ignore?
<marcoceppi> x58: comment out the ignore then manually remove the .md for now; sadly, that is the quickest workaround
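The workaround marcoceppi suggests can be sketched as a couple of shell commands. This is an assumption-laden illustration (the layer.yaml contents and file names are invented to match the discussion), not the actual charm:

```shell
# Sketch of the suggested workaround: comment out the "ignore" entry in
# layer.yaml, then delete the README.md pulled in from layer:basic.
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Sample layer.yaml resembling the one discussed (contents illustrative).
cat > layer.yaml <<'EOF'
includes: ['layer:basic', 'layer:apt']
ignore: ['README.md']
EOF
touch README.md README.rst

# Comment out the ignore line so charm build no longer chokes on it...
sed -i 's/^ignore:/# ignore:/' layer.yaml
# ...and manually remove the Markdown README.
rm -f README.md
```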
<x58> It'll get me where I need to be for now.
<x58> Which is a charm I can start testing ;-)
<x58> I can worry about publishing later.
<marcoceppi> x58: are you going to the charmer summit in Sept?
<x58> Nope.
<x58> I wish. I only write charms to get my work completed :P
<x58> And they are probably not even that great... Just the bare minimum to support my requirements.
<bdx> x58: thats how it all starts ;-)
<x58> bdx: haha =)
<marcoceppi> x58: here's the bug for your particular build issue https://github.com/juju/charm-tools/issues/251
<x58> Awesome +1. Will watch it later.
<x58> Not logged into Github on my work machine.
#juju 2016-08-30
<stub> marcoceppi: You are infecting layer.yaml with that awful json schema stuff? Is there some gui that needs to drive this?
<marcoceppi> stub: no, but build time validation is still quite valuable
<marcoceppi> and this has been in charm-tools for quite a while ;)
<stub> T_T
<marcoceppi> stub: either way, we can't have layers hoping keys will exist, or throwing run time errors during deploy, jsonschema makes it so we can validate layer configurations at build time, before a deploy even happens
<marcoceppi> stub: just trying to make sure WE HANDLE THE UNEXPECTED
<stub> yeah, I just hate the spec. I mean, your schema *could* just be an example snippet that is decoded and introspected for all the type information you need. But nooo, lets repeat the mistakes of the past like XML schemas.
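The build-time validation marcoceppi argues for can be illustrated with a small pure-Python sketch. This is not charm-tools' actual jsonschema code; the schema keys are hypothetical, and the point is only that a bad layer config fails at build time rather than during deploy:

```python
# Illustrative sketch (not charm-tools' real implementation): validate a
# layer's options dict at build time so typos fail before a deploy happens.
SCHEMA = {  # hypothetical schema for an apt-style layer
    "packages": list,
    "install_sources": list,
}

def validate_options(options, schema=SCHEMA):
    """Return a list of problems; an empty list means the config is valid."""
    errors = []
    for key, value in options.items():
        if key not in schema:
            errors.append(f"unknown option: {key}")
        elif not isinstance(value, schema[key]):
            errors.append(f"{key}: expected {schema[key].__name__}")
    return errors

# A mistyped config is caught immediately instead of at run time:
print(validate_options({"packages": "snmpd"}))  # → ['packages: expected list']
```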
<kjackal> Hello Juju World!
<magicaltrout> stub: ping
<stub> magicaltrout: pong
<magicaltrout> ah hi stub sorry to bother you had a quick nagios question for my demo tomorrow, I was hoping you might be able to shed some light on
<magicaltrout> https://github.com/buggtb/dcos-master-charm/blob/master/reactive/dcos_master.py#L200
<magicaltrout> does that look correct to you?
<magicaltrout> it seems to execute fine
<magicaltrout> but Nagios never picks up the service
<stub> I use the nrpe-external-master relation, so not sure if that has anything to do with it.
<stub> I also tend to use charmhelpers, rather than the helper on cmars' interface
<magicaltrout> https://pythonhosted.org/charmhelpers/api/charmhelpers.contrib.charmsupport.html
<magicaltrout> that one?
<stub> yes, buried way down in contrib. That one.
<magicaltrout> okay
<magicaltrout> cool
<magicaltrout> thanks
<magicaltrout> I see all you lot using the external interface but for the demo I'd like to keep it all inside one model if possible
<stub> magicaltrout: I can't see anything obvious wrong with yours. It seems wired up the way I think it is supposed to be if you are using the interface.
<magicaltrout> yeah it seems like there is no rsync action
<magicaltrout> never mind I'll keep prodding
<stub> I'm no nagios expert. I just got sick of seeing the same boilerplate everywhere and decided to not perpetuate it.
<magicaltrout> hehe
<PCdude> any MAAS users/experts here, the MAAS channel itself has been dead the last 12 hours
<magicaltrout> i fiddled with it for the last couple of days
<magicaltrout> so I'm not an expert
<magicaltrout> but i did get bits of it working
<PCdude> which version did u used?
<magicaltrout> 2.0-Rc2
<PCdude> yeah, the 2.0 version is good, but since the JUJU version is still in beta I decided to use the 1.9.4 version on ubuntu 14.04
<magicaltrout> I've used 2.0 since the juju alphas
<magicaltrout> so I'm well past that stage ;)
<PCdude> are u telling me that u got JUJU and MAAS working together with MAAS 2.0-RC2?
<magicaltrout> depends what you call working
<magicaltrout> no I didn't but only because of Virtualbox sucking
<magicaltrout> but yeah it was bootstrapping etc
<PCdude> well, still the whole openstack with landscape shit, which needs JUJU and MAAS
<PCdude> I tried the 2.0-RC2 but with an older version of JUJU, maybe the latest beta with that MAAS version might work
<magicaltrout> you have to use 2.0 with 2.0
<magicaltrout> and 1.x with 1.9
<magicaltrout> but my personal opinion is that juju 2.0 is around the corner so I might as well use the beta's and put up with a few bits changing
<PCdude> sorry thats what I meant, I used the JUJU beta4 or beta7 I thought with the MAAS 2.0
<magicaltrout> and its worked out for me so far
<magicaltrout> on a fresh xenial juju beta15 should be current
<magicaltrout> afaik
<magicaltrout> and on a fresh xenial maas 2.0 is certainly the default
<magicaltrout> anyway.... what bit didn't work?
<PCdude> yeah, also what I have seen
<PCdude> http://askubuntu.com/questions/817572/openstack-fails-to-install-caused-by-juju/818310#818310
<PCdude> in that question u can see the whole error , its a nice one haha
<PCdude> I could try the older 2.0-rc2 version of MAAS, maybe that could work
<magicaltrout> dunno. I was just spinning up nodes with MAAS and Juju and that was fine
<magicaltrout> I've not used conjure up etc
<PCdude> yeah, conjure-up is the only option for openstack with landscape
<PCdude> any idea when JUJU is coming out of beta?
<magicaltrout> soon
<magicaltrout> its been in beta for ages
<magicaltrout> so it can't be that long :P
<PCdude> I hope soon too, there is a summit this week, so maybe that helps a bit
<magicaltrout> well we're all off to pasadena in a couple of weeks
<magicaltrout> for a juju summit
<magicaltrout> its coming along, but there is a lot of tooling that needs bringing into line
<PCdude> ah ok, fingers crossed :)
<magicaltrout> i'll swap
<magicaltrout> you fix my nagios relation
<magicaltrout> and I'll sort out your open stack ;)
<magicaltrout> oh and write my slides for tomorrow please
<PCdude> deal!
<PCdude> magicaltrout:  what slides do u have to make for tomorrow?
<PCdude> for a JUJU summit? or just regular work
<magicaltrout> http://sched.co/7n8J
<PCdude> looks good, u want me to hand ur slides before it starts ;) haha
<magicaltrout> I always leave talk stuff to the last minute
<magicaltrout> now I'm cramming :)
<PCdude> sounds good, I am a last minute person too, most of the exams 80% was learned in the night beforehand haha
<magicaltrout> i failed uni 3 times
<magicaltrout> i didn't do any learning ;)
<PCdude> well nothing at all is mostly not that good for ur grades, but u succeeded the 4th time?
<magicaltrout> no
<magicaltrout> I gave up and went to work
<PCdude> I have a n00b question, how do I check if postgreSQL is running?
<magicaltrout> I did fine in all the practical stuff, I just hate sitting in lectures
<magicaltrout> which is ironic because I enjoy talking to people about software
<PCdude> why? go to a night uni or something, or become steve jobs and make millions
<magicaltrout> I don't need a degree I've been working in IT for 10 years, it pretty much covers it
<PCdude> no degree problems with ur boss?
<magicaltrout> infact, CS degrees a decade ago would have got you pretty much nowhere in real life
<magicaltrout> I'm a self employed contractor for NASA, so I am my own boss
<PCdude> CS stands for...
<magicaltrout> computer science
<PCdude> contractor for NASA, cool! what u do for them?
<magicaltrout> Devops and data management platforms
<PCdude> ah ok, u gonna tell me when they bring aliens back right? ;)
<magicaltrout> ironically, I don't do much space stuff, scientific and data research projects mostly :)
<PCdude> I study Electrical Engineering, but I find computer science more interesting at the moment, so yeah weird guy am I
<magicaltrout> there is certainly plenty to learn in computer science, the data processing and ops platforms have all changed wildly in the last 5 or so years
<PCdude> yeah true, I started with programming and have some degrees with cisco, but when I heard about openstack I jumped on the linux train and try to learn it as fast as I can
<PCdude> *certifications
<magicaltrout> openstack certainly has a lot of traction
<magicaltrout> personally I'm more interested in Mesos type stuff
<magicaltrout> to run containers across hardware
<PCdude> ah ok, a bit like docker as I can see?
<magicaltrout> indeed
<magicaltrout> most of it is docker
<magicaltrout> but the underlying stuff to manage containers is interesting, stuff like Mesos, Kubnernetes etc
<magicaltrout> where you can say "here's my server pool, run my containers in the best possible way"
<magicaltrout> and off it goes
<PCdude> sounds cool! would u prefer that above openstack?
<magicaltrout> well yeah, but each serve a different purpose
<magicaltrout> like the project we're working on at NASA currently is genomic research search and discovery platforms, it makes sense to have a bunch of docker containers each with different components in
<magicaltrout> and it works great because developers can deploy the identical stack locally as in production, dev or wherever
<magicaltrout> but then in production we can leverage a cluster of servers and deploy docker stuff across a data center
<PCdude> of course that is true, imo openstack is more widely, I mean u can add LXD or docker too, but u can also use KVM or VMware ESXI
<magicaltrout> and its easier and more lightweight than running all of the openstack stuff and then containers or services over the top of that
<magicaltrout> openstack might be, but i suspect kubernetes probably isn't too far behind it
<PCdude> and kubernetes is more an all in one solution?
<PCdude> I mean, the big con for me concerning openstack is the million packages u need
<magicaltrout> its a docker management system originally by google
<PCdude> free?
<magicaltrout> its not entirely all in one
<magicaltrout> and hard to setup, but luckily.....
<magicaltrout> https://jujucharms.com/observable-kubernetes/
<PCdude> long live JUJU haha
<magicaltrout> and if you're around in the afternoon lazypower hangs out here who maintains a bunch of it
<magicaltrout> but that gives you kubernetes and monitoring
<PCdude> cool, I am not here in the afternoon sadly...
<PCdude> yeah, the problem is, I would like a little more freedom, I would like to set something up like a private cloud like the amazon aws
<PCdude> and afaik, openstack is coming closest to that rn
<magicaltrout> yup
<magicaltrout> well I know plenty of people have had success using openstack on conjure up, so someone will know the answer to your problems
<PCdude> yeah, I am currently hunting for those people, since the project itself is young ,the amount of experts is small too
<magicaltrout> this is true
<PCdude> I have searched the internet, but for a private version of amazon aws the only option is openstack right? I mean there are no other solutions (that is reachable for a student)?
<magicaltrout> not that i'm aware of
<magicaltrout> I don't pay much attention to that space, but openstack is the only main one I see on twitter etc
<magicaltrout> although I might live a sheltered existence, who knows
<magicaltrout> lets face it though, clouds are complex :)
<magicaltrout> they will all have their own quirks
<PCdude> oh so damn true! I am just a student with too much drive to get it to work, but I have put in a fair amount of time already, and nothing is working yet
<magicaltrout> don't worry i know that feeling, I've been working on Kerberos based authentication for a webservice for NASA for the last week, where I have no access to the Active Directory server, stuff doesn't work and I don't know why and the Java libraries used are so old the source doesn't exist any more and I had to decompile them
<magicaltrout> we've all been there ;)
<magicaltrout> eventually for whatever reason stuff starts working
<PCdude> java, yuck! please, lets just decide for the sake of the human race to through that ugly one of the earth
<PCdude> *throw
<magicaltrout> I quite like java, mostly cause its the one thing I learnt at uni
<magicaltrout> but for data applications, its pretty important these days
<PCdude> yeah, I understand, but all the security bugs with the application for the normal home user is insane
<PCdude> the same holds for adobe flash player
<magicaltrout> someone else mentioned that a few weeks ago
<magicaltrout> so I showed them this
<magicaltrout> http://security.stackexchange.com/questions/57646/why-do-i-hear-about-so-many-java-insecurities-are-other-languages-more-secure
<magicaltrout> which makes some pretty good points
<PCdude> good point
<PCdude> read most of it
<PCdude> yeah, I never used java in that way and only see the problems with the plugin
<PCdude> that is a good point about the memory overflow though
<magicaltrout> java in a browser is a terrible experience
<magicaltrout> no one denies that ;)
<PCdude> well honestly, I dont think java is the only problem when looking at the plugin. I think windows itself is build pretty bad too
<PCdude> although they are making progress
<PCdude> sandboxing should be almost compulsory these days
<magicaltrout> which is basically what Snaps do for package management on Linux systems
<PCdude> indeed the point I was trying to make, I convinced my parents to switch from windows to linux a few months ago
<PCdude> it took 2 years, but its good now
<PCdude> basically the only problem for most people is microsoft word/excel/pp
<magicaltrout> true
<PCdude> I am really curious though, what is gonna happen with the vulcan game engine
<PCdude> some big names are making games for it now (at least that is what they say)
<PCdude> the footprint of linux is smaller and could push up the FPS, maybe some of the gaming community will switch to linux
<magicaltrout> probably depends if steam keep up the linux support
<PCdude> true that
<PCdude> how are the slides going?
<PCdude> btw, what country do u live in?
<magicaltrout> UK
<magicaltrout> I'm on 17 of 22
<magicaltrout> they're getting there :P
<magicaltrout> although I need to go to the opticians this afternoon and drive to heathrow
<PCdude> wait, did we talk yesterday too? I am bad with those names so sorry if yes
<magicaltrout> lol
<magicaltrout> yeah we did
<PCdude> sorry then haha
<PCdude> what program are u using for IRC?
<magicaltrout> I just irssi running on a remote server
<magicaltrout> most of my life is spent inside bash terminals, I don't see why irc would be any different
<PCdude> ah ok, I use https://kiwiirc.com
<PCdude> I know IRC for some time now, but have only been using it for 1 week now
<magicaltrout> i have a 15 year headstart on you :P
<PCdude> it is grandpa ;)
<magicaltrout> i am old...
<PCdude> yeah, but I could use a 15 year knowledge head start rn haha
<magicaltrout> it does occasionally have some benefits being old
<admcleod_> like spelling
<magicaltrout> pfft
<admcleod_> ;)
<PCdude> I am dutch, so no spelling rules applied to me :D
<magicaltrout> i still struggle to read the screen admcleod_ post LASEK, luckily I have you to back me up! :P
<admcleod_> GOOD POINT
 * magicaltrout ignores admcleod_ and returns to slide writing
<PCdude> read the screen? what happened?
<magicaltrout> had laser eye surgery PCdude
<admcleod_> now he has laser eyes
<admcleod_> peow peow
<magicaltrout> hehe
<PCdude> may the force be with u
 * D4RKS1D3 Hi everyone
<magicaltrout> gaa
<magicaltrout> stupid nagios
<magicaltrout> what on earth am i doing wrong
<PCdude> good luck, with pulling out hair magicaltrout :)
<PCdude> I am off to work
<magicaltrout> hehe
<magicaltrout> cya
<PCdude> I will be here tomorrow, probably praying for an answer too haha
<magicaltrout> stub: am I being a moron? to get nrpe checks registered in nagios, do you have to include monitors.yaml?
<magicaltrout> i'm struggling to digest the nrpe readme
<magicaltrout> or cmars
<stub> I haven't used monitors.yaml, so I know it works without it. But it might be something to do with local-monitors vs. nrpe-external-
<magicaltrout> sad times
<magicaltrout> fair enough
<stub> Best I can tell, local-monitors was intended to replace nrpe-external-master but never gained traction. I don't know who was pushing that.
<stub> (this is all antique)
<marcoceppi> magicaltrout stub local-monitors pre-dates nrpe-external-master, it's goal was to provide an agnostic way to describe what to monitor that could be implemented by nrpe or zabbix or any other tool
<magicaltrout> yeah
<marcoceppi> instead of having tens of interfaces for each monitoring tool, you just have one
<magicaltrout> i was hoping for something plug and play for tomorrow
<marcoceppi> now that we have interface layers, it'll probably be way easier to implement
<magicaltrout> but failed
<marcoceppi> but it's not anything that has traction
<magicaltrout> so hacked the result
<marcoceppi> FWIW, local-monitors even predates my involvement with Juju
<magicaltrout> and you're bloody ancient
<marcoceppi> I know >.>
<magicaltrout> https://docs.google.com/presentation/d/1UGuGfU7nuJISaAEIFZ_5lm1o0o010kpdm0HMBF8jvMo/edit?usp=sharing
<magicaltrout> cast your eye over them marcoceppi when you get bored
<magicaltrout> i need to fill 50 minutes with slides and a demo
<magicaltrout> let me know if I've missed anything obvious
<marcoceppi> magicaltrout: there's a few empty slides at the end ;) but presentation flows well
<magicaltrout> yeah, that bit I do know :P
<magicaltrout> just doing some lunchtime hacking to try and iron out a few demo kinks
<marcoceppi> cool cool, looks great so far
<magicaltrout> virtualbox won't play ball and let me demo maas which is a bit sad
<magicaltrout> but don't kill my instances mid presentation tomorrow please :P
<marcoceppi> magicaltrout: hah, I'll keep an eye out for anything with dc/os and make sure not to reap them
<marcoceppi> hey stub, had a question for you on layer-apt
<stub> yo
<marcoceppi> stub: are there plans to add something like charm.apt.does_this_package_even_exists
<marcoceppi> bac Makyo charm build is fixed for your charm now
<bac> marcoceppi: cool, thanks!
<stub> marcoceppi: Sure. I'd like the API fat enough that all the apt stuff in charmhelpers.fetch can be deprecated.
<stub> marcoceppi: But with better naming than doesthis_package_even_exist :-P
<marcoceppi> stub: yeah, wasn't sure on a name, just wanted to convey the concept
<marcoceppi> I've got a situation in the php-fpm charm where packages exist in trusty as php5-<name> but don't in xenial as php-<name> because they're builtins now in php7
<stub> marcoceppi: The other option is delegating that to the existing apt package, but that seems to suffer from poor docs.
<marcoceppi> stub: so if I ask to queue_packages and it doesn't exist will the apt layer error?
<stub> marcoceppi: yes, the apt layer will error. Ignoring errors by default is one of charmhelpers.fetch's mistakes IMO
<marcoceppi> stub: makes sense
<marcoceppi> I'll open a bug
<cmars> magicaltrout, this may help: https://paste.ubuntu.com/23111690/
<bac> marcoceppi: worked!
<stub> marcoceppi: In this case, I would do 'if trusty then:' rather than 'if package in available_packages'. Because then the dead code is easily removed when trusty eols
<marcoceppi> stub: well, the package name is not defined by me, it's defined by the person including the upper layer
<magicaltrout> oooh
<magicaltrout> very handy cmars
<magicaltrout> thanks a bunch
<stub> actually, it won't error. it will block with a status message informing you of the problem.
<marcoceppi> stub: for example php-modules: ['mcrypt', 'mbstring', 'mysql']
<marcoceppi> php5-mbstring exists, but php-mbstring in xenial doesn't, as an example
<cmars> sure thing
<stub> marcoceppi: ok. So for now, I think it is 'if package not in charmhelpers.fetch.apt_cache()'. And a bug on the apt layer, since there are other use cases too.
<marcoceppi> stub: cool, thanks for a path forward
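The trusty/xenial naming problem marcoceppi describes can be sketched as a tiny helper. This is a hypothetical illustration of the idea (not code from the php-fpm charm); the prefix table and function name are invented:

```python
# Hedged sketch of the series-dependent naming discussed above: the same
# PHP module is php5-<name> on trusty but php-<name> on xenial.
PREFIX = {"trusty": "php5-", "xenial": "php-"}

def module_packages(series, modules):
    """Map bare module names to series-specific apt package names."""
    prefix = PREFIX.get(series)
    if prefix is None:
        raise ValueError(f"unknown series: {series}")
    return [prefix + m for m in modules]

print(module_packages("trusty", ["mcrypt", "mbstring", "mysql"]))
# → ['php5-mcrypt', 'php5-mbstring', 'php5-mysql']
```

Whether a mapped name actually exists can then be checked against the apt cache, as stub suggests, before queueing the install.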
<magicaltrout> oooh actually that'll be killer if it works
<caribou> Hello, I have a layer question : when using the basic layer & adding options: basic: packages: ['blah'], should I expect the 'blah' package to be installed when the charm is deployed ?
<magicaltrout> I can nagios up my dcos master
<magicaltrout> nagios up my nginx test container in the charm code
<magicaltrout> yeah caribou
<caribou> magicaltrout: ok, thought so; thanks!
<magicaltrout> lazyPower: just the man you'll know this, if I do, juju remove-application ...
<magicaltrout> can I fire a reactive hook inside that charm itself?
<marcoceppi> magicaltrout: typically, the stop hook is called during remove-application
<lazyPower> ^
<marcoceppi> but that's not the /only/ reason stop may fire
<lazyPower> however, you also get any relationship broken/departed hooks
<marcoceppi> I don't think there's a clear "tear down" hook, but maybe we should have one
<magicaltrout> boo
<magicaltrout> I would like a "I've removed my docker container" hook
<magicaltrout> but fired from the container charm
<magicaltrout> not from the dcos-master charm
<marcoceppi> magicaltrout: well couldn't you do that in relation-broken ?
<magicaltrout> dunno
<magicaltrout> thats why i was asking
<marcoceppi> that's where I'd put it
<magicaltrout> do they fire within the charms that you tear down?
<marcoceppi> yup
<marcoceppi> juju basically "undoes" what you do
<magicaltrout> that'll do nicely then
<marcoceppi> so if you have a relation and you remove one side of that, the relation-departed/broken hooks fire in addition to the units being torn down
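The teardown ordering marcoceppi describes can be shown with a toy sketch. This is not Juju's real dispatch code, just an illustration of the order in which a unit with a relation would see hooks when its application is removed (hook names follow Juju's relation-departed/relation-broken/stop convention):

```python
# Toy model of teardown: removing an application fires the relation
# hooks for each relation before the unit's stop hook.
def teardown_hooks(relations):
    """Return hook names in the order a departing unit would run them."""
    hooks = []
    for rel in relations:
        hooks.append(f"{rel}-relation-departed")
        hooks.append(f"{rel}-relation-broken")
    hooks.append("stop")
    return hooks

print(teardown_hooks(["dcos-master"]))
# → ['dcos-master-relation-departed', 'dcos-master-relation-broken', 'stop']
```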
<marcoceppi> lazyPower: I think I found a focus that's more "smoke test" applicable
<lazyPower> marcoceppi - meet you in the batcave?
<marcoceppi> omw
<Guest55756> Hi, getting error while deploying juju-2.0beta16
<Guest55756> 2016-08-30 05:59:48 INFO juju.cmd supercommand.go:63 running jujud [2.0-beta16 gc go1.6.2] 2016-08-30 05:59:48 ERROR cmd supercommand.go:458 creating LXD client: Get https://10.178.209.1:8443/1.0: Unable to connect to: 10.178.209.1:8443 ERROR failed to bootstrap model: subprocess encountered error code 1
<Guest55756> Is there any work around for this error?
<Guest55756> getting below error while deploying juju-2.0beta16
<Guest55756> 2016-08-30 05:59:48 INFO juju.cmd supercommand.go:63 running jujud [2.0-beta16 gc go1.6.2] 2016-08-30 05:59:48 ERROR cmd supercommand.go:458 creating LXD client: Get https://10.178.209.1:8443/1.0: Unable to connect to: 10.178.209.1:8443 ERROR failed to bootstrap model: subprocess encountered error code 1
<Guest55756> Is there any work around for this error?
<babbageclunk> Guest55756: What was the command you ran?
<Guest55756> juju bootstrap local.lxd-test localhost
<babbageclunk> And what does the local.lxd-test cloud look like?
<babbageclunk> (I mean, how's it defined?)
<babbageclunk> Guest55756: Ooh, what version of LXD are you running?
<Guest55756> LXD version - Let me check
<Guest55756> root@ptcvm3:~# lxd --version 2.0.4 root@ptcvm3:~#
<Guest55756> babbageclunk_:its 2.0.4
<Guest55756> babbagecluck_:any work around for this error?
<marcoceppi> lazyPower: so, -ginkgo.focus=Kubectl.client seems to be a decent smoke test, 32 tests including Guestbook.application
<marcoceppi> basically, can I talk to the cluster via client?
<Guest55756> babbageclunk_:any work around for this error?
<lazyPower> marcoceppi - its a brilliant place to start.
<lazyPower> marcoceppi - let me recap whats required to do that. Just build the e2e test suite, set the ginkgo.focus= flag during run of e2e.test and thats basically it?
<lazyPower> aside from the credentials dance
<marcoceppi> lazyPower: just quick_release
<marcoceppi> then run with a few parameters
<lazyPower> allright, i'll see if i can get a job setup so we can pipeline it
<babbageclunk> Guest55756: Sorry, was on the phone.
<lazyPower> are you still wanting an external charm to do this validation step?
<marcoceppi> lazyPower: waiting for test to finish, it's a bit long
<babbageclunk> Guest55756: Are you able to launch a lxd container on that machine separately from juju?
<Guest55756> babbageclunk_:how to do that?
<babbageclunk> Guest55756: Also, how hard would it be for you to upgrade to lxd 2.1?
<Guest55756> babbageclunk_:when I do apt-get install lxd, this version only comes
<Guest55756> babbageclunk_:If I do apt-get upgR
<bac> marcoceppi, pmatulis: is this ready to merge? https://github.com/juju/docs/pull/1315  would like to kill those 404s
<babbageclunk> Guest55756: launch a new container: `lxc launch ubuntu:`
<Guest55756> babbageclunk_:If I do apt-get upgrade lxd will it upgrade
<Guest55756> babbageclunk_:ok
<Guest55756> root@ptcvm3:~# lxc launch ubuntu: Creating vorticose-tifany Starting vorticose-tifany root@ptcvm3:~#
<Guest55756> babbageclunk_:root@ptcvm3:~# lxc launch ubuntu: Creating vorticose-tifany Starting vorticose-tifany root@ptcvm3:~#
<babbageclunk> Guest55756: I think 2.1 is only in the ppa, so you'd need to do `sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable` and then `sudo apt-get update` and `sudo apt-get upgrade`
<Guest55756> babbageclunk_:ok let me follow these steps and retry
<Guest55756> babbageclunk_:any other step apart from these?
<babbageclunk> Guest55756: Thanks. If this works then it means we have a bug though - we should be able to work with lxd 2.0.4.
<Guest55756> babbageclunk_:will update you if its successful
<babbageclunk> Guest55756: Ok, thanks. At the moment I'm not getting notifications of your replies, because they're showing up for me as babbageclunk_ - if you just say babbageclunk I'm more likely to notice if IRC isn't focused (ie. most of the time).
<Guest55756> ok :)
<pmatulis> bac, i'll look at it now. i would like some feedback from your side re https://github.com/juju/docs/pull/1314
<marcoceppi> lazyPower: so here's the output
<marcoceppi> lazyPower: http://paste.ubuntu.com/23111989/
<marcoceppi> I wonder if a few of these are because I'm using 1.3.3 kubectl and not $master
<lazyPower> doubtful
<marcoceppi> \o/
<lazyPower> do we get more detailed summary?
<lazyPower> or is that what we get
<mattyw> mgz, ping?
<marcoceppi> lazyPower: I can get a bunch of failure output
<marcoceppi> but it's way up in my buffer
<lazyPower> ack
<Guest55756> babbageclunk:2016-08-30 13:58:37 INFO juju.cmd supercommand.go:63 running jujud [2.0-beta16 gc go1.6.2] 2016-08-30 13:58:37 ERROR cmd supercommand.go:458 creating LXD client: Get https://10.209.9.1:8443/1.0: Unable to connect to: 10.209.9.1:8443 ERROR failed to bootstrap model: subprocess encountered error code 1
<Guest55756> babbageclunk: same error
<marcoceppi> lazyPower: like this: http://paste.ubuntu.com/23111996/
<lazyPower> marcoceppi - packaging conflict has bubbled up in charm-tools   pkg_resources.ContextualVersionConflict: (PyYAML 3.12 (/usr/local/lib/python2.7/dist-packages), Requirement.parse('PyYAML==3.11'), set(['jujubundlelib']))
<lazyPower> i'm seeing this in all the current implementations of charmbox, on status and in ci
<marcoceppi> lazyPower: yeah, that's been fixed in master
<lazyPower> ok, so bump the boxes to build from master?
<mgz> mattyw: responded in other channel
<marcoceppi> lazyPower: what are they building from now?
<marcoceppi> pypi?
<Guest55756> babbageclunk:sorry, its still running
<lazyPower> marcoceppi - apt
<marcoceppi> lazyPower: that should never happen
<lazyPower> https://github.com/juju-solutions/charmbox/blob/master/install-review-tools.sh#L12
<Guest55756> babbageclunk: same error --> Reading state information... tmux is already the newest version (2.1-3build1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. Attempt 1 to download tools from https://streams.canonical.com/juju/tools/agent/2.0-beta16/juju-2.0-beta16-xenial-amd64.tgz... tools from https://streams.canonical.com/juju/tools/agent/2.0-beta16/juju-2.0-beta16-xenial-amd64.tgz downloaded: HTTP 200; time 80.146s; 
<Guest55756> babbageclunk: these are the steps I followed. Please validate --->  2009  sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable  2010  sudo apt-get update  2011  sudo apt-get upgrade  2012  sudo add-apt-repository ppa:juju/devel  2013  sudo apt-get update  2014  sudo apt-get install juju-2.0 lxd  2015  lxd --version  2016  juju --version  2017  sudo lxd init  2018  #juju bootstrap lxd-test localhost  2019  juju list-controllers  2020  j
<marcoceppi> lazyPower: why are you pip upgrading?
<babbageclunk> Guest55756: Can you put that on paste.ubuntu.com? It's getting mangled here.
<lazyPower> Tooling bump
<lazyPower> we put tims test tooling bumps in both stable and devel, it required installation from source and a bump. but thats been a few weeks ago, the scenario may have changed
<marcoceppi> lazyPower: well a new dep now installs PyYAML 3.12
<marcoceppi> which does not work with jujubundlelib
<lazyPower> do you know which component is pulling in the new dep?
<lazyPower> oh wait i see the explicit upgrade to pyyaml derp
<Guest55756> babbageclunk_: http://paste.ubuntu.com/23112045/
<babbageclunk> Guest55756: ok, looking
<babbageclunk> Guest55756: When I try the same thing I get 'unknown cloud "local"' - have you got a definition for local in `juju list-clouds`?
<Guest55756> babbageclunk: let me check
<babbageclunk> Guest55756: Try `juju bootstrap local lxd` instead.
<Guest55756> babbageclunk: Ok will try this. Result of previous command : root@ptcvm3:~# juju list-clouds CLOUD        TYPE        REGIONS aws          ec2         us-east-1, us-west-1, us-west-2, eu-west-1, eu-central-1, ap-southeast-1, ap-southeast-2 ... aws-china    ec2         cn-north-1 aws-gov      ec2         us-gov-west-1 azure        azure       centralus, eastus, eastus2, northcentralus, southcentralus, westus, northeurope ... azure-ch
<Guest55756> babbageclunk: shall i paste in ubuntu.com?
<babbageclunk> Guest55756: pastebin again please ;)
<Guest55756> babbageclunk: i am trying the other command
<babbageclunk> Guest55756: ok
<Guest55756> babbageclunk: http://paste.ubuntu.com/23112081/
<babbageclunk> I get the same error as you're getting.
<Guest55756> ok
<Guest55756> babbageclunk: I was not getting the same error in beta15
<babbageclunk> Guest55756: Ok, so it might be a bug in beta 16 - I'm going to create a bug for it.
<Guest55756> babbageclunk_: when tried `juju bootstrap local lxd`, got the same error. Looks like a bug
<Guest55756> babbageclunk_: is there any way to take juju2.0-beta15 version? by setting ppa:juju/ ?
<Guest55756> babbageclunk_:it take default devel version beta16
<babbageclunk> Guest55756: Try using http://askubuntu.com/questions/307/how-can-ppas-be-removed to remove the juju ppa, then remove juju and reinstall it - that should get you back to beta15
<babbageclunk> Guest55756: (There might be a more direct way than that.)
<babbageclunk> Guest55756: (But I'm not sure.)
<Guest55756> babbageclunk_:ok. Thanks a lot. Will try that one
<babbageclunk> Guest55756: good luck, sorry for the trouble.
<Guest55756> babbageclunk_:Thanks, np :)
<marcoceppi> petevg: I had some problems using charms.unit over the weekend, mainly that it broke a whole bunch of imports when I tried Harness.patch_imports
<marcoceppi> wanted to run through them with you to see if I can figure out a way to fix it
<petevg> marcoceppi: I am working on an email on exactly that subject.
<petevg> marcoceppi: basically, patching imports is a horrible idea.
<marcoceppi> petevg: how else do we get around it?
<marcoceppi> I'm using layer:apt and rely on lib/charms/apt.py from that layer
<petevg> marcoceppi: I've got a PR in on the docs with some "best practices" for working around it. But basically I think that I need to scrap the harness :-(
<petevg> https://github.com/juju/docs/pull/1317
<cholcombe> I have a py2 module that i need to use with reactive.  Am I up a creek?
<bac> pmatulis, evilnickveitch: this PR landed https://github.com/juju/docs/pull/1315 and it fixes the issue but only for 'devel'. can you update the stable tag on that branch so we don't have the 404s? i'm not sure of your criteria for moving the tag.
<marcoceppi> cholcombe: kind of
<bdx> is there an option that exists somewhere that will silence child layer messaging - if not this would be super useful
<bdx> s/child layers/lower layers/
<bdx> also, what is the technical terminology we are using for "lower layer"|"child layer"?
<marcoceppi> bdx: is it really that hurtful? I mean at the end of the day it's just telling the operator what it's doing
<marcoceppi> bdx: and since you own the top layer, you set the final messages
<bdx> marcoceppi: well, I only bring it up because I just demoed deploying one of our apps as a charm, people were thoroughly impressed, the only negative feedback I got was that "the ruby messaging persisting for the entirety of the life of the charm is annoying, can we change that?"
<bdx> its a polish thing
<marcoceppi> bdx: you totally can, in your top layer :)
<bac> pmatulis, marcoceppi, evilnickveitch: automated dead link checking: https://github.com/juju/docs/pull/1318
<pmatulis> bac, i'd like to but that is the same file affected by the other bug and it has a bad link. i'd like to correct it first
<bac> pmatulis: ok.  if not immediately, perhaps soon.
<bdx> marcoceppi: I am ... the last thing I do  in my charm is -> http://paste.ubuntu.com/23112522/
<bdx> no matter what layer ruby messaging just trumps all
<marcoceppi> bdx: that's a bug in ruby layer
<marcoceppi> bdx: you might also want to respond to @hook('update-status')
<bdx> oooh, and set status there as well
<bdx> can I use @hook in a reactive layer?
<marcoceppi> bdx: you can, it's discouraged, but this is a good example of where it works
<marcoceppi> bdx: also, you can just have @when('prm.available') and have it always status_set...
 * marcoceppi looks at ruby layer
<marcoceppi> bdx: ah, I see the problem
<marcoceppi> this is a quick fix
<bdx> marcoceppi: should I just not use the @when_not('prm.available')
<bdx> and set_state('prm.available')
<marcoceppi> bdx: no, I think that's fine
<bdx> instead just ensure that it fires all the time
<bdx> lol
<marcoceppi> bdx: https://github.com/battlemidget/juju-layer-ruby/pull/7
<bdx> lol
<bdx> YES
<marcoceppi> bdx: I've also got a bunch of other fixes to make this layer more modern I'm about to put in another PR
<marcoceppi> bdx: this will have a few changes to what you're doing in your layers, as an FYI
<bdx> marcoceppi: also, I've another odd little issue ... http://paste.ubuntu.com/23112614/
<bdx> no matter what I do, I can't get my charm to open ports
<bdx> ha
<bdx> I feel like I'm about to feel really stupid, but I've gone over everything 10x and can't seem to figure it out ... works in all my other charms implemented identically
<marcoceppi> bdx: none of the charms are exposed
<marcoceppi> also, lxd doesn't really have a firewaller
<bdx> http://paste.ubuntu.com/23112623/
<marcoceppi> they should all just be "open"
<bdx> on aws I get the same thing tho here ..
<marcoceppi> bdx: I see the problem
<bdx> even when exposed
<marcoceppi> website.available is only run when you connect to the http interface
<bdx> ooooooh
<marcoceppi> bdx: you'll want to do that during prm.available, probably
<marcoceppi> the rest of the hook looks fine, but open the port there instead
<bdx> ok
<bdx> exactly what I needed
<bdx> thank you thank you
<marcoceppi> bdx: this is what I want to change: https://github.com/battlemidget/juju-layer-ruby/pull/8
<marcoceppi> bdx: how much of the configuration options do you use for the ruby stuff?
<bdx> just 'version'
<bdx> 'ruby-version'
<marcoceppi> bdx: it seems that config is idempotent though, like if you change it after a deploy, it won't do anything
<marcoceppi> how often do you plan on changing ruby version?
<bdx> never
<bdx> it should be an option then?
<bdx> all of those should be
<marcoceppi> bdx: yeah, that seems like a compile time option
<marcoceppi> bdx: that's what I'm thinking
<marcoceppi> I'll make that a separate pull request
<marcoceppi> I feel bad ripping out all these chunks
<bdx> anything that reduces the # of configs getting slammed in config.yaml +1
<bdx> oh I see, you haven't gotten that far
<marcoceppi> bdx: yeah, not yet, I may do that here or in another PR
<bdx> marcoceppi: when I try to build my charm with your modernize branch -> http://paste.ubuntu.com/23112908/
<bdx> possibly I need to remove /home/bdx/allcode/charms/repo/deps ?
<bdx> marcoceppi: https://github.com/battlemidget/juju-layer-ruby/issues/9
<marcoceppi> bdx: yeah, that WIP is not ready to land until charm-tools is fixed
<bdx> awwwwee
<cholcombe> marcoceppi, any eta on review.juju.solutions coming back up?
<cholcombe> just wondering
<valeech> How can I modify a charm's bundle.yaml to deploy services to a specific maas host? I have tried changing the -to: from just 1 2 3 to hostname1.maas hostname2.maas and it fails with "placement 'hostname1.maas' refers to an application not defined in this bundle"
<bdx> valeech: https://github.com/jamesbeedy/os_ha_test_stack/blob/master/l3_ha_charmconf.yaml
<bdx> valeech: specifically https://github.com/jamesbeedy/os_ha_test_stack/blob/master/l3_ha_charmconf.yaml#L428,L440
<bdx> thats one way
<valeech> Eureka! Thanks bdx!
<bdx> add constraints that are machine tags in maas
<bdx> valeech: np
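As a hedged aside (this fragment is illustrative, not copied from the linked file): the approach bdx points at declares machines in the bundle with MAAS tag constraints, then places units on those machine numbers. The tag names and the charm are placeholders:

```yaml
machines:
  '0':
    constraints: tags=hostname1   # tag defined on the node in MAAS
  '1':
    constraints: tags=hostname2
applications:                     # "services:" in older bundle formats
  mysql:
    charm: cs:mysql
    num_units: 2
    to: ['0', '1']
```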
<hatch> using tip it appears that --region is no longer supported?
<magicaltrout> marcoceppi: sometimes i'm very sad
<magicaltrout> bugg@tomsdevbox:~/Projects/dcos-master$ juju deploy nagios
<magicaltrout> panic: unknown channel "edge"
<marcoceppi> magicaltrout: the saddest of pandas?
<magicaltrout> sad because it worked this  morning :P
<marcoceppi> magicaltrout: yeah, a new charm store just went out, are you on beta15?
<magicaltrout> yeah
<marcoceppi> oh boy.
 * magicaltrout 's head hits the keyboard
<magicaltrout> okay let me caveat that
<magicaltrout> i have an empty beta 15
<magicaltrout> i'm not stuck to beta 15
<magicaltrout> i just need stuff to deploy
<BradCrittenden> magicaltrout: please update to beta 16
<magicaltrout> sage advice
<magicaltrout> is that in some funky ppa?
<marcoceppi> magicaltrout: ppa:juju/devel
<bac> magicaltrout: we have three moving parts; we tried to coordinate the release as well as the communication. they didn't all make it out together
<bac> magicaltrout: short answer, we now have four channels and juju before beta 16 will not work
<magicaltrout> no problem
<magicaltrout> being locked out is something i'm used to ;)
<magicaltrout> alrighty
<magicaltrout> take 1
<magicaltrout> 2
<aisrael> de3b7q!0
<aisrael> gah
<cmars> huh, all i see is ********
 * aisrael snickers
<magicaltrout> i see de3b7q!0
<magicaltrout> it must be some obfuscated hash
<jrwren> i see hunter2
#juju 2016-08-31
<kjackal_> Hello Juju World!
<KpuCko> hello, is there any way to change the juju controller ip address? I installed juju with the randomly generated lxdbr0 network as a bridge; now I'd prefer to use a manually created bridge so I can reach my vm's more easily. Now when I type juju status, juju cannot connect to the juju controller
<magicaltrout> ah one of those quality days where you try and start ec2 nodes and they just fail in error :)
<magicaltrout> anybody around who knows what on earth you do in beta16 now that upload-tools seems to have vanished?
<admcleod_> magicaltrout: sync-tools instead?
<magicaltrout> what the f**k
<magicaltrout> you need to tell your overlords to stop being so pedantic
<admcleod_> im just guessing. i remembered upload-something was obsolete but that appears to be upload-series
<admcleod_> also, i cant really help since im mainly using 1.25
<magicaltrout> you make me sad admcleod_
<magicaltrout> get with the times
<admcleod_> thats why im here ;) are you going to strata ny?
<magicaltrout> i'm not. I'm doing spain and pasadena twice before the end of the year
<magicaltrout> any more and the mrs might kill me
<admcleod_> also are you saying --upload-tools is missing from bootstrap?
<magicaltrout> that is correct
<magicaltrout> it was there in 15
<magicaltrout> sync-tools does look suspiciously like its what its moved to
<magicaltrout> but it did used to live in the bootstrap
<babbageclunk> magicaltrout: There's a thread about it here: https://lists.ubuntu.com/archives/juju-dev/2016-August/005893.html
<babbageclunk> I guess it was only discussed on the juju-dev list because people assumed that it was only used by devs. Oops!
<magicaltrout> thanks babbageclunk it probably ended up in a release email somewhere I failed to read as well
<magicaltrout> no problem
<rock_> Hi. I deployed OpenStack on LXD using https://github.com/openstack-charmers/openstack-on-lxd. I created a "cinder-storagedriver" charm and pushed it to the public charm store. Using the Juju GUI, when I tried to deploy "cinder-storagedriver" by adding a relation to "cinder", it threw an error.
<rock_> ERROR: Relation biarca-openstack:juju-info to cinder:juju-info: cannot add relation   "biarca-openstack:juju-info cinder:juju-info" : principal and subordinate applications' series must match.
<rock_> Manually, when I deployed our charm from the charm store, it deployed fine and I was able to add the relation to cinder.
<rock_> As part of debugging, I deployed juju-gui as $juju deploy cs:juju-gui-134 on our setup. But it showed the series as trusty even though we had chosen xenial.
<rock_> I am thinking this is a JUJU-GUI issue.
<rock_> (or) Is it some other issue? Please, can anyone respond to this?
<kjackal> Hi rock_ the series of the deployed charms should be shown on your juju status
<kjackal> can you make sure you have biarca-openstack and cinder charms on the same series?
<kjackal> rock_: you are on juju 2.0?
<kjackal> you could choose the series with the --series flag at deploy time
<rock_> kjackal: Hi. biarca-openstack and cinder charms are in the same series (Xenial). Yes, juju 2.0
<rock_> But I am able to deploy my charm through the juju cli from the charm store.
<rock_> And I can add the relation to cinder successfully.
<rock_> When I do it from the Juju CLI, everything works fine.
<D4RKS1D3> Hi, I want to change the mtu of juju-br0. I changed the value on the associated interface, but when I restart the machine it changes back to 1500.
<D4RKS1D3> Does someone know how to make the change permanent? The interface in /etc/network/interfaces is set to manual and I can not change it in that config file.
<rock_> kjackal: But from the Juju GUI, I am not able to add the relation to cinder. It gives: ERROR: Relation biarca-openstack:juju-info to cinder:juju-info: cannot add relation "biarca-openstack:juju-info cinder:juju-info": principal and subordinate applications' series must match.
<kjackal> rock_: where is your cinder-storagedriver charm?
<kjackal> rock_: can I see it?
<rock_> Yes. Type  "biarca"  in charm store.
<kjackal> rock_: this one: cinder-storagedriver
<kjackal> rock_: this one: https://jujucharms.com/u/siva9296/biarca/0
<kjackal> rock_: Looks ok to me. You could ask at #juju-gui and #openstack-charm . It might be a bug
<rock_> kjackal: ok. Thank you.
<magicaltrout> T-1hour
<magicaltrout> still working on my demo \o/
<PCdude> hey magicaltrout, hru?
<magicaltrout> lol
<magicaltrout> tired... still hacking
<magicaltrout> you know
<magicaltrout> the usual before a 50 minute presentation ;)
<PCdude> haha, 50 minutes and u are wasting time here ;) , good time management
<PCdude> but anyway, good luck!
<magicaltrout> thanks :P
<marcoceppi> good luck magicaltrout
<magicaltrout> aye... then i need to come back to the speakers room and start tomorrows :P
<magicaltrout> which you guys claim to want to film and slap on youtube... never good
<magicaltrout> woop thanks for that dump cmars
<magicaltrout> it started working
<cmars> magicaltrout, awesome!
<magicaltrout> don't think i got my original config.yaml stanza correct
<admcleod_> magicaltrout: good luck
<lazyPower> magicaltrout knock em downnnn
<admcleod_> break someones leg
<marcoceppi> smash 'em mash 'em stick 'em in a stew!
<marcoceppi> or, whatever
<admcleod_> ill take 2 whatevers
<lazyPower> petevg Thanks for https://github.com/juju/docs/pull/1317
<lazyPower> petevg looks good from my house, and it gets us some proper representation of what charmers should be thinking about as they organize and structure their modules.
<kjackal> hey petevg, are you there?
<magicaltrout> all done
<magicaltrout> not a bad turnout, must have been 30 odd
<admcleod_> did you tell any jokes?
<magicaltrout> I don't believe i did
<magicaltrout> i did ask who'd heard of Juju
<magicaltrout> and a whole 2 hands went up
<Randleman> Nice
<magicaltrout> now i need to write my talk and demo for tomorrow :P
<jrwren> magicaltrout: were they your two best friends, or were they complete strangers?
<magicaltrout> pfft
<magicaltrout> actually  the only guy i knew in the room was a guy that hacks on Karaf in his spare time
<magicaltrout> not got many friends at this one =/
<magicaltrout> anyway! in reality whilst we all wish everyone knew of and used juju at talks its probably better to talk to a room of people who don't know what it is
<magicaltrout> hell, it was in my talk title and still 30 people showed up ;)
<jrwren> magicaltrout: well, that is pretty good then!
<magicaltrout> aye
<magicaltrout> and appears 30odd are likely to turn up tomorrow to my Business Intelligence juju talk
<magicaltrout> so at least its getting faces in front of the software
<magicaltrout> appears to be standing room only at Bluefin tomorrow night
<magicaltrout> better come up with something half decent then
<petevg> kjackal: I am here, though apparently I'm not paying attention to my IRC client ...
<kjackal> petevg: :)
<petevg> kjackal: I just pushed a new revision of the Zookeeper charm. If those timeouts are legit, it should fix them.
<kjackal> I wanted to ask you something on the zookeeper charm
<petevg> kjackal: ask away.
<petevg> lazyPower: thank you for the positive feedback on that testing PR :-)
<kjackal> I see that now the zookeeper waits for a service restart when new units are added
<petevg> kjackal: It does. We did that because the Zookeeper docs advised that it's not always stable after removing or adding units.
<kjackal> petevg: have you tested this behaviour with the HA bundles that expect ZK to have 3 units?
<petevg> kjackal: I haven't. That is a very good test to add to the bundle tests.
<kjackal> petevg: Yes I remember that, I am not sure how this behavior will play along with the hadoop processing and the rest of the bundles
<petevg> kjackal: it probably will function well. The Zookeepers don't stop working as they're waiting for the restart. They just won't pick up new peers until you do the restart.
<kjackal> Hm... so we need the restart to have Zookeeper HA
<petevg> kjackal: if you specify three zookeepers in your bundle, I believe that this works around the issue; Zookeeper will only wait for a restart if it has already setup a quorum that then changes.
<petevg> Let me verify that ...
<kjackal> I have tested "juju deploy zookeeper -n 3" and the restart is needed there
<valeech> Sorry guys. I just registered for the charmers summit. I hope I don't hold back too many people :)
<kjackal> petevg: And what happens upon restart? Do the ZK records of all peers get merged?
<petevg> kjackal: I'm not sure what Zookeeper does internally. I just know that the docs advise that you do a rolling restart after you've changed the number of peers.
<petevg> ... if you don't, in the Zookeeper version that we're using, a job can get left without a peer to attach to.
<petevg> kjackal: darn it. You're right about the peers needing a restart even on setup. I bet that the timing of things changed when we ripped out the openjdk relation :-/
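One background detail worth keeping in mind here (a general ZooKeeper property, not something stated in this log): an ensemble stays available only while a majority of its n servers agree, which is why the HA bundles kjackal mentions expect 3 units. A one-line sketch of that majority rule:

```python
# Majority (quorum) size for a ZooKeeper ensemble of n servers:
# the ensemble tolerates n - quorum(n) failures.
def quorum(n):
    return n // 2 + 1

print([quorum(n) for n in (1, 3, 5)])   # [1, 2, 3]
```

So 3 units tolerate one failure; adding a 4th unit raises the quorum to 3 without tolerating any more failures, which is why odd ensemble sizes are the norm.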
<jamespage> marcoceppi, hey - I need to add uosci-testing-bot to the charmers team to support our automated push/publish process for promulgated charms - is that OK with you?
<marcoceppi> jamespage: you don't actually need to do that
<marcoceppi> jamespage: we can just give uosci-testing-bot write access to the charms that it needs to publish
<jamespage> beisner, ^^
<jamespage> marcoceppi, oh yeah that's a good idea
<jamespage> beisner, ^^
 * jamespage faceplants
<marcoceppi> beisner jamespage just give me a list of entities and I'll give you guys the commands, I'm sure a few of you are still ~charmers ;)
<beisner> jamespage, marcoceppi - hrm yah, looks like perms are good for some of them, but not all, for the bot user.
<marcoceppi> beisner jamespage tl;dr: charm grant <charm> --channel stable --acl write uosci-testing-bot
<beisner> ack marcoceppi thank you
<valeech> running juju2.0beta15 when trying to deploy to lxd container, I get the following error: 'failed to ensure LXD image: unable to get LXD image for ubuntu-xenial: The requested image couldn''t be found.' If I ssh into the machine and run "sudo -E lxc launch ubuntu:16.04" it starts just fine. Here is the yaml file I am using to deploy: http://pastebin.com/hzx6icGk
<magicaltrout> itsn't it some naming thing
<magicaltrout> isn't
<magicaltrout> like
<magicaltrout> you download the image and tag it with a custom name, in your case ubuntu-xenial
<magicaltrout> its been a while, my memory might be failing me
<petevg> kjackal: on that failing zookeeper test run, are there tracebacks in the juju logs?
<valeech> magicaltrout: That makes sense. I am learning so I am cutting and pasting from different sources and I bet that is the case.
<kjackal> petevg: I am afraid not
<magicaltrout> valeech:
<magicaltrout> if you run
<magicaltrout> lxc image list
<magicaltrout> you'll see the alias column on the left
<magicaltrout> thats what it uses
<valeech> magicaltrout: I see that. How do you set that on newly created machines from maas?
<magicaltrout> erm
<magicaltrout> well you do it when you do an image pull
<magicaltrout> does that happen in MAAS or is it magic?
<valeech> magicaltrout: It is very possible there is a gap in my understanding, but doesn't maas deploy a vanilla image on the machine then hand it over to juju? At that point, juju would need to pull down whatever images it needs for lxd?
<magicaltrout> ah yeah, now i get you
<magicaltrout> so this is during juju bootstrap?
<magicaltrout> or post bootstrap?
<magicaltrout> like, if you do a juju bootstrap it should just dump it on the metal
<magicaltrout> so no lxd involved to my knowledge
<valeech> Correct. I have a new juju model setup and I am trying to deploy my first charm to a container. So juju has to get a machine from maas to work with
<magicaltrout> k
<valeech> that part works. when juju tries to set up the container to push the app to, the container fails to start.
<magicaltrout> well, i asked around the other day and my understanding was that whilst it needs an image, thats not a lxd image
<magicaltrout> okay, so you have MAAS spin up a node, then you try and push a LXD charm to it?
<valeech> Kinda. juju asks maas to spin up a node which will become machine 0 in juju. Then juju attempts to initialize a container on machine 0 which would become machine 0/lxd/0 in juju. Then juju would push the charm to that container.
<magicaltrout> i'm sufficiently confused, I tried to get MAAS running on virtualbox the other day and it failed, but i did ask around and was told that if you "juju deploy mycharm" it would go onto the bare metal not a lxd container
<skay> does mojo work with juju2 yet?
<lazyPower> valeech - there has been a ton of work around that. this is juju 2.0-beta16 right?
<lazyPower> skay - last i heard it was still 1.25 only
<valeech> That is correct. It will deploy to bare metal. I am trying to get the next step which is deploy inside a container on the bare metal.
<skay> aw
<valeech> lazyPower: it is beta 15. I can load beta 16.
<lazyPower> shouldn't make too big of a difference.
<lazyPower> do you have logs from the container thats failing to start?
<magicaltrout> valeech: i'd update anyway
<magicaltrout> because the charm store will make your instance explode as soon as you get going :)
<valeech> ok, I will update. as far as container logs, how would I go about getting those?
<valeech> haha
<babbageclunk> valeech - I've seen that as well - I think it's resolved by beta16.
<valeech> babbageclunk: great. I am upgrading to beta16 now but having some trouble...
<babbageclunk> valeech: trouble?
<valeech> babbageclunk: I donât think I upgraded properly. I did an apt-get upgrade juju which upgrade the juju client host to beta16. Then in the controller model I did juju upgrade-juju. It said is was upgrading but when I would issue a juju status I would get this error: âERROR invalid entity name or password (unauthorized access)â Maybe I didnât wait long enough?
<babbageclunk> valeech: hmm - I'm not really familiar with using upgrade-juju. Are you in a position where you can rebootstrap instead?
<valeech> babbageclunk: That is what I did :) It is adding my redundant controllers right now from juju enable-ha.
<babbageclunk> valeech: ok cool - so things are looking alright at the moment?
<valeech> babbageclunk: everything looks good. About to deploy this charm to the container and see what happens.
<babbageclunk> valeech: fingers crossed.
<valeech> babbageclunk: it spun up the container!! Now it's installing the charm :)
 * babbageclunk dances
<valeech> babbageclunk: Woohoo! mysql/0  active    idle   0/lxd/0  10.0.129.59            Unit is ready
<valeech> many thanks to babbageclunk, magicaltrout and lazyPower
<jacekn> is there a known problem with "charm build" not working any more? I'm getting "ERROR unrecognized command: charm build" when I try to build a charm using charm 2.1.1-0ubuntu1
<lazyPower> nice valeech
<lazyPower> jacekn - try charm-tools from the snap channels
<jacekn> lazyPower: ah that could be it, thanks
<magicaltrout> jacekn: also it got broken into  a bunch of binaries
<magicaltrout> charm-build
<magicaltrout> for example
<jacekn> magicaltrout: that's fine as long as dependencies and/or sensible error messages are there
<magicaltrout> sensible error messages?
<magicaltrout> this is juju
<valeech> ok, next challenge, how do I set the lxd container to use a specific bridge? when juju created it inside of machine 0, lxdbr0 was added to machine 0 with what looks like a random IP segment. Is there a mechanism that I can tell juju to use a specific interface for containers so I can make sure the containers end up in the correct IP segment?
<PCdude> magicaltrout:  how did it go?
<magicaltrout> not  bad PCdude
<magicaltrout> 30 odd people and i stretched it out to 40 mins
<magicaltrout> so i guess  it was a success
<PCdude> sounds like a success
<magicaltrout> just tomorrow to do now
<PCdude> whats tomorrow?
<magicaltrout> business intelligence and juju talk for a user group in london
<PCdude> ah ok, u keep urself busy haha
<magicaltrout> initially both talks were on the same day
<magicaltrout> one in ams one in london
<PCdude> well it is possible, but convenient is something else
<PCdude> magicaltrout:  let me ask u, sometimes questions on stackexchange take years before they get answered, and on IRC it is sometimes hard to find answers too. where do u go when u have questions?
<magicaltrout> IRC
<magicaltrout> mailing lists
<magicaltrout> i do  a lot of  apache stuff
<magicaltrout> thats all mailing lists
<PCdude> I never used mailings lists, how does that work exactly?
<magicaltrout> you sign up
<magicaltrout> and send emails
<magicaltrout> ... mailing.... lists ;)
<babbageclunk> valeech: I'm not sure - I think the best people to help you have dropped off for the day, sorry.
<magicaltrout> PCdude: juju relies heavily on IRC and mailing lists
<magicaltrout> if people are offline on IRC
<magicaltrout> you generally post to the list and get an answer there
<magicaltrout> depends on the project though, not all use mailing lists but many do
<PCdude> magicaltrout: ah ok, so if I am correct, I add my e-mail address there and when I have a question that e-mail is send to anyone in that list?
<PCdude> and anyone can respond to that mail
<magicaltrout> https://lists.ubuntu.com/mailman/listinfo/juju
<magicaltrout> you subscribe there
<magicaltrout> then you will receive mails sent to the list
<magicaltrout> and you can send to the list
<PCdude> but at what rate are those e-mails sent? I mean, a regular forum goes so fast that it means 5 mails a second
<magicaltrout> few a day
<jose> it's a discussion list. not extremely heavy volume, but not as low as an announcement only list.
<magicaltrout> PCdude: i get about 800 a day from apache projects, you'll be fine ;)
<PCdude> ah ok, good to know, lets not use my primary mail address haha, I just wanna get closer to the heat :)
<holocron> hi all, quick question i hope.. I already have some machines added to my model and I'd like to specify them as the targets in a juju bundle
<magicaltrout> i just have  a bunch of filters
<holocron> the machines were manually added:
<holocron> whenever i attempt to deploy a bundle, it goes and tries to make 4 more machines, but i'd rather it just used the 4 i already have
<magicaltrout> holocron: you probably need to use --to:... in the deploy stanza
<magicaltrout> to tell it where to go
<holocron> magicaltrout: sure I can direct charm deployment to the machines
<holocron> but i want to do it from a bundle
<magicaltrout> i suspect you're out of luck
<magicaltrout> but lazyPower might know better
 * lazyPower reads scrollback
<lazyPower> bundle machine #'s dont necessarily correspond to machine #'s already in the model
<lazyPower> i'm not certain how to do that other than to use maas tagging holocron, but that assumes you're using the maas juju provider
<holocron> unfortunately i'm not lazyPower, I'm using the lxc provider
<holocron> lazyPower, can you tell me where some of this might be documented better? https://jujucharms.com/docs/devel/charms-bundles is not very telling
<holocron> in particular, i'm looking for the specification on placement directives
<lazyPower> holocron - that seems bug-worthy. If you don't mind opening a bug at http://github.com/juju/docs/issues with the specifics you're looking for, I can make sure we get it addressed
<holocron> lazyPower okay.. i think i answered my own question after reading the page i linked for a 2nd time
<holocron> there's simply no way to use a pre-defined machine directly from a bundle unless it's MAAS controlled?
<lazyPower> holocron - can you pastebin me your bundle?
<kwmonroe> petevg: i notice you have a convention with branch names at https://github.com/juju-solutions/layer-apache-bigtop-base.  i'd like to adopt it.. what are your keys?  i see bug/foo and feature/foo.  anything else?
<petevg> kwmonroe: I've just been using bug and feature. If you have something that doesn't fit, feel free to add a word, and let me know about it, so that I can use it, too :-)
<holocron> lazyPower http://pastebin.com/getn0Vzm
<holocron> that isn't importable due to "The following errors occurred while retrieving bundle changes: placement "3" refers to a machine not defined in this bundle.." etc etc
<holocron> i have machines 1-4 defined to juju already
<holocron> but there's no way to indicate them from the bundle it seems
<cory_fu> kwmonroe, petevg: If there is a corresponding bug / issue number, what do you think about including it in the branch name so that it's easy to associate them?  Something like "feature/123-foo"?
<petevg> cory_fu: that's officially part of the convention that I've borrowed, though I'm not always great at remembering. I'll try to be better about it :-)
<cory_fu> Oh, I didn't realize.  Carry on, then
<kwmonroe> cory_fu: i think the 123 won't be useful in practice.  how would you know if it was a gh issue, lp bug, or jira?
<cory_fu> kwmonroe: Hrm.  What about feature/gh-123/foo?
<kwmonroe> i suppose you could say bug/gh123-foo, feature/lp-bug... heh.. yeah cory_fu
<lazyPower> holocron - i think this is a regression.  Bugs would def. be welcome around this so we can get eyes on it.
<holocron> okay lazyPower, i'll open a bug .. LP or github?
<cory_fu> kwmonroe, petevg: Actually, is knowing that something is a feature vs bug useful info?
<lazyPower> https://launchpad.net/juju/ would be preferable, if it's closed fairly quickly with instructions we can port those into the docs.
<kwmonroe> i think so cory_fu.. i (will) tend to look at bug branches before features.. i think.
<cory_fu> Also, should we worry about cleaning up our branches after they're merged?
<cory_fu> kwmonroe: Fair enough
<lazyPower> I'm a +1 for cleaning up branches after merging
<lazyPower> but nobody asked me :)
<cory_fu> lazyPower: We appreciate your input all the same, you special snowflake, you. ;)
<kwmonroe> +1 for lazyPower being a good person.  and yes cory_fu, i'm all about deleting branches, whether they're merged or not.
<cory_fu> lol
<kwmonroe> ;)
<cory_fu> I should create a bot that deletes kwmonroe's branches as soon as he pushes them
<petevg> I'm merciless about cleaning up branches locally.
<petevg> I would be down with cleaning them up remotely, too.
<lazyPower> @cory_fu have you seen: https://github.com/jfrazelle/ghb0t
<cory_fu> I wish GH had a "Merge and delete this branch" button
<petevg> That actually fixes the issue where you squash a branch to get it ready to merge, and then you confuse someone who has the branch checked out locally.
<petevg> If the branch is just gone, the error is much less confusing.
<petevg> cory_fu: the delete button does appear right after you click the merge button ... but yeah, it would be nice to do it in one go.
<cory_fu> lazyPower: Nice
<kwmonroe> ok cory_fu petevg kja-tab-tab-tab, so here's our new thing:  branches are bug/lp-123/<short-desc>, feature/gh-123/<short-desc>, bug/jira-123/<short-desc> and the merger deletes the branch.  sound good?
<cory_fu> +1
<lazyPower> LOL @ tab-tab-tab
<petevg> Ooh. Extra forward slashes. Makes it look all neat and organized.
<lazyPower> RIP
<petevg> +1 :-)
<kwmonroe> aight cory_fu petevg, wiki step 2 updated: https://github.com/juju-solutions/bigdata-community/wiki/Bigtop-Patch-Process
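A small, hypothetical sanity-checker for the convention agreed above (the regex and function name are illustrative, not part of the wiki):

```python
import re

# <bug|feature>/<gh|lp|jira>-<number>/<short-desc>, per the team's scheme.
BRANCH_RE = re.compile(r'^(bug|feature)/(gh|lp|jira)-\d+/[a-z0-9][a-z0-9-]*$')

def valid_branch(name):
    return bool(BRANCH_RE.match(name))

print(valid_branch('bug/lp-123/fix-timeout'),    # True
      valid_branch('feature/gh-42/add-tests'),   # True
      valid_branch('my-cool-branch'))            # False
```

Something like this could run as a pre-push hook or CI check to keep branch names consistent.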
<petevg> Nice!
<holocron> https://bugs.launchpad.net/juju/+bug/1618996
<mup> Bug #1618996: unable to specify manually added machines from bundle.yaml <juju:New> <https://launchpad.net/bugs/1618996>
<lazyPower> jcastro ping
<Prabakaran> hello Team, I am not getting any proper info on the errors when I use the make lint command on the built charm. I mean, make lint is not showing which file these errors come from. Could someone please explain this to me?
<kwmonroe> Prabakaran: will you pastebin your charm's Makefile and 'make lint' output?
<Prabakaran> ya sure kwmonroe
<lazyPower> holocron thanks, i'll bring this up
<Prabakaran> kwmonroe: i have pasted here http://pastebin.ubuntu.com/23117014/
<kwmonroe> ah, that's an easy one Prabakaran :)  your machine is out of space.. what does "df -h" show?
<Prabakaran> kwmonroe: link is here http://pastebin.ubuntu.com/23117020/
<kwmonroe> yup Prabakaran.. your / filesystem at /dev/sda3 is 100% full
<Prabakaran> kwmonroe: you meant all these errors are because of the memory of the machine
<kwmonroe> yup Prabakaran.  the "apt-get install $apt_prereqs" failed because it didn't have enough space to write data to /var/lib/apt.
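A quick way to catch this failure mode before an install, sketched generically (not a command from the channel):

```shell
# Warn when the root filesystem is nearly full -- a 100% full /
# is a common cause of cryptic apt-get and pip failures like the one above.
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
if [ "$usage" -ge 95 ]; then
    echo "WARNING: / is ${usage}% full - free up space before installing"
else
    echo "OK: / is ${usage}% full"
fi
```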
<magicaltrout> sad times
<Prabakaran> thanks kwmonroe
<Prabakaran> i will clear my machine space and try this out
<kwmonroe> np Prabakaran.  if you've downloaded any charms from magicaltrout, i'd suggest deleting those first.
<Prabakaran> kwmonroe: magicaltrout means?
<kwmonroe> heh, sorry Prabakaran.. it was a joke.  magicaltrout is the guy in this channel that causes so many problems.  i like to blame all things on him first ;)
<magicaltrout> lol
<Prabakaran> thats no problem :)
<magicaltrout> kwmonroe: you're so mean
<magicaltrout> you make me sad
<magicaltrout> in other todos... graylog, we should get graylog onto juju
<kwmonroe> pretty sure it's the weather that's making you sad magicaltrout.  you need southern california stat.
<magicaltrout> hehe
<magicaltrout> our summer has been very nice
<magicaltrout> compared to the last few years
<magicaltrout> but i do need southern california
<magicaltrout> i even have a car god help you all
<x58> stub: Are you stub42 on Github?
<PCdude> magicaltrout:  for some weird reason is the MAAS error gone now :D
<magicaltrout> magic
<magicaltrout> told you PCdude we all have those
<PCdude> magicaltrout: best guess is something with the network card; when installing with only one card it fails, but that's it and frankly I don't care anymore, it works now
<x58> Is there a helper in charmhelpers for restarting  a systemd service?
<lazyPower> x58 - i think thats a missing feature, however there is a juju-reboot hook tool that will safely reboot a unit in the context of juju + containers that may be running on the host (lxd based)
<lazyPower> x58 https://jujucharms.com/docs/stable/reference-hook-tools#juju-reboot
<x58> lazyPower: I found charmhelpers.core.services.base which has service_restart which is a wrapper around charmhelpers.core.host.service_restart
<x58> or something like that.
<x58> Anyway, my @when_file_changed hook correctly restarts my service.
<x58> As for other accomplishments... published my second charm today =)
<x58> https://jujucharms.com/u/bertjwregeer/snmpd
<lazyPower> nice! ^5
<x58> ^5
<lazyPower> oo your readme is from the apt layer
<x58> marcoceppi: https://jujucharms.com/u/bertjwregeer/snmpd (another charm built in anger ;-))
<x58> Refresh a couple of times..
<x58> It should show my .rst
<x58> which isn't rendered ;-)
<x58> Unless I need to make cs:~bertjwregeer/snmpd-2 public? Although I am not sure that should be required since I can see it...
<marcoceppi> x58: weird the RST isn't rendering, this might be a bug in our GUI
<x58> marcoceppi: It is. I have an open bug for it on Github
<x58> marcoceppi: https://github.com/CanonicalLtd/jujucharms.com/issues/313
<marcoceppi> x58: cool thanks!
<marcoceppi> charm looks awesome though
<x58> Thanks :-)
<x58> Ultimately ended up being fairly simple, but the docs aren't all that nice to get to that point ;-)
<x58> Also, charm proof is angry that I don't have any "provides", but I am not sure what my charm would provide exactly.
<kwmonroe> cory_fu: halp!!
<cory_fu> kwmonroe: What up?
<kwmonroe> cory_fu: http://paste.ubuntu.com/23117537/
<kwmonroe> that's pip 8.1.2 in charmbox.. is it me pip or me python-distutils-extra?
<cory_fu> kwmonroe: Do you have an old version of the layer:apt?
<x58> marcoceppi: BTW, even with layer.yaml in layer:apt having ignore: ['README.md'] in my layer.yaml won't let charm build succeed.
<cory_fu> kwmonroe: Or a modified version?  There's no wheelhouse.txt file in the apt layer
<kwmonroe> yeah cory_fu.. i think you're right
<kwmonroe> (rebuilding)
<kwmonroe> all good kwmonroe
<kwmonroe> heh
<kwmonroe> derp
<kwmonroe> all good cory_fu
<cory_fu> :)
<kwmonroe> you were right.. this time ;)
<lazyPower>  kwmonroe - you can embed github repos in a wheelhouse like the openstack-layer has done - https://github.com/openstack/charm-layer-openstack/blob/master/wheelhouse.txt
<lazyPower> just fyi
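For reference, a wheelhouse.txt using that trick is just pip requirement syntax; the repo, tag, and package names below are illustrative, not taken from the chat:

```
# wheelhouse.txt -- each line is a pip requirement; a VCS URL pins a
# dependency straight from GitHub (repo and tag here are examples)
git+https://github.com/example/some-python-lib.git@v1.0#egg=some-python-lib
six>=1.10.0
```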
<kwmonroe> gracias lazyPower!
<lazyPower> anytime homie
<kwmonroe> two Ms in hommie, hommie.
<x58> Cue "you are not my hommie, pal" :P
<kwmonroe> lol
<lazyPower> dont call me buddy, guy!
<x58> Abbott and Costello. My favourite still has to be Who's on First.
<lazyPower> thats better than my weak reference to an old adam sandler routine before he lost any sense of credibility :)
<x58> lol
<x58> For those nostalgic: https://www.youtube.com/watch?v=kTcRRaXV-fg
<catbus1> Hi, is agenda available for the upcoming juju charmers summit?
<marcoceppi> catbus1: yes, a tentative one is
<catbus1> marcoceppi: Can I see it now and share with an attendee from a technology partner company? Or is it too early to share now?
<marcoceppi> catbus1: nope, we mailed the Juju mailing list on it
<marcoceppi> catbus1: https://lists.ubuntu.com/archives/juju/2016-August/007743.html
<catbus1> marcoceppi: thank you!
<bdx> marcoceppi: I seem to be hitting some kind of limit on my aws charm-dev account -> http://paste.ubuntu.com/23118016/
<bdx> marcoceppi: can you clear out my instances ... or something ..
<bdx> pls
<rajith> hi while bootstrapping on juju2 beta 16 getting error : ERROR cmd supercommand.go:458 creating LXD client: Get https://10.35.77.1:8443/1.0: Unable to connect to: 10.35.77.1:8443 ERROR failed to bootstrap model: subprocess encountered error code 1
<bdx> rajith: you need to run `sudo lxd init` again, and configure lxd for api access
<bdx> rajith: to my knowledge, this means you need to first wipe your existing lxd containers
<bdx> rajith: and images
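A hedged sketch of what "configure lxd for api access" can look like; later in this log, Guest46979 reports that setting core.https_address unblocked bootstrap (the wildcard address below follows that report):

```
# Re-run the interactive LXD setup (bridge, storage, network)
sudo lxd init
# Expose the LXD REST API so the juju client can reach it
lxc config set core.https_address "[::]"
```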
<bdx> mbruzek: http://paste.ubuntu.com/23118148/
#juju 2016-09-01
<stub> x58: yes
<x58> stub: Unfortunately adding a layer.yml doesn't fix the ignore issue :-(
<stub> x58: I don't think there is anything else I can do in the apt layer. You might need to make a local fork and remove README.md from it until it can be addressed in charm-tools.
<stub> Maybe this is why nobody else documents their layers ;)
<x58> stub: README.md is also coming from the layer:basic :P
<x58> If I only have a single layer I pull in layer:basic for example, ignore works just fine, if I pull in multiple layers, things go sideways.
<x58> rm build/mycharm/README.md :P
<x58> before doing a charm push . works :P
<x58> stub: Appreciate the work you do :-)
<kjackal> Hello Juju World!
<KpuCko> yesterday everything went fine with juju; today i started a new installation, same way as yesterday, but when i try to deploy an application i got: panic: unknown channel "edge"
<KpuCko> goroutine 1 [running]:
<KpuCko> panic(0x1a24400, 0xc8200d7320)
<BlackDex> Hello there, if i install neutron-gateway via juju (trusty-liberty) the neutron-gateway doesn't have l3 agent installed.
<zeestrat> KpuCko: What version of Juju are you on?
<KpuCko> 2.0
<KpuCko> zeestrat http://pastebin.com/MBusU0U5
<zeestrat> KpuCko: It looks like you might be on the beta15 release. There was an update to the charm store which breaks that release (https://blog.jujugui.org/2016/08/30/jujucharms-com-updated-with-new-channel-support/). I suggest you update your Juju to beta16.
<KpuCko> how to update?
<KpuCko> im using ubuntu 16.04 lts with the repository package
<KpuCko> i do apt-get update && apt-get upgrade, and juju stay with the same version
<KpuCko> same issue when i add latest stable release of juju with ppa repository
<zeestrat> KpuCko: Ah, I see. Juju is currently transitioning to version 2.0, but it is still in beta release. Unfortunately due to the transition the current default docs and packages in the stable repository are a bit of a mess. If you want to continue to use Juju 2.0 beta releases, then I suggest you add ppa:juju/devel repo and install beta16. Otherwise you can
<zeestrat> install juju-1.25 package which is the current stable one.
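The upgrade path zeestrat describes, spelled out (assuming the devel PPA package is named juju-2.0, as elsewhere in this log):

```
sudo add-apt-repository ppa:juju/devel
sudo apt-get update
sudo apt-get install juju-2.0   # pulls in the beta16 build from the devel PPA
```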
<PCdude> magicaltrout: I have a trouble with JUJU now....
<PCdude> http://imgur.com/BCOZSpC
<PCdude> any idea?
<rock__> Hi. How can I set bugs-url and homepage  for my newly developed charm? Please anyone respond to this.
<PCdude> I think the environments.yaml is wrong, but I have no idea what is wrong with it
<PCdude> magicaltrout:  the rest of it: http://askubuntu.com/questions/819506/juju-bootstrap-fails-with-openstack-cloud-installer
<babbageclunk> PCdude: The screenshots you're posting don't show the error. Can you put the log file you're showing in nano on a pastebin (like paste.ubuntu.com) instead?
<kivilahtio> Hi! I am trying to add a Raspberry Pi running Raspbian-linux as a new machine to my juju model. juju add-machine ssh:pi@xxxx tells me that it cannot find the provisioning script. How can I add a provisioning script? I cannot find a clear place in the source code where the provisioning scripts are generated?
<kivilahtio> I tried to make my Raspbian identify itself as 'xenial' instead of 'jessie', but I still get the same error. Maybe it is because of the ARM architecture?
<kivilahtio> I am using juju 2.0beta15
<kivilahtio> I would like to start using juju in our whole server infrastructure, but Raspberry Pi-based monitoring devices are part of that and I would like to control the lifecycle of the software therein using juju actions
<PCdude> babbageclunk: here is the pastebin: http://pastebin.com/42JXtMmD . this is the output of ~/.cloud-install/*.log
<kivilahtio> I would imagine if I add the Raspberry Pi as a new machine to my model, then deploy a charm to the machine, I should be able to manage them  using juju?
<PCdude> babbageclunk:  the error what I get is the above one in this question: http://askubuntu.com/questions/819506/juju-bootstrap-fails-with-openstack-cloud-installer
<kivilahtio> without actually instantiating any virtualization in the Raspbian itself
<babbageclunk> PCdude: The juju output (the line starting "Problem during bootstrap") is still truncated - it looks like you cut and paste from within nano?
<babbageclunk> PCdude: If you install pastebinit you can paste a file directly without needing to cut and paste it.
<PCdude> babbageclunk:  sorry that is all there is.. maybe I can raise the verbosity level?
<babbageclunk> PCdude: no, I'm sure there's more in the file, it's just that the line is long so nano isn't displaying it.
<PCdude> babbageclunk: yup u are right, I thought I copied all of it
<PCdude> babbageclunk:  let me make another pastebin
<PCdude> one second
<babbageclunk> PCdude: use pastebinit - it's a command-line util that will let you put the whole file into the pastebin.
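For the record, pastebinit usage is a one-liner (the path below is a placeholder):

```
sudo apt-get install pastebinit
pastebinit /path/to/your.log   # uploads the file and prints the paste URL
```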
<PCdude> babbageclunk: here is the pastebin: http://pastebin.com/ujM62KZH
<PCdude> babbageclunk: I will install that program next, did not know that existed
<PCdude> babbageclunk: is that enough information?
<babbageclunk> PCdude: sorry, on a call
<PCdude> babbageclunk: np, I am just curious what the problem is so I can fix it
<KpuCko> zeestrat  thanks a lot
<PCdude> babbageclunk: I am afk for 20 minutes, but I am back after that
<babbageclunk> PCdude: I reformatted the last bit of the Juju output http://paste.ubuntu.com/23119674/
<babbageclunk> You can see that downloading the agent failed because it couldn't get to https://streams.canonical.com/juju/tools/agent/1.25.6/juju-1.25.6-trusty-amd64.tgz
<PCdude> does that mean the node does not have internet?
<babbageclunk> I can download that, so possibly there's a connectivity problem on your end?
<babbageclunk> PCdude: I think so
<PCdude> babbageclunk: uhm strange, if u look a little earlier in the dump file, u can see it is reaching for the ubuntu archives and it gets there too
<PCdude> when I click the link of JUJU it works on my own PC, so no firewall crap or something like that
<babbageclunk> Hmm, not sure then
<PCdude> I will aquire a node in MAAS and try to download that package manually, just to check that it can get there
<babbageclunk> PCdude: I was just about to say the same thing
<babbageclunk> I think maas does some proxying for apt packages, so that might be working while general connectivity from a node to the internet isn't.
<PCdude> babbageclunk: uhm good point, that is true.
<PCdude> MAAS is deploying a node as we speak
<PCdude> babbageclunk:  ok it is deployed how do I login again with the SSH key? (been some time haha)
<babbageclunk> PCdude: I think you had to specify a public key in maas?
<PCdude> yeah did that, only forgot the last letter of the key.... I am in
<PCdude> and bingo, it can't get to the internet
<PCdude> babbageclunk:  the server with MAAS installed has internet, but the nodes should route through that node
<babbageclunk> PCdude: can it resolve DNS? Something I always forget is to set the upstream DNS - under Settings in the MAAS UI.
<PCdude> babbageclunk:  haha, just what I was checking and yes it can resolve DNS it can also ping the gateway, but just not to the outside
<PCdude> let me check the settings again
<PCdude> maybe some stupid setttings
<babbageclunk> PCdude: Ok, sounds like you're on it
<PCdude> babbageclunk: at least thanks for the help, I will try some stuff and will keep u updated
<babbageclunk> PCdude: :)
<PCdude> babbageclunk: how can I manage routing in MAAS?
<babbageclunk> PCdude: I'm not sure, sorry
<PCdude> babbageclunk: every apt command can indeed reach the internet. pings to the gateway succeed and dnslookups succeed too. ping an ip address outside the LAN fails and also to a hostname
<babbageclunk> PCdude: is the gateway for the subnet the nodes are on the IP of the controller, or the switch?
<babbageclunk> PCdude: My knowledge gets a bit spotty here, you might need someone else's help
<PCdude> its the IP of the controller
<PCdude> babbageclunk:  ah ok, yeah the MAAS channel is pretty silent most of the time, maybe u know another channel?
<babbageclunk> PCdude: You could try setting it to the IP of the switch instead (not sure that's right though).
<babbageclunk> PCdude: You could ask on the maas mailing list?
<PCdude> http://askubuntu.com/questions/717803/openstack-install-problem-with-juju-bootstrap/718820#718820
<babbageclunk> PCdude: this is the point where I'd often ask dimitern ;)
<PCdude> babbageclunk: that sounds promising
<PCdude> uhm ok haha, I am gonna remember that name
<dimitern> :)
<dimitern> PCdude: I'm having a look now
<PCdude> good nice!
<dimitern> PCdude: AFAIK MAAS 2.1, currently in its recently released alpha1, has some support for managing routes via the API
<PCdude> ah ok, well to try openstack and landscape I will need to stick to the 1.x version of MAAS
<PCdude> JUJU is still beta and does not work for me yet
<PCdude> dimitern: but the link I provided might be a solution
<dimitern> PCdude: but otherwise, earlier versions do the bare minimum, like allow you to specify different gateways for different subnets, which might not work that well
<PCdude> dimitern: indeed, very very basic stuff only
<dimitern> PCdude: if you only need to allow nodes to access the internet via NAT, that's easy - do basically what's described in that askubuntu answer
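A minimal NAT sketch along the lines of that askubuntu answer, run on the MAAS controller; the interface name eth0 and the node subnet below are assumptions to adjust for your setup:

```
# Enable IP forwarding on the MAAS controller
sudo sysctl -w net.ipv4.ip_forward=1
# Masquerade traffic from the node subnet out the external interface
# (eth0 and 192.168.100.0/24 are placeholders)
sudo iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE
```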
<PCdude> yeah, right now, that is sufficient, but I know myself pretty good with those things and I will be fucking around with those files within 1 week
<PCdude> dimitern: are there maybe extra programs I can add to manage all that?
<dimitern> PCdude: I'm sure there are more user-friendly ways to manage a firewall on linux than iptables CLI :), if that's what you're asking about
<dimitern> but usually I never bother with these
<PCdude> dimitern: added the firewall rules, lets try again :)
<PCdude> dimitern: and babbageclunk many thanks I have internet now on my nodes!
<dimitern> PCdude: \o/ awesome! :)
<babbageclunk> PCdude: woot!
<magicaltrout> ah the chaos of my life
<magicaltrout> now to spin up demo #2
<cholcombe> is there a force flag for application removal?  I have an app that has no machines attached to it that i can't remove
<lazyPower> not that i'm aware of cholcombe
<lazyPower> anything in the logs associated with the application?
<cholcombe> lazyPower, lemme check
<cholcombe> lazyPower, nadda
<lazyPower> single service left in the model?
<lazyPower> is it related to anything?
<__szilard_cserey> hi
<__szilard_cserey> hi
<lazyPower> cory_fu kwmonroe https://github.com/juju-solutions/charmbox/pull/55
<lazyPower> need an express review on this; if we can land it before seman comes in we'll have CWR fixed in time to intercept today's runs with a working build
<cory_fu> lazyPower: Am I reading that right that installing snaps depends on systemd?
<lazyPower> yep
<cory_fu> ...
<lazyPower> well at least system logging
<cory_fu> Why?
<cory_fu> *sigh*
<lazyPower> there's no init, so theres no logging socket
<lazyPower> same issue we ran into with conjure-up that stokachu  fixed
<lazyPower> i'll get a bug filed and xreference
<lazyPower> and we can follow up on the snappy list
<magicaltrout> I need to NAT juju gui from LXD to an external port
<magicaltrout> not websocket protocol
<magicaltrout> what have i forgotten?
<magicaltrout> actually ssh portforward may suffice
<magicaltrout> nope
<magicaltrout> gaa annoyance
<magicaltrout> oh it does
<magicaltrout> if i pass the full url
<lazyPower> nice
<magicaltrout> https://ibin.co/2taneCgvEp1T.png
<magicaltrout> tonights demo
<magicaltrout> all in LXD
<cory_fu> lazyPower: Reviewed and tested.  +1
<lazyPower> nice magicaltrout
<lazyPower> cory_fu ta, waiting for mbruzek to give final +1 and we'll cut a release
<lazyPower> perfect, merged and cut
<lazyPower> thanks for the help gents *hat tips*
<magicaltrout> any idea on this one lazyPower
<magicaltrout> The following errors occurred while retrieving bundle changes: cannot read bundle YAML: cannot unmarshal bundle data: yaml: unmarshal errors: line 1: cannot unmarshal !!str `xenial` into charm.legacyBundleData
<magicaltrout> exported my stuff
<lazyPower> err
<magicaltrout> and tried to import it
<lazyPower> pastebin the bundle?
<magicaltrout> on a new controller
<magicaltrout> http://paste.ubuntu.com/23120903/
<lazyPower> imported via the gui i presume?
<magicaltrout> same juju installation , just exported my lxd model and wanted to import it into a aws one
<magicaltrout> yeah
<lazyPower> ok hang on
<lazyPower> i'm trying from cli first
<lazyPower> then i'll try import into a gui
<magicaltrout> command line works lazyPower don't worry about it
<magicaltrout> gui bug i guess
<lazyPower> well it appears to be a gui bug
<lazyPower> yeah
<lazyPower> we should get that filed
<magicaltrout> can you do it, i'm well short on time before i need to head down
<magicaltrout> (pretty please)
<lazyPower> you betchya
<magicaltrout> thanks
<lazyPower> magicaltrout - which log did you get that output from?
<lazyPower> browser or unit log on the controller?
<magicaltrout> the error message?
<magicaltrout> its just spat out by juju gui to a warning modal
<lazyPower> ok
<lazyPower> magicaltrout https://github.com/juju/juju-gui/issues/1966 and https://bugs.launchpad.net/juju/+bug/1619389
<mup> Bug #1619389: juju-gui fails to parse exported bundle in beta-16 <juju:New> <https://launchpad.net/bugs/1619389>
<magicaltrout> thanks lazyPower
<lazyPower> magicaltrout - its apparently fixed, will come with the next release of hte gui
<magicaltrout> k
<Anita> Anita
<magicaltrout> that you are!
<Guest46979> I am getting following error
<Guest46979> root@ptcvm3:~# juju bootstrap local.lxd-test localhost Creating Juju controller "local.lxd-test" on localhost/localhost Bootstrapping model "controller" Starting new instance for initial controller Launching instance ERROR failed to bootstrap model: cannot start bootstrap instance: unable to get LXD image for ubuntu-xenial: The requested image couldn't be found. root@ptcvm3:~#
<Guest46979> during juju bootstrap
<Guest46979> any idea, why I am getting
<magicaltrout> someone else had that yesterday
<magicaltrout> i think the solution was to make sure you're on beta16
<Guest46979> beta16 has already an issue
<lazyPower> Guest46979 - update juju-2.0 by adding the devel ppa, and then running sudo apt update && sudo apt upgrade juju-2.0
<Guest46979> thats why I downgraded
<Guest46979> to beta15
<magicaltrout> thats beta's for you
<Guest46979> Yes, Its beta15
<magicaltrout> indeed
<magicaltrout> which  doesn't work with lxd
<Guest46979> beta15 does not work with lxd?
<magicaltrout> nope
<magicaltrout> but beta 16 does
<Guest46979> but in one of the other machines i have beta15
<Guest46979> which works fine
<lazyPower> Guest46979 - beta16 added the image fetching routine and made the entire lxd bootstrap experience more robust
<Guest46979> lazypower_: do you suggest me to apply beta16?
<lazyPower> Guest46979 - yep, here's the release notes of what changed https://jujucharms.com/docs/stable/temp-release-notes
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Charmer Summit Sept 12-14th  http://summit.juju.solutions || Juju 2.0 beta-16 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<lazyPower> cory_fu sorry i missed one https://github.com/juju-solutions/charmbox/pull/56
<lazyPower> when i tested i didn't give it a full bundletester test, and it failed miserably. http://ci.containers.juju.solutions:8080/job/Kube-Core-Bundle-Test/12/console
<lazyPower> but that little bump to six seems to resolve it
<Guest46979> lazyPower_:ok let me try beta16
<cholcombe> is there a way in actions.yaml to say i want the user to specify one parameter or the other but not both?
<cholcombe> i suppose i could make one the default
<marcoceppi> cholcombe: check jsonschema, not sure but probably
<cholcombe> marcoceppi, yeah good point.  thanks
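In plain JSON Schema, "one parameter or the other but not both" is a oneOf over required lists; whether juju's actions.yaml honors this subset is exactly the open question above. A hypothetical action definition (action and parameter names are invented):

```yaml
myaction:
  description: Example action with mutually exclusive params.
  params:
    device:  {type: string}
    journal: {type: string}
  # oneOf passes only when exactly one branch validates, so supplying
  # both params (or neither) is rejected
  oneOf:
    - required: [device]
    - required: [journal]
```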
<icey> bdx I deployed your vault, consul, and consul-agent charms but the vault charm didn't seem to work with my charm
<Guest46979> lazyPower_:lxc config set core.https_address [::] , with this work around, finished bootstrapping.
<lazyPower> Guest46979 - ah i didnt setup ipv6 networking on my lxd bridge
<lazyPower> makes sense though
<Guest46979> lazyPower_:it was mentioned in one of the mails. I didn't setup ipv6...
<Guest46979> lazyPower_:Thanks
<bdx> icey: how so?
<icey> bdx it almost seems to not send keys in response to the token.requested state
<icey> bdx please tell me I'm just using it wrong :-P
<icey> it's my unlock_ceph charm that isn't working
<bdx> whaa ... let me stand up a deploy real quick
<icey> want to see?
<icey> bdx ^
<bdx> icey: ya
<bdx> icey: deploying now ... did you deploy -> https://gist.github.com/jamesbeedy/102d1bbcfe5dbf6227f014348026d0a3
<bdx> ?
<icey> basically, PM'd you a hangouts link
<Brochacho> Is the juju 2.0 getting started guide up to date?
<Brochacho> Seem to get 'ERROR failed to bootstrap model: cannot start bootstrap instance: unable to get LXD image for ubuntu-xenial: The requested image couldn't be found.' when bootstraping using lxd
<pragsmike> Bro I got that same error
<pragsmike> You can work around it by logging into the container host and copying in that image under that alias
<pragsmike> lxc image copy ubuntu:16.04/amd64 local: --alias ubuntu-xenial
#juju 2016-09-02
<Brochacho> pragsmike: Thanks! Figured it was an image name issue, do you know if there's an open issue for this? Every search I made turned up nothing
<pragsmike> I haven't looked into it further, it could be the repository doesn't match what the tool assumes
<pragsmike> I'm still exploring the list of bugs, but the lxd support seems spotty.  I can't get containers on different machines to talk to each other, for instance.
<pragsmike> as long as I deploy all containers to one machine, it's ok :/
<Brochacho> pragsmike: Looks like it's reported: https://bugs.launchpad.net/juju/+bug/1618948
<mup> Bug #1618948: Can't bootstrap localhost cloud <juju:Incomplete> <https://launchpad.net/bugs/1618948>
<pragsmike> hmm, that doesn't seem to be related, you sure that # is correct?
<kjackal> Hello Juju World!
<stub`> bcsaller: Do you know what is going on here? http://pastebin.ubuntu.com/23123739/ It seems I'm missing build tools on a bare metal trusty deploy. Testing locally works no probs.
<stub`> bcsaller: oooh.... python3-pip only recommends build-essential (and python3-setuptools, and python3-wheel), so some deploys won't get those
<rock__> Hi. I want to deploy openstack using [MAAS+openstack-base-bundle]. Can anyone please give me the detailed hardware requirements and reference links for doing that.
<shruthima> hi kwmonroe , how to push the ibm-im charm for the xenial series into the charm store? (charm push . cs:~ibmcharmers/xenial/ibm-im)
<shruthima> hello team, how to build a deployable charm under xenial directory?
<stub`> shruthima: metadata.yaml defines the supported series, so if you have "series: [xenial]" or "series: [xenial, trusty]" you should be good. The default series is the first one listed there.
<shruthima> ok, if we give xenial first the deployable charm will be built under the xenial directory, is it?
<stub`> If you list a single series, the charm will be built in a directory with the series name. If you list multiple series, it will be built under a 'builds' directory.
<stub`> Since Juju 1.25, it doesn't matter where it gets built to as 'juju deploy ~/tmp/builds/mycharm' works. You don't need a $JUJU_REPOSITORY setup with magic subdirectories.
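The metadata stub describes looks like this (charm name and summary are placeholders):

```yaml
# metadata.yaml -- the first entry under 'series' is the default series
name: mycharm
summary: Example multi-series charm.
description: |
  Listing multiple series builds the charm under a 'builds' directory;
  a single series builds under a directory named for that series.
series:
  - xenial
  - trusty
```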
<shruthima> oh k thanku
<shruthima> to push a charm for the xenial series into the charm store do we need to use charm push . cs:~ibmcharmers/xenial/ibm-im?
<stub`> If you have a multiseries charm, 'charm push . cs:~ibmcharmers/ibm-im' is fine, and all three URLs will have your new charm (ibm-im, trusty/ibm-im and xenial/ibm-im)
<stub`> I think pushing to a specific series works too, although I haven't tried it
<shruthima> we have IBM-IM charm for trusty in charm store , now i want to push for xenial with same name
<stub`> But the charm store is smart enough to look at the series: in metadata.yaml and do the right thing I think.
<shruthima> root@z10lnx04:~/charms/xenial/ibm-im# charm push . cs:~ibmcharmers/xenial/ibm-im --resource ibm_im_installer=/root/placeholder.zip --resource ibm_im_fixpack=/root/placeholder.zip ERROR cannot post archive: charm name duplicates multi-series charm name cs:~ibmcharmers/ibm-im-7
<stub`> Ok. I'd try pushing to xenial/ibm-im as you suggested.
<shruthima> i am getting that error if i try to push with the xenial series
<stub`> Someone has already pushed a multiseries charm to cs:~ibmcharmers/ibm-im then. I don't know how to override only the xenial charm.
<stub`> I think pushing to cs:~ibmcharmers/ibm-im will do the right thing and only change the series listed in your metadata.yaml, but I'm not sure.
<shruthima> ohh ok thanku stub`
<rock__> Hi. I pushed my charm to the charm store. I am trying to set the BUG URL. Here I am facing an issue. pasted issue info : http://paste.openstack.org/show/566078/
<stub`> rock__: Did that first command work? I've used the cs: url to set bugs-url (so charm set cs:~siva9296/kaminario-openstack bugs-url="https://bugs.launchpad...")
<rock__> stub: Yes. first command I ran. But it ran with out giving any response.
<stub`> rock__: When I go to https://jujucharms.com/u/siva9296/kaminario-openstack , I see a submit a bug link pointing to https://bugs.launchpad.net/charms/+source/kaminario-openstack/+filebug , so something worked even if it didn't look like it.
<stub`> I don't think 'charm set' says anything on success.
<stub`> oh, that bugs url isn't valid
<rock__> stub:  So please tell me what I need to do now?  I tried with charm set cs:~siva9296/kaminario-openstack bugs-url="https://bugs.launchpad...". But it was giving: This site can't be reached. bugs.launchpad...'s server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN
<stub`> The command you are running is fine, but the bugs-url you are trying to set is incorrect. It should point to your bug tracker, but I don't know where that is.
<stub`> Is your charm in a branch on Launchpad? What is the branch URL?
<stub`> Or maybe the openstack people here know.
<lazyPower> rock__ o/
<lazyPower> rock__ - i understand you're having some issues publishing to the juju charm store?
<jcastro> welcome!
<rock__> lazypower: To push the charm to the charm store we used https://jujucharms.com/docs/2.0/authors-charm-store. And we used https://github.com/openstack-charmers/release-tools/blob/master/push-and-publish#L138 to set the bug URL. Can you please tell me how to set a bug URL?
<lazyPower> rock__ - charm set cs:~lazypower/foo bugs-url=https://foobar.co
<lazyPower> as an example
<rock__> lazypower: I tried as $ charm set cs:~siva9296/kaminario-openstack-2  bugs-url=https://kaminario-openstack.co.  And from the UI when i click on "submit a bug" for the charm it was giving: bugs.launchpad...'s server DNS address could not be found. DNS_PROBE_FINISHED_NXDOMAIN
<lazyPower> rock__ - you've apparently set bugs-url to https://kaminario-openstack.co/
<mbruzek> rock__: that url does not resolve for me
<mbruzek> rock__: https://kaminario-openstack.co/
<lazyPower> rock__ - additionally it may take the display a moment to update. you can use charm show cs:~siva9296/kaminario-openstack to see the information about your charm
<mbruzek> rock__: Most of the time the bugs url is a github issue tracker URL, something like this https://github.com/mbruzek/jenkins-jobs/issues
<rock__> mbruzek:So my source has to be on github (or) launchpad? Is any bug tracker required? I don't have any bug tracker as of now. How can I do that?
<mbruzek> rock__: Github or Launchpad are not required . But the URL should resolve or the bug link will not work
<rock__> lazypower: I tried a moment later. Then also it was giving the same issue.
<lazyPower> rock__ - well, lets start from step 1. Where do you want to manage bugs for your charm? This typically resides with the charm/layer source
<lazyPower> its up to you as the author which system to use. github, launchpad, bitbucket
<mbruzek> rock__: So I would put the url to a community page, or somewhere the user could get some help with this charm.
<lazyPower> are 3 of the popular choices
<stub`> I've put a promulgation request up at https://bugs.launchpad.net/charms/+bug/1619577 , which is I hope somewhere that gets it onto the right track?
<mup> Bug #1619577: Review and promulgate telegraf <Juju Charms Collection:New> <https://launchpad.net/bugs/1619577>
<rock__> mbruzek/lazypower: do I need to do any extra work with launchpad/github/bitbucket to manage bugs? Please tell me what I need to do right now from a user's perspective.
<lazyPower> stub` - have you seen http://review.jujucharms.com?
<mbruzek> rock__: Where is your layer code?
<stub`> Ooh, shiny
<mbruzek> rock__: You are building out of a version control system of some kind right? Any version control system should have a bug/issue/problem link. Unless it is internal, in that case you should make a page that tells users of your charms how to get a hold of you.
<rock__> mbruzek: what do mean by layer code?
<mbruzek> rock__: The charm code then
<stub`> tvansteenburgh1: You should set the defaults of 'share email' and whatever else you need with the SSO login. I can't remember if you do that at your end, or if it is configuration in the SSO.
<mbruzek> rock__: The code that you wrote to build this charm, I call that the "layer code"
<tvansteenburgh1> stub`: i tried
<rock__> mbruzek: Actually, I pushed my charm to store from local machine.
<mbruzek> rock__: We have found it is best practice to use a version control system for the code.
<rock__> mbruzek:  Thank you. I understood that If we have a version controll system for my code and if we set bug url link to that then that will be good.
<rock__> marcoceppi: As we discussed before. This issue was not solved. Series mismatch issue between my charm and cinder while adding relation from JUJU GUI. pasted info  http://paste.openstack.org/show/566123/
<rock__> Hi.  I have a question. For suppose acc. to openstack-base bundle requirements I have taken machines. Which node do I have to take as the main node to deploy the bundle? Once I deploy the bundle, how does it distribute service charms to all the hardware machines that we have taken?
<rock__> Can any one provide me the network architecture of a [MAAS + openstack-base bundle] setup?
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Charmer Summit Sept 12-14th  http://summit.juju.solutions || Juju 2.0 beta-17 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<kwmonroe> petevg: cory_fu, any beef with my repo pr?  i'd like to get this in before i update ~bigdata-dev with xenial charms: https://github.com/juju-solutions/layer-apache-bigtop-base/pull/46/files
<kwmonroe> kjackal ^^ you too, but you should be eod by now :)
<kwmonroe> carrot cory_fu:  if you +1 my pr, i'll help you with the ODSC CFP
<kwmonroe> lazyPower: i'm having trouble getting to http://ci.containers.juju.solutions:8080/.  is it up for you?
<lazyPower> kwmonroe - i ran bundletester in that environment and didn't juju lock it, so i learned some hard lessons last night
<davecore> jcastro: Hi
<kwmonroe> lol lazyPower
<kwmonroe> reset: false ftw
<lazyPower> kwmonroe https://jujucharms.com/u/lazypower/container-ci
<lazyPower> its all charmed up however, and i have 2 instances of it deployed and purring atm
<lazyPower> i'll get the DNS back online before i EOD with its final resting place and send you new credentials
<lazyPower> it'll be in the same model i shared with you, but new user login/pass
<kwmonroe> ack lazyPower... would you mind kicking a new charmbox devel build?  i'd like to beat on beta 17 this afternoon
<lazyPower> kwmonroe - i have jujubox built and pushed at beta17
<lazyPower> do you mind cloning locally and building a beta17 charmbox until we verify the test tooling updates still work for b17?
<kwmonroe> roger that lazyPower
<kwmonroe> but what are the chances test tooling and juju would be incompatible at this stage in the game?  i mean, we're in double digit betas dude.
<lazyPower> i dont know - why did beta16 break it? :)
<lazyPower> so pretty good
<kwmonroe> i was make joke
<lazyPower> oh!
 * lazyPower is dense on the interwebs
<jrwren> beta is the new alpha.
<kwmonroe> ha!
 * lazyPower clicks the "like" button on jrwren's last post
 * lazyPower assigns a thumbs up emoji
<jrwren> ð ðð¾
<lazyPower> kwmonroe - let me know what you find out with charmbox, i just kicked off a build here too
<lazyPower> if its thumbs up in both places we'll cut a release
<cory_fu> kwmonroe: Added a comment and replied to petevg's comment.  Other than the tests and verifying the requiredness of the layer options, I think it's good
<cory_fu> kwmonroe, petevg: Just checked; those options are *not* marked as required, but they do have default values.  I'm not sure if it's possible to actually remove the default for those
<petevg> kwmonroe, cory_fu: the .get for those values just happens to make unit testing a lot easier. I don't mind moving away from it, but it will require a lot of fixing in the tests ...
<plars> marcoceppi: Hey, I'm just starting to try to figure out reactive charms, and I saw you have a uwsgi layer. I have a really simple flask app that I'd like to make a charm for and deploy, so I thought this might be a good starting place
<plars> marcoceppi: it's not clear to me how to define the config for it using the uwsgi layer though, or maybe I'm completely on the wrong track? Not really sure... having trouble finding a lot of good examples of reactive charms, especially ones using different layers
<marcoceppi> plars: no worries, let me find you an example
<lazyPower> https://github.com/juju-solutions/bundle-elk-stack/pull/4 -- could use some community eyes on this one, low hanging fruit, charm revision bumps
<marcoceppi> plars: here's what I wrote the uwsgi layer for https://github.com/marcoceppi/layer-charmsvg
<marcoceppi> plars: it's an example of uwsgi & nginx for a flask app
<marcoceppi> lazyPower: have you tested this with bundletester?
<lazyPower> marcoceppi - running now
<marcoceppi> lazyPower: LGTM pending tests
<lazyPower> marcoceppi - should i add this to build-cloud?
<lazyPower> its kind of a cornerstone thing that a lot of people look for
<plars> marcoceppi: awesome I'll take a look, thanks!
<marcoceppi> lazyPower: sure :)
<lazyPower> rgr
<lazyPower> magicaltrout - is ELK good enough or do you *really* need Graylog?
<marcoceppi> plars: no worries, I wrote uswgi a little while ago, I'm sure I could improve it. With that it only really has one use case, my charmsvg layer
<marcoceppi> plars: feel free to reach out if things don't make sense, because it's probably me not you :)
<cory_fu> petevg: Yeah, I agree.  Though, we could actually just switch from using Mock() to MagicMock(), as it supports being used like a dict out of the box
<cory_fu> kwmonroe: Easy fix for the tests: http://www.voidspace.org.uk/python/mock/magicmock.html#magic-mock
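cory_fu's fix can be seen in a couple of lines — a minimal sketch (names hypothetical) of why `MagicMock` works where a plain `Mock` fails for dict-style access:

```python
from unittest.mock import MagicMock, Mock

# MagicMock pre-configures magic methods such as __getitem__,
# so it can stand in for a dict in unit tests:
options = MagicMock()
options.__getitem__.return_value = "xenial"
print(options["series"])  # xenial

# A plain Mock does not support the magic methods, so the same
# access raises TypeError:
try:
    Mock()["series"]
except TypeError:
    print("plain Mock is not subscriptable")
```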
<petevg> +1
<kwmonroe> thx for the feedback cory_fu petevg!  i'll remove the unit tests and merge away.
<cory_fu> lol, +1
<kwmonroe> (kidding)
<lazyPower> kwmonroe - testing tools were +1 LGTM on b17
<lazyPower> were your findings ~ the same?
<kwmonroe> lazyPower: i cheated and apt updated to b17.. didn't actually rebuild charmbox.  and i didn't exercise the test tools -- just your normal bootstrap, deploy, add, destroy.
<kwmonroe> that said, b17 was good to me
<lazyPower> kwmonroe - you were a big help today :)
<kwmonroe> lol
#juju 2016-09-03
<rock> Hi. I have a question.  For suppose acc. to openstack-base bundle requirements I have taken machines. Which node do I have to take as the main node to deploy the bundle? Once I deploy the bundle, how does it distribute service charms to the hardware machines that we have taken?
<rock>  please provide me the network architecture of a [MAAS + openstack-base bundle] based openstack setup.
<zeestrat> rock: The best you will find without diving into documentation is Dimiter's blogposts: http://blog.naydenov.net/category/juju/. Now please stop spamming all the channels with the same questions.
<rock> zeestrat: Sorry. And Thanks for your answer.
<zeestrat> rock: No worries. Good luck and have a good weekend!
<rock> zeestrat: Hmm. Thanks. Have a good weekend!
<rock> Hi. For JUJU Version 1.25.6-trusty-amd64  I am getting "ERROR cannot load cookies: file locked for too long; giving up: cannot acquire lock: open /home/ubuntu/.go-cookies.lock: permission denied" issue while deploying our "Kaminario-openstack" charm on Ubuntu-OpenStack-Autopilot Setup.
<rock> Issue pasted info :http://paste.openstack.org/show/566401/
<rock> please provide me some solution for this.
#juju 2016-09-04
<PCdude> hello everybody :)
<PCdude> hey admcleod_ lazyPower magicaltrout
<pragsmike> greets
<pragsmike> In beta17, using MAAS cloud, I'm having trouble, maybe related to [[https://bugs.launchpad.net/juju/+bug/1607964][#1607964]] but puzzlingly different.
<mup> Bug #1607964: juju2, maas2, lxd containers started with wrong IP, rely on dhclient to switch things <landscape> <juju:Triaged by rharding> <https://launchpad.net/bugs/1607964>
<pragsmike> What I'm seeing looks more like [[https://bugs.launchpad.net/juju/+bug/1566801][#1566801]] is still happening, but that's marked as fixed in beta15.
<mup> Bug #1566801: LXD containers /etc/network/interfaces as generated by Juju  gets overwritten by LXD container start <2.0> <cdo-qa-blocker> <landscape> <network> <juju:Fix Released by frobware> <https://launchpad.net/bugs/1566801>
<pragsmike> 'juju add-machine lxd:1' creates a container, but it doesn't have an address allocated by MAAS.
<pragsmike> Isn't it supposed to?
<pragsmike> It gets an address via dhcp from LXD's dnsmasq, which I didn't think was supposed to happen.
<pragsmike> I do have agent-stream: devel in the controller bootstrap, and confirmed that the host machine's agent.conf says 'upgradedToVersion: 2.0-beta17'
<pragsmike> can someone explain what the intended behavior is (ie what address should the container get) so I can help troubleshoot this?
<PCdude> http://askubuntu.com/questions/820925/how-do-i-set-a-dns-server-in-maas-that-will-be-passed-on-to-the-nodes
<pragsmike> I worked around it by bridging the containers to the host's interface, which allowed them to get DHCP'ed (dynamically) by MAAS
<pragsmike> but that's a crock, and I'd like to know how to do it right
#juju 2017-08-28
<digv> hi.. is there any function that can help to get ip-address of specific interface of machine deployed by charm?
<kwmonroe> if someone sees digv come back around, the bigtop charms have a helper to get the ip for an interface (or network range):  https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/lib/charms/layer/apache_bigtop_base.py#L68
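For readers without the bigtop layer handy, the same interface-to-IP lookup can be approximated by parsing `ip -o -4 addr show <iface>` output — an illustrative sketch only, not the helper linked above:

```python
import re

# One line of `ip -o -4 addr show eth0` output (sample captured for illustration):
SAMPLE = r"2: eth0    inet 10.0.3.45/24 brd 10.0.3.255 scope global eth0\       valid_lft forever"

def iface_ipv4(ip_addr_line):
    """Extract the IPv4 address from a single `ip -o -4 addr` output line."""
    match = re.search(r"inet (\d+\.\d+\.\d+\.\d+)/", ip_addr_line)
    if match is None:
        raise ValueError("no IPv4 address on this line")
    return match.group(1)

print(iface_ipv4(SAMPLE))  # 10.0.3.45
```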
<hallyn> jamespage: yo
<hallyn> (or in general) is anyone on here who is/has been part of setting up or maintaining the canonical in-house openstack cluster, juju-deployed?  I'm seeking guidance for hardware requirements.
<rick_h> hallyn: beisner might have a link for you. I thought we had some suggested docs sitting around
<hallyn> rick_h: thanks.  hey beisner  :)
<stormmore> o/ juju world
<beisner> o/ hallyn
<hallyn> hey beisner.  so do you typically use 3-4 nics per server for openstack deploys?  Do you get a set of like 5+ identical machines, or do you tailor them to storage, network, etc nodes?
<beisner> will holler back shorty, after lunch hallyn
<beisner> hi hallyn - now back for real:
<hallyn> \o
<beisner> we do it both ways actually.  in a couple of labs, i have a pile of identical machines.  two nics, two disks in each machine.
<beisner> but i know that most folks in production have certain machines geared more toward storage, and use maas tagging to identify and allocate those to ceph (for example).
<hallyn> two nics and two disks per host works ok for general usage for you then?
<hallyn> this isn't production, i just don't want to end up realizing that i can't run anything i want to on top of it bc it's just a toy :)
<beisner> yep that's exactly what two of our labs have.  then we don't have to pin applications to machines in any way, knowing that any one of them will have the goods needed.
<hallyn> just spinning rust disks?  like 2x1T?
<hallyn> if there aren't any (and i haven't seen any) blog posts talking about the internal juju-deployed stacks, you should write them :)
<beisner> i like to use 1xSSD as a bcache front to spindles.
<hallyn> so one ssd one regular?  hmm.
<hallyn> wonder whether maas/juju will automatically set that up the smart way
<beisner> depends on tolerance/needs, but yep.  in a case where we can afford to lose a node (this is a cloud afterall), then that is tolerable for my use case.
<beisner> maas is bcache-aware.  there is a tickbox essentially.
<hallyn> cool
<beisner> juju doesn't really need to know
<hallyn> hm, where's that juju gui demo site
<beisner> this one?:  https://jujucharms.com/new/
<hallyn> beisner: i landed at https://demo.jujucharms.com but looks the same :)
<hallyn> (just want to see what it recommends for a medium install)
<hallyn> thanks beisner
<beisner> yw hallyn - and ack on the blog post.  the tricky thing about that is, when we build those clouds, they are HA.  and with HA come network-specific things like VIPs and other machine-specific things.  we've not published an "HA bundle" to-date, since it would definitely not-work without users editing quite a few things.
<beisner> but still, planning to do something/some time with lots of caveats and notes.
<hallyn> beisner: yeah, but that's also exactly the kind of thing that potential juju users may not think of, get started, then run into trouble with, and then say "juju sucks, it's for demos only"
<beisner> hallyn - we do have this openstack charms deployment guide (not released yet) but you might find this quite helpful (just clone it, run 'tox', and read for now):  https://github.com/openstack/charm-deployment-guide
<beisner> this aims to address that audience.  the we-want-more-than-a-demo audience.
<hallyn> cool, thanks
<beisner> we have some MPs outstanding against that still, broken links, etc., but that is intended to publish by 17.10 to openstack.org.
<beisner> cheers, hallyn
<hallyn> hm, no mac brew version of tox :)
<hallyn> thanks beisner \o
#juju 2017-08-29
<gaurangt-> hi, I've a requirement where I need to get the specific network IP address in the charm for further processing. I know we can get unit IP address using private-address or public-address but what happens if the unit has more IP addresses dedicated for specific purposes (for example, separate network for storage, management, external access etc)
<thumper> wallyworld: ^^ you have dealt with this haven't you?
<wallyworld> gaurangt-: the hook tool you want to use is network-get. that should give you a list of all the addresses on the machine hosting the unit. it also takes account of any space-related endpoint bindings
<wallyworld> unit-get is deprecated
<wallyworld> it will just return the one "preferred" address. i'm not sure off hand how that address is determined
<wpk> wallyworld: with unit-get the only thing guaranteed is that it will be the same address every time (unless network setup changes)
<gaurangt-> wallyworld, thanks for the info. I think network-get is the way forward.
<gaurangt-> wpk, but I guess we cannot control which ip unit-get picks up. It can be any one of the many that are available, right?
<wpk> gaurangt-: exactly, that's why network-get is the way to go.
<gaurangt-> wpk, yep, sounds good. Let me try that out.
<gaurangt-> wallyworld, looks like there is no python interface to call network-get
<gaurangt-> is it something to be run from cmdline only?
<wallyworld> gaurangt-: network-get is the name of the hook tool. it calls the NetworkInfo() api on the uniter facade
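There is indeed no dedicated Python binding; charms typically shell out to the hook tool. A minimal sketch, assuming the standard `--format=json` flag Juju hook tools accept and that the code runs inside a hook context (where `network-get` is on PATH):

```python
import json
import subprocess

def network_get(binding):
    """Call the network-get hook tool and parse its JSON output.

    Only works inside a Juju hook; outside one, the call fails with
    FileNotFoundError or CalledProcessError.
    """
    raw = subprocess.check_output(["network-get", binding, "--format=json"])
    return json.loads(raw)
```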
<ryebot> Does deploying to a subnet work on AWS?
<ryebot> Docs seem to indicate it might be MAAS-only, but I'm not 100% sure.
<rick_h> ryebot: should for bootstrap constraints and such.
<ryebot> rick_h: excellent, thanks
<stormmore> o/ juju world
<ybaumy> anyone at work is into nutanix?
<ybaumy> would be interested in hearing what this is about and how good it is
<ybaumy> ive seen some yt videos and they look very promising
<rick_h> ybaumy: not checked it out
<ybaumy> rick_h: have you used hyperconverged systems yet? if not take a look at nutanix. i downloaded the community edition today and will take a good look at the hypervisor and so on
<rick_h> ybaumy: cool
<ybaumy> rick_h: my boss got a presentation of it and is really hyped now. so i have to evaluate
<ybaumy> the cool thing is that you have local storage on your nodes and not like ceph only network nodes and the compute is running somewhere else
<ybaumy> anyways i think we can easily get an appliance for testing. at least i hope so
<rick_h> ybaumy: cool, let me know how it works out.
<ybaumy> rick_h: sure thing i will report back. i was testing openstack for several months now. and openstack clearly has its trade offs. so im looking forward to a new cloud solution
<ybaumy> that https://www.simplivity.com/ would be another solution which is interesting.
<ybaumy> but personally i dont like HP
<ybaumy> but HP bought simplivity so its not their product. which is good if you take 3PAR storage for example which was also aquired
<ybaumy> and 3PAR storage is really nice
<ybaumy> with ssd approaching 50TB and ethernet 100Gbit, who needs fibre channel and traditional storage anymore
<ybaumy> and storage is the problem in our company not compute power so the future looks bright i guess
<ybaumy> we are operating so 2010ish currently
<ybaumy> http://www.prnewswire.com/news-releases/supermicro-previews-1u-petabyte-nvme-storage-system-supporting-new-ruler-form-factor-for-intel-ssds-at-flash-memory-summit-300501628.html
<ybaumy> https://www.youtube.com/watch?v=OcaID9XLHBU
<ybaumy> that will kick ass for bigdata
<thnkpad> rick_h ping
<rick_h> thnkpad: pong
<thnkpad> Hiya Rick - did you message marco 'bout that discourse webpage update to Xenial, or nay ?
<thnkpad> I have had puri.sm talking about it - that's really why I am asking.
<rick_h> thnkpad: not gotten a hold of yet. Working on it. Apologies, was out for a week last week so behind the 8-ball. My fault
<thnkpad> rick_h, wait a second - wasn't there talk of setting up a Launchpad group to cover it while Marco's away ?
<rick_h> thnkpad: sec, otp
<bdx> are x-model relations supported on 2.2.2?
<rick_h> bdx: no, it's a 2.3 thing
<bdx> ok, does x-controller come with that too?
<rick_h> bdx: unsure, probably not.
<bdx> ok
<bdx> rick_h: thx
<rick_h> bdx: it'll probably be feature flagged in 2.3 as kinds get worked out across providers
<bdx> totally
<thumper> well kinda
<thumper> I think the intent is to not have a feature flag for 2.3
<thumper> for CMR
<bdx> thumper: CMR will be included in the 2.3?
<bdx> not behind any flags
<thumper> that is my current understanding of intent
<thumper> it won't be perfect
<thumper> but should be usable
<thumper> wallyworld: is that your understanding too?
 * wallyworld reads
 * rick_h just wants to play safe in promises :) 
<thumper> rick_h: sure
<wallyworld> CMR is intended to be in 2.3 without a feature flag
<wallyworld> working to get data model and APIs stable
<wallyworld> includes cross controller. there may be some tooling missing and polish etc that will be done in a point release
<bdx> oh man
<bdx> :) :) :)
<wallyworld> cross controller is usable now if you run the edge snap
<bdx> Ooooooo
<wallyworld> there's networking aspects that will require some charm changes
<bdx> ok, thats great news none the less
<bdx> going to take it for a spin
<wallyworld> so not all charms are guaranteed to work out of the box, but most should if they have been written correctly
<bdx> I have the perfect use case for x-controller
<wallyworld> try it out and let us know of any issues
<bdx> I have fiber to us-west-2 from my datacenter
<wallyworld> what charms were you looking to use?
<bdx> and no xfer charge to and fro
<bdx> wallyworld: my own
<bdx> alongside postgresql, redis, elasticsearch
<bdx> I wanted to also look at getting a beats/logstash stack going
<wallyworld> ok. make sure that you use either network-get or pick "ingress-address" from relation-get data to figure out the address to connect to on the otjer side of the x-model relation
<wallyworld> if you use unit-get it won't work
<wallyworld> and many older charms still use unit-get
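wallyworld's advice boils down to a tiny preference rule when reading relation data — a sketch (the dict shape of relation-get output is assumed for illustration):

```python
def peer_address(relation_data):
    """Pick the address a remote unit should connect to.

    Cross-model relations publish "ingress-address"; older charms only
    see "private-address", which breaks across models/controllers.
    """
    return (relation_data.get("ingress-address")
            or relation_data.get("private-address"))

print(peer_address({"ingress-address": "54.1.2.3",
                    "private-address": "10.0.0.7"}))  # 54.1.2.3
print(peer_address({"private-address": "10.0.0.7"}))  # 10.0.0.7
```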
<bdx> ok, that's good to know
<bdx> lol
<wallyworld> but just ask if you try it and get stuck
<bdx> I'll start looking around to see where I'm using that
<bdx> will do
<bdx> wallyworld, rick_h, thumper: thx
<wallyworld> np, good luck with it
<bdx> wallyworld: are there any prelim docs on creating and consuming offers?
<wallyworld> bdx: there's been some blog posts, but doc is a little sparse right now as it's all wip. let me dig something up
<bdx> ok, thx
<wallyworld> bdx: http://paste.ubuntu.com/25428383/
<wallyworld> there may be some typos, i just splatted it out
<wallyworld> bdx: also, http://mitechie.com/blog/2017/7/7/call-for-testing-shared-services-with-juju
<bdx> that is great
<bdx> rick_h: nice blog post!
<rick_h> bdx: :)
<bdx> super exciting stuff
<bdx> wallyworld: many thanks
<wallyworld> my pleasure
<wallyworld> it's a wip
<wallyworld> so not fully cooked
<rick_h> the man's put in some major work there so <3
<wallyworld> rick_h: the latest is I have a PR where you can suspend/resume individual x-model relations
<wallyworld> suspend = run departed/broken hooks etc. resume = run joined/changed hooks
<wallyworld> useful if someone needs to be locked out temporarily
#juju 2017-08-30
<Ting_> I have short-lived access keys to aws which have been confirmed working well by the aws command line, but it fails when deploying juju on aws with this access key. Anyone have an idea about this?
<Ting_> The error says: authentication failed. please ensure the access key id you have specified is correct.
<tvansteenburgh> bdx: you around?
<ybaumy> has anyone tested the grafana charm?
<rick_h> ybaumy: just put up a PR against it yesterday
<rick_h> ybaumy: will show it off a little bit in the juju show today
<ybaumy> rick_h: juju show?
<rick_h> ybaumy https://www.youtube.com/watch?v=NUx6kYE60Mc&list=PLW1vKndgh8gI6iRFjGKtpIx2fnJxlr5FF
<rick_h> ep #20 coming in 3.5hrs
<ybaumy> rick_h: is that you?
<rick_h> ybaumy: rick is me yes
<rick_h> tim is tvansteenburgh in that last episode
<ybaumy> rick_h: never seen. will watch it tomorrow. would be cool if you could get into grafana
<ybaumy> rick_h: i have a real world case which came in this week so i have to set it up for our network guys
<ybaumy> rick_h: they want it for their switches
<rick_h> ybaumy: cool yea I'm adding MySQL support to it currently so I can build dashboards on data in mysql
<rick_h> ybaumy: will talk about it in the show today
<ybaumy> rick_h: great will be looking forward to it
<rick_h> ybaumy: I did a blog post using grafana for monitoring juju controllers a while ago here as well: http://mitechie.com/blog/2017/3/20/operations-in-production
<ybaumy> rick_h: thanks lad
<ybaumy> rick_h: i hope grafana will be the first charm that reaches production for me. currently im having a testcluster for kubernetes and openstack. kube is fine but on openstack i cant get storage tiering really to work, which is the problem currently.
<ybaumy> rick_h: first i was planning on using influxdb for grafana but i guess mysql is fine too
<stub> I believe we have at least one grafana instance in production backed by influxdb. Most are talking to prometheus.
<ybaumy> i read influxdb is best for performance
<ybaumy> in this case
<ybaumy> thats why i wanted to use it
<ybaumy> but im open for other stuff
<stub> Unless you are going to hit scaling limits, I'd go with whatever fits with grafana best and lets you write your queries easily.
<stub> (I haven't used influxdb, but most of the grafana docs seem to be based around that backend so it is likely the best fit)
<ybaumy> well i have no experience in scaling this application. we have like circa 100 switches in the datacenters
<ybaumy> i still need information on which metrics the network department want to see there
<ybaumy> so i cannot say how many datasources there will be in the end
<stub> if you are just graphing metrics, 100 switches is small scale.
<ybaumy> ok
<ybaumy> thats good to hear
<stub> Something like prometheus talks about tens of thousands of devices with hundreds of time series on them
<stub> Assuming decent hardware, my guess is any backend is good for you
<ybaumy> ok
<ybaumy> time to watch GoT final episode. then wait until 2019 .. i hope i make it there
<stormmore> morning /o juju world
<rick_h> Juju Show #20 in 30 minutes!
<rick_h> are you ready?
<rick_h> juju show link to join in the fun: https://hangouts.google.com/hangouts/_/vamq45vtirbtrefyry63x4ccsee (tvansteenburgh marcoceppi hml bdx kwmonroe and anyone interested)
<rick_h> the link to watch the stream https://www.youtube.com/watch?v=iSVd7g0I4pI
<rick_h> ybaumy: ^
<rick_h> arosales: aisrael cory_fu ^ going to chat some charming if you can make it
<bdx> lol
<bdx> yes
<arosales> rick_h: thanks for the invite, but I have a conflict
<bdx> rick_h: and now this https://bugs.launchpad.net/bugs/1602192
<mup> Bug #1602192: when starting many LXD containers, they start failing to boot with "Too many open files" <lxd> <verification-done> <verification-done-xenial> <lxd (Ubuntu):Fix Released> <lxd (Ubuntu Xenial):Fix Released> <https://launchpad.net/bugs/1602192>
<kwmonroe> rick_h: production lxd:  https://github.com/lxc/lxd/blob/master/doc/production-setup.md
<bdx> the fix was released for lxd for the too many open files
<bdx> rick_h, kwmonroe: nice work, men
<kwmonroe> thanks bdx!
<rick_h> arosales: np, always want to reach out to folks
<rick_h> bdx: <3
<arosales> rick_h: I appreciate that :-)
<rick_h> kwmonroe: hah, camera was at 6% battery life left
<kwmonroe> woohoo!  perfect!
<rick_h> note to self, camera as webcam means full battery only
<kwmonroe> or, ya know, plug it in.
<rick_h> kwmonroe: :P
<bdx> are you guys tracking ^ bug?
<bdx> the "too many open files" one
<rick_h> bdx: haven't thought about it in a long time tbh. What's up? You still getting it?
<bdx> look at the latest in that bug
<bdx> from stgrabber
<bdx> and yea, I have been
<bdx> all the while
<bdx> so I was excited to be able to apply the sysctl configs (the production lxd sysctl configs that @kwmonroe listed above)
<bdx> and see a significant increase in the # of lxd I could deploy via localhost lxd provider
<bdx> but the fact is
<bdx> I *really* only care about deploying my lxd to MAAS or AWS
<bdx> so the production fix is almost useless unless you want to go around hacking face
<bdx> :)
<bdx> but now
<bdx> with the "The verification of the Stable Release Update for lxd has completed " - per that bug
<bdx> we should be seeing resolution for the "too many open files" across all providers
<bdx> because its fixed in lxd
<bdx> if I'm reading this correctly
<rick_h> bdx: right
<bdx> yessssss
<bdx> ok
<bdx> so
<bdx> how can I get that stable verified lxd from 'updates' to be the lxd that gets installed on all my maas/aws nodes?
<rick_h> so it just means the -updates repos of xenial need to be enabled. Is that out of the box? /me doesn't recall on the cloud images
<rick_h> I think they are, but that should be all that's needed.
<rick_h> you're looking for the specified lxd on there when it comes up:
<rick_h> This bug was fixed in the package lxd - 2.0.10-0ubuntu1~16.04.2
<bdx> rick_h: this http://imgur.com/a/FZ9B5 ?
<bdx> needs to include updates?
<rick_h> bdx: maybe? I'm not sure on maas. My GCE instances I was using for the demo in the show today have them enabled it looks like
<rick_h> deb http://us-east1.gce.archive.ubuntu.com/ubuntu/ xenial-updates main restricted
<rick_h> Version: 2.0.2-0ubuntu1~16.04.1 - from apt-cache show lxd on there
<bdx> rick_h: OOOoooo
<rick_h> so it's not quite there...hmmm
<bdx> sudo apt list --upgradable | http://paste.ubuntu.com/25433632/
<bdx> run apt update
<rick_h> might need to be sync'd out to the stuff out there
<bdx> then you will get it
<rick_h> k, cool
<bdx> its just sitting there
<bdx> I've been waiting so long for this moment ... I don't even want to install it yet
<bdx> :)
<rick_h> lol, the cake is a lie?
<bdx> I'm soaking it in man
<bdx> might even take a few days off work
<rick_h> lol, /me feels like bdx got a new toy for christmas
<rick_h> ok, I need coffee /me walks away to make some
<bdx> definitely ..... not being able to get any density from lxd on my maas nodes has been a thorn no doubt
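For reference, the sysctl limits from the LXD production-setup doc linked earlier look roughly like this (values copied from that doc at the time; they may drift between LXD releases, so check the doc before applying):

```
# /etc/sysctl.d/99-lxd-production.conf (per the LXD production-setup doc)
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_instances = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
```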
<BarDweller> Hi all.. If I'm running canonical kube, via juju, inside a vm, where I have a bridged networking interface, how do I get the kube to recognise the bridged network i/f as the external IP for ingress etc ?
<magicaltrout> "there should be ~400 cores and 2 TB of memory available for your Kube cluster." finally i get to properly deploy some k8s stuff
<tvansteenburgh> wow, jackpot
<magicaltrout> tvansteenburgh / rick_h i see you folks "messing" with grafana, if i want resource stats out of CDK, would that be the way to go?
<magicaltrout> i'm used to deploying nagios on juju, not messed with grafana yet
<tvansteenburgh> magicaltrout: our internal clusters use prometheus + grafana
<magicaltrout> cool, thanks
<tvansteenburgh> BarDweller: we're not ignoring you, i just don't know the answer
<BarDweller> no probs.. got any general advice for configuring ingress external ip's ?
 * BarDweller kube noob, but learning fast =)
 * tvansteenburgh googles
<tvansteenburgh> BarDweller: this might be what you want: https://stackoverflow.com/questions/40136891/gcloud-ingress-loadbalancer-static-ip/40164860#40164860
<tvansteenburgh> define a Service with the external IP
<BarDweller> hmm.. mebbe.. I'm way off in the weeds reading about the nginx kubernetes ingress controller
<tvansteenburgh> no, strike that, you don't want type LoadBalancer
<tvansteenburgh> https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
<tvansteenburgh> that looks better ^
<tvansteenburgh> and map that Service to the ingress
<stormmore> time to figure out how to charm!
<BarDweller> I mean.. I can bring up kube ok, deploy services ok, but the external ip is always blank even if I tell juju to expose it, I think because the ingress thingy doesn't know it's external ip, and the ingress seems to be nginx-ingress-controller, created by the replication controller nginx-ingress-controller which does setup a config map for the controllers it creates.. am reading thru the docs for those to see if I can spot how to tell it to
<BarDweller> use a particular ip/interface for the external side
<tvansteenburgh> BarDweller: right, and I think the way to do that is the create a Service with an externalIPs entry, and put that Service in front of the nginx-ingress-controller
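A minimal sketch of such a Service (all names and addresses are hypothetical examples — the selector must match whatever labels the nginx-ingress-controller pods actually carry in your cluster):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-external            # hypothetical name
spec:
  selector:
    name: nginx-ingress-controller  # must match the controller pods' labels
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 192.168.1.50                  # the VM's bridged address (example)
```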
<magicaltrout> tvansteenburgh: other question that seems obvious but i shall ask
<magicaltrout> for cdk HA Master we just deploy more?
<magicaltrout> scratch that found that github issue that claims that is the case
<tvansteenburgh> magicaltrout: affirmative
<magicaltrout> okay other random question
<magicaltrout> if i'm doing a manual cloud deployment
<magicaltrout> how do I add units to a model thats not "default"?
<magicaltrout> or when you add unit are they model  specific?
<magicaltrout> model specific
<magicaltrout> nifty
#juju 2017-08-31
<parlos> Good Morning, I've got a question wrt. Landscape (standalone) and MAAS. My aim is to use autopilot to deploy OpenStack. In my initial MAAS node commisoned nodes, i only had single nics. Landscape/Autopilot complained, so I hooked up one more network, recommissioned that node. However, Landscape/Autopilot did not detect the change. So I then removed the node, and started it from scratch, and commissioned it.. MAAS detected the new network automatically, b
<gaurangt-> hi, is it mandatory to specify the network spaces while deploying the applications into LXD?
<orf__> has anyone here actually ever successfully deployed Juju to a vsphere host?
<orf__> it apparently needs a direct connection to the vsphere host, as well as the API
<orf__> something which isn't documented anywhere.
<rick_h> gaurangt-: basically if you use spaces somewhere in the model then you have to do it everywhere to make sure it's clear. If there's no spaces in the model then it should just work sans spaces.
<rick_h> orf__: I've not, but some folks have as they've tested the documentation and stokachu had some updates about conjure-up working better with vsphere recently
<rick_h> orf__: http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/
<stokachu> orf__: yea juju needs to actually talk to the api
<stokachu> orf__: im not sure how else it would work
<stokachu> as for the host access im not entirely sure on that
<rahworkx> Hello all, Is there a way to search all controllers/models for a aws instance-id?
<gaurangt-> rick_h, thanks.. that's what I have observed too.
<orf__> stokachu: sure, but it tries to contact the vsphere *host*
<orf__> which is firewalled off, as it should be
<orf__> `juju.cmd.juju.commands bootstrap.go:492 failed to bootstrap model: cannot start bootstrap instance: failed to create instance in any availability zone: uploading ubuntu-xenial-16.04-cloudimg.vmdk to https://10.32.252.51/nfc/52774700-37f1-4a46-cc1f-de20c50f94e5/disk-0.vmdk: Post https://10.32.252.51/nfc/52774700-37f1-4a46-cc1f-de20c50f94e5/disk-0.vmdk: Service Unavailable`
<orf__> that IP is the host, the API is accessible
<orf__> our vsphere guy says it should upload it to the datastore, then create a VM from that vmdk in the datastore
<orf__> it shouldn't be uploading anything to 10.32.252.51 as far as I can tell
<stokachu> orf__: ok, sec
<orf__> thanks for the link rick_h :)
<stokachu> orf__: can you add your input to https://bugs.launchpad.net/juju/+bug/1711019
<mup> Bug #1711019: vsphere: cache VMDKs in datastore to avoid repeated downloads <juju:Triaged> <https://launchpad.net/bugs/1711019>
<stokachu> it's about repeated downloads but also applies to your issue
<stokachu> orf__: ill make sure it gets on the radar
<orf__> thank you :)
<stokachu> orf__: anytime, sorry about the hiccup
<orf__> done, no problem stokachu :)
<stokachu> orf__: awesome ty!
<orf__> I've been shaving yaks with this setup. Going to see if conjure-up dev channel is better
<stokachu> yea edge is much better
<stormmore> morning juju world o/
<rick_h> morning stormmore
<Dwellr> still playing with juju kubernetes-core / canonical-kubernetes .. I can see that once I bring up the world, and deploy microbot as per https://jujucharms.com/kubernetes-core/ that I _can_ reach my service if I access it via the kubernetes-worker/0 machine ip.. but that machine ip is 10.102.82.* and not reachable via my machine's adapter address of 10.0.2.15, nor via its other adapter address of 192.168.1.* .. I feel I'm missing
<Dwellr> something obvious..
<Dwellr> like in the example url, when it does kubectl get ingress, it has a reply come back with 172.31.26.109 as an address, whereas when I do the same, that field is blank.
<Dwellr> hmm.. looks like this might be relevant https://github.com/kubernetes/kubernetes/issues/49614
<tvansteenburgh> Dwellr: interesting, are you gonna try that fix?
<tvansteenburgh> maybe our ingress controller needs to be updated
<Dwellr> I tried deploying the rbac ingress, but it wouldn't let me create the roles..
<Dwellr> Error from server (Forbidden): error when creating "https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml": roles.rbac.authorization.k8s.io "nginx-ingress-role" is forbidden: attempt to grant extra privileges: [.... long list of privileges ... ]
<tvansteenburgh> Dwellr: yeah, rbac is not on by default
<Dwellr> well I'm just looking for the simplest way to make this work.. should I figure out how to enable rbac? or figure out how to run a newer ingress that isn't rbac ?
<tvansteenburgh> Dwellr: we have a test bundle with rbac enabled by default if you want to try that
<Dwellr> sure.. how ? =)
<Dwellr> (do I need to start fresh? I'm in a virtualbox pc, so pretty each to spin up a new one..  or is this something I can magically switch to from a non-rbac enabled conjure-up kubernetes-core install)
<tvansteenburgh> you'd need to redeploy. this is something we're working on but isn't released yet
<tvansteenburgh> or you could try updating to a newer ingress that's not rbac enabled
<tvansteenburgh> if there is one
<Dwellr> lets try that first =)
<Dwellr> of course, I already blew away my ingress-controller replication controller thing.. else mebbe I could have just altered that ;p
<Dwellr> yeah.. found this too.. https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/279
<tvansteenburgh> Dwellr: good find, i'd like to know if that actually fixes your problem
<Dwellr> hmm.. well.. I'm running the new one, and I can still get to the service via it's 10.102.82.39 address, but not via my 192.168.1.* or via 127.0.0.1 from the host etc
<Dwellr> makes little sense to me.. dont understand how other ppl are routing any traffic into their conjured up kubes.. since they seem to live on their own network range, disconnected from the connectivity of the host
<Dwellr> hmm... lxc network attach interface-name kubernetes
<Dwellr> (from a comment on https://stgraber.org/2017/01/13/kubernetes-inside-lxd/)
<Dwellr> although lxc doesn't seem to have a network arg
<Dwellr> oookie.. I'm on lxc 2.0.10
<Dwellr> sounds like 2.3 changes a lotta stuff
<stokachu> Dwellr: yea cli arguments changed/updated
<Dwellr> I used conjure up to deploy to lxd ..
<Dwellr> probly explains why my `sudo lxc list` comes back empty when running kube inside lxd ?
<stokachu> nah we bundled lxd with conjure-up
<stokachu> conjure-up.lxc list
<Dwellr> oooh.. now there's an idea
<stokachu> which is changing in the next release
<stokachu> b/c bundling lxd didnt help us like we thought
<Dwellr> and that gives me version 2.14
<stokachu> yea
<stokachu> that'll have the network commands
<Dwellr> and ... I can see the worker node is connected to my eth0 when I need it connected to eth1
<Dwellr> this might be what I'm looking for =)
<Dwellr> actually scratch that
<Dwellr> eth0 is the lxd's eth0 not mine =)
<Dwellr> so the worker node has docker0, eth0, cni0, and flannel.1 network interfaces.. and the eth0 has the address that I have to use at the mo to access the worker with the ingress on it..
<Dwellr> is the conjureup networking documented somewhere so I can figure out what it's trying to do ?
<Dwellr> eg, if I do `conjure-up.lxc network list` I can see it built 2 bridge interfaces.. etc..
<Dwellr> not too sure why
<stokachu> Dwellr: unfortunately, no, the reason for the additional bridge was for openstack due to neutron needing an additional network
<stokachu> Dwellr: this has all been fixed, and i'm prepping a candidate now which you probably should use
<Dwellr> hehe =) just shout when it's good to go =)
<Dwellr> although I'm still learning a load by digging around
<stokachu> Dwellr: thanks, it's building now, shouldn't be too much longer
<stokachu> Dwellr: lxd will be the snap lxd which is version 2.17
<Dwellr> like it's great to have seen the lxc list =) .. I tried adding my physical adapter to the worker container via   conjure-up.lxc network attach enp0s8 juju-d81eff-1 eth1  .. which returned ok, but  conjure-up.lxc list  doesn't show it
<stokachu> what about conjure-up.lxc info juju-d81eff-1
<Dwellr> does not list an eth1
<Dwellr> and no address in the Ips: section matches the current ip for enp0s8
<stokachu> hmm
<stokachu> you can edit the profile which should match the model
<stokachu> so `juju models`
<stokachu> the conjure-up.lxc profile list
<stokachu> but thats for all containers using that profile
<stokachu> not sure why the network attach on the single container didnt update itself with it
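stokachu's profile-edit suggestion could be sketched like this. The profile and bridge names below are hypothetical (the real profile name should match the juju model, per `juju models`); a device added to the profile lands in every container using it:

```shell
# Hypothetical profile and bridge names -- adjust to your model.
add_nic_via_profile() {
    conjure-up.lxc profile device add conjure-up-kubernetes-core eth1 nic \
        nictype=bridged parent=lxdbr0
}
# Call add_nic_via_profile only on a host with conjure-up's lxd.
```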
<Dwellr> I've not messed with lxc/lxd before =) only docker/virtualbox/vagrant/etc
<Dwellr> so this is all kinda interesting.. more tools to figure out
<stokachu> cool, https://discuss.linuxcontainers.org/ is a great forum to visit
<stokachu> for more help
<Dwellr> aye, tho then they kinda want me to understand what the current stuff is trying to do ;p which I'm still figuring out
<stokachu> :)
<Dwellr> interesting.. ok.. I think mebbe adding it to a profile might work, can I change the profile for a running container? hmm.. think I can..
<Dwellr> let me try lxc profile copy to clone the current one used by the worker, then assign the worker to the clone
<stokachu> yea you can change it for running container
<stokachu> it'll update it
<Dwellr> well.. the profile switcharoo worked, but the container still has no eth1 .. even if I exec into it and check with ifconfig
<Dwellr> mebbe the container needs to restart?
 * Dwellr hits the container with the lxc restart hammer.
<Dwellr> thing is, if I ask lxc network list .. it says the enp0s8 device is used by 1 container
<Dwellr> and if I do lxc network show enp0s8, I can see it's in use by the worker container
<ybaumy> god i love vmware support. they recommend to use vsphere client 6.0 u3 for resizing a lun on vsphere 6.5. that went well. we just lost 13TB of data
<ybaumy> im so happy right now i could die
<Dwellr> 13tb.. ouch
<Dwellr> you has backups.. right ?
<ybaumy> we have backups but they are from last night. and its a sql server where the customer migrates big data into it the whole day .. so basically we lost a whole day
<ybaumy> the good thing is the log backups didnt work
<ybaumy> :D
<ybaumy> and nobody cared
<ybaumy> im not vmware team just storage and linux/unix. so its not my business to check
<ybaumy> so customer loses a day + restore time
<ybaumy> thank god im already at home and there is beer
<Dwellr> stokachu: ahh.. mebbe I can't add a physical device directly to a profile .. mebbe it has to be a bridge..
<stokachu> ah
<stokachu> yea
<xarses> Hi, I'm having problems getting a bootstrap done to a private openstack cloud, I've generated the image meta-data, and either locally, or http hosted, it fails for "index file has no data for cloud"
<Dwellr> this is gonna make my head hurt =) I've got enp0s8 on this system that's a physical interface as far as it knows, but is actually a bridge to my real lan (because I'm in virtualbox, with the network set to bridged) .. so I now need to get that interface into my worker container so I can open ports on it..
<tvansteenburgh> Dwellr: https://www.youtube.com/watch?v=3f57PovdY44
<Dwellr> ta =)
<Dwellr> aha.. type:nic ... supports nictype:physical
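What Dwellr found (a `nic` device with `nictype=physical`) would be attached per-container roughly like this. The container name is taken from the conversation and the host NIC is Dwellr's; both are specific to this setup, so treat this as a sketch:

```shell
# Hypothetical for any other setup: passes the host NIC enp0s8
# straight into the container (the host loses the interface
# while the device is attached).
attach_physical_nic() {
    lxc config device add juju-d81eff-1 eth1 nic \
        nictype=physical parent=enp0s8
}
# Call attach_physical_nic against your own container.
```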
<Dwellr> and this is why I play in vagrant.. ended up somehow messing up my network so that lxc thought my physical adapter (that's actually my bridge to my lan via virtualbox) was now actually a bridge, which somehow caused it to move the real adapter to be eth1, which then conflicted with other stuff in lxc, and eventually it wouldnt let me delete that network because it was 'in use'.. yay..
<Dwellr> vagrant destroy && vagrant up =)
<Dwellr> ooh.. I found this.. =) https://github.com/evanhempel/lxc-portforward
<magicaltrout> hello folks
<magicaltrout> i have another CDK question I'm trying to answer before it gets asked again since the first time we tested CDK
<magicaltrout> "I was wondering if it is possible to support OpenStack Cinder and NFS StorageClass for testing for now." does that mean anything to anyone?! ;)
<tvansteenburgh> magicaltrout: sure, cdk supports everything that upstream does
<magicaltrout> ah yeah that "its the same as upstream" sales pitch ;)
<magicaltrout> okay
<tvansteenburgh> magicaltrout: are you asking for how to do it?
<magicaltrout> hehe, no just getting an answer
<magicaltrout> i can fiddle around to figure it out
<xarses> any around that can help with getting bootstrap going on openstack?
<rick_h> hml: have a few min to help out xarses ? or beisner is someone around that might know the process a bit better?
<hml> sure
<hml> xaras: how can I help?
<hml> xarses ^^
<xarses> trying to get going. generated metadata, either passed as `--config image-metadata-url` and a webserver, or via `--metadata-source /path/to/local`  I always get "skipping index ... because of missing information: index file has no data for cloud"
<hml> xarsas: that sounds like the path provided isn't enough for juju to find it.  if you do the bootstrap with --debug, the path juju is searching will be shown -
<hml> xarasa: you can then change the part of the path you're providing to
<xarses> it find the index when i have the stream data hosted on the webserver, and implies the same over file
<xarses> it just refuses to find my cloud name in the index
<xarses> the generated data doesn't explicity have a cloud name in it
<xarses> I'm guessing its looking for some pattern match, but no clue what pattern its looking for
<hml> xarsas: can you provide a pastebin of the bootstrap output please?
<xarses> I'd have to redact a bit of data, but sure
<hml> xarsas: that should be okay
<ybaumy> great we are restoring 13Tb with less than 3Gbit bandwidth..life is good
<xarses> hml: https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05
<kwmonroe> hahahaha... i know what rick_h did:  https://github.com/juju/charmstore-client/issues/143
<hml> xarses: juju is looking for the openstack endpoint and region provided with the openstack cloud config within the index.json... and can't find it.
<rick_h> kwmonroe: :)
<rick_h> kwmonroe: 3 times now...
<magicaltrout> i've done that a bunch of times :'(
<magicaltrout> its the saddest thing ever
<hml> xarses: the path to the index.json file listed is correct yes?  there are some files not found messages above
<xarses> ya, one is found
<kwmonroe> so, fwiw rick_h, if you would "charm proof" before you "charm push", you'd see some bizaro (albeit informational) output.  that would tell ya not to push :)
<xarses> hml: ya, that's exactly what I suspect, however the directions for generating the metadata don't have any context for providing the cloud; only the region is reflected in the index.json file
<rick_h> kwmonroe: but I'm happy. my interface updates work, charm is working, woot woot
<rick_h> just have to find a path through code review now he
<rick_h> heh
<hml> xarses: the cloud is defined by the endpoint in the metadata
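hml's point (the cloud is identified by the endpoint baked into the metadata) might be sketched with `juju metadata generate-image`. Every value below is a hypothetical placeholder; the point is that `-u` and `-r` must match the endpoint and region juju bootstraps against, or the index lookup fails:

```shell
# All values hypothetical. If -u/-r don't match the cloud config,
# juju reports "index file has no data for cloud".
generate_image_metadata() {
    juju metadata generate-image -d ~/simplestreams \
        -i 4d9f3ac1-aaaa-bbbb-cccc-000000000000 \
        -s xenial \
        -r RegionOne \
        -u https://keystone.example.com:5000/v3
}
# Call generate_image_metadata with your own image id, region,
# and keystone endpoint.
```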
<xarses> well, then the endpoints match
<hml> xarses: iâm thinking the error messages arenât good.
<hml> xarses: does this file exisit:  http://somelocalhost:8000/images/streams/v1/index.json
<hml> at that exact location?
<xarses> hml: https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05#file-gistfile2-txt
 * hml lookin
<magicaltrout> xarses reminds me of xerces which makes me real sad because those Java libraries are a right PITA......
<xarses> java is a right PITA....
<xarses> =)
<magicaltrout> as a java developer, i am okay with it, some old shit is the worst though :)
<magicaltrout> of course the other pun with that nick is you could say Java is a right Pain In The xarses .......
<magicaltrout> its been a long day
<xarses> hml, I also just posted the metadata generate-image cmd and output
<kwmonroe> well, it'd have to be "Pain In The xArses" because that's how acronyms work magicaltrout.
<xarses> I've partly followed https://jujucharms.com/docs/stable/howto-privatecloud, I haven't done any of the swift nonsense since I dont have an object store, I'm just using python -m SimpleHTTPServer on the folder
<hml> xarses: found the updates - trying to find what's going on here... not jumping out at me
<xarses> I guess I should add this random endpoint that they added to the catalog though
<hml> xarses: the endpoint added for product-streams assumes that swift etc is used
<xarses> its a http get source at that point, adding it shouldn't matter
<xarses> but ya, thats what I initially thought
<xarses> but this output is useless for triaging this issue
<xarses> I was hoping that ya'll would have a better idea of what's up
<hml> xarses: the usual problem is when the front piece of the path for the metadata doesn't match what juju is expecting and it can't find the file
<hml> xarses: i'm concerned about the file not found messages in the output
<xarses> well, generate-image didn't make any of those
<xarses> should I change the cloudname from custom?
<hml> xarses: no - mine says the same
<xarses> uh, I just regenerated it a bunch more times without the endpoint. it looks like I may have had a problem with the region name I passed to generate-image
<xarses> urgh, yep looked back in the data I redacted, the region name was slightly transposed
<hml> xarses:  that would do it.
 * xarses with no hair left to pull out, pulls out random stubble 
<xarses> ok, so now it doesn't respect the zone I passed
<xarses> so how do I control the availability zone passed?
<hml> xarses: yes, openstack is the hardest to bootstrap
<xarses> lol, looks like it went through every az and finally used the one that worked with the network I passed
<hml> xarses: yes, it will do that - though there are some bugs there...
<xarses> although its still not the az I wanted
<xarses> zone appears to be valid in the models
<xarses> is there an option that bootstrap will take?
<hml> xarses: if the network AZ name doesn't match the AZ for the compute nodes...  so you might have gotten lucky
<hml> xarses: looking for the option
<xarses> no, we don't have a version of openstack that has a working version of both
<xarses> network az don't really do anything useful in mitaka
<xarses> and we have routed provider networks, but the code that makes provisioning work without forcing both network and az is only present in Ocata
<xarses> whatever, if the instance will come up then I can image it and re-launch it where I need
<xarses> hmm, it seems to be waiting on "sudo: unable to resolve host juju-e290f0-controller-0"
<hml> xarses: not sure i've seen that one?
<hml> xarses: sometimes the connection take a bit though
<xarses> we don't have a dns service
<xarses> it looks like it set up a new security group
<xarses> that doesn't accept icmp
<hml> xarses: that should be fine... i'm not running it either
<hml> xarses: yes it does setup a new sec group
<xarses> ah, yep doesn't accept icmp
<xarses> but does accept 22
<xarses> of course it sent the wrong key by default, but network is good
<xarses> its just sitting here doing nothing then
<xarses> just before it tried to login to the ip, then went to fetch agent tools
<xarses> then this sudo unable to resolve
<xarses> hmm
<xarses> its logged into the thing
<xarses> hmm
<hml> xarses: juju bootstrap --to zone=nova - to specify the AZ
<xarses> hml, oh nice thanks
<xarses> it looks like its built the instance ok, I've logged into it
<xarses> however its stuck downloading https://streams.canonical.com/juju/tools/agent/2.2.2/juju-2.2.2-ubuntu-amd64.tgz
<hml> xarses: so thatâs the intance for the controller
<xarses> I was able to wget it and it only took like a 30sec
<xarses> ya, I'm snooping the ps tree on the controller
<hml> xarses: new toy?  :-)
<xarses> 2^19 pieces. Assembly required. For ages 9+. CAUTION: Contains complex parts may cause brain hemorrhaging and lack of cognitive reasoning
<xarses> its still stuck here ...
<xarses> not sure what to do
<hml> xarses: hrmâ¦
<xarses> ahh, figured out the sudo message
<hml> xarses: that one i'm not sure on... the bootstrap does have a timeout on it.  it doesn't ctrl-c well.
<xarses> its just a stderr message because the hostname isn't resolvable, otherwise its happy
<xarses> strace of the curl command that stuck pulling its socket
<hml> xarses:  did you bootstrap with use-floating-ips?
<xarses> nope
<hml> xarses: can the instance get to the outside word
<hml> world
<xarses> yea
<xarses> I was able to download the file fine with wget on the controller
<hml> wallyworld: have you seen where bootstrap gets stuck downloading the tools to the new controller instanceâ¦. but you can download them fine by hand to that instance?
<xarses> its downloading the file very slowly with this curl command
<xarses> but then it like gets stuck
<wallyworld> i haven't seen that, i've seen where the bootstrap instance is firewalled and can't download at all
<xarses> well neat
<xarses> curl is broken
<xarses> 0 20.8M    0 32768    0     0    633      0  9:36:08  0:00:51  9:35:17  2896
<xarses> 0 20.8M    0 32768    0     0    498      0 12:12:18  0:01:05 12:11:13     0
<xarses> uhg, something on the network here must be blocking it
<xarses> I can't fetch the file at all now
 * xarses continues to bang head against desk
<hml> xarses: can the instance get things from a local box?  you can provide both images and tools with the metadata flag  - though i haven't tried the tools part.
<xarses> I was looking though bugs that implied that both can't be passed as args
<xarses> its supposed to be able to get things, but my box running the command can't fetch the file currently either
<hml> xares: if you have the images and tools in the same directory structure - it would work.
<xarses> can I generate the metadata for this too? I can get the file from much futher parts in the network
 * xarses tries to get off this merry-go-round
<hml> hml: i think so... looking for how it works.
<hml> xarses: ^^^ I canât always type :-)
<hml> xarses: i just had to put the tools in a specific directory relative to where i put the images... will gather a pastebin for you -
<xarses> thx
<hml> xarses: https://paste.ubuntu.com/25441295/
<hml> xarses: iâm not sure what will happen if you try the images and tools in different locations on the cli
<hml> xarses: i do have the product-streams service configured too
<hml> xarses: i downloaded the juju-2.2.2-ubuntu-amd64.tgz from streams.canonical.com - just get the one which matches your version of juju and the machine type
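The layout hml describes, with tools placed alongside the images under one metadata root, looks roughly like this. This is a sketch: the exact file names are produced by `juju metadata generate-image` and `generate-tools`, so treat the entries below as approximate:

```
simplestreams/
├── images/
│   └── streams/v1/
│       ├── index.json
│       └── com.ubuntu.cloud:released:imagemetadata.json
└── tools/
    ├── streams/v1/
    │   └── ... (tools index and product files)
    └── released/
        └── juju-2.2.2-ubuntu-amd64.tgz
```

Passing the common root via `--metadata-source` lets juju pick up both the image and agent metadata from one place.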
<xarses> ya, 2.2.2
<xarses> I have the url that the controller is trying to use
<hml> xarses: thatâs what i used
<xarses> sigh, it finally died trying on gui
<xarses> and on the re-run, its just sitting around waiting for connect
<xarses>  DEBUG juju.provider.common bootstrap.go:497 connection attempt for ... failed: ssh: connect to host ... port 22: Connection refused
<xarses> repeated several times, don't have the tools copy set up yet
<xarses> yay, slowly getting further every time
#juju 2017-09-01
<Dwellr> query.. what does juju expose actually do from an lxc/lxd perspective ?
<rick_h> Dwellr: nothing, expose is meant to update firewall rules to enable ports the charm needs to open properly. On LXD and providers without a built in firewall setup (security groups or the like) there's nothing there to work against
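rick_h's explanation of `expose` could be illustrated like this. The application name is hypothetical; on a provider with security groups the expose step opens the ports the charm declared with `open-port`, while on LXD there is simply no firewall for juju to update:

```shell
# Hypothetical application name -- illustration of expose semantics.
expose_demo() {
    juju deploy mediawiki
    juju expose mediawiki        # no-op on LXD; opens SG rules on AWS etc.
    juju status mediawiki        # shows the exposed flag and open ports
}
# Call expose_demo only against a real controller.
```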
<SimonKLB> is there any reason why the ceph charm does not implement the ceph-admin interface?
<SimonKLB> i would like to try out ceph with kubernetes with a minimal setup as a poc, not having to deploy ceph-mon x3 and ceph-osd x3 if possible
<rick_h> SimonKLB: not sure. Have to check in with the OS folks. Cool experiment to try out.
<SimonKLB> jamespage icey cholcombe ^
<jamespage> SimonKLB: the ceph charm is officially deprecated; as such it does not always grow the same features as ceph-mon and ceph-osd do over time
<SimonKLB> jamespage: got it, is there any way to have a ceph deployment smaller than 6 machines?
<jamespage> only reason its still in the charm store is we've not found a satisfactory migration approach for existing ceph deployments.
<jamespage> SimonKLB: what provider are you using?
<SimonKLB> jamespage: aws
<jamespage> SimonKLB: hmm no not really
<SimonKLB> jamespage: alright, then ill go with that!
<jamespage> with MAAS you can of course place ceph-mon on LXD containers, and ceph-osd alongside k8s
<SimonKLB> ah, would that not work on aws though?
<SimonKLB> ceph-mon on 3 LXDs and ceph-osd on one worker each would be pretty neat
<SimonKLB> just to try it out
<icey> SimonKLB: AWS doesn't have the overlay networking required to get packets into the ceph-mon's in containers
<SimonKLB> icey: ah too bad!
<wpk> (that problem will be solved in 2.3)
<SimonKLB> wpk: im running 2.3-alpha1.1, that fix is not there yet?
<wpk> SimonKLB: no
<bdx> wpk: you are my hero
<bdx> great news
<tvanhove> does anybody know what is up with the bigtop repos?
<tvanhove> kafka charm is currently failing because of 403 forbidden
<tvanhove> http://bigtop-repos.s3.amazonaws.com/releases/1.2.0/ubuntu/16.04/x86_64
<rick_h> kwmonroe: ^
<kwmonroe> hm - not sure tvanhove, but i'll send a "wat?" to the dev list
<Dwellr> wondering if (after installing conjure-up kubernetes-core) I should be using iptables to route traffic to the node InternalIP .. or if I should be figuring out how to add an 'ExternalIP' to the node
<kwmonroe> tvanhove: not that you needed it, but i verified the 403 on a different arch as well.  looks like something is afoul with all 1.2.0 repos:
<kwmonroe> E: Failed to fetch http://bigtop-repos.s3.amazonaws.com/releases/1.2.0/ubuntu/16.04/ppc64le/pool/contrib/k/kafka/kafka_0.10.1.1-1_all.deb  403  Forbidden
<tvanhove> yeah we were setting up for a demo next week and noticed the failure in our juju storm deployments with kafka
<kwmonroe> tvanhove: mail sent to dev@bigtop.apache.org.  i'll keep you in the loop as soon as we figure out what's up.
<kwmonroe> tvanhove: until then, one possible workaround would be to manually set the repo on each affected unit to the CI builders.  to do that, you'd edit your apt sources like this:  http://paste.ubuntu.com/25445303/
<kwmonroe> i've verified an apt-get update / install works from those repos.
<kwmonroe> buuuuut, that's upstream vs the official 1.2.0 release.  so don't go to production with that ;)
<tvanhove> it's just for demos right now
<tvanhove> no production
<tvanhove> thanks :)
<kwmonroe> "it's just for demos" <-- that's what they all say ;)
<tvanhove> ;)
<stormmore> o/ juju world
<kwmonroe> \o stormmore
<kwmonroe> tvansteenburgh: do you recall if stub's "hookenv.principal_unit" fix was the only thing in ch-0.18.1 (vs 0.18.0)?  wanna make sure i'm reading this right: https://code.launchpad.net/~charm-helpers/charm-helpers/devel
<stormmore> b 40
<tvansteenburgh> kwmonroe: yeah it was just that one commit
<kwmonroe> ack, thx tvansteenburgh
<xarses> hml: is there a way to remove/stop the auto subnet scanning neutron subnets for network-space mapping? it keeps finding embarrassing duplicates and crashes the whole install
<hml> xarses: iâm not sure, let me see what I can find out.
<hml> rick_h: ^^^ any ideas?
 * rick_h reads backlog
<xarses> https://gist.github.com/xarses/307a07d290fcc9f48008b3ae1d192f05#file-duplicate-neutron-subnets
<rick_h> hml: xarses is that in the neutron charm itself? Or in juju trying to figure out what's up?
<hml> rick_h: i think it's juju investigating subnets
<xarses> juju bootstrap on an openstack cloud fails with a duplicate subnet in neutron
<rick_h> yea, gotcha
<hml> xarses: oh, try bootstrapping with the network uuid instead?
<rick_h> normally that's the bootstrap path ^
<hml> xarses: i thought it was a different issue
<xarses> no, its scanning networks in the output of `openstack subnet list`
<xarses> the network to boot the instance selector is not the problem here
<xarses> it's found 2 duplicates on me so far, and I'm guessing I have another 4 based on what happened here
<hml> xarses: sounds like bootstrap is failing when discovering subnets for juju - where juju list-subnets etc would be used
<xarses> but the bootstrap halts on reading the duplicate
<hml> xarses: while we look for a workaround on this... can you file a bug on this please?
<xarses> sure, on lp?
<hml> xarses: yes: https://bugs.launchpad.net/juju
<hml> xarses:  I have an idea, but not sure if it'll work - let me test it out first
<xarses> cool
<xarses> yay, it only took 3 days, but I finally have a controller installed
<hml> xarses: w00t!  sorry it took so long.   openstack is one of the harder ones to bootstrap, unfortunately.
<kwmonroe> tvanhove: bad news!  see the [IMPORTANT] thread here:  http://mail-archives.apache.org/mod_mbox/bigtop-user/201708.mbox/browser.  tl;dr, kafka was removed from the repos due to licensing.
<xarses> hml: dont worry about the work around, I was more easily than expected able to remove the duplicates
<hml> xarses: good news -
<xarses> ya, I wish it wasn't so stubborn; the main issues were with the image and tools metadata process being overly weird to the uninitiated
<hml> xarses: usually the tools aren't an issue - the images are, because they are specific to your openstack.  we can't just download the info - it's not an issue with other clouds.  :-/
<hml> xarses: that said, we're looking to improve the documentation on this
<xarses> it was the offlining that I had to do for the tools to get them through whatever was wrong with the network that was causing them to time out
<xarses> The network guys still haven't come back to me on that
<zeestrat> xarses: Somewhat related is this bug (though that is concerning the generic subnet that is created for HA routers): https://bugs.launchpad.net/juju/+bug/1710848
<mup> Bug #1710848: Bootstrapping Juju 2.2.x fails on a Openstack cloud with Neutron running in HA. <network> <openstack-provider> <juju:Incomplete> <https://launchpad.net/bugs/1710848>
<xarses> oh, well that is about what I was to report
<xarses> close enough anyway
<xarses> I whacked it back to new; while confirmed might be just as valid, it's not my project
<xarses> so it doesn't languish in some incomplete filter
<zeestrat> xarses: Great. Feel free to hit the "affects me too" button too.
<zeestrat> Duplicate subnets is a pretty common scenario in OpenStack so Juju will need to handle that anyway
<xarses> yep
<hml> xarses: good thing the subnets were easy to remove - my idea didn't work. :-(
<xarses> happens
<xarses> thanks for your help hml, wouldn't have gotten throug this w/out it
<hml> xarses: glad I could help!
<xarses> now I can continue with the getting started videos
<xarses> =)
<hml> :-)
<hml> thereâs a juju show on youtube also with different topics - maybe thatâs what youâve found?
<xarses> https://www.youtube.com/watch?v=ovsBVZsQqtg
<xarses> for  1.25
<xarses> which means a bunch of these commands aren't around anymore
<hml> 1.25 is a bit different than 2.0
<hml> the concepts have changed too
<xarses> ya, finding useful videos is hard
<xarses> most are a billion years old
<xarses> I've ran into a few that appear to be 0.x
<hml> hereâs one of the bi-weekly juju show: https://www.youtube.com/watch?v=YoZsP7TDyZI
<hml> let me look for me
<hml> more
<xarses> ya, I've watched most of that and felt lacking from it
<hml> ah
<hml> thatâs more on-going juju news rather than getting started
<xarses> ya
<xarses> zeestrat: hmm, a possible work around is to hide the duplicate networks from the user bootstrapping the controller
#juju 2017-09-02
<ybaumy> is juju working with foreman?
<ybaumy> if so how?
<erik_lonroth> Does anyone know if "swift" + "swift-proxy" can be deployed on a single server (with lxd) I would like to try it out but I only have a single machine for it.
<ybaumy> anyone here
<ybaumy> is it possible to integrate juju and foreman
<rick_h> ybaumy: nothing that the team itself is working on atm
<rick_h> ybaumy: not sure if there's ways of making that work together. I've not looked into it myself.
<rick_h> erik_lonroth: hmm, not sure. might just give it a go and see. I'd guess that the biggest thing would be getting the proxy on the network for folks to reach out to. Might have to do something like put the proxy on the main host and the swift into a container?
<erik_lonroth> rick_h: I'm doing some googling and found https://piware.de/2014/03/creating-a-local-swift-server-on-ubuntu-for-testing/ - but it seems the "setup-swift.sh" files (guy) has left the server...
<erik_lonroth> (content is gone)
<erik_lonroth> rick_h: Another hit on google is this: https://askubuntu.com/questions/85868/has-anyone-deployed-swift-in-an-openstack-environment-using-juju
<erik_lonroth> However, its not clear to me how I should create the storage needed for the configuration of the juju deploy.
<erik_lonroth> I'll try figure it out myself, but all help apprechiated
<erik_lonroth> Hmm, seems lxc won't allow me by default to mount loop devices, which are being created by the swift-storage charm. Any idea how to resolve that easily?
<bdx> erik_lonroth: probably something along these lines http://blog.forshee.me/2016/02/container-mounts-in-ubuntu-1604.html
<erik_lonroth> bdx: I'm looking
<erik_lonroth> bdx: I think id rather just turn the container into a privileged one?
<erik_lonroth> after all I'm testing
<bdx> totally ... open it wide up and see if you are still hitting the issue
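bdx's "open it wide up" suggestion corresponds roughly to flipping the container to privileged mode. The container name here is hypothetical; privileged mode relaxes the restrictions that block things like loop-device mounts inside the container, and a restart is usually needed for the change to take effect:

```shell
# Hypothetical container name -- pick yours from `lxc list`.
make_privileged() {
    lxc config set juju-machine-3 security.privileged true
    lxc restart juju-machine-3
}
# Call make_privileged against your own (non-production) container.
```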
<erik_lonroth> Nope, it didn't work - or perhaps a reboot of the container would be needed?
<bdx> hmmm
<bdx> possibly
<bdx> worth a shot
<bdx> cory_fu: how's it going?
<bdx> cory_fu: what's the best way to get the latest release of charms.reactive into my charm? if it isn't released to pypi yet
<bdx> ?
#juju 2017-09-03
<atuly> hi
<atuly> I need help to understand that juju relation ship model betwen cinder-volume and cinder
<atuly> currently we have a deployment through juju and in one of the containers the cinder-volume state is down but the service is still up and running. So i am not able to figure out what the possible reason could be. When i check the logs, they state the volume is down.
<ybaumy> rick_h: too bad that there is no support for it. its a really nice tool which supports puppet chef ansible and so on
<erik_lonroth> I'm trying to connect to my ceph object storage from python but can't figure out how exactly. Anyone that can point me to how you would normally do that? python etc.
<erik_lonroth> Oh, the context is that I have a ceph, ceph-radosgw, keystone, openstack-dashboard which bdx helped me with last night. It's running OK, but I need to understand how to connect to it and try it out.
<ybaumy> anyone have experience with terraform?
#juju 2018-08-27
<wallyworld> thumper: for when you are back https://pastebin.ubuntu.com/p/tYVrD3Hw34/
<wallyworld> babbageclunk: lgtm with a suggested improvement
<babbageclunk> wallyworld: thanks!
<babbageclunk> Yeah, those make sense.
<thumper> wallyworld: thanks
 * thumper has been threading *state.StatePool instead of *state.State through the agents and upgrader
<thumper> well it looks like make check is getting past the metalinter now at least
 * thumper crosses fingers
<thumper> it isn't done yet
<thumper> spoke too soon
 * thumper tries again
<thumper> phew
<thumper> I think I'm past the compile time failures...
<thumper> now to the runtime failures
<anastasiamac> wallyworld: empty auth-type was created for lxd and manual
<anastasiamac> wallyworld: so even tho lxd does not need it anymore, i think we r kind of stuck with it because of manual?..
<wallyworld> anastasiamac: ah, forgot about manual
<wallyworld> doh
<anastasiamac> wallyworld: the same :)
<wallyworld> best laid plans
<anastasiamac> wallyworld: PTAL   https://github.com/juju/juju/pull/9115 - cred validation against model's cloud...
<wallyworld> ok
<anastasiamac> \o/
<wallyworld> anastasiamac: what is "nuage"?
<babbageclunk> wallyworld: french for cloud!
<wallyworld> ah, you and your fancy words
<wallyworld> now if it were german i would have been ok
 * thumper sighs
<thumper> I should stop
<thumper> it seems like the multiwatcher stop method doesn't
 * thumper EODs
<thumper> I'll come back to this tomorrow
<thumper> if anyone understands the state multiwatcher well, I'd like to chat
<anastasiamac> wallyworld: 'nuage' is cloud in french :)
<wallyworld> i know that now :-)
<anastasiamac> wallyworld: why would i use a german word for something fluffy?
<anastasiamac> for the error message, as per tests, it'll have the cloud for the model as well as the one from the cred where there is a mismatch..
<anastasiamac> the error msg will look like `validating credential "stratus/bob/foobar" for cloud "dummy": cloud "stratus" not valid`
<anastasiamac> wallyworld: :)
<anastasiamac> thnx for review \o/
<anastasiamac> wallyworld: if fancy 'nuage' is a problem, m happy to rename... think 'c' is a good enough var name here?
<externalreality_> jam, I remember dumping the local state of the uniter somehow. Is there a way to dump the local state of the uniter?
 * rick_h_ loves the flood of "fix released" emails that means a new Juju is on the way
<rick_h_> cory_fu: do you know if any k8s charmers folks around for https://discourse.jujucharms.com/t/load-balancing-with-vip/191 ?
<rick_h_> knobby: ^
<cory_fu> rick_h_: Yeah, I don't know anything about it, but knobby is a good one to ping.
<rick_h_> cory_fu: k, ty
<rick_h_> just want to make sure we feed the discourse :)
<rick_h_> it's cool to see new folks asking stuff in there
<knobby> rick_h_: I'm punting on an official answer for now while I research
<rick_h_> knobby: all good
<rick_h_> knobby: you ok to touch base so they see activity and buy some time for official answers?
<knobby> I'll see if I can come up with a quick answer and if not, I will ping it
<rick_h_> knobby: my hero!
<knobby> no, you
<knobby> rick_h_: can you help with moderation on that thread?
<rick_h_> knobby: k, looking
<rick_h_> knobby: done, wonder why that needed moderation
<knobby> shady people posting...
<rick_h_> :)
<cmars> anyone having issues with lxd machines today? I keep getting errors in open-iscsi ... https://paste.ubuntu.com/p/SM84nJzDCV/ for example
<cmars> apt purge open-iscsi seems to "fix" it
<cmars> maybe something in the cloud image?
<rick_h_> cmars: not hit anything today but not done a ton of lxd
<rick_h_> externalreality_: hml ^ ?
<cmars> i'm on a new host machine so i might be pulling down fresh images that have issues
<hml> iâve bootstraped lxd today, but havenât noticed or checked for errors like the pastebin
<cmars> hml: you'd probably know if you got them, bootstrap would hang or you'd get hook errors
<cmars> maybe just me? i'm running juju 2.3.7 in a multipass vm, using the lxd provider in there
<hml> cmars: i havenât tried that yet
<cmars> i'll try to reproduce on my server
<hml> Juju 2.4.2 is now available!!!!   https://discourse.jujucharms.com/t/juju-2-4-2-release-notes/90
<rick_h_> woot woot
<cmars> can't reproduce that lxd issue with 2.4.2, so i'm going to drop it. might have been a glitch? if so, sorry for the noise
 * rick_h_ lost canonical IRC if you're looking for him
<veebers> rick_h_: odd, apparently I'm still connected but I've seen others say it's down for them too
<rick_h_> veebers: hmm yea got some ssl error and won't reconnect
<rick_h_> veebers: oh actually now it's timing out
<rick_h_> so yea, something is down
<rick_h_> veebers: hmm, some folks on twitter saying gmail is down...
 * rick_h_ wonders who dragged anchor across the pipe
<veebers> rick_h_: ah, I'm connected to the vpn which might change irc things
<veebers> possibly just the end of the world :-P
<hml> rick_h_: iâm having no issuesâ¦ so far
 * rick_h_ wonders if he got some bad dots on his review :P
 * thumper makes a sad face
<anastasiamac> thumper: we have not released 2.4.2?
<thumper> anastasiamac: we have
<anastasiamac> thumper: ok, must have missed the announcement... i would have expected more excitement :D
#juju 2018-08-28
<anastasiamac> a very simple review PTAL https://github.com/juju/juju/pull/9118 - exposes listing of credential models from state
<anastasiamac> wallyworld: babbageclunk: veebers: thumper ^^
 * thumper looks
<babbageclunk> anastasiamac: Already looking, should have said, sorry!
<babbageclunk> anastasiamac: approved
<thumper> I had a suggestion
<wallyworld> anastasiamac: i had a suggestion also
<anastasiamac> brilliant, thnx \o/
<anastasiamac> wallyworld: i have been thinking uuid vs tag too.. the trick is that i know where it'll be used and they will all go back to tag....
<anastasiamac> i'll change anyway to b in line with convention
<anastasiamac> thumper: my only bleh with map[string]string keyed on uuid is that it's not blatantly obvious what is actually in the map... with map[names.ModelTag]string, a casual observer can deduce it from type ;D
<anastasiamac> but since it's my bleh, i'll change to uuid
<thumper> thanks
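The readability point anastasiamac makes about `map[names.ModelTag]string` versus `map[string]string` can be sketched with a stand-in key type (`ModelUUID` here is hypothetical, not the real juju `names.ModelTag`):

```go
package main

import "fmt"

// ModelUUID is a defined key type standing in for names.ModelTag: with a
// typed key, a casual reader can deduce what the map holds from the type
// alone, which a bare map[string]string keyed on a UUID string cannot convey.
type ModelUUID string

// modelName looks a model's name up by its UUID key.
func modelName(m map[ModelUUID]string, id ModelUUID) string {
	return m[id]
}

func main() {
	// Ambiguous: is the key a model UUID, a model name, or a tag string?
	opaque := map[string]string{"deadbeef-uuid": "production"}

	// Self-documenting: the key type says "this map is keyed by model UUID".
	typed := map[ModelUUID]string{ModelUUID("deadbeef-uuid"): "production"}

	fmt.Println(opaque["deadbeef-uuid"], modelName(typed, "deadbeef-uuid"))
}
```

Either form serializes the same way; the typed key only costs a conversion at the boundary where the string UUID enters.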
<thumper> babbageclunk: I'm going to make a coffee and then work through some intermittent failures with watchers
<thumper> if you want to chat while I do that, I'd be up for it
<thumper> kinda like pair programming
<babbageclunk> thumper: ok - are those the ones you think the raftleases might be causing?
<thumper> I think vscode has some code sharing features too for this
<babbageclunk> Or just a second brain making light work
<thumper> the latter
<thumper> there is a lease issue
<babbageclunk> Sure!
<thumper> in the state package
<thumper> so I have two types of problems to fix
 * thumper goes to make coffee first
<thumper> babbageclunk: heading to 1:1 HO
<thumper> babbageclunk: ugh, installing a kernel before restarting
 * thumper waits
<babbageclunk> heh
<veebers> Hmm, at some point I've accidentally pasted the AGPL license header into my terminal, or so my history shows me :-P
<veebers> gotta love pasting the wrong clipboard into bash and hoping none of the text is actually destructive commands
<wallyworld> babbageclunk: i'm seeing several of these in the logs
<wallyworld> ERROR juju.worker.dependency "lease-clock-updater" manifold worker returned unexpected error: updating global clock: lease operation timed out
<babbageclunk> wallyworld: in startup?
<babbageclunk> Or after that?
<wallyworld> after, when system has been running for a while
<babbageclunk> hmm
<wallyworld> could just be slow connectivity or something, it's on ec2 and i also see occasional mongo timeouts
<wallyworld> probs nothing to worry about immediately
<babbageclunk> Might be that I've got the timeout threshold too low.
<babbageclunk> Are you running with multiple controllers?
<wallyworld> babbageclunk: no, just the one
<babbageclunk> wallyworld: weird - I could see the timeout being maybe too low for forwarding from a follower to the leader.
<wallyworld> might just be crappy environment
<wallyworld> mog sometimes times out too according to the logs
<wallyworld> *mgo
<babbageclunk> wallyworld: I think jam knows about the mongo errors - I'm not sure they're timeouts exactly.
<wallyworld> the logs claim they are
<wallyworld> io.timeout
<wallyworld> can't recall the exact message now
<babbageclunk> I think it's trying to connect to old addresses.
<wallyworld> when i see it again i'll try and add some debugging
<babbageclunk> yeah, sounds good
<veebers> wallyworld: you have a couple of moments? Slightly stuck with the status bits. I can see the cloud container status is being updated, but status shows "Creating mysql container", which is a unit status (set by the charm); I think the status message for 'juju status' is not right?
<wallyworld> juju sets "creating mysql container" initially
<wallyworld> IIANM
<veebers> aye, it's setting it as unit status, and my cloudcontainer status bits are being ignored. I can't see where that "creating mysql . . " is initially getting set
<wallyworld> it's set when the unit is first created i seem to recall
<wallyworld> and then the charm can overwrite it as it comes up
<wallyworld> i don't quite follow how the cloud container bits are being ignored
<wallyworld> FullStatus() would need to read them
<wallyworld> so it's in our control
<veebers> wallyworld: juju defaults it to 'waiting for container' and the charm sets for mysql: status_set('maintenance', 'Creating mysql container')
<wallyworld> ah that sounds right
<veebers> oh, by "ignores it" I mean I'm not changing the right part to thread it through
<wallyworld> i think FullStatus is the apiserver side method which reads stuff out of the db and populates the status params struct
<wallyworld> have you changed that bit?
<veebers> wallyworld: that might be it (only hit (u *Unit) Status() at the moment)
<veebers> I'll check that out
<wallyworld> veebers: right, that won't do anything
<wallyworld> juju status calls client facade FullStatus()
<wallyworld> there's a bunch of logic in there
<wallyworld> it might call unit.Status() though, not sure
<veebers> ack, that looks more promising, thanks!
<wallyworld> but that's the place to look
<thumper> wallyworld: you busy?
<thumper> https://github.com/juju/juju/pull/9116/files
<wallyworld> thumper: about to have a meeting with carmine
<wallyworld> can look after
<thumper> thanks
<anastasiamac> wallyworld: babbageclunk: m finding myself in this nice, cozy spot where I want to return []params.ErrorResults from an apiserver facade... how r u feeling about that?
<anastasiamac> (or a struct that contains[]params.ErrorResults)
<wallyworld> hmm, i guess it depends on the call
<babbageclunk> anastasiamac: so, effectively a list of lists of errors?
<anastasiamac> yes, altho i could break it down i guess
<anastasiamac> the reason - we have a bulk call to update credential
<anastasiamac> and each credential can have a number of models that could error out
<anastasiamac> so my choices are - list of lists of errors
<wallyworld> typically the bulk call would be a slice of things and a corresponding slice of errors. well that's been what we've done till now
<anastasiamac> or a struct where i can identify each credential with a map[model]error and an error
<anastasiamac> yes, so slice of errorresults
<anastasiamac> was trying to not change the actual method much
<anastasiamac> but i'll have to, to return a different result
<wallyworld> it could work
<anastasiamac> so, right now this bulk call does return params.ErrorResults but this, of course, does not cater for the cred models check where each can error out too... hence []params.ErrorResults...
<anastasiamac> it could, for sure :) i'll sleep on it and experiment a bit more... just wanted to know if ppl will immediately dislike or b k with this...
<wallyworld> it's different but worth considering i think
<anastasiamac> on one hand, returning []params.ErrorResults will be neater.. but i feel like there will be a lot of complex understanding that is kind of implied and only comes naturally if u read the code... that is the only consideration that is holding me back :)
<anastasiamac> the other option, 3rd one that is, is map[cred]params.ErrorResults :D this way we r going to be very explicit...
<anastasiamac> but kind of unprecedented, so i think i talked myself into breaking out a separate struct per credential where each item contains []params.Error or similar....
<anastasiamac> thnx for ur help and listening \o/
<wallyworld> kelvinliu_: this PR makes operators use statefulsets / storage and also i think fixes that termination bounce bug at startup https://github.com/juju/juju/pull/9120
 * jhebden is back from [afk] - 426510h:37m:11s away
<kelvinliu_> wallyworld, looking. and https://github.com/juju/juju/pull/9119 this small fix, would u take a look when u got time, thanks
<wallyworld> sure
<wallyworld> kelvinliu_: lgtm, ty
<kelvinliu_> thanks
<kelvinliu_> wallyworld, the pr looks great.
<wallyworld> tyvm
<wallyworld> i'm happy with it
<wallyworld> seems to solve a few issues
<kelvinliu_> is the terminate bug the `terminating-> creating issue happens it gets deployed in the first time?
<wallyworld> kelvinliu_: yeah, that one
<kelvinliu_> wallyworld, nice!
<rmcd> Hey all, having an issue building a charm... It's saying there's no fetcher for a relation I've written. Recently switched to a new laptop so gone from having the interface locally to grabbing it from Gitlab. Any idea how I can fix this?
<stickupkid> manadart: did we port the lxd stuff to 2.4?
<stickupkid> manadart: just checking we've not regressed the 2.4 branch of lxd 2.0.x
<stickupkid> manadart: we're good, sorry for the noise
<stickupkid> manadart: do you have 5 minutes for a quick hangout?
<stickupkid> anyone know where the provision of machines logs out to?
<stickupkid> can I use "juju debug-log"?
<pmatulis> stickupkid, i would try to use that command and apply it to the controller model (-m)
<hml> stickupkid: sometimes its easier to grep the /var/log/juju/machine-0.log on the controller
<hml> stickupkid: if you up the root logging level to trace, you can see the api calls too, which may help
<hml> and their payload
<stickupkid> hml: pmatulis: thanks
<stickupkid> manadart: LXD 2.0 doesn't have a server name :|
<stickupkid> hml: https://github.com/juju/juju/pull/9121 can you give a quick look at this, I need to fix the deploy still, but it should make sense at least
<hml> stickupkid: iâll look shortly
<stickupkid> hml: take your time, EOD for me :D
<hml> stickupkid:  have a good evening
<rick_h_> kwmonroe: bdx cory_fu zeestrat looking to do the Juju Show tomorrow. You all around to join? I've started a discourse post for show planning/notes afterwards https://discourse.jujucharms.com/t/juju-show-38-wed-aug-29th/202 if you have anything
<zeestrat> rick_h_: I'll try to be there
<rick_h_> zeestrat: cool
<veebers> Morning all o/
<magicaltrout> rick_h_: what time is it these days?
<rick_h_> magicaltrout: it'll be normal 2pm EST
<rick_h_> magicaltrout: though maybe I should re-evaluate the timing if folks want me to move it up in the day more
<rick_h_> veebers: woot woot
<magicaltrout> my biggest issue is that it's kids' bedtime =/ an hour earlier or later would be better for me, but don't switch stuff on my account.
<magicaltrout> I'll swing by tomorrow, I want to discuss big data/big data ldn and some other stuff
<rick_h_> magicaltrout: sweet
<rick_h_> hmmm, I could do an hour earlier. anyone else have feedback?
<veebers> \o/
 * jhebden is back from [afk] - 426524h:56m:12s away
<magicaltrout> ...
<magicaltrout> thats a long time
<rick_h_> lol
<jhebden> epoch!
<thumper> https://github.com/juju/juju/pull/9123
<veebers> wth, I had this working last night just before eod, and now it's not :-\ /me digs in
<babbageclunk> veebers: time of day related? ;)
<veebers> heh, unlikely :-)
<thumper> babbageclunk, wallyworld: https://github.com/juju/juju/pull/9123 has the hubwatcher fix
<wallyworld> thumper: is there a test? i think i saw one in the other PR?
<magicaltrout> you shouldn't discount time of day related bugs.. in one platform i develop we have a unit test that fails in any timezone other than GMT
<magicaltrout> when you run it in the UK during BST it passes
<magicaltrout> then daylight savings end and it returns to failure....
<veebers> magicaltrout: hah, actually I've had issues with something like that before. Tests always worked in NZ-TZ but my English coworker had occasional failures :-)
<thumper> wallyworld: no, there isn't a test, it is very racy
<wallyworld> ok
<wallyworld> lgtm
<thumper> anastasiamac: I'm pretty sure the default value is an interface{} type, so a missing value will be nil.
<thumper> so any type could have no default
<anastasiamac> thumper: k
<thumper> and all missing values should just be the empty string
<thumper> I think
<anastasiamac> i agree
<veebers> wallyworld: this is really odd, the status stuff is messed up if I use the model name that I've been using over and over (but destroying controllers between each run). If I use a new model name everything is fine
#juju 2018-08-29
<anastasiamac> thumper: wallyworld: PTAL https://github.com/juju/juju/pull/9125 - fix for critical
<anastasiamac> i've proposed against 2.3 and will forward-port once landed.
<anastasiamac> babbageclunk: veebers: any chance u could review ^^
<babbageclunk> anastasiamac: sure
<anastasiamac> babbageclunk: my hero \o/ tyvm
<wallyworld> veebers: sorry, just got off the meeting. i don't know what the issue might be off hand
<babbageclunk> anastasiamac: approved!
<anastasiamac> \o/
<veebers> anastasiamac: sorry was at lunch :-\
<anastasiamac> veebers: how could u?
<anastasiamac> :D
<anastasiamac> nws
<veebers> :-)
<thumper> anastasiamac: I'm looking to forward port 2.3 into 2.4
<anastasiamac> thumper: k but my stuff will be there shortly (and on develop)
<anastasiamac> thumper: m happy to review, once u propose if u want
<thumper> anastasiamac: it is in 2.3 now
<thumper> I noticed it there
<anastasiamac> yep. i've pr'ed against 2.3 originally, this morning
<anastasiamac> 2.4 and develop landing is happening now
<anastasiamac> gofmt against develop seems to be stricter?
<anastasiamac> thumper: and merged into 2.4
<veebers> wallyworld: I believe one of my issues re: cleanup is conditionally setting statusOps (i.e. if caas use cloudcontainer key, otherwise unit) so removeOps is dying (I haven't touched that yet).
<thumper> anastasiamac: I've found a problem with your patch :(
<thumper> anastasiamac: was QAing it
<anastasiamac> :( what's it>
<anastasiamac> ?
<veebers> wallyworld: is that the way forward, or instead of having the absence of a status via key be the decider should the caas check happen at the GetStatus end?
<thumper> anastasiamac: I deployed keystone like the bug did
<thumper> then went "juju config keystone vip"
<wallyworld> veebers: quick HO?
<thumper> there used to be no value
<thumper> now there is a blank line
<veebers> wallyworld: sure thing
<veebers> wallyworld: standup?
<wallyworld> ok
<veebers> wallyworld: one sec, I smell smoke . .
<anastasiamac> thumper: ahhh, yes coz it's Println... should it not be? just Print..
<thumper> anastasiamac: confirmed with 2.2.6 cli
<thumper> anastasiamac: just nothing
<thumper> anastasiamac: I'll pastebin
<anastasiamac> thumper: k. no need... i understand
<veebers> all good, no smoke or stroke
<thumper> anastasiamac: always good to be explicit... https://paste.ubuntu.com/p/sgszfhQTpQ/
<thumper> wallyworld: when do you want to talk watchers
<thumper> ?
<wallyworld> otp now, maybe soon
<anastasiamac> thumper: k. so I have added a *Println as jam's recommendation was to append a newline to the value
<thumper> anastasiamac: which we should do for non nil values
<anastasiamac> i'll go back and remove the newline :D thumper
<thumper> no...
<anastasiamac> thumper: ack
<anastasiamac> newline when there is a value and none when the value is nil
 * thumper nods
<thumper> yep
<anastasiamac> k. so hold off porting 2.3 into 2.4
<thumper> yep will do
<anastasiamac> i'll propose against 2.3 first
 * thumper nods
<babbageclunk> wallyworld: trivial review for bumping up the timeout? https://github.com/juju/juju/pull/9128
<wallyworld> ok
<wallyworld> babbageclunk: i wonder why this change got included in the lock file gopkg.in/goose.v2/testservices/neutronmodel
<wallyworld> doesn't seem related
<babbageclunk> wallyworld: yeah, I was about to put a comment on the PR about that.
<babbageclunk> I'm not sure. I was tempted to remove it, but presumably that would just mean that the next person to do a dep change would include it.
<babbageclunk> kelvinliu: ^?
<kelvinliu> babbageclunk, how's going
<babbageclunk> kelvinliu: I got a weird change in Gopkg.lock when I updated an unrelated dependency, just wondering if you had any ideas about it.
<babbageclunk> kelvinliu: https://github.com/juju/juju/pull/9128/files#diff-bd247e83efc3c45ae9e8c47233249f18R1820
<kelvinliu> babbageclunk, i guess if it's a dep of github.com/juju/pubsub
<babbageclunk> No, I don't think it's related.
<babbageclunk> Maybe it's version differences between different people's dep binaries?
<anastasiamac> thumper: so ha.. if the value is nil then we would not print a newline... if a value is '' (empty string), we would?... previously, say in 2.2.6, we did not... in fact, in 2.3- we have never had a newline...
<anastasiamac> (unless it was included as the actual value for the setting)
<thumper> anastasiamac: I'd say if the value is the empty string then yes to newline
<thumper> nil or missing is no value
<anastasiamac> k
 * thumper thinks
<thumper> this is a break in our behaviour
<anastasiamac> adding newline? yes
<thumper> we broke STSs scripts
<thumper> I'm wondering if adding a new line will continue to break their scripts
<anastasiamac> it could...
<thumper> yeah...
<kelvinliu> babbageclunk, https://github.com/juju/juju/blob/develop/provider/openstack/local_test.go#L37
<anastasiamac> adding newline was not required for the original fix anyway...
<thumper> anastasiamac: I think we shouldn't add the new line now
<thumper> anastasiamac: perhaps leave a note for Juju 3.0
<kelvinliu> babbageclunk, i think it's introduced before ur PR, but someone forgot to run make dep
<thumper> to add a new line
<kelvinliu> babbageclunk, sorry, it's make rebuild-dependencies
<babbageclunk> kelvinliu: ahh, yeah, I think that makes sense.
<anastasiamac> thumper: ack, i'll add the note, card in 3.0 list and a bug :)
<babbageclunk> wallyworld: I think that's the explanation, I'm ok if you want me to pull it out of the PR.
<wallyworld> babbageclunk: if it's to fix a previous error then that seems ok to me
<babbageclunk> It seems like that section in Gopkg.lock is going to change in surprising/unrelated ways when someone changes a dependency, since anytime someone adds an import of a previously-unimported package from an existing dependency it's going to be added here.
<babbageclunk> (and I don't think it would be reasonable to insist that people run rebuild-dependencies when they haven't really changed them)
<anastasiamac> thumper: PTAL https://github.com/juju/juju/pull/9129
<thumper> anastasiamac: a suggestion and a question
<thumper> babbageclunk: I hit the session closed bug again in the lease manager
<thumper> babbageclunk: https://paste.ubuntu.com/p/84rSqXGsjg/
 * thumper goes to make coffee
<babbageclunk> thumper: ah, thanks - looking now
<anastasiamac> thumper: and updated PR
 * anastasiamac crosses fingers and makes tea
<babbageclunk> thumper: ok - I guess the fix is to add a waitgroup to the manager and make sure it waits for all of the goroutines to be finished before stopping. Sound right to you?
<babbageclunk> thumper: how do I reproduce it - running state tests under stress?
 * babbageclunk will assume that until further notice.
<thumper> babbageclunk: it is fucking hard to reproduce
<thumper> babbageclunk: re waitgroup, yeah I think something like that may be necessary, I'm going to take a peek
<thumper> babbageclunk: got time to talk this through?
<babbageclunk> thumper: sure
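The waitgroup fix babbageclunk proposes — register every goroutine before it starts and have Stop wait for all of them before tearing down shared state (here, the mongo session behind the "session closed" bug) — follows a standard pattern; a minimal sketch with an assumed manager shape, not the real lease manager:

```go
package main

import (
	"fmt"
	"sync"
)

// manager owns a set of worker goroutines. The sync.WaitGroup guarantees
// Stop does not return (and so shared resources are not closed) while any
// spawned goroutine is still running.
type manager struct {
	wg   sync.WaitGroup
	stop chan struct{}
}

// spawn starts work in a tracked goroutine. Add must happen before the
// goroutine starts, never inside it, or Stop can race past Wait.
func (m *manager) spawn(work func()) {
	m.wg.Add(1)
	go func() {
		defer m.wg.Done()
		work()
	}()
}

// Stop signals all goroutines and blocks until every one has finished;
// only after Wait returns is it safe to close the session they use.
func (m *manager) Stop() {
	close(m.stop)
	m.wg.Wait()
}

func main() {
	m := &manager{stop: make(chan struct{})}
	done := make(chan int, 3)
	for i := 0; i < 3; i++ {
		i := i
		m.spawn(func() { <-m.stop; done <- i })
	}
	m.Stop()
	fmt.Println(len(done)) // 3: all workers finished before Stop returned
}
```

The bug class this prevents is exactly the hard-to-reproduce one in the thread: a goroutine that outlives Stop and touches an already-closed resource only under heavy load.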
<wallyworld> thumper: here's a PR to address the charm version issue https://github.com/juju/charm/pull/258
<veebers> wallyworld: I'm either confused or my plan won't work. the unit workload status is initially set at unit start (allocating), the caas provisioner updates the details (i.e. started container); if the unit status goes to 'unknown' or a maintenance state (i.e. agent error) then the cloud container status will never hit the criteria to update the status (unless it errors).
<wallyworld> or anastasiamac ^^^^^ :-)
<veebers> resulting in a status of 'maintenance, message: Waiting for mysql container' while the cloud container status is 'Started container'
 * anastasiamac looking
<anastasiamac> wallyworld: could u ptal 9129, just in case?
<wallyworld> veebers: right, remember this is a stop gap until we thread stuff through. if the charm sets status to maintenance, we won't show container status unless it's an error and we choose to use it
<thumper> wallyworld: I thought the entire purpose of this was to not log a warning message
<veebers> wallyworld: ack, but it also means that it'll never go to an 'active' status, as that info is now piped through cloud container status (which won't overwrite any unit status)
<wallyworld> thumper: 2 things: 1. it was logging a warning unnecessarily due to a git issue, 2. IMO we do want to inform the user if the version generation fails, so i have improved the message
<wallyworld> i think the bug was due to some bogus message without enough info
<wallyworld> if there's a genuine error we should surface it
<thumper> I would say it should be informational to the user, it isn't a warning
<wallyworld> i can change it to INFO but that will be hidden by default
<wallyworld> if something fails surely that's a warning
<thumper> so... if I understand this, if it is in revision control and it hits an error we get a warning
<thumper> what was the problem they were seeing?
<thumper> why was there an exit code of 128?
<wallyworld> because "git describe" failed
<wallyworld> it couldn't generate a version sha
<wallyworld> and we were not surfacing the true error string
<wallyworld> hence the changed cmd.exec code also
<wallyworld> it was failing because there were no tags available for it to use in the output
<wallyworld> so we needed to be passing in --always
<wallyworld> so that it would at least print the sha (without tag)
<wallyworld> and without complaining
<wallyworld> make sense?
<wallyworld> veebers: for now she should also special case "active"
<wallyworld> *we
<veebers> wallyworld: ack, have done so now :-)
<veebers> if cloud container active use it; currently have && unit status == maintenance but I think that's a bit much
<wallyworld> yeah it is
<wallyworld> could have been error->active
<wallyworld> eg if we fix the storage claim it will come good
<veebers> ack, thanks wallyworld!
<thumper> wallyworld: can you jump into our 1:1?
<wallyworld> ok
<veebers> what did we ever do without colour log output :-)
<babbageclunk> thumper: I ended up reimplementing the waitgroups fix so I could check my test passed with it. I might just push it as a PR if that's ok? :)
<thumper> sure
<anastasiamac> kelvinliu: veebers just got a 'recipe for target 'dep' failed'... m assuming it's not related to my change, so m !!build!!Ing but would love to know what's going on.. http://ci.jujucharms.com/job/github-check-merge-juju/3211/console
<veebers> anastasiamac, kelvinliu I *think* that's a network connectivity thing, it couldn't hit golang.org for some reason, I suspect it's transient; let's see how the rerun goes
<anastasiamac> thumper: newline saga landed in 2.3.. feel free to move 2.3 to 2.4 :D
<thumper> anastasiamac: ack
<anastasiamac> veebers: \o/ thank you :) let's hope...
<kelvinliu> veebers, anastasiamac yeah, let's see the re-run.
<wallyworld> kelvinliu: i had to make a change to charm.v6 to fix some version string issues. can you add to your todo list making a charm build PR with the same fixes https://github.com/juju/charm/pull/258
<kelvinliu> wallyworld, sure
<wallyworld> gr8 ty
<kelvinliu> np
<babbageclunk> thumper: review pls? I just extended the test to check for both claims and ticks. https://github.com/juju/juju/pull/9131
<thumper> kk
 * thumper looks
<babbageclunk> thumper: oh hang on - still pushing that last bit.
<thumper> k
<babbageclunk> bally gometalinter
<babbageclunk> thumper: ok done
<thumper> ffs
 * thumper has a failing test but I think it is a lxd test isolation bug
<anastasiamac> thumper: fwiw, i'd rather forward-port my change manually.. there is a new test that was added in 2.4 that will fail so the test needs to b adjusted too.
<thumper> anastasiamac: ok, do you want to take my change as well?
<thumper> if you do the 2.3 merge, it is just yours and mine
<anastasiamac> thumper: honestly? no... but i can if u command ;)
<thumper> mine is only 4 lines and a clean merge :)
<thumper> pretty please
<anastasiamac> thumper: k
 * anastasiamac saluts
<anastasiamac> salutes even?*
<thumper> anastasiamac: thank you
<wallyworld> babbageclunk: were you going to join us in this meeting?
<babbageclunk> oh yes - send me an invite?
<anastasiamac> thumper: merge PR - https://github.com/juju/juju/pull/9134
<thumper> anastasiamac: lgtm
<anastasiamac> \o/
<veebers> wallyworld: you have a moment to talk migration_export? Do I need to add to juju/description a SetCloudContainerWorkloadStatus or so (so I can export the #container status data) or should I finagle it in with the existing SetWorkloadStatus()?
<wallyworld> on a call give me 5
<anastasiamac> 'finagle' is totally a word too
<veebers> ack
<veebers> hmm, I didn't realise it meant in a devious manner, I thought it meant in a tricky manner :-)
<anastasiamac> 'devious, dishonest' :( i did not know either ...
<veebers> anastasiamac: I take it your 2nd build run went fine?
<anastasiamac> veebers: yes, thnx :) must have been connectivity like u said...
<wallyworld> babbageclunk: here's a small dep change https://github.com/juju/juju/pull/9135
<wallyworld> veebers: free now
<babbageclunk> wallyworld: looking
<wallyworld> HO?
<veebers> wallyworld: sweet, shall we HO ?
<veebers> hah omw
<babbageclunk> wallyworld: approved. You and veebers should hang out.
<wallyworld> ty. we already are :-)
<babbageclunk> hey thumper are you happy with https://github.com/juju/juju/pull/9131?
<veebers> lol babbageclunk the match maker ^_^
<veebers> hey, I know a cool guy. You should hangout and talk migration exports some time
<babbageclunk> you crazy kids
<anastasiamac> noticed i've introduced a test where ordering matters on develop... working on a fix now :(
<veebers> wallyworld: FYI https://github.com/juju/juju/pull/9081 I'll need to add another test for the conditional cloudcontainer/unit status usage too
<wallyworld> ok, ty
<wallyworld> will look soon
<veebers> ack, thanks
<anastasiamac> trivial change, could someone review plz? https://github.com/juju/juju/pull/9136
<wallyworld> anastasiamac: looking
<wallyworld> anastasiamac: also, i'm an idiot https://github.com/juju/charm/pull/259
<anastasiamac> wallyworld: ta :)
 * anastasiamac looking too
<anastasiamac> thumper: 2.4 branch is up to date as far as urs and my changes :) just wallyworld's one needs to come in and we r golden i *think*
<wallyworld> ty
<wallyworld> i'll have to wait for the snap to build i think so i can copy across
<anastasiamac> k
<babbageclunk> wallyworld: can you take a look at this https://github.com/juju/juju/pull/9131, thumper has forsaken me.
<wallyworld> ok
<wallyworld> babbageclunk: i have some questions about the test
<wallyworld> i think it may be racy and we need some extra select/retry loops
<wallyworld> we use retry loops in other tests with an attempt strategy
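[The attempt-strategy loop wallyworld mentions (juju's Go tests use juju/utils' AttemptStrategy) boils down to bounded polling. A minimal plain-Python sketch of the same pattern; the function name and parameters here are illustrative, not juju's API:]

```python
import time

def retry_until(check, total=5.0, delay=0.1):
    """Poll check() until it returns True or the time budget is spent.
    Mirrors the attempt-strategy loops used to avoid racy one-shot
    assertions in tests; names are illustrative only."""
    deadline = time.monotonic() + total
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(delay)
```

A test would then assert on the eventual condition, e.g. `assert retry_until(lambda: machine_removed())`, instead of checking once and failing intermittently.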
<wallyworld> vinodhini: how goes the export bundle fix?
<vinodhini> wallyworld: i pinged u this morning that i pushed the commit. in canonical #juju
<wallyworld> vinodhini: ah, you said "wallyworld." not "wallyworld:"
<wallyworld> so i didn't get pinged
<vinodhini> ok.
<vinodhini> :)
<wallyworld> i'll take a look
<vinodhini> i am sorry wallyworld
<wallyworld> tis ok :-)
<vinodhini> wallyworld : do u need that to be changed ?
<wallyworld> i'm leaving a few small comments
<vinodhini> that check* - now it checks "instance"
<vinodhini> oh ok.
<vinodhini> sure
<wallyworld> vinodhini: so yeah, just a few small things, but the relation scope test appears inadequate as written I think, but i could be wrong about that
<vinodhini> wallyworld u mean the unit test for relationscopeskipping ?
<wallyworld> yeah
<wallyworld> we need to ensure that there would be an error if skiprelationscope were not true
<wallyworld> i'm not sure that is the case but am not 100% sure
<vinodhini> wallyworld : I agree it is inadequate. But can we look at it in a way that if the relation scope is missing it just skips that.
<wallyworld> but we need to prove it in a test
<wallyworld> it failed before, so we do a fix. but unless we test the fix, we are not doing the job
<wallyworld> all bug fixes should have tests to catch the error
<vinodhini> wallyworld I get what u say. like how it is proven in other tests
<vinodhini> this is just going to hit the error: 'missing relation scope for wordpress:db mysql:server and mysql/0' if we comment that SkipRelationScope
<wallyworld> vinodhini: if we can comment out the skip=true and the test fails and we add skip=true and it passes that is sufficient
<vinodhini> there still needs to be some entity to prove when we skip that check
<vinodhini> yes it happens now.
<wallyworld> vinodhini: also, FYI, i fixed an issue with charm version string. part of the issue was that there were more cases of errors being ignored https://github.com/juju/juju/pull/9135
<wallyworld> ok, if the test fails that way then i think it is ok
<wallyworld> i just wasn't sure it would
<vinodhini> wallyworld i do that PR charm version fix u did with develop.
<vinodhini> this timedelay issue i am working on 2.4.2
<wallyworld> vinodhini: i already fixed it just now, just wanted to show you so you knew about the issue i had to fix for learning
<vinodhini> ok wallyworld this is something i did in June - charm version
<wallyworld> yeah. it was broken but no one uses it yet really so no one noticed
<vinodhini> wallyworld :  so the unit test is for now ok. i can address the other 2 comments
<wallyworld> i think so
<vinodhini> wallyworld : can u chk again now the PR ?
<vinodhini> so we can land it.
<vinodhini> i have done state unit tests
<manadart> Need a review of https://github.com/juju/juju/pull/9139
<manadart> jam, stickupkid ^ I am actually still doing the QA system testing steps, but some feedback in the interim...
<stickupkid> "cmd/jujud/updateseries/updateseries_test.go:1::warning: file is not goimported (goimports)
<stickupkid> service/systemd/service.go:1::warning: file is not goimported (goimports)"
<stickupkid> manadart: you need to fix those files
<manadart> stickupkid: Ack.
<manadart> Pushed.
<stickupkid> manadart: nice one
<stickupkid> manadart: so 2.0.x LXD didn't send the server name, so when we're doing zone distribution the machine zone would be empty for all servers even though there was one, which caused the error message to show
<manadart> Yes, I surmised this when I saw your patch yesterday.
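[The empty-zone symptom above amounts to grouping machines by availability zone when the zone name can come back empty. A plain-Python sketch of folding the missing LXD 2.0.x server name into one default zone; juju's actual provider code is Go and the names here are hypothetical:]

```python
def distribution_group(machines):
    """Group machine ids by availability zone, folding an empty zone
    name (as an LXD 2.0.x cluster reports) into a single default zone.
    `machines` is an iterable of (machine_id, zone_name) pairs."""
    groups = {}
    for machine_id, zone in machines:
        groups.setdefault(zone or "default", []).append(machine_id)
    return groups
```

With the empty names folded together, zone distribution sees one populated zone instead of every machine sitting in a nameless one.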
<stickupkid> manadart: the fix was easy enough; the issue is, now we've got the same issue as before, whereby it can't find the lxd-config file for the network setup
<stickupkid> so i suspect something is amiss
 * manadart sighs.
<stickupkid> manadart: got a sec?
<manadart> stickupkid: Yep. HO.
 * rick_h_ grumbles about relogging into irc and such
<rick_h_> manadart: do you have time to review https://github.com/juju/juju/pull/9122 today please?
<manadart> rick_h_: Yes.
<rick_h_> manadart: ty!
<stickupkid> nice, that's an annoying message in the status output
<manadart> hml: Any chance you can look at https://github.com/juju/juju/pull/9139?
<hml> manadart: sure.
<manadart> Ta.
<cory_fu> Has anyone seen "could not get environ: Get https://10.105.194.1:8443/1.0: Unable to connect to: 10.105.194.1:8443" from Juju when provisioning machines on lxd before?  http://p.ip.fi/FWSs
<stickupkid> cory_fu: edge or 2.4.x?
<cory_fu> stickupkid: 2.4.1, it would seem.  Note: I'm asking this on behalf of someone in #conjure-up
<stickupkid> cory_fu: i've never seen it, was just wondering what code base to at least look at
<stickupkid> mandart: you see that?
<rick_h_> cory_fu: that seems like a network issue to the lxd api server. Is this already bootstrapped and failing on provisioning new containers? Or is this during bootstrap?
<cory_fu> rick_h_: Link is a full juju status.  It's late in a deployment of openstack (machines 15 and 16), and, while retry-provisioning did nothing, remove-unit + add-unit worked and got them moving.  Maybe all that's needed is a retry, or even just getting retry-provisioning to do the right thing
<cory_fu> Obviously, there's not much juju can really do if it can't talk to lxd, but Juju does know that clouds are unreliable so should be able to recover more gracefully
<rick_h_> cory_fu: yea, I wonder if retry-provisioning would work in this case to try again and pick up the machines?
<cory_fu> rick_h_: It didn't
<cory_fu> Seems like it should
<rick_h_> cory_fu: :( then yea seems like it should
<hml> manadart: unrelated to your pr, i found a false positive with the jujud-updateseries cmd
<hml> manadart: just a heads up, not sure if it will impact the new work
<hml> manadart: https://pastebin.ubuntu.com/p/hmfPPdgPMm/
<hml> manadart: services are reported restarted when they weren't
<manadart> hml: I can't see the issue in the paste. They are loaded and active...
<hml> manadart: theyâve been running for 5 min, but i ran the jujud-updateseries command 30sec before
<manadart> Ah.
<hml> manadart: iâm pretty sure they would have failed during the restartâ¦ if really a restart, because the series of the link to tools wouldnât have matched the current versions
<hml> manadart: since i ran the command without doing the upgrade to check the links
<manadart> hml: Do you still have that environment up?
<hml> manadart: yes
<manadart> hml: Can we do a quick HO?
<hml> manadart: sure - omw
<aisrael> cory_fu: I just ran into a case where a charm's reactive flag was cleared before a second @when was invoked, leading me to discover register_trigger (which is very useful). I had a @when('config.changed') that wasn't running, presumably, because the flag was cleared. Was there a change in that behavior sometime recently, or did I just get lucky before?
<jam> guild: if anyone is around, https://github.com/juju/juju/pull/9140 is a potential fix for the errors they are running into in bug #1789211 I still need to grab VMWare credentials and actually test that it fixes the problem
<jam> but it does seem likely to fix it.
<cory_fu> aisrael: You must have just gotten lucky.  It's been the case that clear_flag re-checks the queue for some time, though I think it's a mistake
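[charms.reactive's register_trigger lets one flag change drive another, which is what saved aisrael here. A toy model of the trigger-on-clear semantics only; this is not the real charms.reactive API, and the flag names are made up:]

```python
class FlagStore:
    """Toy model of reactive flags with triggers: registering a trigger
    on a flag means that clearing that flag sets another flag. Sketch
    of the semantics only, not the charms.reactive implementation."""
    def __init__(self):
        self.flags = set()
        self.triggers = []  # list of (when_cleared, flag_to_set)

    def register_trigger(self, when_cleared, set_flag):
        self.triggers.append((when_cleared, set_flag))

    def set_flag(self, name):
        self.flags.add(name)

    def clear_flag(self, name):
        self.flags.discard(name)
        # Fire any triggers watching this flag, so a handler waiting on
        # the target flag still runs even though the source was cleared.
        for watched, target in self.triggers:
            if watched == name:
                self.flags.add(target)
```

So a handler that would have missed a cleared `config.changed` can instead watch a flag the trigger sets.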
<rick_h_> kwmonroe: cory_fu magicaltrout zeestrat bdx 37min warning for juju show time. I've started up an agenda/notes doc https://discourse.jujucharms.com/t/juju-show-38-wed-aug-29th-17-00-utc/202
<rick_h_> feel free to add anything
<cory_fu> kwmonroe: You wanted me to mention the Azure integrator charm?
<cory_fu> kwmonroe: It doesn't look like the support for that has been released in the k8s charms yet
<kwmonroe> cory_fu: yeah, mention it anyway.  thataway, i can be like "and now, here's the vsphere integrator, which is totally functional and released in the k8s charms."
<cory_fu> lol
<cory_fu> kwmonroe: I'm happy to join, but it might flow better for you to just talk about both
<kwmonroe> roger that cory_fu
<cory_fu> kwmonroe: Added to the Discourse
<rick_h_> cory_fu: kwmonroe put into the main notes linup
<rick_h_> lineup
<rick_h_> cory_fu: kwmonroe bdx magicaltrout zeestrat https://hangouts.google.com/hangouts/_/dp4hqk72ubd7bllbbvr3hj2xhae is the HO url
<rick_h_> for folks that want to watch the stream at home without joining https://www.youtube.com/watch?v=i5TGTiwXrmc
<kwmonroe> rick_h_: i have a problem making my discourse post about k8s/vsphere... the top K8s category is about deploying workloads on k8s (juju add-k8s).  where should posts about traditional k8s charms (like cdk) go?  perhaps Charming -> Kubernetes to follow the big data subcategory model?  i think we'll confuse people if we (read: you) don't get this right.
<rick_h_> kwmonroe: bah, so I'm not a fan of all the k8s stuff in root ATM and all pinned. I agree there's some room for "best place".
<rick_h_> Basically not sure for now. I'll have to get wallyworld on board with a better setup
<kwmonroe> roger dodger
<stickupkid> hml: thanks for QA'ing my PR :D
<hml> stickupkid: np
<bdx> excellent juju show - sorry I missed -  druid looks sweet!!
<rick_h_> bdx: :) we missed you dude
<bdx> im hustling trying to get this controller out the door
<bdx> getting cut by the "I'm the only person to try and do it" knife I feel
<bdx> trying to use jujucharms.com as identity for my own controller is not as straight forward as I was hoping
<bdx> is there something I'm missing here ?
<bdx> concerning using jujucharms.com for identity
<rick_h_> bdx: honestly I'm not sure. I saw your reply about the 509 error and I'd not expect that but I know we put some magic in Juju for "juju login jaas" to work
<rick_h_> bdx: the guys that work on that stuff are in the EU and EOD atma
<bdx> got it
<bdx> rick_h_: I think the problem is "no way to register external users on controller using jujucharms.com identity"
<rick_h_> bdx: right, that's the issue to figure out. Juju knows about jaas baked in so any external users just login and they're off
<bdx> I see
<rick_h_> bdx: so I'm not sure if you can/need to preseed the controller info? Or if there's another tool/bit to it
<bdx> ok
<rick_h_> bdx: the good news is that mhilton that wrote the post for you wrote most of candid and works on that stuff
<rick_h_> bdx: so you're on the right track there and just need to prod them for moving forward another couple of steps in the doc he started
<bdx> ok perfect
<babbageclunk> wallyworld: morning! Sorry I missed your qs about the test for that PR - I've put some answers in now. Alright for me to hit merge on it?
<veebers> wallyworld: I'm looking for a better way, currently want to add something like "func (u *Unit) SetCloudContainerStatus(...)" so I can write this test (i.e. to mirror Unit.SetStatus() so I can prime statuses to then test the results
<wallyworld> veebers: what do we do currently in the tests? i think we use UpdateUnitOps or something
<veebers> wallyworld: ah good point I'll have a deeper look now
<veebers> wallyworld: yeah that should do the job, thanks!
<babbageclunk> wallyworld: mind if I land that pr given the answers?
<wallyworld> babbageclunk: which PR? ECONTEXT
<babbageclunk> wallyworld: sorry https://github.com/juju/juju/pull/9131
 * babbageclunk lolles at ECONTEXT
<wallyworld> babbageclunk: yes, go for it. thanks for clarifying. i should have looked at the content of those methods sorry
<babbageclunk> wallyworld: thanks! No worries!
<babbageclunk> What the... how does running a state test suite require building apiserver/facades/controller/migrationtarget?!
<wallyworld> rick_h_: that export bundle branch compile error you had - it has to be something in your setup right?
<wallyworld> did you run the make target which removes the vendor folder?
<wallyworld> make godeps
<wallyworld> veebers: veeeeeebeeeeers
<rick_h_> wallyworld: I didn't get a chance to go back at it.
<wallyworld> rick_h_: no worries
<veebers> Only an agent can be status.Allocating right? A unit cannot?
<thumper> wallyworld: any update on the firewaller test problems I see on my branch?
<wallyworld> thumper: i'm still finishing all my morning meetings. almost ready to start
 * thumper nods
#juju 2018-08-30
<wallyworld> thumper: huh, really is intermittent for me. i also get TestRemoveMachine failures
<veebers> hah, love some of the dummy data in the tests: "pew.pew": "zap",
<blahdeblah> veebers: Needs moar "pow!" and "thwok!"
<veebers> hah indeed!
<thumper> wallyworld: I'm in our HO early if you want to start early
<wallyworld> righto
<veebers> wallyworld, thumper seems the queue of jobs is due to goodra being offline, just trying to access it now to see what the haps is
<thumper> thanks
<wallyworld> veebers: jump in standup HO when ready?
<veebers> wallyworld: yep omw
<wallyworld> veebers: sorry, too quick on clicking mouse
<veebers> lol no worries
<veebers> huh, turns out I could be using fixup instead of squash when rebasing my branches to save time
<babbageclunk> yeah fixup's the bomb
<thumper> wallyworld: hurry... there are already conflicts...
<thumper> how's it going by the way?
<wallyworld> been doing code review etc after our call, so back onto it now
<wallyworld> trying to multitask
<wallyworld> will have it done today
<thumper> :)
<veebers> kelvinliu_, wallyworld oh we should re-visit us filling up spaces in aws with our testing and needing to clean it up (kelvin you linked me to something that I lost the tab of, that looked useful)
<kelvinliu_> veebers, u mean awscli?
<veebers> kelvinliu_: it was tags in the vpc we were talking about, I got mixed up
<kelvinliu_> veebers, yeah, the tags on subnets
<veebers> kelvinliu_: I might have just hit a related but different issue, complaining about FAN on spaces
<veebers> I'll revisit tomorrow morning, I need to go sort dinner for the fam now
<kelvinliu_> veebers, ok, enjoy the dinner
<veebers> you too o/ see y'all tomorrow
<wallyworld> thumper: fixed it, just tidying up
<stub> cory_fu: A non-reactive charm is being upgraded to a reactive charm using a RelationBase relation per https://github.com/cmars/nrpe-external-master-interface/blob/master/provides.py
<stub> cory_fu: But none of the RelationBase hooks get triggered, so the nrpe-external-master.available flag doesn't get set
<stub> cory_fu: I'm working out a work around, as we likely don't want to fix it for the legacy RelationBase? I don't know what happens with Endpoints.
<wallyworld> thumper: https://pastebin.ubuntu.com/p/mSbw5HtWMp/
<jam> guild: https://github.com/juju/mgopurge/pull/26 is an update to mgopurge that just brings in our updated dependencies.
<manadart> jam: Was just looking at that. Looks like merging #25 gave that one conflicts.
<jam> manadart: yeah, it will, but it should be easy to work out, because 26 should be a superset
<jam> in the end
<manadart> jam: Ack. Like you, I couldn't see anything outside of vendor in the diff, but I'll pull it down and see if I can build it.
<jam> manadart: it now has a small patch in main, because juju/txn changed a uint64 to an int
<cory_fu> stub: Yeah, @hooks not firing for relations for non-reactive -> reactive upgrades is a known issue and a big part of why I strongly recommend avoiding @hook whenever possible
<cory_fu> stub: The Endpoint implementations don't depend on @hook so they should be upgrade cleanly
<stub> I think I need to redo that interface as an endpoint since it is so commonly used in my area
<cory_fu> stub: I've generally been converting interface layers whenever I get the chance, as it makes them easier to understand and less error prone.  The conversion usually isn't difficult
<cory_fu> stub: Any time scope is part of the exposed API, I replace it with a (relation_id, unit_name) tuple and otherwise keep the API identical, unless the Endpoint pattern suggests a clear improvement that can be done in a backwards-compatible way
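[cory_fu's conversion rule, replacing RelationBase's scope with a (relation_id, unit_name) key while keeping the rest of the API intact, can be sketched with a dict keyed by that tuple. The class and method names below are hypothetical helpers, not part of charms.reactive:]

```python
class EndpointData:
    """Sketch of keying per-unit relation data by (relation_id,
    unit_name), the tuple suggested in place of RelationBase scopes.
    Hypothetical illustration, not a real interface layer."""
    def __init__(self):
        self._data = {}

    def receive(self, relation_id, unit_name, data):
        # Store the remote unit's published data under its unique key.
        self._data[(relation_id, unit_name)] = dict(data)

    def get(self, relation_id, unit_name, key, default=None):
        return self._data.get((relation_id, unit_name), {}).get(key, default)
```

The rest of the interface layer's API can stay identical; only the scope argument is swapped for the explicit tuple.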
<hml> stickupkid: rick: perhaps we should have just one lxd-profile charm for dev/testing instead of an lxd-profile and an lxd-profile-subordinate.
<hml> comment out the blacklist parts in the yaml
<hml> no single blacklist profile will capture all cases.
<hml> if we leave as a local charm, then the file can easily be changed for dev/testing
<rick_h_> hml: I'm not following.
<hml> rick_h_: k, chat after mtg?
<rick_h_> hml: to declare a charm a subordinate we have to define it in the metadata.yaml so I'm not sure how we can get one charm to serve both tests
<rick_h_> hml: k, wfm
<hml> rick_h_: stickupkid: one for lxd and one for lxd subordinate... but no specific black list charms.
<hml> put the blacklist items in the profile yaml, but leave commented out until needed
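[hml's proposal is roughly a single dev/test lxd-profile.yaml that keeps the disallowed entries present but commented out. An illustrative fragment only; which keys juju actually white- or blacklists is juju's call, and the specific entries here are assumptions:]

```yaml
config:
  linux.kernel_modules: openvswitch,ip_tables
  environment.http_proxy: ""
  # Entries below are expected to be rejected by the whitelist.
  # Uncomment them only when testing blacklist handling.
  # security.privileged: "true"   # note: this one requires a container restart
  # boot.autostart: "true"
description: dev/test profile with blacklist candidates kept commented out
```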
<kwmonroe> magicaltrout: should we block this before it gets merged? https://github.com/apache/bigtop/pull/387 -- we'd need to make a case for keeping oozie (it's currently on the block because it doesn't work with hive2:  https://issues.apache.org/jira/browse/BIGTOP-2986)
<maaudet> Is there any ways to force AWS constraints to allow the instance types t3.* when creating new machines ? Right now I'm getting an invalid constraint value error.
<stickupkid> rick_h_: really enjoyed the juju show, some good stuff in there :)
<rick_h_> stickupkid: yea, I need to get back into making time for those. I think they're good stuff.
<rick_h_> manadart: hmmm, no. Do we not support t3's?
<manadart> rick_h_: sec.
<rick_h_> oh sorry, that was tab complete fail manadart
<rick_h_> maaudet: was what I meant to type
<manadart> maaudet rick_h_: The instance types recognised from AWS are here: https://github.com/dustinkirkland/instance-type/blob/master/yaml/aws.yaml
<manadart> Support of the particular aliases is a LXD concern, we just pass it through.
<maaudet> Ok, I'll create a pull request
<magicaltrout> kwmonroe: yeah i replied in another ticket somewhere about it and said you could go ahead and drop it and we'd repackage it later in the year when we had some cycles to work on it.
<magicaltrout> That said, oozie does seem to support Hive2 so i'd like to put the effort in to getting it migrated
<magicaltrout> and we could/should update it to 5.0 at the same time anyway
<stickupkid> hml: can I get a CR for this one https://github.com/juju/juju/pull/9142 ?
<hml> stickupkid: sure
<stickupkid> hml: let me know if you need more QA steps, I've made a demo as well, so you can see it in action :D
<hml> stickupkid: nice
<magicaltrout> also you owe me an update on the spark pr kwmonroe
<stickupkid> hml: i like making the demos, it forces me to make sure it works :D
<hml> :-)
<kwmonroe> magicaltrout: i have a reply in draft.  please continue to hold.
<magicaltrout> https://www.youtube.com/watch?v=6g4dkBF5anU
<kwmonroe> oh yaaas magicaltrout!  much <3.  i'm gonna use this a lot.
<kwmonroe> man those comments are gold... "No point of this video, if you want to hear an hour of this music just call Comcast tech support"
<magicaltrout> heheh
<hml> stickupkid: i pushed a non reactive charm to the PR: https://github.com/juju/juju/pull/9141/filesâ¦
<hml> stickupkid: checkout the profile yaml, as an example of my proposal above
<stickupkid> hml: nice, i'll take a look
<stickupkid> hml: i thought we where just having key/values in the lxd-profile, I though juju provided the whitelist ?
<stickupkid> s/though/thought/
<hml> stickupkid: thatâs correct
<stickupkid> hml: so what's the blacklist config comments in the profile?
<stickupkid> hml: is that just a reminder to whom is writing it?
<hml> stickupkid: in lieu of creating a separate blacklist charm.
<stickupkid> ah
<stickupkid> hml: i think just make a new charm, that way we don't need to edit it?
<hml> stickupkid: thereâs a snag with thatâ¦ weâd need a bunch to catch all the black list items possible.
<hml>  the charm doesnâ tneed to be built
<hml> so we can edit the profile.yaml as weâre testing
<hml> it doesnâ tneed to be commited unless we find a bug in it
<stickupkid> hml: i.e. well I was wondering we could just have a charm that just failed, so you can test it manually
<stickupkid> and  we know it always fails, then like you said, we can do what you've got there
<stickupkid> i'd be happy with that?
<hml> stickupkid: hrmâ¦ could open up issues.  the 3 of us chat when rick_h_  getâs back? :-)
<stickupkid> hml sure
<stickupkid> hml: but i like the layout of the lxd-profile :)
<hml> stickupkid:  i hope all those work, i was trying to add things from the spec that were specially mentioned
<hml> and i found a few we might want to add to the do not use
<stickupkid> "security.privileged: "true"" <- that one requires a restart
<stickupkid> that'll be interesting :D
<hml> cool
<stickupkid> maybe we should give feedback about which ones require a restart - add it as a nice to have down the road...
<stickupkid> hml: I've updated the PR, i'm going to do some manual testing, to make sure it works, for LXD 2 :)
<hml> stickupkid: ack
<stickupkid> hml: the error message when entering an empty trust-password is really fruity
<stickupkid> hml: "ERROR finalizing credential: missing or empty "client-cert" attribute not valid"
<hml> stickupkid: is an empty trust password valid?
<stickupkid> hml: about to ask the LXD guys
<hml> stickupkid: i have a 1 liner with comments if you have a sec.  https://github.com/juju/juju/pull/9144
<stickupkid> hml: nice and easy one
<stickupkid> yay
<hml> stickupkid: ty!
<rick_h_> stickupkid: hml sorry reading backscroll
<hml> rick_h_: np - not sure stickupkid is around, but we can chat in the am (our) if not
<rick_h_> hml: k, did you want to sync?
<hml> rick_h_: letâs wait for stickupkid, not a rush for today
<rick_h_> hml: ok
<veebers> Morning all o/
<rick_h_> howdy veebers, heads up. I saw that we got the +1 for the 2.4.3 from solutions today
<veebers> rick_h_: ack, yay :-)
<magicaltrout> kwmonroe: if i wanted to look at the oozie issue
<magicaltrout> do i just build the master branch?
<kwmonroe> magicaltrout: you mean from a bigtop perspective (you want to see how bigtop fails)? or from upstream oozie?
<magicaltrout> i'm mailing the mailing list kwmonroe cause bigtop is making me sad
<kwmonroe> magicaltrout: well, if you wanted to try to repro what bigtop sees for oozie, you can use the prebuilt docker images (bottom of https://cwiki.apache.org/confluence/display/BIGTOP/How+to+build+Bigtop-trunk), and "docker run blah ./gradlew bigtop-oozie-pkg"
<kwmonroe> er, s/bigtop-oozie-pkg/oozie-pkg/
<kwmonroe> also magicaltrout, if you wanted to build an oozie snap, you can reference the spark snapcraft.yaml... just gonna leave this here: https://github.com/juju-solutions/snap-spark/blob/master/snap/snapcraft.yaml
<magicaltrout> i have enough work on... i hate you kwmonroe !....
 * magicaltrout goes to dig out an espresso
<magicaltrout> i also told Ryan you complained about his missing icons.... he swore a bit then complained about inkscape so I believe he's fixing them slowly
<rick_h_> magicaltrout: :)
<kwmonroe> magicaltrout: sudo snap install inkscape
<magicaltrout> indeed
<kwmonroe> that's the hard part.. the rest is just making lines and colors.
<magicaltrout> he said it had too many buttons.......
<rick_h_> magicaltrout: sounds like it's time to start a design and UX team in the company :P
<magicaltrout> hell if Ryan starts generating some cash
<magicaltrout> he can have whatever team he so desires...
<magicaltrout> https://www.youtube.com/watch?v=ovZ9iAkQllo this went past my tweetstream a couple of hours ago
<magicaltrout> we'll be trying to replicate this in Juju soon
<magicaltrout> rick_h_: we're right in thinking you can't set a minimum constraint at charm level right, only bundle? We stuck in some code to warn users when their machines were too small today
<magicaltrout> but it feels a little clunky
<rick_h_> magicaltrout: I'm trying to parse that atm
<rick_h_> magicaltrout: so yea, the charm can go into an error/etc I guess during install if it fails some sanity checks
<kwmonroe> yeah magicaltrout, afaik, constraints can be set in bundles, but not, for example, in a charm's metadata.yaml
<kwmonroe> @hook(start): if `free -h < 8GB`, echo "dummy" && exit 1; fi
<magicaltrout> thats basically it
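[kwmonroe's pseudocode, written out as an actual check a charm's start hook could run. Plain Python reading MemTotal from /proc/meminfo; the threshold and how the hook reacts (exit non-zero, or set a blocked status) are illustrative, not a real charm's code:]

```python
def has_min_memory(min_kb, meminfo="/proc/meminfo"):
    """Return True when MemTotal in the given meminfo file is at
    least min_kb. A start hook could exit non-zero (or status-set
    blocked) when this returns False; wiring is illustrative."""
    with open(meminfo) as f:
        for line in f:
            if line.startswith("MemTotal:"):
                # Line format: "MemTotal:       16384000 kB"
                return int(line.split()[1]) >= min_kb
    return False
```

Usage in a hook body might look like `if not has_min_memory(8 * 1024 * 1024): sys.exit(1)` for the 8GB case above.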
<magicaltrout> ah look at that Oozie test failures
<magicaltrout> winning
<magicaltrout> kwmonroe: whats up with your attic, have you moved house or something?
<wallyworld> thumper: https://pastebin.ubuntu.com/p/Y6yX4DdvrQ/
<wallyworld> oh wait
<wallyworld> wrong package
<wallyworld> doh
<thumper> wallyworld: when you have time, we can HO to discuss
<wallyworld> let me check the correct package
<wallyworld> and now i see the failure
<wallyworld> i'll try reverting the changes
<wallyworld> thumper: yup, just revert the changes
 * thumper nods
<veebers> hmm, I wonder if my 'hanging' mouse is batteries and not system related :-P Occam's razor and all that
<babbageclunk> veebers: yeah, that always gets me too
<externalreality> veebers, I use a mouse that has a cable that supplies it with power
<veebers> externalreality: hah but then you have clutter and, uh, can't take your mouse away from your computer very easily :-P
<babbageclunk> for mouse walks
<externalreality> veebers, touché
<thumper> wallyworld: it landed \o/
<wallyworld> i saw :-)
<kwmonroe> magicaltrout: it's the attic in my garage, which i was all gung-ho into converting into an office like 2 years ago, and then i started working on bigtop.
<kwmonroe> so i still have to do carpet and sheetrock the knee-walls
<wallyworld> babbageclunk: veebers is crying because he misses you
<magicaltrout> it looks rather too clean kwmonroe
<magicaltrout> where did the golf clubs and stuff go?
<magicaltrout> alright kwmonroe i updated that ticket with some stuff
<magicaltrout> chime in BIGTOP-3073 as well if you have an opinion. And reply to my spark PR :P
<magicaltrout> bloody kubernetes nonsense
#juju 2018-08-31
<wallyworld> thumper: see the comment on the juju status bug? we have had discussions previously as to whether juju client should auto retry and have said no it shouldn't. it seems that's something juju wait could reasonably do?
<anastasiamac> +1 that 'wait' should do it
<anastasiamac> veebers: so under stress, cannot reproduce this intermittent failure ;( however, if i interrupt my stress testing, I am getting the same error.. m pondering what to do next (m not really keen to jump on ci machine... rather work locally)
<veebers> anastasiamac: odd. Sounds like something is interrupted. Next time we see it in CI lets have a poke around on the lxd container that gets left there and check logs etc.
<veebers> anastasiamac: as to repro locally, um, are you running it in a container?
<veebers> anastasiamac: I'm just checking out what else we do in those tests
<veebers> (to run those tests)
<anastasiamac> veebers: no, not in the container locally...
<anastasiamac> veebers: yes, we should
<veebers> anastasiamac: is there a specific arch that it fails on?
<veebers> anastasiamac: so the only things I can think of that are different from you doing it locally and in ci are: It's a 'shared' machine as in many things can be running on it, it's happening in a lxd container, it's setting up tmpfs of 20G and exporting TMPDIR... (although that machine has 250+GB of ram so should be safe)
<anastasiamac> yeah, veebers, none of these strike me as potential cause..
<anastasiamac> if it was due to a 'shared' machine, i would have expected failure to be visible under stress...
<anastasiamac> i'll keep digging
<wallyworld> kelvinliu_: a small k8s PR - just some cleanup https://github.com/juju/juju/pull/9145
<kelvinliu_> wallyworld, looking now
<wallyworld> kelvinliu_: maybe we should delete the namespace first
<wallyworld> i think that would be safer
<kelvinliu_> wallyworld, yeah,
<wallyworld> kelvinliu_: changes pushed
<kelvinliu_> wallyworld, looks great, thanks
<wallyworld> ty
<wallyworld> babbageclunk: those lease timeout errors do seem to be gone now; my aws k8s deployment seems happy
<babbageclunk> wallyworld: oh, awesome
<kelvinliu_> github is down?
<wallyworld> kelvinliu_: look like it :-(
<kelvinliu_> wallyworld, :-(
<wallyworld> kelvinliu_: but it just came back
<kelvinliu_> yeah, it's just down for ~ half hr
<kelvinliu_> wallyworld, got a minute to talk about kubeflow doc?
<wallyworld> kelvinliu_: yeah, sure
<veebers> anastasiamac: hopefully you've cracked the invalid zip issue!
<anastasiamac> veebers: so far so good...
<anastasiamac> but i'd like to have at least 10 runs without the problem...
<veebers> sweet
<stickupkid> anastasiamac: thanks for doing this, it's been the bane of my life :D
<anastasiamac> stickupkid: don't thank me yet... m just poking it a bit for now...
<manadart> So what is the story with 2.3 and 2.4 branch dependencies? I check out the latest, run (the old) godeps command, but get build failures.
<manadart> https://pastebin.ubuntu.com/p/266CSCfpKp/
<stickupkid> ouch, which branch is doing that one 2.4?
<manadart> 2.3 and 2.4
<stickupkid> let me test that
<stickupkid> works for me?
<stickupkid> did you run "export JUJU_MAKE_GODEPS=true; make godeps"
<anastasiamac> manadart: stickupkid: for 2.3 and 2.4 u need to run ^^
<anastasiamac> for 2.5 (develop) run 'make dep' to get dependencies
<manadart> anastasiamac: Thanks. It's the removal of the vendor directory that I missed.
<anastasiamac> manadart: oh yeah, it is easily missed but important :)
<stickupkid> it does that automatically now in latest 2.4
<rick_h_> howdy juju world
<maaudet> Is there any ways to decide which root-disk type to use in AWS when adding a new machine?
<kwmonroe> maaudet: i *think* you only get to specify the root-disk size.. i don't see a "volume-type" here: https://docs.jujucharms.com/2.4/en/reference-constraints -- that said, some aws instances have different storage characteristics (eg, the I3 instances have SSDs; see the storage-optimized tab here: https://aws.amazon.com/ec2/instance-types/).  so you might be able to get a root disk type indirectly by --constraints instance-
<kwmonroe> type=foo.
<kwmonroe> maaudet: there *is* support for volume-type when using juju storage (https://docs.jujucharms.com/2.4/en/charms-storage), but that's not root-disk, so it may not be what you're after.
<rick_h_> maaudet: there's an iops setting I believe. Maybe that's just in juju storage?
<maaudet> rick_h_: Yeah, I tried the setting and it only works for additional storages
<maaudet> kwmonroe: I think that all instances types defaults to the same disk type in AWS
<maaudet> Although, the default disk type differs from regions <_<
<maaudet> Some regions have standard types, others SSD as defaults
<maaudet> On another hand, it seems easy to switch to another disk type from the AWS console
<admcleod> so... using 2.4.3, on s390x - i have model-config image-stream set to daily
<admcleod> but /etc/cloud/build.info says 20180823, whereas "lxc image list ubuntu-daily:bionic/s390x" shows 20180830
<admcleod> is there something else i can check before i log a bug?
<kwmonroe> hey!  you're right maaudet.  i didn't grok the instance-types correctly, but a quick deploy on i3-large yields the same 'ol 8g slow disk as /dev/xvda1.  the difference in i3 vs our default is in the instance store (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html#instance-store-volumes), which is temporary storage, and not at all related to the root-disk :/
<admcleod> oh
<admcleod> image-stream is not container-image-stream
<admcleod> thanks kwmonroe !
<kwmonroe> :)
<stickupkid> hml: so when you say verify lxd-profile, are you thinking that charm repo would validate the whitelist - or passing a validator to the charm validation?
<stickupkid> hml: walking through the bundle validation, it's interesting
<admcleod> alright so i changed container-image-stream to daily in the model and added a container and didnt get the daily image
<admcleod> 5 min til bug
<hml> stickupkid: I was just thinking to have validate in the charm … but it doesn't have to be.
<stickupkid> hml: cool, just working my way through
<hml> stickupkid: if we use the Profile structure of the lxd api - it's very easy to unmarshal the yaml
<stickupkid> hml: ooo, let me check that
<stickupkid> hml: yeah, 100% agree, we could use api.Profile
<hml> stickupkid: i looked at 9141, added a comment.  anything else?  i'll put up a failing charm shortly
<stickupkid> hml: done :D
<hml> stickupkid: the failing charm pr: https://github.com/juju/juju/pull/9147
<stickupkid> hml: looks good to me :D
<hml> stickupkid: ty!
#juju 2018-09-01
<Ablu> Hi, I try to create a "manual" cloud and then add a machine which actually is a docker container. However, after the add-machine exits the machine is stuck in `pending` and I also do not see any process running within the docker container... Are there any specific requirements regarding how the machine which is about to be added needs to be configured? (https://docs.jujucharms.com/2.4/en/clouds-manual does not help a lot as soon
<Ablu> things go wrong unfortunately)
#juju 2018-09-02
<babbageclunk> veebers: did anastasiamac mention that she'd fixed that "not a valid zip file" intermittent failure?
<veebers> babbageclunk: I see that she had a branch up that might and she was running the pr-check over it many times
<babbageclunk> ah, right - thanks!
<veebers> I haven't gone through the runs yet to see if that test passes/fails
<thumper> morning folks
<veebers> hey thumper o/ Good weekend?
<thumper> pretty good
<thumper> had a much longer walk on Saturday than I prepared for
<thumper> went up Mt Cargill
<thumper> got some great photos
<veebers> Nice! Better Sat then yesterday for a walk and photos :-)
<thumper> veebers: did Linc get you anything for father's day?
<babbageclunk> accidentally long walks are the best
<veebers> thumper: Link and Kerri gave me a sleep in (yay \o/) and a nice hand-blown glass keep cup
<veebers> We had my dad and FIL over for dinner too, was really nice
<thumper> cool
<thumper> babbageclunk: how about you? get anything interesting?
<thumper> I got the delicious dunedin cook book
<thumper> it is really cool
<veebers> thumper: oh sounds interesting
<thumper> one or two recipes from many different resturants and cafes around Dunners
<babbageclunk> thumper: some nice chocolate and an amazing crown
<thumper> you must wear the crown for our next hangout
<veebers> babbageclunk: crown? awesome
<babbageclunk> yeah it's pretty good
<babbageclunk> thumper: I'll wear it tomorrow
 * babbageclunk is feeling rubbish and is out of cold medication that works.
<thumper> :(
<veebers> babbageclunk: oh no :-(
<anastasiamac> babbageclunk: :(
<anastasiamac> but yes, i think i did fix it
<anastasiamac> coz even tho i do have failures in some runs, they r not the same
<anastasiamac> so if u r keen to review, i can land it :)
<veebers> anastasiamac: those might be the ones that hml mentioned (that might be related to thumpers change)
 * thumper is writing up a long discourse post about watchers
<thumper> https://discourse.jujucharms.com/t/state-watchers/217
<anastasiamac> veebers: PTAL  https://github.com/juju/juju/pull/9146 - the PR that i believe fixes 'invalid charm zip' in ci
#juju 2019-08-26
<thumper> wallyworld, babbageclunk or anyone really... https://github.com/juju/juju/pull/10556
<wallyworld> ok
<wallyworld> thumper: lgtm until the next whack-a-mole moment :-)
<thumper> wallyworld: thanks
<wallyworld> babbageclunk: small CI fix? https://github.com/juju/juju/pull/10557
<babbageclunk> wallyworld: looking
<babbageclunk> wallyworld: looks good to me!
<wallyworld> yay, ty
<hpidcock> CI fix for anyone https://github.com/juju/juju/pull/10558 thx
<wallyworld> hpidcock: surprised k8s bootstrap ends up requesting 7GB
<anastasiamac> ci fix for strings.replaceall https://github.com/juju/juju/pull/10559
<hpidcock> wallyworld: 3.5gb for mongo 3.5gb for the controller
<wallyworld> hpidcock: default is 3.5GB as a const
<wallyworld> ah right
<wallyworld> we should reduce the default for k8s i think
<wallyworld> maybe 1.5 or 2
<wallyworld> maybe /2 num containers is ok
<hpidcock> I think there is a bunch more work we need to do on resource limits/requests in k8s
<hpidcock> possibly podspec v2 might need to look into it?
<wallyworld> there is a lot more work, yes
<wallyworld> podspec v2 would need to work with existing constraints model, but i think we could do something
<rick_h> guild remember CI day party today
<manadart> rick_h: Partying hard.
<rick_h> manadart:  wheeee
<jam> rick_h: indeed. though stickupkid and achilleasa are out for UK holiday today
<manadart> Need a review: https://github.com/juju/juju/pull/10561
<hml> babbageclunk: pr review pls: https://github.com/juju/gomaasapi/pull/82
<hml> babbageclunk: should look familiar.  :-D
<hml> just s/device/machine
<babbageclunk> hml: oops, looking now!
<babbageclunk> hml: approved
<hml> babbageclunk: ty!
<babbageclunk> Can I please get a review of https://github.com/juju/juju/pull/10562
<babbageclunk> wallyworld, thumper, someone else?
 * thumper looks
<babbageclunk> fanks
#juju 2019-08-27
<babbageclunk> thumper: take another look at https://github.com/juju/juju/pull/10562 plz?
<thumper> babbageclunk: ack
<thumper> babbageclunk: lgtm with one minor change
<thumper> adding a clock to the unit agent manifold config
<thumper> there are three other uses of clock.WallClock in that file.
<babbageclunk> thumper: ok wilco
 * thumper starts looking at some of these intermittent failures
 * thumper wonders if there is some nifty scripting we could do to scrape jobs to determine frequency of intermittent failures
<ec0> sounds like you need a job status prometheus exporter
<babbageclunk> thumper: I keep meaning to write something to do that but getting distracted
<thumper> babbageclunk: yeah, understood
<babbageclunk> would be super useful though
<wallyworld> hpidcock: when thumper is done with you.... https://github.com/juju/juju/pull/10563
<thumper> crap...
<thumper> my fix didn't
 * thumper pokes some more
<hpidcock> wallyworld: looks good, few comments that's all
<wallyworld> ty
<wallyworld> hpidcock: because the v3 CLI is strictly opt in, there's no need to be too verbose about reminding users what the v2 syntax is
<hpidcock> wallyworld: yeah, I understand that, but I imagine at some point v3 won't be opt in. Was just trying to highlight that we should provide feedback when something is deprecated.
<wallyworld> i can add a note to the help text for "juju run"
<thumper> I think I have it now
<thumper> over 200 concurrent successes without a failure and during a stress run of the entire test suite
 * thumper pushes
<thumper> wallyworld: https://github.com/juju/juju/pull/10564
<wallyworld> ok
<wallyworld> thumper: lgtm
<thumper> wallyworld: ta
<thumper> landing that branch, and EOD
<thumper> later peeps
<stickupkid> jam, got a sec?
<stickupkid> jam, scratch that, i've worked it out
<achilleasa> can someone do a quick CR on https://github.com/juju/charm/pull/290?
<achilleasa> jam: ^^ is the fix for #1841105
<mup> Bug #1841105: panic: runtime error: invalid memory address or nil pointer dereference <juju:In Progress by achilleasa> <https://launchpad.net/bugs/1841105>
<manadart> achilleasa: Looking.
<manadart> achilleasa: approved.
<achilleasa> manadart: can you also please check https://github.com/juju/juju/pull/10565 (brings the charm.v6 changes to juju) and try the QA step?
<manadart> achilleasa: Sure,
<hml> manadart: review please: https://github.com/juju/juju/pull/10555
<manadart> hml: Looking.
<jam> guild in 9
<manadart> hml: What provider are you using to add subnets?
<hml> manadart:  :-)  I used maas, but the call failed because the subnets were already created.  so i made sure that the correct facade versions were being used and the errors stayed the same.  i've since learned that aws might work there better.
<manadart> hml: OK, yeah. I was using AWS.
<hml> manadart: you had no issues bootstrapping a aws on a develop branch today?
<stickupkid> rick_h, omitempty won't work for us, as the bundlechange api doesn't work the way other apis work
<rick_h> stickupkid:  doh, otp atm but let's table it then. We don't need to solve it until 2.7
<rick_h> stickupkid:  and go with the release for the bug affecting the customer please
<stickupkid> rick_h, ok sounds good
<rick_h> stickupkid:  and we can look into what it'll take to have bundlechanges behave like a real boy api
<stickupkid> rick_h, https://github.com/juju/python-libjuju/pull/349 release bump
<rick_h> stickupkid:  k, +1'd
<stickupkid> rick_h_, ty
<manadart> hml: If you are interested, I have started to put together https://github.com/juju/juju/pull/10566. It will accrue context as I push the other pieces up.
<hml> manadart:  cool - will start to look
<thumper> morning
#juju 2019-08-28
<babbageclunk> wallyworld: application data bag state changes: https://github.com/juju/juju/pull/10539
<wallyworld> babbageclunk: why do we need Role in relationApplicationSettingsKey? isn't "r#123#mysql" enough?
<babbageclunk> wallyworld: I wasn't sure it was guaranteed to be unique - can you ever relate an application to itself?
<wallyworld> no, that's what peer relations are for
<babbageclunk> well, peer relations are for relations that can only be for this application. I mean, could you have an application that has a require and provide endpoint with the same interface, where the other end could be a different application, but could also be the same application
<babbageclunk> ?
<babbageclunk> It just seemed like including the role avoids that question
<wallyworld> technically but i would hope we disallow that
<wallyworld> now you've made me want to check
 * babbageclunk makes a charm to see
<babbageclunk> ha
<wallyworld> babbageclunk: doesn't seem possible, i get "no relations found"
<wallyworld> which is reassuring
<babbageclunk> well still, there's nothing in the data model that prevents it - if anything it would make more sense to remove application, but that seems a bit weird.
<wallyworld> but it is something juju disallows and doesn't make sense semantically
<wallyworld> babbageclunk: lgtm modulo the role in the id
<thumper> ugh...
<thumper> while looking at one intermittent failure in TestCharmProfilingInfoError I found another
<thumper> perhaps it is the same problem in a different guise
 * thumper digs more
<thumper> quick review for someone: https://github.com/juju/juju/pull/10570
<wallyworld> thumper: lgtm
<thumper> wallyworld: cheers
<thumper> anastasiamac: speak of the devil
<thumper> anastasiamac: I was wanting to hand you a nice PR, but wallyworld just looked at it instead
<thumper> anastasiamac: was this one... https://github.com/juju/juju/pull/10570
<anastasiamac> thumper: devil!! DEVIL! maybe at least d'evil :D ?
<thumper> heh
<thumper> Dr Evil?
<anastasiamac> niiice! thnx wallyworld  - so delighted u  r holding the fort :D
<anastasiamac> nuh.. not a dr and dont want to b one...
<thumper> I had thought about studying for a PhD
<anastasiamac> thumper: but since u r in the talking mood, do u have a sec to talk about that test i mentioned yesterday
<thumper> but I couldn't think of anything that I really wanted to study that badly for three years
<thumper> anastasiamac: sure
<anastasiamac> 1:1?
<thumper> omw
<timClicks> thumper: could you create the hello-juju repo and authorise juju hackers to push to it?
<thumper> timClicks: in a minute
<anastasiamac> wallyworld: loved 'show-action' PR btw :) thank you !
<thumper> otp
<wallyworld> thanks for reviewing
<anastasiamac> anytime :D
<wallyworld> i have another almost ready, pending tests
<anastasiamac> \o/
<wallyworld> all part of actions v2
<anastasiamac> i have one to go too.. jsut doing live tests now to fill out details.. 'll b epic
<wallyworld> anastasiamac: i didn't change the existing test infrastructure for show cmd, as i neede to use what we had already for list action tests
<timClicks> thumper: oh also hello-juju-charm
<anastasiamac> wallyworld: yeah it's fine.. m more curious if u r keen to change to use cmd.Output :)
<wallyworld> probs, otp, will look properly in a bit
<anastasiamac> nws
<thumper> anastasiamac: https://github.com/juju/juju/pull/10571
<thumper> anastasiamac: a very simple fix in the end
<thumper> jam: https://github.com/juju/juju/pull/10571 for what we talked about before
 * thumper opens up the PR for any takers
<thumper> hpidcock, kelvinliu, wallyworld, babbageclunk: https://github.com/juju/juju/pull/10571
<kelvinliu> thumper: lgtm ,thanks!
<thumper> kelvinliu: thanks
<anastasiamac> thumper: \o/ how did i miss the ping?  :(
<anastasiamac> well done!!!
<thumper> anastasiamac: no idea
<thumper> I thought you were just ignoring me :-P
<anastasiamac> i would not dare :P
<kelvinliu> np
 * thumper is done
<thumper> later peeps
<babbageclunk> hey jam, the application data bag prs we were talking about are https://github.com/juju/juju/pull/10539 (state changes) and https://github.com/juju/juju/pull/10572 (apiserver)
<parlos> Good Morning Juju!
<stickupkid> anyone around for a CR https://github.com/juju/os/pull/11
<hml> stickupkid: looking
<stickupkid> hml, yeah, good shout, i'll do that now
<hml> stickupkid: i'm not parsing the comments around DefaultSupportedLTS(). …
<hml> stickupkid: the example reads to be what juju 2.3.x shouldn't have for a default to me
<hml> stickupkid: done
<stickupkid> hml, that was copy and pasted from juju source code
<hml> ha
<hml> stickupkid: from < 2015
<hml> ?
<stickupkid> yeah
<stickupkid> hml, done
<hml> stickupkid: looking
<hml> stickupkid: approved
#juju 2019-08-29
<babbageclunk> thumper: fix for the watcher tests: https://github.com/juju/juju/pull/10573
<babbageclunk> I don't really understand why it works though - shouldn't the call to StartSync in the NotifyWatcherC have the same effect?
<thumper> babbageclunk: do you want me to explain why?
<thumper> babbageclunk: and there is a better method
<babbageclunk> thumper: yes please!
<thumper> babbageclunk: otp now, see PR
<babbageclunk> ok, that's clear - thanks!
<babbageclunk> updating now
<wallyworld> anastasiamac: any chance to look at those PRs? i've tweaked the use of ctx.out in the first one
 * anastasiamac looking
<anastasiamac> wallyworld: almost :) i think it's painfully close... commented
<wallyworld> anastasiamac: ty, will look real soon after meeting
<anastasiamac> nws. m not in rush \o/
<thumper> babbageclunk: https://github.com/juju/juju/pull/10573 wasn't approved, so your merge didn't happen
<thumper> I have approved it now
<wallyworld> anastasiamac: using a struct for args results in different output to the spec (the Arguments header has a ":" instead of free text). And there's still the need to print the action description as free text
<wallyworld> maybe the extra ":" doesn't matter
<anastasiamac> wallyworld: yeah i would not worry about ':"
<anastasiamac> :D
<wallyworld> i also like the extra \n at the end
<wallyworld> otherwise it's hard to see where the output ends and the prompt starts
<anastasiamac> but inconsistent with all other commands, especially show- ones
<wallyworld> the other ones are wrong then :-)
<anastasiamac> haha
<wallyworld> anastasiamac: there's still the matter of some free text being needed for the description. that still requires fprintf
<wallyworld> in which case may as well use fprintf for the "Arguments" header as well
<wallyworld> unless we insist on the use of "Description:"
<anastasiamac> wallyworld: m wondering if u r really thinking this or just not keen to change the PR...:) fwiw m happy with it to land as is but i think it has these rough edges that could have been avoided from the start....
<anastasiamac> wallyworld: i do not think that "DEscription:" is bad :)
<wallyworld> what rough edges? just sticking to the spec
<wallyworld> the spec calls for description as free text
<wallyworld> i can see why that is not a bad thing as it's user readable
<anastasiamac> like too much fmt.Printf directly to stderr and stdout - one of the things i want to tackle next and it'd be (and laready is a nightmare) coz of the rogue direct writes to stderr/stdout
<wallyworld> it just so happens the printing of the action args is "yaml-like"
<anastasiamac> already*
<anastasiamac> want to talk it over rather than type?
<wallyworld> sure
<anastasiamac> stdup?
<wallyworld> ok
<babbageclunk> thumper: ah, right - thanks!
<wallyworld> anastasiamac: https://github.com/juju/juju/pull/10568 should be ok now, has the rename
<anastasiamac> yes i saw the update and m looking already :D
<wallyworld> you are on the ball
<anastasiamac> haha yes u do keep me on my toes :)
<rick_h> hml:  looks like the build came through this time so woot woot
<hml> rick_h: rgr
<achilleasa> hml or stickupkid: can either of you please take a look at https://github.com/juju/packaging/pull/7?
<stickupkid> achilleasa, looking
#juju 2019-08-30
<pmatulis> i rebooted a machine and status shows 'hook failed: "leader-settings-changed"'. how do i get out of that?
<anastasiamac> pmatulis: 'juju resolved <unit-name>', see for 'juju help resolved' for more info
<anastasiamac> pmatulis: u might need to do it for all units on that machine.. altho we r hoping u have a unit/machine
<pmatulis> anastasiamac, hi! awesome. i did it for three units on the machine and everything looks fine now
<anastasiamac> pmatulis: \o/
<pmatulis> anastasiamac, btw, what is an elegant way to simulate a downed unit? nova-compute/1 to be precise
<anastasiamac> pmatulis: i dont know specifics for nova-compute but when i need to have a 'downed' unit, i stop it's machine
<pmatulis> anastasiamac, yeah, that's what got me into trouble :) i neglected the fact that this is a hyperconverged openstack node
<anastasiamac> pmatulis: :)
<pmatulis> i guess i'll go monkey around with the processes in the machine
<anastasiamac> k
<thumper> pmatulis: the best way is to ssh into the machine and stop the unit agent
<pmatulis> thumper, and that will stop the corresponding "service" (e.g. nova-compute)?
<pmatulis> (same as 'systemctl stop nova-compute.service ?)
<thumper> pmatulis: no
<thumper> if you are trying to replicate a workload down
<thumper> then you need to take the workload down, not the agent
<thumper> unless you are trying to replicate a machine down
<pmatulis> thumper, right, that's what i thought
<wallyworld> kelvinliu: lgtm! a few small things before landing. let's get it in and make progress
<kelvinliu> wallyworld: just back from lunch, thx for review,
<wallyworld> no worries
<kelvinli_> hi wallyworld saw some of the comments are not different with the spec, got time HO to discuss further?
<wallyworld> sure
<wallyworld> kelvinli_: forgot to ask - with ken's external-ip issue - is it just sufficient for us to assign a user supplied external ip value passed in at bootstrap time to the corresponding "external-ips" controller service attribute
<kelvinli_> wallyworld: im not sure, need to take a look further
<wallyworld> ok, next week :-)
<kelvinli_> yep
<manadart> Trivial review: https://github.com/juju/juju/pull/10579
<elox>  /msg NickServ identify 1ircpassword
<elox> fantastic passwordchange?
<manadart> achilleasa: I was talking to rick_h about the network/space remodelling work last week and he mentioned you would be in the slot to move on to this soon.
<manadart> This is worth a read, as it is something we are looking into as part of the work: https://discourse.jujucharms.com/t/multiple-space-bindings-per-endpoint/1999
<achilleasa> manadart: thanks for the link!
<stickupkid> achilleasa, if the series isn't valid and we don't ask the user to use force, can they still use force?
<achilleasa> stickupkid: I guess they could but it would still fail right? Could it be a valid series that the client doesn't know of yet?
<stickupkid> achilleasa, so if the client doesn't know about it, we don't either, so in that instance we would need a new release... using force wouldn't help either, as no binaries... but it seems very total.
<achilleasa> stickupkid: I guess we could leave it as-is then. It's highly unlikely that people will try to bootstrap with an invalid series name to begin with, right?
<achilleasa> stickupkid: I pushed a commit to my packaging PR which addresses the review comments. As per John's suggestion I will extract the FromURL method and move it to my upcoming juju PR
<stickupkid> achilleasa, i'll swap you then https://github.com/juju/os/pull/12
<achilleasa> stickupkid: approved
<stickupkid> achilleasa, ta
<stickupkid> achilleasa, i approved yours as well
<stickupkid> good spot about users in urls
<stickupkid> that was a disaster waiting to happen
<achilleasa> stickupkid: that's why we should never log errors :D
<achilleasa> stickupkid: I removed the URL bits. Can you do a final check before I merge?
<stickupkid> achilleasa, yeap, happy with that
<manadart> stickupkid: I responded to your comment in my patch. Take a look when you've the time.
<rick_h> stickupkid:  manadart made a suggestion for the wording. Let me know what you think.
<manadart> rick_h: Works for me; will mod.
<stickupkid> rick_h, yarp, much better
<hml> manadart: i updated the comments for items you had questions in my pr.  pls take a look and see if they make more sense
<manadart> hml: Thanks. All looks good.
<magicaltrout> hello fine people
<magicaltrout> i need to bootstrap a kubernetes cloud
<magicaltrout> and its been a while and i'm stuck
<magicaltrout> rick_h: wake up! ;)
<stickupkid> magicaltrout, think he's out atm, where you stuck?
<magicaltrout> just trying to figure out the bootstrap docs stickupkid
<magicaltrout> https://paste.ubuntu.com/p/MHvTVQVyyd/
<magicaltrout> so anyway
<magicaltrout> i have a k8s cluster with not much in it, few namespaces and a couple of pods and i need to write some k8s charms
<magicaltrout> so i'm trying to bootstrap it
<magicaltrout> works using kubectl on the same box as juju
<magicaltrout> but i get that, but i can't find any docs telling me what blanks I should be filling in there
<magicaltrout> for kubernetes spun up from juju, what is the cloud name/type/region blah
<magicaltrout> I just get told its wrong when I guess
<stickupkid> magicaltrout, is it a local one?
<stickupkid> magicaltrout, "juju add-k8s kubernetes --local" would work in that case
<magicaltrout> well thats the other thing, i saw local in the help.. i have zero clue what local refers to
<magicaltrout> local to what?
<magicaltrout> oh like "If you just
<magicaltrout> want to update the local cache and not a running controller, use
<magicaltrout> the --local option."
<stickupkid> magicaltrout, yeah
<magicaltrout> well I have a controller, is this not the generic controller?
<magicaltrout> like its on a box which already has a juju connected to an openstack cloud...
<magicaltrout> don't i just use that controller?
<magicaltrout> its not clear
<pmatulis> magicaltrout, generally you add a cloud to your local client
<magicaltrout> consider me well confused... we always use controllers and now i'm being told to add stuff to a local client...
<tvansteenburgh> magicaltrout: it sounds like you're trying to do the bootstrap step before you do the `juju add-k8s` step
<magicaltrout> no i'm running the juju add-k8s step tvansteenburgh
<magicaltrout> infact
<magicaltrout> local gives me the same error
<magicaltrout> i still dont' know what to type! :)
<tvansteenburgh> And I don't know what you've already typed! :)
<magicaltrout> https://paste.ubuntu.com/p/MXkSKsmQgH/
<magicaltrout> that was my last guess cause stickupkid told me to do --local
<pmatulis> magicaltrout, just curious, did you look over any of the documentation? maybe that stuff needs improving
<magicaltrout> on that box, juju status, shows my bootstrapped controller and kubectl get namespaces runs fine
<magicaltrout> pmatulis: https://discourse.jujucharms.com/t/using-kubernetes-with-juju/1090
<magicaltrout> i have this page open
<magicaltrout> i have run  juju add-k8s --help
<magicaltrout> and i'm absolutely none the wiser
<tvansteenburgh> magicaltrout: Pipe your kubeconfig to add-k8s
<magicaltrout> i've tried a few different methods of getting kubectl in
<magicaltrout> but i also know its reading it cause i messed the file up
<magicaltrout> and it failed with another error message
<magicaltrout> https://asciinema.org/a/X2LWuBEpolBYYvoT79JFT8zwm
<magicaltrout> i mean, it all seems to be working, so i'm clearly missing something dumb but its not obvious in the docs
<magicaltrout> i mean
<magicaltrout> if i wanted to fill out the cloud and region
<magicaltrout> what the hell goes in them?
<magicaltrout> I don't see any examples anywhere
<achilleasa> stickupkid: I am trying to find a place in the juju code-base to add the "get snap store assertions" helper. Any ideas? "snap/assertions" would be great but "snap" is used for the snapcraft bits
<achilleasa> (there is also a service/snap which doesn't seem right)
<stickupkid> achilleasa, do they reference any of the juju/juju code base, if not, then core/snap/assertions?
<stickupkid> although core should be renamed to pkg or internal :)
<achilleasa> stickupkid: no, it's just the bit that I removed from the juju/packaging PR. Ok, I will put it there for now...
<magicaltrout> jeez
<magicaltrout> it actually added something
<magicaltrout> so tvansteenburgh to get it to add the cloud i had to run
<magicaltrout> juju add-k8s k8s-test-cloud --debug --region openstack/RegionOne --storage openstack-standard
<pmatulis> https://discourse.jujucharms.com/t/tutorial-installing-kubernetes-with-cdk-and-using-auto-configured-storage/1469#heading--adding-the-cluster-to-juju
<pmatulis> magicaltrout, ^^^
<pmatulis> also: https://bugs.launchpad.net/juju/+bug/1830949
<mup> Bug #1830949: [k8s] add-k8s command has ambigious UX <usability> <juju:Fix Committed by anastasia-macmood> <https://launchpad.net/bugs/1830949>
<magicaltrout> ah yeah that tutorial has the region in as well pmatulis yeah, i picked that up a few minutes ago from another tutorial
<magicaltrout> trying to bootstrap now thanks
<rick_h> magicaltrout: geeze, had to take the dog to the vet. Sorry :P
<rick_h> magicaltrout:  you get going?
<magicaltrout> no probs rick_h
<magicaltrout> the thing seems to be trying to bootstrap
<rick_h> magicaltrout:  that's a good thing
<magicaltrout> that add-k8s command is a mindfuck
<rick_h> lol, a little bit
<rick_h> magicaltrout:  going to get some food, but feedback/etc in discourse is helpful for sure.
<magicaltrout> I think the issue pmatulis linked to in launchpad captures the problem pretty well
<rick_h> I'm hoping we can get to a point that add-cloud/add-k8s are pretty much the same walk through of stuff vs the different worlds they have now.
<rick_h> magicaltrout:  yea
<magicaltrout> but if i hit more I'll bring it up. I need to get some k8s charms written for Druid
<magicaltrout> so I'm sure i'll hit some more fun
<magicaltrout> epic, bootstrap came to life
<magicaltrout> thanks folks
<magicaltrout> random k8s charm question
<magicaltrout>       username: %(docker_image_username)s
<magicaltrout>       password: %(docker_image_password)s
<magicaltrout> what are they when they're at home?
<magicaltrout> like, don't you push the docker image as a resource, so what does it relate to?
<magicaltrout> image_info.registry_path..
<magicaltrout> etc. is that just some juju thing then?
<rick_h> magicaltrout:  hmm, not seeing that in the spec definition. Is that something image specific?
<magicaltrout> na its in all of them... it looks like its how juju authenticates with the juju docker repo I guess
<rick_h> oh maybe
<magicaltrout> well when I say all of them
<magicaltrout> I mean mediawiki and mariadb
<magicaltrout> my sample pool :P
<magicaltrout> okay basic charm works
<magicaltrout> thats pretty cool
<magicaltrout> deployment:
<magicaltrout>   type: stateless | stateful
<magicaltrout>   service: cluster | loadbalancer | external
<magicaltrout> anyone know where that goes? to get a loadbalancer IP?
<magicaltrout> it seems to suggest it goes in metadata.yaml
<magicaltrout> "Charm metadata syntax looks like this"
<magicaltrout> but if I stick it in there charm build tells me to get lost
<magicaltrout> proof: E: Unknown root metadata field (deployment)
<magicaltrout> ah thats just a proof thing
<magicaltrout> it still deploys and does what it claims
<magicaltrout> good stuff
#juju 2019-09-01
<atdprhs> hello everyone, i changed the server's ip, but juju bootstrap doesn't work anymore
<atdprhs> I keep getting "Unable to allocate static IP due to address exhaustion."
<atdprhs> but maas kvm can commission new machines just fine
<atdprhs> I think juju is using the wrong subnet, is there a way that I can tell juju to use a specific ip when bootstraping?
<thumper> fix for peer grouper test https://github.com/juju/juju/pull/10581
<anastasiamac> lgtm thumper
#juju 2020-08-24
<stickupkid> manadart, should I be able to upgrade a controller whilst doing a upgrade-series of a machine?
<stickupkid> seems pretty risky
<manadart> stickupkid: Hmm probably not.
<stickupkid> manadart, the sticky situation is that it makes it harder to iterate on the feature... i.e. fix something on the controller side and run upgrade-controller
<stickupkid> manadart, maybe force can tell it, it's fine and yolo it
<manadart> stickupkid: Yeah, OK.
<icey> hey, anybody seen something like https://pastebin.ubuntu.com/p/HTSFzH68mJ/ recently?
<icey> I can't deploy things with spaces on maas suddenly :)
<manadart> icey: Juju version?
<icey> manadart: as noted in the pastebin: version: 2.8.2
<icey> manadart: client is 2.8.1-groovy-amd64
<manadart> icey: OTP at the mo', but definitely want to look at this with you. 10 mins?
<icey> manadart: 10 minutes if brief, otherwise I'd rather lunch first :)
<icey> manadart: it's managed to hang around since mid last week, doubt it'll stop happening this moment :)
<manadart> icey: OK, go ahead. Ping me when you have a little time.
<icey> +1
<stickupkid> manadart, CR - https://github.com/juju/juju/pull/11929
<stickupkid> manadart, another ho when you've got 5...
<manadart> stickupkid: In daily.
<stickupkid> manadart, I know why, it's because most of upgrade stuff is on client/client facade, which we shouldn't touch
<stickupkid> manadart, I'll make a new facade
<stickupkid> manadart, thinking about using modelmanager actually... ValidateModelUpgrade
<icey> manadart: ok - I'm now well fed :-D
<manadart> icey: Righto, this will be faster: https://meet.google.com/pxd-zjad-bgh
<manadart> icey: Ping.
<icey> manadart: pong
<manadart> icey: I did some playing around here, and it seems that if you delete the documents from `toolsmetadata`, juju will re-fetch the agent binary from streams.
<icey> oooh nifty
<manadart> icey: The only downside is that the old one will still be in the blobstore as an orphan.
<manadart> If you cared, you could locate and delete it to save a few MBs.
<icey> manadart: so... db.delete('toolsmetadata') or something?
<icey> manadart: hahaha yeah, no
<manadart> `db.toolsmetadata.deleteMany({})`
<icey> { "acknowledged" : true, "deletedCount" : 4 }
<icey> :-D
<manadart> icey: Ah bugger. It still won't upgrade the controller. One sec.
<icey> manadart: I see that
<manadart> icey: Looks like the only thing for it would be to jump on the controllers and curl/wget it from `https://streams.canonical.com/juju/tools/agent/2.8.2/juju-2.8.2-ubuntu-amd64.tgz`
<icey> manadart: that's fine :)
<icey> where should I put that after untarring it?
<manadart> /var/lib/juju/tools/2.8.2-{series}-amd64 should have the jujud/jujuc currently running.
<manadart> That *should* do it - the controllers have the correct binary, and new machines will cause the controller to get the new (correct) one when not found it toolsmetadata.
<manadart> *in toolsmetadata.
<manadart> I have to EoD. Let us know how you get on.
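Pulling manadart's steps together, a minimal sketch of the manual fix (version/series/arch values are the ones from this thread; the actual curl/tar would be run on each controller, shown here only as comments):

```shell
ver=2.8.2
series=focal
arch=amd64

# The streams URL manadart quoted, and the tools dir the binary belongs in.
url="https://streams.canonical.com/juju/tools/agent/${ver}/juju-${ver}-ubuntu-${arch}.tgz"
dest="/var/lib/juju/tools/${ver}-${series}-${arch}"

echo "$url"
echo "$dest"

# On each controller, roughly (not run here):
#   curl -sO "$url"
#   tar -xzf "juju-${ver}-ubuntu-${arch}.tgz" -C "$dest"
```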
<icey> manadart: pushing the binaries around now, hopefully done and working shortly :)
<icey> geez manadart - is there an easy way to replace these binaries? it's annoyed that the binary is running :-P
<icey> manadart: after a slightly big hammer to upgrade the controllers: "0/lxd/0   pending                           pending        focal          starting"
<stickupkid> hml, going to review your PR now, been fighting with error messages
<icey> drat manadart - 0/lxd/0   down                                             pending               focal          host machine "0" has no available device in space(s) "ceph-access-space", "ceph-replica-space", "public-space"
<icey> I'll look again in the AM
<hml> stickupkid: added comments to: https://github.com/juju/juju/pull/11928
<stickupkid> hml, fixed
<hml> stickupkid:  ack
<hml> stickupkid: approved
<stickupkid> hml, ta
#juju 2020-08-25
<kelvinliu> hpidcock:  wallyworld_: https://github.com/juju/juju/pull/11932 this PR sync k8s-spike with dev, +1 plz
<wallyworld_> looking
<wallyworld_> kelvinliu: +1
<kelvinliu> ty
<kelvinliu> wallyworld_: hpidcock and here is the 1st PR for adding the api layer for k8s provider https://github.com/juju/juju/pull/11933
<stickupkid> manadart, I changed the PR, can you review again
<manadart> stickupkid: Stand by.
<stickupkid> https://github.com/juju/juju/pull/11929
<stickupkid> it's a pretty drastic change, but I think we should do this more often
<icey> hey manadart: it gets more awesome: https://pastebin.ubuntu.com/p/PSQJ95XWzd/
<icey> one machine managed to get the spaces correct, but the rest are failing :)
<stickupkid> wow
<stickupkid> ha
<icey> it looks a bit more mixed actually, there are some containers on machine 2 that did come up, but a lot that have missing spaces errors
<manadart> icey: Hmm, and those container hosts were provisioned fresh after the work to replace the agent binaries?
<icey> manadart: they're all in a bundle I deployed this morning
<manadart> icey: Can you confirm that the good ones have the br-{nic} device (according to Juju) and the failed ones are missing it?
<icey> and I did a `db.toolsmetadata.find({})` search before deploying, got no results
<icey> manadart: so, even just looking at juju status --format=yaml, there are some strange differences :)
<icey> manadart: https://pastebin.ubuntu.com/p/gSNWxcKtQc/
<icey> machine 0 has all containers up
<icey> machine 2 is mostly down
<icey> and there are br-{nic} entries, but not br-{nic}-{vlan} (mostly) in linklayerdevices on machine 2
<manadart> icey: What about the tools metadata after deploying?
<icey> manadart: db.toolsmetadata.find({}) returns nothing
<stickupkid> manadart, any idea what the restrictions are for upgrading a model/controller?
<stickupkid> manadart, i.e. do you need to be a super user/admin user?
<manadart> stickupkid: For upgrade-model you need super-user and write.
<stickupkid> wicked, did it right :)
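manadart's answer in command form, a minimal sketch (the user and model names are placeholders, not from this thread):

```shell
user=alice      # hypothetical user
model=mymodel   # hypothetical model

# Controller-level superuser access, plus model-level write access.
grant_controller="juju grant ${user} superuser"
grant_model="juju grant ${user} write ${model}"

echo "$grant_controller"
echo "$grant_model"
```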
<manadart> icey: I think I have it. Got a sec? https://meet.google.com/fgr-xbog-nyb
<bthomas> kubectl shows that the juju init container (juju-pod-init) is in status Running, and the charm-specific container is in status PodInitializing. I was under the impression that the init container must run to completion. If this is correct, how can I find out why the init container is persistently in the Running state? I have checked the init container log already.
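(Not an answer from the channel, but a hedged sketch of the usual way to dig into a stuck init container; the pod and namespace names are placeholders:)

```shell
pod=myapp-operator-0   # placeholder pod name
ns=mymodel             # placeholder namespace (the model name)

# `describe` surfaces pod events, which often show why init never finishes;
# `logs -c` targets the init container specifically.
echo "kubectl -n ${ns} describe pod ${pod}"
echo "kubectl -n ${ns} logs ${pod} -c juju-pod-init -f"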
<icey> manadart: in the end, one of the machines (2) had containers that wouldn't start
<manadart> icey: OK, can you get me the machine log for it?
<manadart> stickupkid: https://github.com/juju/juju/pull/11934
<icey> sent via PM as it has some IPs in it :)
<stickupkid> manadart, is that it, SupportsSpaces == true
<stickupkid> manadart, ah, SSHAddresses
<manadart> That's it.
<stickupkid> I'll do the Q&A in a bit...
<stickupkid> manadart, I'm going to be annoying and say I want an integration test for this
<manadart> stickupkid: Fair enough. I'll do another card.
<qthepirate> Hello everyone!
<qthepirate> Having an issue with a cached IP address in juju. Essentially: My charm (nova-cloud-controller) keeps pointing to an IP address of an OLD percona charm. The old charm was removed AND all relations/applications are gone from it. I then removed N-C-C charm and redeployed it and its STILL pulling the old IP address.
<qthepirate> Is there a way to clear/refresh the metadata-cache in the juju controller?
<mirek186> has anyone experienced odd juju ssh timeouts/hangs?
<mirek186> for some charms it's good every single time; for others like keystone or nova-cloud-controller juju will ssh in and then after a few minutes hang and then time out
<qthepirate> is there a way to clear out the metadata-cache on a juju controller?
<qthepirate> I think its holding on to a variable that needs to be updated
<wallyworld> qthepirate: you may want to ask in #openstack (which is where the openstack charming folks tend to hang out) as this sounds like an openstack charming question. the charms themselves are responsible for passing around data about how the deployment is set up. you could also ask on https://discourse.juju.is and openstack charming folks can better see the post there
<qthepirate> wallyworld: I would agree, but the issue i've chased down leads to this config line: destinations=metadata-cache://jujuCluster/?role=PRIMARY
<wallyworld> that's not a juju config item though right? that looks like charm config?
<qthepirate> right, its in an app .conf file, but its looking to the juju cluster for an item
<wallyworld> in the context of juju itself, juju knows nothing about "jujuCluster" - it seems like that's something the charms manage
<qthepirate> wallyworld: Thanks for letting me bounce it off you. I checked the logs and its definitely something else. Still trying to track down this error
<wallyworld> qthepirate: no worries, sorry if i've been a bit vague as the openstack charms are not something i know a lot about
<thumper> hpidcock: https://github.com/juju/collections/compare/master...howbazaar:use-base-testing
<hpidcock> thumper: looking good :)
<hpidcock> thumper: and love the use of testing.T subtests
<qthepirate> wallyworld: do you know anything about juju not releasing unused ip addresses?
#juju 2020-08-26
<stickupkid> manadart, for the validate upgrade model, I did originally make it bulk API, but I'm unsure that's a wise decision as it's going to do a lot per model... thoughts?
<stickupkid> manadart, I know we prefer bulk APIs, but should we?
<manadart> stickupkid: Realistically it's only ever going to be called with one right? If it's already bulk, just keep the pattern I guess.
<stickupkid> sure
<stickupkid> manadart, this error message is the most annoying message when testing upgrade series `ERROR unit ubuntu-lite/0 is not ready to start a series upgrade; its agent status is: "allocating"`
<stickupkid> argh, i don't care, just let me do it
<stickupkid> manadart, spot the error here
<stickupkid> https://paste.ubuntu.com/p/TcJFND8bqg/
<stickupkid> winner gets beer at the next sprint
<stickupkid> I've just lost about 1 hour on that
<stickupkid> that's it, getting a drink
<manadart> stickupkid: stateShim by val, so machines have a copy of the shim?
<stickupkid> manadart, line 2 of the paste, it's a recursive call to itself
<manadart> stickupkid: Haha, yeah, done that.
<stickupkid> manadart, everything was bouncing, like everything! no panics, no logs, nothing
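The bug stickupkid describes, as a minimal language-agnostic sketch (the function names are hypothetical): a wrapper that calls itself instead of delegating to the thing it wraps, next to the fixed delegation:

```shell
# Buggy shim: calls itself, so it recurses forever -- everything bounces
# with no panic and no logs, just as described above.
machines_buggy() {
    machines_buggy "$@"       # BUG: should call the backing implementation
}

# Fixed shim: delegate to the wrapped implementation instead.
state_machines() { echo "machine-0 machine-1"; }   # stand-in for the real backing call
machines_fixed() {
    state_machines "$@"
}

machines_fixed
```

(The buggy version is defined but deliberately never called here.)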
<stickupkid> manadart, is there a way to know if a model is a controller from a UUID, or is there some method?
<manadart> stickupkid: model.IsControllerModel()
<manadart> Oh, from a UUID.
<stickupkid_> nice, internet bounced
<jam> petevg: fortunately I was still Operator on this channel. Is there another bit / flag to be updated?
<petevg> jam: I think that's it. Thx!
#juju 2020-08-27
<wallyworld> hpidcock: tiny PR to fix a build number issue https://github.com/juju/juju/pull/11939
<hpidcock> @wallyworld looking
<hpidcock> @wallyworld approved
<wallyworld> ty
<hpidcock> kelvinliu: tlm: https://github.com/juju/juju/pull/11940
<kelvinliu> looking
<kelvinliu> hpidcock: lgtm ty
<hpidcock> kelvinliu or tlm https://github.com/juju/juju/pull/11941
<kelvinliu> i will leave this one to tlm since im reviewing his pr. :)
<hpidcock> kelvinliu: ð
<tlm> looking now, thanks kelvinliu
<tlm> lgtm hpidcock
<kelvinliu> tlm: just left a few comments. ty
<tlm> ta, look in a sec
<jam> morning all
<stickupkid> CR for https://github.com/juju/juju/pull/11936
<stickupkid> manadart, upgrade-series validate ^
<manadart> stickupkid: OK to do it first thing? Need to get home and take the kids. This should fix the NIC update problem too: https://github.com/juju/juju/pull/11943
<stickupkid> manadart, take your time
<manadart> Might actually make things a little faster when landing a bunch of containers.
<stickupkid> can we get icey to test this patch?
<icey> stickupkid: if you can get me a built jujud soon, I can even test it today - otherwise, tomorrow morning
 * icey doesn't know how to build go things
<manadart> icey: This one poses an issue the other way around from how we've been doing it - this is needed agent-side rather than controller-side.
<icey> manadart: ah - super fun
<manadart> So we have to do a --build-agent, or get it into a stream.
<icey> does the --build-agent have to be at bootstrap time?
<stickupkid> you can do it for upgrade-controller
<icey> (alternately, streams somewhere sounds nice :-P )
<manadart> I'll see what I can come up with tomorrow.
<stickupkid> or you could totally do some upload tools thing I bet
<stickupkid> and bounce the controller
<icey> ok - tomorrow AM works for me too
<icey> whatever works, I'm probably 20 minutes from EoD
<manadart> o/
<stickupkid> hml, approved
<hml> stickupkid: ta
#juju 2020-08-28
<kelvinliu> wallyworld: do u remember why we have the uniqID in statefulset annotation for naming storage?
<kelvinliu> is it because PVs are global?
<wallyworld> kelvinliu: so that if a pod gets destroyed and a new one comes up, we know which pvc to reattach. i think, that's from memory
<kelvinliu> wallyworld: for statefulset, we only manage the pvc template but not the pvc, so storage config is always at statefulset level rather than pod level
<wallyworld> yeah, there's some slightly different reason. it may be to do with allowing juju storage model to correctly tie the juju bit to the k8s bit. i'll look at the code and see if i can remember
<kelvinliu> wallyworld: the application package does have that special storage-handling logic, I am trying to understand if all of it is necessary for the new stuff.
<kelvinliu> wallyworld: I will probably just start from the simple version and land what we have, then iterate and enhance later as we need?
<wallyworld> sure, we can iterate
<icey> Morning manadart
<manadart> Morning icey.
<stickupkid> manadart, HO?
<Chipaca> status-set is immediate, yes?
<Chipaca> or is it only applied when a hook finishes successfully?
<stickupkid> manadart, before you leave, HO?
<jam> Chipaca: just to look like we're responsive... :) status-set is immediate
<Chipaca> jam: :) thanks
