#juju 2012-06-04
<mwak> hi
<jcastro> m_3: ping me when you're around!
<m_3> jcastro: yo
<m_3> greetings from Utah!
<benji> I wasted a little time the other day because a relation was in an error state and I didn't notice it in the juju status output.  That got me thinking about colorizing the output to make errors pass less silently.  If that functionality is desirable, I'll cook up a branch.
<benji> I suspect it will be off by default and those who want it on will use an alias to enable it.
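A minimal sketch of the kind of filter benji is describing, assuming an ANSI-capable terminal; the `colorize` name and the sample input are made up, and a real wrapper would pipe `juju status` through it.

```shell
# colorize: wrap any line containing "error" in red ANSI escapes and pass
# everything else through unchanged. A wrapper (or alias) could then do
# `juju status | colorize`; the printf below stands in for real status output.
colorize() {
    esc=$(printf '\033')
    sed -e "s/^\(.*error.*\)\$/${esc}[31m\1${esc}[0m/"
}

printf 'wordpress/0: started\nmysql/0: error\n' | colorize
```

Keeping it as a pipe filter also sidesteps hazmat's emacs-shell concern: escape codes only appear when you opt into the wrapper.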
<hazmat> benji, that sounds nice.. it'd be nice to start to have it as a juju-jitsu command..
<hazmat> benji, i always run things in emacs shells and escape codes cause foobar... but i'm probably one of a small few in that regard
<benji> hazmat: not having used juju-jitsu, what is the attraction of having that functionality there?  Is it a proving ground for inclusion in juju-proper?
<hazmat> benji, i think of it as such
<benji> yeah, as much as I like colorized output, making it the default is not really feasible
<hazmat> benji, the primary benefit of jitsu is it needs no coordination or acceptance, its very JFDI friendly
<hazmat> benji, if you want to just do it core, that also sounds good, just drop an email proposal to the list.
<benji> I'll think about which would be the best approach.  Thanks for the input!
<jcastro> jamespage: you're going to europython right?
<jcastro> jimbaker: there's nothing left to do for the Usenix config management thing other than for you to go and execute right?
<jimbaker> jcastro, correct
<jcastro> mramm: ^^^ I think you might both be there
<JoseeAntonioR> jcastro: ping me once you're free, I'll be here for 3 more hours
<zooko> Hi folks!
<zooko> Please suggest a charm that I can copy and edit in order to create a charm for Tahoe-LAFS.
<zooko> There is one distinguished coordinating server plus any number of other servers.
<zooko> The other servers can come and go without any response being required on anyone's part, so there's no configuration change or anything required to add or subtract servers.
<zooko> So, it ought to be pretty simple.
<jcastro> jimbaker: ^^^
<jimbaker> zooko, you can browse the charms at http://jujucharms.com/charms - off the top of my head, i don't remember which ones have implemented a leader election (via a peer relationship) but they're out there. there's a charm helper for that
<jimbaker> zooko, want to get together for coffee today?
<jimbaker> zooko, probably the cassandra charm is a good place to start, see for example, http://jujucharms.com/charms/precise/cassandra/hooks/cluster-relation-joined
<hazmat> there's a known issue with smaller unit sequence as leader
<jimbaker> hazmat, so is there a bug in the cassandra charm?
<marcoceppi> the peer.sh is still experimental as it's hard to guarantee that the leader will always exist based on lowest unit number
<hazmat> jimbaker, yes, but its timing dependent
<hazmat> if you add one unit, wait a while, then add more, its fine
<hazmat> but if you have a bunch coming up at once for the initial deploy -n 5 for example
<hazmat> then they may come up and not see a stable leader till unit/1 shows up
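The lowest-unit-sequence convention under discussion can be sketched as a shell function; in a real peer-relation hook the unit list would come from `relation-list` and the local name from `$JUJU_UNIT_NAME`, but here they are plain arguments so the sketch runs standalone.

```shell
# am_i_leader LOCAL_UNIT PEER_UNIT...: succeed when the local unit has the
# lowest sequence number among the units given. Unit names look like
# "service/N"; strip the service prefix and sort numerically.
am_i_leader() {
    me="$1"
    lowest=$(printf '%s\n' "$@" | sed 's|.*/||' | sort -n | head -n 1)
    [ "${me##*/}" = "$lowest" ]
}

# The race hazmat describes: with `deploy -n 5`, a unit may briefly see an
# incomplete peer list, so a higher-numbered unit can look like the leader
# until the lower-numbered units join the relation.
am_i_leader cassandra/0 cassandra/1 cassandra/3 && echo "cassandra/0 leads"
```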
<zooko> jimbaker: thanks!
<zooko> jimbaker: it is simpler than that -- there is no leader election. The distinguished server ("introducer") is a different type of server from the others and is created specifically by the human.
<zooko> jimbaker: I'd love to! But, I can't, because I'm responsible for three boys while I work today and all this week. :-)
<jimbaker> zooko, that makes things easier
<zooko> And, I need to pack us all up for a two-month trip to Canada.
<zooko> Yes, I think Tahoe-LAFS should be fairly simple to charmify.
<zooko> You create one introducer, then you get a string from it which identifies it and you configure any number of storage servers, giving them that string.
<zooko> That's almost all that needs to be done.
<jimbaker> zooko, sounds straightforward
<jcastro> lynxman: you're set for reviews on Wednesday (reminder)
<negronjl> 'morning all
<jcastro> hi!
<jcastro> SpamapS: around?
<m_3> negronjl: yo!
<negronjl> m_3: 'morning
<surgemcgee> How can I get my unit out of debug mode? The debug-hooks shell quit after prolonged inactivity. Now it remains in its perma-debug state. Blah
<SpamapS> surgemcgee: kill the shell under tmux
<jcastro> imbrandon: I appreciate you taking a screenshot of the HP account #'s instead of something I could copy and paste. :p
<imbrandon> jcastro: i totally thought i sent both, one was a joke
<imbrandon> heh
<imbrandon> i'm sorry if i did not send the other right behind it, twas my intention
<imbrandon> heh
<jcastro> it's ok, if some other random person with a close enough digit gets free HP cloud instead it's not my fault. :)
<imbrandon> was supposed to look like yours on the wiki :)
<adam_g> is it currently not possible to push to / create quantal charm branches? ie, lp:~gandelman-a/charms/quantal/mysql/testing ?
<vila> adam_g: I ran into the same issue last week. Got no definitive answer. Let me know if you learn more (I'm not there, I'm sleeping ;)
#juju 2012-06-05
<surgemcgee> =8>
<surgemcgee> just checking.. and stretching my legs.
<m_3> SpamapS: ^^ (quantal branches) I couldn't push to quantal branches either... bzr: ERROR: Permission denied: "~mark-mims/charms/quantal/junk/trunk/": : No such distribution series: 'quantal'.
<m_3> adam_g: give that another try when you get a chance
<m_3> SpamapS: please double-check https://launchpad.net/charms/quantal when you get a chance
<adam_g> m_3: looks like it works now. :)
<m_3> adam_g: cool thanks
<m_3> we hadn't actually created a distro series for quantal yet
<imbrandon> m_3: you catch my note about the github svn + bzr svn interface ?
<m_3> imbrandon: yup... I'd have to try it out
<imbrandon> cool cool, me too, i just tried it enough to see if it was r/o or r/w
<imbrandon> once i saw it was r/w i was hmmmm i bet the bzr-svn and git-svn tools are much more mature :)
<imbrandon> but its just for $sometime
<m_3> cool
<m_3> yeah, that's a great idea if we keep up the gitpad mirror
<imbrandon> yea and the url is the same as the git https one except it drops the .git from the end
<imbrandon> for co etc
 * zooko looks at https://juju.ubuntu.com/docs/write-charm.html
<zooko> "Assuming the current directory is the juju trunk"
<zooko> Hm... does that mean I need to check out some repository first?
<zooko> Perhaps https://code.launchpad.net/charms
<marcoceppi> zooko: You don't need to checkout any repositories to start writing charms
<marcoceppi> That tutorial is a little...out of date.
<zooko> Where should I start?
<marcoceppi> Well, it's not that bad actually
<zooko> Should I go ahead and check out https://code.launchpad.net/charms?
<marcoceppi> that wording is just a little confusing, it's supposed to be the charm trunk; so for instance you want to create a directory like ~/charms, then inside that directory create the series directory (precise), then the name of the charm.
<marcoceppi> so, pretty much if you're making a "foo" charm, mkdir -p ~/charms/precise/foo
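marcoceppi's layout in full, with the minimal files a charm needs; everything below is a placeholder skeleton for the hypothetical "foo" charm, using a temp dir instead of ~/charms so the sketch is side-effect free.

```shell
# Skeleton for a hypothetical "foo" charm in the precise series. A real
# run would use ~/charms as the repository root; mktemp keeps this
# self-contained.
repo=$(mktemp -d)
mkdir -p "$repo/precise/foo/hooks"
cd "$repo/precise/foo"

# metadata.yaml is the one strictly required file; values are placeholders.
cat > metadata.yaml <<'EOF'
name: foo
summary: One-line summary of the foo service.
description: |
  Longer description of what foo provides and how to use it.
EOF

# A stub install hook (the package name is made up).
cat > hooks/install <<'EOF'
#!/bin/sh
set -e
apt-get -y install foo
EOF
chmod +x hooks/install
```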
<zooko> Ok.
<zooko> :-)
<zooko> Thanks.
<zooko> marcoceppi: what's an example of a good README I should emulate?
<marcoceppi> zooko: there are a few, you can take a look at http://jujucharms.com/charms/precise/hadoop
<surgemcgee> Ohh, ohh, me me!
<zooko> surgemcgee: you you what? You wrote a good README? :-)
<zooko> Should I list Python as a "requires" for Tahoe-LAFS?
<zooko> And I wonder if I should "provide" anything?
<surgemcgee> zooko: I am pretty sure, it is here in my branch -->  http://bazaar.launchpad.net/~robertsteckroth/charms/precise/django-rainbow/trunk/files
<zooko> Having written his README, zooko returns to https://juju.ubuntu.com/docs/write-charm.html
<zooko> Errrrmm... now I get to choose bash or Python for my install script.
<zooko> Hm.
<zooko> Well, that was easy.
<zooko> http://codepad.org/hy5Hh7BI
<zooko> Hm, I don't understand in which script I should do the creation of the state.
<zooko> Time for some sleep anyway.
<koolhead17> zooko: gn :)
<zooko> :-)
<lynxman> jcastro: roger that :)
<jcastro> http://news.ycombinator.com/item?id=4069164
<jcastro> woo!
<jcastro> Here's what m_3's been working on for a day or so
<jcastro> well, the post was only that long, the rest was a bunch of hard work. :)
<negronjl> 'morning all
<hazmat> negronjl, g'morning
<negronjl> 'morning hazmat
<hazmat> just noticing it's rather silly to distinguish tcp/udp in open/close port
<jcastro> Only 3 upvotes, come on people, we need to get on the front page to share this with the world! http://news.ycombinator.com/item?id=4069164
<mramm> Up to 7 upvotes now, but could use a few more;) http://news.ycombinator.com/item?id=4069164
<negronjl> jcastro:  how do you vote ?
<jcastro> sign up
<jcastro> and then click the little arrow to the left of the title
<m_3> negronjl: btw, please check the article for stupidity :)
<negronjl> m_3:  I will ... but I doubt I will find any :)
<jimbaker> i rather liked the article (which i upvoted), it gives great insight into why this was an interesting if preliminary experiment
<m_3> jimbaker: thanks... yeah, I think it's a good story
<m_3> and the results are pretty much expected
<lynxman> jcastro: hey, I had on my calendar charm review on Thursday, was it moved?
<jcastro> you're the only one this week, so however you want to slice it is fine
<jcastro> I don't think anyone moved it
<lynxman> jcastro: k :)
<lynxman> jcastro: any official guide for review or just go with my gut?
<lynxman> (just in case)
<jcastro> https://juju.ubuntu.com/Charms
<jcastro> yeah
<jcastro> look in "Reviewers"
<lynxman> jcastro: lovely, thanks :9
<jcastro> marcoceppi: wotcha up to right now
<marcoceppi> jcastro: sitting through a boring meeting, what's up?
<jcastro> spdy is default in firefox13
<marcoceppi> booooya
<jcastro> clint's charm is basically done, I think it needs a review and promulgation?
<marcoceppi> I'll take a look
<jcastro> maybe we can whine to ben to fix that subordinate opening a port bug
<_mup_> Bug #1009221 was filed: Adds format: 2 support to juju, while maintaining backwards compatibility for old charms. <juju:In Progress by jimbaker> < https://launchpad.net/bugs/1009221 >
<RaptorsAtatck> heya, so first off juju is awesome (yeah you guys already know).  I'm looking for a way to manage what charms are installed on systems and see network diagrams. I was considering using mcollective for this but it seems like something that would be already solved by someone else. Any ideas?
<james_w> RaptorsAtatck, "juju status --format=dot" might interest you
<james_w> I'm not sure what you mean by "what charms are installed on systems" though
<RaptorsAtatck> it's the services/units bit that juju already handles
<RaptorsAtatck> so never mind on that part ;)
<james_w> ok :-)
<RaptorsAtatck> reading into the docs while trying to poke holes
<james_w> the status output will draw a graph of the services/units you have and the relations between them
<RaptorsAtatck> gah love the relationship hooks.
<RaptorsAtatck> configuration management has been a pain, even with puppet.
<RaptorsAtatck> hey so I have an environment deployed and configured how I want it with the appropriate relationships. Is there a way to take a snapshot of this and deploy a mirror of this to a new environment?
<surgemcgee> A simple bash script can do that, no need to go higher than juju already is.
<RaptorsAtatck> yeah, but maintaining that will suck
<RaptorsAtatck> So, going to make a simple script that will take the output of the status and rebuild that in the selected environment
<RaptorsAtatck> then I can just check status outputs in when I'm happy with how they look
<RaptorsAtatck> oh man, can take that idea/concept a lot farther.
<surgemcgee> Not on a professional/production level
<surgemcgee> Although, I destroyed my environment like 20 times (don't tell amazon. ha).
<zooko> If somebody else will pay for the EC2 fees, I'll measure how Tahoe-LAFS behaves with 5000 servers and write a blog post about it. :-)
<RaptorsAtatck> well its actually for managing private clouds
<RaptorsAtatck> a common issue I've run into is developers needing complex environments setup
<RaptorsAtatck> so we normally make a VM image w/ puppet/mcollective and then update those as needed
<RaptorsAtatck> huge pain in the ass when the developers change things
<RaptorsAtatck> this way we can keep a snapshot of what a developer environment looks like and just execute that as needed
<RaptorsAtatck> (the developer that is)
<surgemcgee> Sounds like high level software development. Most web projects can easily re-configure and deploy to some user-friendly cloud like App Engine.
<RaptorsAtatck> yeah,  many people on the project ;)
#juju 2012-06-06
<zooko> Oookay, so should my juju scripts inform one node of the existence of another node in "install", or in "start", or in some other script?
<zooko> The example in here: https://juju.ubuntu.com/docs/write-charm.html does it in db-relation-changed, but I don't exactly see how .. what would be analogous for my software.
<zooko> Oh, I think I see it.
<zooko> From https://juju.ubuntu.com/docs/charm.html, I think that I should define a relation of "introducer".
<zooko> Getting sleepy. Maybe bang on it some more after some sleep...
<koolhead17> zooko: best way is to grab an existing charm and see/dig into it
<zooko> koolhead17: well I've done that with the drupal one in the example.
 * zooko looks  at the hadoop one
<koolhead17> zooko: hadoop one will fit the charm you're writing
<zooko> So... yeah I think I'll define a "relation" of "introducer"...
 * zooko looks at http://jujucharms.com/charms/precise/hadoop/hooks/datanode-relation-changed
<zooko> Yikes, there's a lot of code in there.
<zooko> Hm...
<zooko> http://codepad.org/EYUOJnKk
<zooko> Now I've defined a relation.
<zooko> But, will the "tahoe-lafs" charm have both "requires" and "provides" for this relation?
<zooko> Or should I have a separate "tahoe-lafs-introducer" and "tahoe-lafs-storage-server" charm?
<zooko> Okay, in another window my upload of a 1.6 GB file through Tahoe-LAFS to Amazon S3 is almost completed successfully, at which point I'm going to shutdown the Amazon EC2 instance and get some sleep. :-)
<koolhead17> zooko: requires is for whether there are any deps for the charm
<zooko> So, tahoe-lafs uses any number of storage servers.
<zooko> Each storage server has some storage, such as having a direct attached spinning disk
<zooko> or having access to an S3 bucket.
<zooko> The storage servers could each be running on different machines in different locations on the globe, or whatever -- they just need to be reachable via TCP.
<zooko> Now, there is exactly one introducer, whose job it is to inform each client about the IP addresses and public keys of all the storage servers.
<zooko> So, I'm trying to figure out how to write into the charm that when deploying a new Tahoe-LAFS grid, you have to first create and launch the singular introducer,
<zooko> then get its IP address and public key from it
<zooko> then every time you add a storage server, you have to tell that storage server the IP address and public key of the introducer.
<zooko> That's all I'm trying to accomplish at the moment.
<zooko> I'm wondering if I need to have a hook script that gets called after a storage server launches, and that script calls
<zooko> https://juju.ubuntu.com/docs/charm.html
<zooko> calls relation-set to set its IP address and public key?
<zooko> And then the hook script that gets run for each storage server... at some point in the process of creating that storage server, could call relation-get to get that information ??
<zooko> Okay, I'm about to crash, but I'll return to this channel when I wake and I'll hope that someone will guide me through this.
<koolhead17> when I am trying to get juju credentials from <IP>/settings/juju/ it throws internal server error
<koolhead17> is it a known issue? am using Essex on Precise
<koolhead17> i am able to download Ec2 credentials and openstack credentials
<koolhead17> lynxman: ^^
<jamespage> jcastro, I think I remember volunteering for something like that at UDS
<jamespage> but I can't remember the detail - I would not normally attend pycon so it would be to do a charm school/presentation or whatever if so
<hazmat> marcoceppi, how'd you do at trivia night?
<hazmat> the dcpython meetup was fun, someone brought a nao robot
<marcoceppi> nice!
<marcoceppi> hazmat: we placed 2nd of 12 teams
<hazmat> marcoceppi, impressive
<marcoceppi> hazmat: I keep forgetting you're in the DC area, would love to hang out and hack on juju stuff some time
<jcastro> m_3: hey don't forget to submit for puppetconf
<jcastro> (unless you did already, in that case "great!")
<jcastro> jimbaker: heya, check the scrollback for some stuff zooko needs help with
<jimbaker> jcastro, sounds good
<zooko> Morning!
<zooko> Yeah, this probably reflects on me more than on Juju, but I was perplexed.
<zooko> Maybe what's going on is that Tahoe-LAFS is subtly or not-so-subtly different from typical software, so it doesn't map well to Juju.
<jcastro> it's ok, this is the sort of thing we should figure out
<jimbaker> zooko, this hub & spoke topology works well w/ juju
<zooko> I suspect one thing that complicates it is that in Tahoe-LAFS some information gets generated inside one node when it first starts up, and then that information is required to be provided to other nodes to configure them.
<zooko> So you can't just script "Set this up then set those up"
<zooko> without scripting "extract that information from the first one" somehow.
<jimbaker> zooko, that's really the point of the negotiation seen in service orchestration
<zooko> Hrm. Well, I don't understand this part very well.
<jimbaker> zooko, so what you need to do is publish the info on relation settings
<zooko> Shall I restate my outstanding questions?
<zooko> Aha, that's the answer to one of them. :-)
<zooko> "Should I publish that info on relation settings?" ;-)
<zooko> That was 1.
<jimbaker> each time a unit changes its relation settings, the other unit(s) in the relation run their <relation name>-relation-changed hooks
<zooko> 2. Should I have separate charms for different services, or one charm that defines both "provides" and "requires" of that service because it defines both that service and the other things that require it?
<zooko> 3. Will someone please spend a lot of money on EC2 instances so that I can run a 5000-node Tahoe-LAFS grid?
<zooko> jimbaker: ah, interesting. But, this relation setting would only be done one time, on setup.
<jimbaker> zooko, 2) my first instinct is that what you're describing is a peer relation
<zooko> So it would be better, I think, to use <thing>-relation-joined ?
<jimbaker> zooko, relation-joined is usually not what you want. more specifically, it's a good place to know that a unit is part of the relation, but that unit may not actually be ready, and may not have published any info yet
<zooko> jimbaker: I see.
<zooko> Okay, I guess it will work for <thing>-relation-changed.
<zooko> So there'll be a start hook (is that the right hook) that runs when the introducer is created, and that will get the special data (the "furl") and publish it as a relation setting.
<zooko> That publication will trigger the other things to have their lafs-introducer-relation-changed hooks run, and they'll query it from the relation settings.
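A sketch of the flow just described. `relation-set` and `relation-get` are real juju hook commands, but hooks only run inside juju, so they are stubbed with shell functions here (underscored names, since POSIX sh function names can't contain hyphens); the `introducer_furl` key and the furl value are made up.

```shell
# Stand-ins for the juju hook tools (real hooks find these on $PATH).
settings=$(mktemp)
relation_set() { echo "$1" >> "$settings"; }        # usage: key=value
relation_get() { sed -n "s/^$1=//p" "$settings"; }  # usage: key

# Introducer side: after the daemon generates its furl, the hook reads it
# and publishes it on the relation (key name is illustrative).
introducer_publish() {
    furl="pb://abc123@10.0.0.1:34551/introducer"    # made-up value
    relation_set "introducer_furl=$furl"
}

# Storage-server side: the relation-changed hook reads the furl and drops
# it into the node's config.
storage_relation_changed() {
    furl=$(relation_get introducer_furl)
    [ -n "$furl" ] || return 0   # not published yet; a later change fires again
    echo "$furl" > "$settings.furl"
}

introducer_publish
storage_relation_changed
cat "$settings.furl"   # the furl made it across the "relation"
```

The early `return 0` matters: relation-changed can fire before the introducer has published, so the spoke just waits for the next firing.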
<zooko> Perfect! âº
<jimbaker> zooko, like the furl is published as a config setting
<jimbaker> likely
<jimbaker> zooko, maybe not on second thought. going back, i don't believe the furl is a human-generated config item
<zooko> It is generated by the introducer itself (a computer program) when it first runs.
<jimbaker> instead it's a relation setting between the spokes and the central hub of the introducer service. so that also works
<zooko> So, wait, what? Is the plan sketched out above good?
<jimbaker> zooko, cool, so it's a relation setting
<jimbaker> zooko, probably :)
<jimbaker> zooko, so do the spokes need to talk to each other in their setup?
<surgemcgee> Umm, is it possible to add a my_config.yaml file to an existing service? Will that trigger the config-changed hook? Just want to get some options into a service.
<surgemcgee> Says it is not accessible.
<zooko> jimbaker: nope -- the spokes need nothing but that one "furl" from the introducer.
<zooko> Plus, you know, human-chosen config options.
<jimbaker> zooko, ok, you don't have a peer service
<zooko> Right.
<jimbaker> surgemcgee, this is the point of upgrade-charm. make certain you call it config.yaml
<jimbaker> zooko, so each spoke is a client (requires) of the introducer (provides)
<zooko> Agreed.
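In metadata terms, the hub-and-spoke shape jimbaker describes might look like this (a sketch; the charm and interface names are invented, not taken from an existing charm):

```yaml
# tahoe-lafs-introducer/metadata.yaml (hypothetical)
name: tahoe-lafs-introducer
provides:
  introducer:
    interface: tahoe-introducer

# tahoe-lafs-storage/metadata.yaml (hypothetical)
name: tahoe-lafs-storage
requires:
  introducer:
    interface: tahoe-introducer
```

Relating the two (`juju add-relation tahoe-lafs-storage tahoe-lafs-introducer`) is then what makes the furl relation settings flow from hub to spokes.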
<imbrandon> zooko: there is also peer relations too eg requires: provides: peers: in the metadata
<zooko> imbrandon: hi! I don't quite understand peer relations, but I'm fairly sure that Tahoe-LAFS doesn't need them.
<imbrandon> zooko: well like in 1 of my setups i have a loadbalancing scheme where all the nodes loadbal to each other, so they have a peer join/depart relation that fires and adds or removes their IP from the relation config
<imbrandon> so each of the nodes knows about all the others to add to their own LB config
 * zooko nods
<imbrandon> it was somewhat similar to what you described thus thought i'd mention it :)
<m_3> morning y'all
<imbrandon> heya m_3
<imbrandon> i tried a perl script they had linked on the prowlapp.com page, works great, i was gonna modify my script for ya but someone already wrote an irssi plugin :)
<imbrandon> just fyi :)
<jamespage> jcastro: things all set for tomorrow?
<m_3> jamespage: hey... how was jubilee?
<jamespage> m_3: rocking!  I think the country finally remembered what being British was all about :-)
<jamespage> m_3: nice article BTW
<m_3> ha! my fav description of what it means to be British was john cleese and kevin klein in a fish called wanda from years ago... pretty funny
<m_3> jamespage: yeah, lemme know if you have changes to the article
<koolhead17> hi all
<x_or> I'm getting public address: null after I run juju expose wordpress from the link https://juju.ubuntu.com/
<x_or> I don't see anything when I run juju debug-log other than "enabling distributed debug log ... ctrl-c to stop."
<m_3> x_or: the instance is probably still coming up and doesn't have an address assigned yet
<x_or> m_3: Yeah, I thought that, but it has been over five minutes.  My other lxc machines come up in a few seconds.
<m_3> x_or: the first service you start using a local provider takes a _long_ time to come up... depending on your connection
<x_or> Is there somewhere that juju stores the network interface to use?
<x_or> Oh, OK.
<x_or> Why is that?  What is happening the first time?
<m_3> x_or: (it's downloading a new image).. anywhere from 10-30mins
<m_3> x_or: it's using libvirt's "default" network
<x_or> Oh, OK.
<x_or> So, it does this in the background, I see.
<m_3> x_or: and expecting that to be 192.168.122.0/24... although I'm not sure that's necessary any more
<m_3> x_or: yeah, we've got a bug open to give a little better messaging while it's doing this :)
<x_or> Great.  juju is very exciting.
<m_3> yeah, it's fun
<x_or> I'm loving lxc, so much lighter than virtualbox or vmware.
 * m_3 loves lxc too... will love it even more once local provider is easier to setup/use
<m_3> I've done a bunch with libvirt before... new to lxc in the last 6mos or so
<m_3> love that it's lightweight
<x_or> libvirt is pretty nice.  I was getting a bit confused between docs for lxcbr0 and it, but I got it figured out now.
<x_or> I'm having so much better experience with it on a 3 GB linux laptop and five VMs than two machines on 8 GB OSX machine with VirtualBox.
<m_3> x_or: confused is understandable... the network config is _the_ hard part right now.  that'll clear up over time though.
 * m_3 notes to get peeps to blog more about that setup
<x_or> I started a blog post on it, I will see whether I can get that finished first thing next week.  I'm glad it was not just me.  :)
<m_3> x_or: awesome!
<imbrandon> yea but you're not working with something on the level of VBox or VMWare, its more like a chroot on crack
<imbrandon> :)
<x_or> imbrandon:  This is why I like it.  Lightweight.
<x_or> VBox is so heavy, ditto for VMWare.
<m_3> yup
<m_3> x_or: we're still trying to figure out the best way to do this with osx too.  We have a juju osx client that'll run remote cloud stuff fine... but the local provider story on osx is still pretty weak
<imbrandon> depends on your perspective i guess, i see them as light, or even lighter if using a xen hypervisor, and in turn you get a real virtualized env, not a container that isn't one
<imbrandon> m_3: i got 3/4 of a vbox provider for OS X ( and others ) written
<x_or> Are you guys canonical employees?
<imbrandon> i hoped to have it done by this weekend, well useable by then
<m_3> imbrandon: awesome!  vagrant?
<imbrandon> m_3: yup
<m_3> or just pure
<m_3> gotcha
<imbrandon> x_or: i am not. most of the devs in here are
<m_3> x_or: yeah, it's a mix
 * m_3 is
<imbrandon> yea actually i take most of that back
<imbrandon> heh
<imbrandon> its a good mix
<imbrandon> m_3: but yea, it is *almost* pure
<m_3> vagrant's cool though... great handler stack, like rack
<jcastro> imbrandon: RPMs yo
<imbrandon> and if i take a extra few days ( i may after the first push ) then it would be
<imbrandon> jcastro: rpms are cookin this second
<jcastro> unf.
<jcastro> hey post on the list when you have something to test
<imbrandon> like that's what i was/am doing today
<imbrandon> kk will do
<imbrandon> m_3: btw you get charm tools to work on osx ? i have some bad problems with the bsd VS gnu tools but mostly just in charm-tools
<m_3> imbrandon: haven't tried
<m_3> I actually don't have an osx machine to test on... only ones in the house belong to the wife :)
<imbrandon> i might dig some more later, for now i just installed gnu coreutils
<imbrandon> yea i'm actually running linux again full time
<imbrandon> but i still am testing / working on the stuff
<m_3> imbrandon: is it possible to run recent versions in VMs now?  (my laptop's a mbp, so legally I can run it)
<imbrandon> so i keep a partition going
<imbrandon> m_3: yup
<imbrandon> but only vmware proper
<m_3> oh nice... I'll have to google
<imbrandon> like esxi or vmware server or player
<m_3> looked into it a year or so ago, but no love
<imbrandon> vbox and others it would "technically" work but there are checks in the installer, vmware is the only legit unmodified way to install 10.7 or 10.8
<imbrandon> m_3: yea its legit now
<m_3> imbrandon: awesome... thanks!  that'll simplify osx testing considerably
<imbrandon> they changed it when 10.7 was released
<imbrandon> yea it makes it nice, was the reason i was willing to jump ship back to linux so easy too cuz i can keep the old install around for porting etc
<imbrandon> :)
<m_3> imbrandon: right... I kept my orig osx drive around in an external hd case... could boot from it when I wanted
<m_3> imbrandon: but lately ios update over the air means I don't even do that anymore
<imbrandon> yup i'm lovin' that
<imbrandon> ota itunes syncing too, wonder if i can make banshee do that with my ipad... hrm, something for later :)
<imbrandon> already got timemachine seeing my linux server as a timecapsule backup dest
<imbrandon> :)
<jamespage> jcastro, ping!
<jcastro> jamespage: pong
<jamespage> jcastro, hey!
<jcastro> do we _require_ install hooks to be idempotent?
<zooko> non-idempotent install hooks sound like a terrible idea. :-)
<jcastro> yeah I added a sterner warning in the template
<marcoceppi> jcastro: all hooks _need_ to be idempotent
<jcastro> ok, it was just missing from the template then
<jcastro> incoming charm-tools merge proposal yo
<jcastro> marcoceppi: actually my question was more for m_3: do we check for idempotency in the charm test and kick that back?
<jcastro> I am assuming yes, it wouldn't make sense otherwise
<marcoceppi> jcastro: I'm not sure if we explicitly check idempotency in the tests, but when I do reviews I test for it
<_mup_> juju/trunk r540 committed by kapil.thangavelu@canonical.com
<_mup_> merge maas-provider-non-mandatory-port, default ports are inferred by protocol. [a=julian-edwards][r=fwereade,hazmat][f=972829]
<m_3> jcastro: no, it's hard to verify that automatically
<m_3> jcastro: it's part of the review process
 * jcastro nods
<m_3> jcastro: a non-idempotent install hook would actually be acceptable if there's a good reason... we haven't made it a hard/fast condition
<m_3> I recommend linking upgrade-charm to the install hook... so the idempotency of the hook often depends on that of the underlying tools (like apt-get)
<m_3> depends tho
<SpamapS> m_3: quantal looks good
<SpamapS> m_3: install *still* has to be idempotent
<SpamapS> m_3: if there is an error late in the hook, it will be retried .. so the whole thing has to be idempotent
<m_3> SpamapS: thanks for looking
<m_3> SpamapS: how're things?
<SpamapS> m_3: and symlinking upgrade-charm to install isn't really the best way to go. Better to go with stop,install,start,config-changed (the same order that happens on deploy)
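The sequence SpamapS recommends, sketched as an upgrade-charm hook; the stub hooks below only log their names so the ordering is visible, where a real charm's hooks do the actual work.

```shell
# Stub hooks that record their names; a real charm's hooks do real work.
charm=$(mktemp -d)
mkdir "$charm/hooks"
for h in stop install start config-changed; do
    printf '#!/bin/sh\necho %s >> "%s/order.log"\n' "$h" "$charm" > "$charm/hooks/$h"
    chmod +x "$charm/hooks/$h"
done

# upgrade-charm re-running the deploy-time sequence, instead of just
# symlinking upgrade-charm to install:
for h in stop install start config-changed; do
    "$charm/hooks/$h"
done

cat "$charm/order.log"   # stop, install, start, config-changed
```

Because juju retries a failed hook, each hook in this chain still has to be individually idempotent, which is SpamapS's point about install above.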
<SpamapS> m_3: good, first time touching the computer since Monday morning. :)
<m_3> ha!
<SpamapS> haven't really had the presence of mind to do anything useful with it anyway
<m_3> gotcha... good idea about upgrade
<SpamapS> m_3: I think we should start thinking about making a declarative charm-helper that does exactly that automatically
<m_3> SpamapS: are you daddy+1 yet?
<m_3> yeah, it's easy enough
<SpamapS> m_3: been thinking about making a feature request for a 'missing-hook' hook that gets called whenever a hook script doesn't exist
<m_3> ha!
<m_3> that smells _dynamic_ even :)
<SpamapS> m_3: yeah it would make declarative charming much easier
<m_3> really like that feature
 * SpamapS files that feature req
<_mup_> Bug #1009687 was filed: charms should be able to provide a 'missing-hook' script <juju:New> < https://launchpad.net/bugs/1009687 >
<SpamapS> imbrandon: hey, you broke charm-tools
<SpamapS> imbrandon: https://launchpadlibrarian.net/107021470/buildlog_ubuntu-oneiric-i386.charm-tools_0.3%2Bbzr145-2~oneiric1_FAILEDTOBUILD.txt.gz
<SpamapS> imbrandon: always run 'make check' before pushing to trunk
<SpamapS> jcastro: ^^ you too :)
<jcastro> yikes!
<jcastro> SpamapS: just make check? I get an error
<jcastro> make: *** No rule to make target `check'.  Stop.
<jcastro> oh, wrong dir, nm
<imbrandon> SpamapS: ahh crap i should know better, was just a readme update :) anyhow i'll fix it here in just a few, got a fire IRL on the phone i'm trying to defuse
<SpamapS> its never "just a readme update" when you're changing templates. :)
<SpamapS> imbrandon: also can you please include a description of the actual changes when you push to trunk, not just 'merging in jorge's changes'
<imbrandon> i did in the lp commit message box. that workflow is so screwed
<imbrandon> i'm just gonna ignore LP from now on and do it the right way
<imbrandon> looks like it was a test that failed , fixed up in a sec
<SpamapS> what lp commit message box?
<SpamapS> imbrandon: no, jorge already fixed it
<imbrandon> ahh kk sorry was on the phone
<imbrandon> but yea the one on ... let me find it
<SpamapS> imbrandon: I never use the lP gui for doing merges so I don't know how that even works
<imbrandon> yea i tried that, then when it didn't work
<imbrandon> i did it by hand the rest of the way
<imbrandon> thus a little screwy this time
<SpamapS> imbrandon: cool, well thanks for hitting the review queue anyway. :)
<imbrandon> kinda like the github merging, i hate it, it's good to see an overview or a small one-off, but pita for multi-branch merging like i am normally trying to do
<imbrandon> SpamapS: i plan on a little more today too , but just got wrapped up in that other, will get some more in a bit
<imbrandon> got one my self to toss into charm-helper ( few more bash functions )
<imbrandon> SpamapS: what ya think about a X- prefix for random metadata.yaml fields, that takes care of the namespace issue if it ever were to become official etc and lets us add things like x-vcs to the metadata
<imbrandon> kinda like the debian rule
<SpamapS> imbrandon: no, thats already been rejected and I agree with the rejection.
<imbrandon> it was ?
<imbrandon> ok , i'll dig in the mailing list
<SpamapS> imbrandon: yes, anything thats not ready for metadata.yaml should go in some other yaml file
<SpamapS> preferrably one that is named around the tool that consumes it
<imbrandon> would be useless for what i just said, and you mean rejected based on the uds convos ?
<SpamapS> imbrandon: vcs.yaml would be fine
<imbrandon> sure , but then it litters the dir. i'll think on it some
<SpamapS> imbrandon: I suggested it way before UDS.. I also suggested a single key that would be for extending things.
<imbrandon> ... ?
<imbrandon> ok
<imbrandon> hrm the top level dir is getting quite littered too
<imbrandon> :(
<SpamapS> littering the dir is better than littering metadata.yaml I think
<SpamapS> because it will only be "littered" in there as long as something sees non-ubiquitous usage. Once it becomes ubiquitous it will go in the next format spec for juju.
<SpamapS> right now the TLD has metadata.yaml, config.yaml, and hooks .. not what I'd call littered at the moment
<imbrandon> nah, because if that becomes standard then i can see things never getting migrated, it happens all the time because tools keep looking in the first place it was
<imbrandon> SpamapS: copyright
<imbrandon> readme
<imbrandon> license
<imbrandon> config.yaml
<SpamapS> license is not a standard file
<imbrandon> meta
<imbrandon> info.yaml
<imbrandon> etc
<SpamapS> info?!
<SpamapS> you making stuff up now? ;)
<imbrandon> well i cant use metadata :)
<SpamapS> seems pretty reasonable to me
<imbrandon> plus a templates dir for config templates
<imbrandon> etc
<SpamapS> and if things are migrated they'll be dropped from the charm store.
<imbrandon> huh ?
<imbrandon> what would be dropped ? i think i missed something
<SpamapS> things aren't migrated I mean
<SpamapS> charms that don't follow format migrations
<imbrandon> oh , right
<SpamapS> there will always be a range of formats supported in juju and the charm store
<imbrandon> yea i know, was trying to avoid pitfalls of the past, no need to beat a dead horse tho
<imbrandon> i'll just passive-aggressively stay in da rules :)
<SpamapS> which pitfall are we repeating?
<imbrandon> i was talking about the specific example of x-vcs still being x- because it was used there first and became widespread
<SpamapS> Its not still X-
<SpamapS> thats a lintian violation now
<imbrandon> but like i said i'm not trying to make a big deal about it, just no one brought up prefixes that i was aware of, and the arguments i heard at uds were very vague and weak, so thought i'd mention it, no biggie tho i can use info.yaml or charm.yaml etc etc just as easy
<imbrandon> SpamapS: heh sure , but how many years later ?
<SpamapS> the argument was pretty solid actually, that we should stay strict on metadata.yaml so that charms are well defined and so that tools can enforce formats.
<imbrandon> that doesn't discount prefixes then
<imbrandon> :)
<SpamapS> imbrandon: pretty much immediately after Vcs was added to policy, X-Vcs was added to lintian
<imbrandon> sure but x-vcs has been in use for about 4 years if not more
<imbrandon> before
<imbrandon> so i can see a lot of tools still looking in the old key, etc
<imbrandon> perfect world i see your point and would agree in such a place, and do here just because it's not worth me choosing that battle
<SpamapS> so allowing a prefix will just be the same as pushing things into a different file, except that metadata.yaml stays "clean"
<imbrandon> except the file can be named anything and becomes another variable itself to find programmatically
<SpamapS> file, field, makes no difference I think
<imbrandon> guess not if i just load all the yaml up into spyc with a *.yaml glob :)
<SpamapS> that sounds like extra work
<imbrandon> then look for the keys and throw a E if keys overlap :)
<SpamapS> or just interpret each yaml with the tool that actually is meant to interpret it
<imbrandon> yes it is
<imbrandon> ...
<SpamapS> I'm also puzzled as to your intentions, as Vcs is a bad example I think.. I *HATE* that part of debian and much prefer the Ubuntu way where the path to the source branch is well known from the name+series only
<imbrandon> spyc the lib i'm using is indeed made to interpret yaml
<SpamapS> imbrandon: thats a lib, whats the actual tool or field you want to populate
<SpamapS> ?
<imbrandon> i'm making the tool
<imbrandon> and i dont wanna populate it, i want to read it
<imbrandon> the x-vcs value in this case :)
<imbrandon> like for real that was a real use case
<imbrandon> the first one i came across but i'm sure not the last
<med_> has anyone out here proposed creating a new provider? Does juju allow for private 3rd party providers?
<imbrandon> med_: yes and yes-ish
<imbrandon> you can see examples in providers/* they are pretty straightforward
<imbrandon> ( in the juju src )
<imbrandon> and a few branches on LP of alternate/testing ones like OpenStack++S3 and OpenStack with swift
<med_> thanks imbrandon
<SpamapS> imbrandon: somebody added X-vcs in metadata.yaml ?!
<SpamapS> well anyway, time to nap
<m_3> SpamapS: night
<imbrandon> SpamapS: that was supposed to be only in my branch, i pushed both unintentionally and am removing it
<imbrandon> i think charm.yaml will be better suited anyhow the more i think about it, metadata.yaml doesn't definitively say if it's data for the charm or the service or both etc , anyhow, time for food here /me is afk
#juju 2012-06-07
<shang> hi all, anyone awake? I have seen someone can live upgrade a service to a newer version and roll back
<shang> but I am not sure how to do that in the command
<r3d64r_> ahlo
<r3d64r_> what is the juju thing I been hearing about?
<jamespage> bbcmicrocomputer, nice work on the solr charm BTW
<bbcmicrocomputer> jamespage: thanks!
<jamespage> bbcmicrocomputer, I'm currently working on the lucene/solr3 packaging for quantal - will bump it up to 3.6.0
<bbcmicrocomputer> jamespage: when I get some time, I'd love to hookup Hadoop to it, so you can use Hadoop to build your indexes
<jamespage> bbcmicrocomputer, that would be sweet
<bbcmicrocomputer> jamespage: the bump in package version is great .. what do we normally do with charms in such situations, do we add Ubuntu flavour logic to them so they use packages on Quantal and download on Precise?
<jamespage> bbcmicrocomputer, good question - charms are linked to a distro anyway so its possible to have a different charm for quantal
<jamespage> but it would be nice to have that as a feature in the charm - I suspect it would be quite easy to backport the package to precise anyway....
<jamespage> bbcmicrocomputer, have you tried the existing solr 1.4 packages in Ubuntu?  any feedback?  I'd like to make the solr 3.6.0 packaging rock as much as possible
<bbcmicrocomputer> jamespage: yeah, I tried the 1.4 packages.. the charm uses its layout in most cases
<jamespage> bbcmicrocomputer, good - I've kept much the same for 3.6.0
<jamespage> bbcmicrocomputer, I just adopted the package in Debian as it had been orphaned - hope to get the 3.6.0 version uploaded before Debian freeze this month
<bbcmicrocomputer> jamespage: in general they were good, no problems
<jamespage> bbcmicrocomputer, OK - well let me know if you see anything that would make deployment easier
<bbcmicrocomputer> jamespage: sure, np
<jamespage> bbcmicrocomputer, just out of interest does the charm use the built in solr replication stuff or the older snap* scripts
<jamespage> I could look but I'm feeling lazy
<bbcmicrocomputer> jamespage: built-in
<bbcmicrocomputer> jamespage: those snap scripts are .. well, a bit hacky
<jamespage> bbcmicrocomputer, good
<jamespage> it was definitely old school
<jamespage> last big solr implementation I did that was the only option - I think it was 1.2 but I can't remember exactly!
<bbcmicrocomputer> jamespage: plus the 3.6 release they finally closed up some of the security holes regarding the select and qt=<handler>
<jamespage> +1
<bbcmicrocomputer> jamespage: i.e. it's disabled by default
<bbcmicrocomputer> jamespage: which at least makes URL based password restrictions possible
<bbcmicrocomputer> jamespage: 1.2 must have been nasty, it's only really started to grow up as a project recently
<bbcmicrocomputer> jamespage: the config file itself is already a nightmare to get through
<jamespage> bbcmicrocomputer, yeah - I hate xml config - esp ones like solr
<jamespage> its really code IMHO
<bbcmicrocomputer> jamespage: yeah
<lynxman> jamespage: I'm starting to review your Hadoop Charm
<jamespage> lynxman, think the action is on me ATM
<lynxman> jamespage: Ah alright, I'm the reviewer for today so just starting by the newest :)
<lynxman> jamespage: I'll change it to work in progress then
<jamespage> lynxman, just pushed a new version if you want to review....
<lynxman> jamespage: yay!
<m_3> morning gang
<hazmat> m_3, g'morning
<lynxman> jcastro: let me know whenever you're around
<lynxman> m_3: morning!
<jcastro> hi
<lynxman> jcastro: hey, let me DM you
<cheez0r> juju masters: where does juju pull the SSH key from when you run juju bootstrap? I've run ssh-keygen -t rsa, I've got the id_rsa and id_rsa.pub in my ~/.ssh directory, but when I run juju bootstrap and then juju status, I get "Invalid SSH Key" and if I access the node, ubuntu user's ~/.ssh/authorized_keys is empty.
<cheez0r> I'm trying to understand where that node's authorized_keys file gets populated- I'm using maas as the infrastructure.
<m_3> cheez0r: not sure with maas
<m_3> juju usually pulls your ~/.ssh/id_rsa.pub
<cheez0r> m_3: Do you know if juju bootstrap populates the key, or is that supposed to be done by the infrastructure?
<m_3> cheez0r: lemme look
<m_3> cheez0r: looks like it uses user-data for cloud-init
<cheez0r> where is that populated?
<m_3> /usr/share/pyshared/juju/providers/common/utils.py ( format_cloud_init )
<m_3> also look at cloud_init.py in the same directory... shows how it adds authorized keys to user-data
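The files m_3 points at render the user's public key into the cloud-init user-data. The resulting cloud-config looks roughly like this sketch (the key and comment are placeholders; the real output of format_cloud_init also carries apt setup and runcmd entries):

```yaml
#cloud-config
# Illustrative shape only: juju embeds the contents of ~/.ssh/id_rsa.pub
# (or the configured authorized keys) under this cloud-init key.
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2E... user@workstation
```

If this list is empty in /var/lib/cloud/instance/user-data.txt on the node, the key never made it into the provisioning path.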
<jcastro> jamespage: have you talked to Mark Baker today yet?
<m_3> cheez0r: I'm assuming maas provides preseed files for cloud-init... don't know though
<jamespage> jcastro, yep - all set
<m_3> cheez0r: note that "invalid ssh key" can be a sign of something else... make sure your juju versions match
<cheez0r> m_3: it's a freshly built infrastructure using maas, I don't think they can not match using that system, but I dunno
<cheez0r> It apt-get updates all of the systems out of the gate
<m_3> i.e., everything is using precise?  you can find juju cli version with `dpkg -l | grep juju`
<cheez0r> yes, everything's in precise.
<m_3> environments.yaml has a 'juju-origin' entry
<m_3> 'ppa' -vs- 'distro'
<cheez0r> mine is set for default-series: precise
<m_3> if juju-origin is not there, then it's defaulting to distro
<m_3> what's your juju cli version?
<cheez0r> 0.5+bzr531-0ubuntu1
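For reference, the juju-origin knob m_3 mentions sits alongside default-series in environments.yaml; a minimal maas-provider sketch (the server address and OAuth token are placeholders):

```yaml
environments:
  maas:
    type: maas
    maas-server: http://192.168.1.1:80/MAAS   # placeholder
    maas-oauth: <your-maas-api-key>           # placeholder
    juju-origin: ppa       # omit to get the default, 'distro'
    default-series: precise
```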
<m_3> perms on ~/.ssh/ ?
<cheez0r> this is definitely an issue with ssh keys not being propagated.
<cheez0r> keys are not on the target node.
<cheez0r> they're all fine.
<m_3> ok... hmmmm
<cheez0r> so after I bootstrap, if I manually add the key to the node, juju status still doesn't work, and I get errors "failed while receiving a server response" "server refused to accept the client"
<m_3> check out the provisioning agent log maybe?  probably /var/log/juju
<m_3> also `ps awux | grep juju` on the bootstrap node
<m_3> sorry, maas has been on my todo list... just lower on it than other stuff :)
<cheez0r> no doubt
<m_3> is there a freenode maas channel?... lemme look
<cheez0r> yeah, I'm on it
<m_3> I'll dig and see if I can come up with anything... I think maas is not doing anything special with ssh keys (just from grepping around in juju/providers/maas)
<lynxman> m_3: not afaict
<m_3> I think it provides a preseed file for cloud-init to read... doesn't do a live metadata service like ec2's 169.254.169.254
<m_3> that preseed file should have authorized keys to inject in the new instances
<m_3> cheez0r: maybe look for that preseed file somewhere on the new instance... might be able to read it to see if authorized_keys looks good
<cheez0r> hrm, ok
<cheez0r> let me play.
<m_3> I'll see if I can spin up some maas nodes today in kvm slices
<cheez0r> thanks bud- I'm hammering away at a 12 node cluster that I'm trying to get running maas + juju + openstack
<cheez0r> in the default preseed file I see no way to configure ssh keys
<lynxman> cheez0r: it should just create a user ubuntu with pass ubuntu
<cheez0r> no, it should not.
<cheez0r> it should pull my ssh public key and embed it into the bootstrapped node.
<lynxman> cheez0r: from there on when you mark a system to be used in juju it adds an authorized_keys file with the juju ssh key in it
<lynxman> cheez0r: I was still typing :)
<cheez0r> maas default creates an ubuntu user with no password set and embeds an SSH key into the node.
<cheez0r> so I'm thinking my problem is with maas.
<lynxman> cheez0r: not according to my experience, good luck though :)
<cheez0r> well, I'm working with the maas dev guys in #maas, feel free to join us and hash it out ;)
<lynxman> cheez0r: good for you
<cheez0r> lynxman: to validate, check the preseed file in /var/lib/cobbler/kickstarts/ubuntu-server.preseed at the line 'd-i  passwd/user-password-crypted  password !'
<cheez0r> if it's got a bang at the end of that line, it's not setting a password, if there's a crypt string there, it is.
<lynxman> cheez0r: thanks
<SpamapS> cheez0r: thats intentional
<lynxman> cheez0r: I've been involved in the project from the beginning and somehow that change didn't come to my attention
<SpamapS> cheez0r: you want SSH key auth. :)
 * lynxman shrugs and keeps reviewing charms
<cheez0r> SpamapS: I do indeed, which is what I'm troubleshooting now. ;) I was ensuring I knew what I thought I did, which isn't always the case.
<m_3> SpamapS: morning.... what's the new little critter's name?
<SpamapS> cheez0r: when you use juju, maas shoves your ssh key into cloud-init
<SpamapS> m_3: Adrian
<m_3> awesome... 'grats
<cheez0r> SpamapS: it's supposed to, but it is not for some reason. I'm trying to debug that now.
<lynxman> SpamapS: congrats! :)
<SpamapS> cheez0r: you should see the results of that in the console on first boot
<SpamapS> cheez0r: and you can always boot the box in recovery mode to look at /var/log/cloud-init-output.log
<SpamapS> anyway.. back to baby stuff
<cheez0r> thanks Spa
<cheez0r> err, SpamapS
<jcastro> SpamapS: man, MarkBaker and James Page's webinar is already better than ours. They have cool accents.
<jcastro> http://www.brighttalk.com/webcast/6793/49171
<jcastro> charm school! if you're interested in hanging out
<jcastro> lynxman: hey so on your review for the juju charm, does that mean you're going to promulgate it?
<m_3> I can promulgate it as long as it's reviewed
<jcastro> yeah it's just not clear to me if he was finished or not
<m_3> that's one that makes me wish a charm could be either standalone _or_ subordinate depending on how you wanna deploy it
<lynxman> jcastro: I might ask for negronjl opinion as well
<lynxman> jcastro: it looks brilliant to me
 * jcastro nods
<m_3> jcastro: I want a british accent
<jcastro> heh
<jcastro> m_3: hey don't forget puppetconf!
<jcastro> also I think meesh needs your oscon travel info to book your stuff
<m_3> yeah, it's in email
<jcastro> <3
<lynxman> m_3: I'm applying for puppetconf too :)
<m_3> lynxman: cool
<lynxman> m_3: it'll be fun :)
<m_3> yup
<m_3> need to figure out my submission
<lynxman> m_3: we did a presentation last year with adam_g about juju+puppet masterless integration
<m_3> lynxman: well, and clint's charms now
<lynxman> m_3: maybe some more practical presentation about that?
<m_3> lynxman: we have a django / summit charm that's puppet masterless
<lynxman> m_3: that'd be a cool one
<m_3> lynxman: teyo made it clear that they wanna see us using std modules... not having to adapt them in any way
<lynxman> m_3: yeah but that'd be for integrating juju tighter into puppet
<lynxman> m_3: I reckon its still a valid demo, maybe with a twist?
<m_3> yup
<m_3> I'm thinking "puppet and juju... sitting in a tree"
<lynxman> m_3: lol, yeah
<m_3> lynxman: juppetto?
<m_3> we've already got jujustrano underway
<lynxman> m_3: even juju enterprise :D
<jimbaker> m_3, i like the idea of standard modules. it would be a nice win if puppet actually works best when run with juju
<m_3> jimbaker: yeah... it's interesting
<m_3> jimbaker: so the django charm separates out an "install" manifest from a "database" manifest
<m_3> jimbaker: that separation is critical to juju
<m_3> jimbaker: but we'd have to see how to do that with the std puppet modules
<m_3> jimbaker: perhaps there's a way we can selectively call parts of a manifest from each particular hook
<m_3> dunno
<m_3> jimbaker: the modules are a little more monolithic than we need... I think
<m_3> gotta play with it
<m_3> s/parts of a manifest/parts of a module/
<jimbaker> m_3, it would seem feasible given the DSL aspect of puppet
<m_3> yeah, just don't know how the modules are organized
<jamespage> m_3, juppetto I like
<jamespage> :-)
<lynxman> juju charm review queue zero, wh00t! \o/
<jcastro> you mean 5. :)
<jcastro> oh, refresh lag
<lynxman> jcastro: ;)
<m_3> jamespage: that was awesome
<m_3> lynxman: whoohoo!
<jamespage> lynxman, nice one
<jamespage> ta
<marcoceppi> lynxman: I didn't think apt-add-repository prompted users if executed on server environments from a script
<lynxman> marcoceppi: I reckon it does (found myself there) and I can't remember exactly if this was pre-precise or on precise itself so I just add -y to be safe
<hazmat> marcoceppi, lynxman there's a -y flag for it
<hazmat> that won't prompt
<lynxman> hazmat: that's what I recommended to use
 * hazmat nods
<marcoceppi> hazmat: right, I've noticed that precise charms upgraded from oneiric work when adding ppa's without the -y flag though
<marcoceppi> At least in my tests, but -y is better in the long run - if that ever changes
<lynxman> marcoceppi: it's just to be safe, for certain
 * marcoceppi nods
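The takeaway from the exchange above: always pass -y so apt-add-repository can never block a non-interactive charm hook on a confirmation prompt. A hedged sketch of a charm-helper-style wrapper (`add_ppa` is a hypothetical name, as is the PPA in the comment):

```shell
# Defensive pattern from the discussion: -y suppresses any prompt, so a
# hook can't hang waiting for input it will never get.
add_ppa() {
    apt-add-repository -y "$1" && apt-get update
}
# a charm's install hook would then do e.g.:
#   add_ppa ppa:someteam/someppa && apt-get install -y mypackage
```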
<EerieMok> heya, getting a weird issue
<EerieMok> juju deploy glance
<lynxman> alright reviewing queue done for today, now moving to the pub :)
 * lynxman waves
<EerieMok> says that the entry for glance cannot be found
<EerieMok> yet I see it on jujucharms.com
<jcastro> EerieMok: that is strange, I get that too
<EerieMok> Sounds like the charmstore is having issues
<jcastro> hazmat: ^ around?
<SpamapS> jcastro: perhaps we should do some fake accents for our webinars?
<SpamapS> jcastro: I'll be Austin Powers, you be Scarface
<SpamapS> lynxman: Queue length: 0
<SpamapS> lynxman: NICE
<hazmat> me checks
<hazmat> jcastro, EerieMok it might not be approved
<EerieMok> https://help.ubuntu.com/community/UbuntuCloudInfrastructure#Ubuntu_Cloud_Live_Image
<hazmat> charmstore publishing to the default namespace requires review
<EerieMok> Then that tutorial is basically broken unless you do the optional step 2 xD
<jcastro> http://jujucharms.com/charms/precise/glance
<SpamapS> hazmat: glance is definitely approved
<jcastro> that looks like the default namespace to me
<hazmat> SpamapS, but is it promulgated
<SpamapS> approved, promulgated, same thing
<SpamapS> lp:charms/glance points to it
<hazmat> SpamapS, so its a bug in charmstore then
<EerieMok> There are a few other charms that are broken too
<EerieMok> (on the charmstore)
<SpamapS> argh
<SpamapS> points at 'precise'
<SpamapS> instead of 'trunk'
 * SpamapS throws things
<SpamapS> EerieMok: will fix shortly
<SpamapS> hazmat: I think we need to get jujucharms.com and the charm store to use the same algorithm
<SpamapS> and they *both* should expose their import logs
<EerieMok> so was the charmstore pointing to something like precise/glance/precise ?
<hazmat> SpamapS, fair enough.. i think two things, show a url
<hazmat> SpamapS, er. show the store address for it, and show if its promulgated
<hazmat> SpamapS, they're still installable
<hazmat> EerieMok, you can install with cs:~charmers/precise/glance
<SpamapS> wow glance doesn't even come close to passing charm proof
<EerieMok> I actually just grabbed the bzr and  have a local repo
<jcastro> SpamapS: hey so I don't think: https://bugs.launchpad.net/charms/+bug/998241
<_mup_> Bug #998241: New Charm: keystone <Juju Charms Collection:Incomplete by gandelman-a> < https://launchpad.net/bugs/998241 >
<SpamapS> EerieMok: thats a better choice for production anyway
<jcastro> IMO that should still be displayed in the queue
<hazmat> SpamapS, ie if it's available in the default namespace it would show cs:precise/glance ... else cs:~charmers/precise/glance
<jcastro> because we'd want to check on it on occasion right?
<hazmat> actually i can add those to the review queue
<SpamapS> jcastro: no, I don't think so. We only want things in the queue that require action now.
<jcastro> ok so what happens in this case, adam says he fixed it
<jcastro> but doesn't change the status.
<jcastro> it slips through the cracks
<hazmat> SpamapS, anything in the default namespace that isn't installable/in the store should be in the review queue, they need attention
<jcastro> also found 2 with code and bugs, just not linked and not subscribed the charmers, so sorry to remove the 0. :D
<jcastro> lynxman: nice job getting those done though!
<lynxman> jcastro: haha no problem, will hammer those tomorrow :)
<lynxman> jcastro: just send more my way, will be glad to get them done
<jcastro> <3
<jcastro> SpamapS: or perhaps be more explicit "Fix this and that, when you're ready for the next round flip it back to fix committed" or something
<lynxman> jcastro: I've told everyone to mark it as "New" again
<jcastro> that works
<jcastro> negronjl: we'll need the graphs for "time to first response" and "average queue size" eventually as well
<SpamapS> jcastro: in the server team we do keep track of Incomplete with response bugs ... its available as a bug search. But I don't think that should be in the queue.. perhaps a separate thing.. hm.. requires thought
<jcastro> SpamapS: ok, we'll talk about it next call or something
<SpamapS> adam_g: btw, I'm adding a copyright file to glance, since it doesn't have one.
 * jcastro gets scared of losing contributions when people forget to flip the right switch in lp
<SpamapS> negronjl: have you tried --ignore-warnings yet btw?
<SpamapS> negronjl: http://paste.ubuntu.com/1029058/
<SpamapS> negronjl: thats options.ignore minus warnings
 * SpamapS fixes in trunk
<SpamapS> jcastro: fear isn't usually a great motivator toward excellence. :)
<m_3> SpamapS: so what's the deal with glance?  the stale /precise branch screwing things up?
<m_3> oneiric/glance/trunk and precise/glance/trunk both look good... and lp:charms/glance points to precise/glance/trunk
<SpamapS> m_3: I fixed it
<SpamapS> m_3: lp:charms/glance was not pointed at the trunk branch
<SpamapS> We really need to fix that in the charm store code
<m_3> ah... oops
<SpamapS> charm store should ignore the branch name if it has an official pointer
<m_3> yeah... and/or not screw it up when upgrading series
<SpamapS> Though the 'trunk' branch had 7 extra commits that the 'precise' one didn't
<m_3> right...
<SpamapS> all fixed now
<m_3> i think the precise branch was pre-existing
<m_3> but we couldn't fix at the time b/c of stacking
<m_3> iirc
<SpamapS> lynxman: you need to push to ~charmers before promulgating btw
<SpamapS> lynxman: you now own lp:charms/juju and lp:charms/drupal6
<SpamapS> actually we need to make promulgate verify that
<SpamapS> since 99% of the time we want ~charmers as the owner
<m_3> there's also a nova-compute/precise branch lying around too
<m_3> oh, nevermind... they're all still hanging about
<SpamapS> hanging around is fine
<SpamapS> being official is not ;)
<m_3> right... really like to remove them though
<SpamapS> its ok. I'm still quite certain that the charm store is useless without a way to fork the charm you've deployed.
<m_3> yeah, it's good for demos... that's about it
<SpamapS> neat trick and all, but not being able to fix the charm myself means our default deploy method is quite flawed.
<SpamapS> ok filed bug 1010145
<_mup_> Bug #1010145: promulgate makes it too easy to promulgate the wrong branch <Juju Charm Tools:New> < https://launchpad.net/bugs/1010145 >
<SpamapS> Like Damian from Clerks, I'm not supposed to be here today.. so.. if somebody else wants to tackle that.. that would be great. :)
<m_3> yeah, I'll take a look
<negronjl> jcastro, SpamapS:  i am at the M$ azure event all day today
<jcastro> oh didn't know you were there
<jcastro> they excited about ubuntu over there?
<negronjl> jcastro:  i am still working on the charts.  Sorry for the delay.  I've been swamped
<negronjl> SpamapS:  not sure what you are talking about re: ignore warnings
<shazzner> is anyone going to be at the Texas Linux Fest in Aug?
<shazzner> they're finally holding one in san antonio :)
<adam_g> SpamapS: thanks
<cheez0r> hey folks, I'm working with a maas environment in juju, I bootstrapped the environment successfully, I deployed a bunch of charms and added relationships, but all of the agent-states in the juju status remain in 'not-started' for the nodes and 'pending' for the charms.
<cheez0r> Any thoughts?
<hazmat> cheez0r, i'd suspect cloud-init is not running on them
<hazmat> cheez0r, same problem as you had with the bootstrap node
<hazmat> cheez0r, if you could pastebin the /var/lib/cloud/instance/user-data.txt from some of them that would help confirm
<hazmat> SpamapS, ping
#juju 2012-06-08
<SpamapS> hazmat: pong (but not sure how long I'll be online ;)
<SpamapS> hazmat: and with that, I disappear
<hazmat> SpamapS, maybe tomorrow then
<hazmat> SpamapS, negronjl http://jujucharms.com/tools/store-missing
<negronjl> hazmat: nice ... thx for the tool
<SpamapS> hazmat: cool :)
<SpamapS> another queue for charmers to work on really
<SpamapS> but sort of pointless without the import logs for the charm store
<lynxman> SpamapS: did that, I've been having some problems promulgating as well :(
<lynxman> SpamapS: tried to fix them with m_3, maybe I can grab ahold of you later
<m_3> quiet friday
<imbrandon> m_3: heh yea, /me has been fighting with RPM's and charm-tools-that-should-be-called-i-hate-the-osx-bsd-tools
 * m_3 grin
<imbrandon> took me an hour to figure out bsd getopt != gnu-getopt
<imbrandon> heh
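The getopt split imbrandon hit is detectable at runtime: util-linux (GNU) getopt exits with status 4 when invoked as `getopt --test`, while the BSD getopt shipped with OS X does not, so a script can fall back to short options there. A sketch (`getopt_flavor` is a hypothetical helper):

```shell
# Probe which getopt(1) is on PATH before relying on long options.
getopt_flavor() {
    getopt --test >/dev/null 2>&1
    if [ $? -eq 4 ]; then echo gnu; else echo bsd; fi
}
getopt_flavor
```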
<SpamapS> imbrandon: hah! perhaps we should rewrite all those in python
<imbrandon> btw i came across a very slick way to install juju on OSX now without the need for brew at all , gonna script it up and it can live in the official juju branch then ( that assumes it would be approved for such but i don't see why not )
<imbrandon> SpamapS: i thought seriously about it, i got a few patches in either case :)
<imbrandon> added some error checking too and checking of exit codes from the commands / scripts
<imbrandon> and normalized the help where it called them commands and scripts interchangeably
<imbrandon> but its all like one big commit right now, gonna split it out here in a few in case you dont like all the changes its easier to cherry pick
 * imbrandon gets back at it before i get distracted too much
<imbrandon> oh and got a 3-month free window azure thing too today so thought about trying to make a provider , but likely will fail if/when i get to that part
<imbrandon> SpamapS, m_3, jcastro: if i did the leg work and got a Proof-of-concept OS X jenkins deploying code then running unit/regression testing for juju and charm-tools what do you think the chances of getting a VM running in canonical's to do that would be ?
<imbrandon> canonical's datacenter*
<shazzner> imbrandon: you're working on juju for osx right?
<imbrandon> yup
<shazzner> cool, as someone who has unfortunately been forced to use it. I sincerely applaude your effort :)
<imbrandon> doing some touchups today matter fact, some things still not 100% right but its getting there :)
<shazzner> sweet
<shazzner> I wonder if juju can deploy to rackspace's managed cloud, probably not I assume
<imbrandon> fixed up bash completion and got most of the charm tools running , most importantly charm proof
<imbrandon> it can
<shazzner> interesting...
<imbrandon> i use rackspace a little
<shazzner> I figured it would be one layer abstracted too much
<imbrandon> nah, they have pure openstack
<imbrandon> e.g. it was them and nasa that started that show anyhow :)
<shazzner> yeah, I'm familar with it all
<shazzner> I'm wondering if a managed cloud user can get openstack creditials to deploy services to
<imbrandon> but yea, you can, you need to use SpamapS's external s3 patch or the OSAPI provider that's on LP for testing only yet
<imbrandon> but it works
<shazzner> huh, awesome
<imbrandon> shazzner: i'm not sure 100% i just asked support and now they are in my cpl
<imbrandon> few weeks ago
<shazzner> cool, let me know what they say :)
<imbrandon> no thats what i mean, i asked them , and they gave em to me
<shazzner> oh I see what you mean
<imbrandon> and put them in my cpl along with the API key :)
<shazzner> damn that rules
<m_3> imbrandon: there's really nowhere to run such a VM afaik, but get me a VM and I'll see what I can do
<imbrandon> m_3: ok, i'll see if i cant have it sometime next week
<imbrandon> :)
<m_3> imbrandon: the qa lab hardware is up/down all the time with openstack and maas testing... but I'll ask around
<imbrandon> if not cool i understand, more convenience for others than me
<imbrandon> kk
<hazmat> m_3, there's one more charm which still showing some ill effects from the distro branches
<hazmat> m_3, cs:oneiric/hadoop-master
<hazmat> ie. can't be checked out, as it references a precise branch
<hazmat> its at this unreachable branch lp:~charmers/charms/oneiric/hadoop-master/trunk
<m_3> hazmat: that's a strange one... b/c it shouldn't be in precise
<m_3> in oneiric, the charms are hadoop-master, hadoop-slave, hadoop-mapreduce
<m_3> in precise, it's just hadoop
<hazmat> m_3, right.. perhaps because it got yanked?
<m_3> no clue... lemme look at the branches
<hazmat> m_3, its the only charm branch thats inaccessible
<m_3> hazmat: man that's just hosed... bzr info's not showing that oneiric/hadoop-master/trunk's stacked on anything but it's referencing the precise/hadoop-master/precise branch
<m_3> oh, I see what's going on here... lp's trying to be smart.. the branch has no reference to series... bzr+ssh://bazaar.launchpad.net/+branch/charms/hadoop-master/... and it gets filled in magically
<m_3> makes sense that we can't manually manage series then
<gmb> SpamapS: around?
<negronjl> 'morning all
<m_3> hazmat: ok, try it now
<hazmat> negronjl, g'morning
<negronjl> hi hazmat
<m_3> hazmat: actually retry all hadoop-{master,slave,mapreduce}
<hazmat> m_3, in progress
<hazmat> negronjl, how'd it go?
<negronjl> hazmat: It went very well... I never expected so many people to be so interested in Ubuntu in Azure
<m_3> negronjl: yeah, it's totally cool imo
<negronjl> M$ is about 60-70% to where Amazon is in features
<negronjl> and they are giving away a lot of very interesting features ( metrics and such ) for free
<hazmat> negronjl,  i'm curious to try out their prod platform to see what the i/o is like
<hazmat> negronjl, if they can just make the config stuff work it would go alot nicer
<negronjl> hazmat: Prod IO is about 30M/s
<hazmat> negronjl, nice.. that's roughly 6x dev
<negronjl> hazmat: yup
<SpamapS> gmb: I'm off this week, but peeking in here and there
<negronjl> hazmat: They were telling me that their dev/beta env was pretty much worthless
<hazmat> negronjl, did you do a juju demo?
<gmb> SpamapS: Okay. I'm about to EoD, so I'll drop you an email instead and we can talk when you're back.
<negronjl> hazmat: No ... I just had the presentation in a loop on my laptop and had people ask about it and such after looking at it
<m_3> negronjl: cool
<SpamapS> gmb: I do plan to get your python modules into Debian/quantal *soon*
<SpamapS> gmb: you're just suffering from an unfortunate series of events of very high priority ;)
<hazmat> SpamapS, i think i'm going to need to rewrite charm-tools check to be more amenable for library usage
<gmb> SpamapS: Okay; is there anything at all we can do to help? Should we rope someone else with packaging experience to help out rather than just loading you up with more work?
<SpamapS> hazmat: library?
<SpamapS> hazmat: its pretty easy to parse....
<hazmat> hmm
<m_3> yeah, I was just trying to figure out what hazmat was talking about
<SpamapS> why not speak the international language of shell love... exec()
<hazmat> because its nice to avoid fork bombs on small instances ;-)
<hazmat> but fair enough
<hazmat> i'll give that a shot first
<SpamapS> Even if there are 10,000 charms... thats not too many forks if you wait() on them :)
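The fork-and-wait pattern SpamapS is describing can be sketched as a sequential loop: fork one checker process per charm and wait for it before starting the next, so a small instance never holds more than one extra process. The `charm proof` invocation and directory names below are illustrative, not the actual charm-tools code:

```python
import subprocess

def proof_all(charm_dirs, cmd="charm"):
    """Run a proof command on each charm directory sequentially.

    Waiting on each child before forking the next keeps at most one
    extra process alive, so even hundreds of charms won't fork-bomb
    a small instance.  Returns {path: output} for failing charms.
    """
    failures = {}
    for path in charm_dirs:
        # subprocess.run() forks, execs, and wait()s for the child.
        result = subprocess.run([cmd, "proof", path],
                                capture_output=True, text=True)
        if result.returncode != 0:
            failures[path] = result.stdout + result.stderr
    return failures
```

Because each child is reaped before the next one starts, peak process count stays constant regardless of how many charms are checked.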
<hazmat> SpamapS, there's also some things this would pick up better if it used the juju.charm class
<hazmat> there's a couple of invalid things it misses
<SpamapS> No
<SpamapS> it must not share code w/ juju
<SpamapS> the whole point is that it is an independent static analysis tool
<hazmat> SpamapS, what value does that provide, other than inconsistency?
<hazmat> it can do additional checks and policies
<SpamapS> its an air gap to protect against bugs in the juju implementation
<SpamapS> charm proof is meant to assert the way the spec says charms are composed
<SpamapS> If it throws errors where juju does not, then *one* of them has a bug.
<hazmat> the juju analysis is static, bugs should be fixed there, we already have multiple implementations of that..
<hazmat> SpamapS, the problem is that its not throwing errors, where juju does
<hazmat> and rather than copy the implementation, why not just use it..
<SpamapS> then thats a bug in one of them too
<SpamapS> (unless its a runtime error, not a static format error)
<SpamapS> hazmat: I'm fine with charm proof *calling* juju's version of "is this formatted correctly" after it has found that it thinks the charm is formatted correctly. But I don't want them sharing code.
<imbrandon> we could do the charm-tools in ruby then it would not be an issue ... just teasin before someone takes me serious :)
<gmb> SpamapS: I'm eod'
<gmb> 
<gmb> arg
<gmb> 'ing now, so please email me if there's anything that we (~Yellow, ~launchpad or others) can do to help you with the packaging of our charm-tools additions and python-shelltoolbox. We're happy to try and find someone else to do the packaging if you've got too much on your plate.
<imbrandon> gmb: are they on LP branch somewhere ?
<SpamapS> gmb: given that buildbot is being abandoned, is anything else going to use these python charm helpers?
<imbrandon> wait should they be charm-helpers or charm-tools
<SpamapS> imbrandon: they exist.. just simple python modules that I want to have packaged before we start relying on them.
<SpamapS> imbrandon: definitely charm helpers
<SpamapS> which is a part of charm-tools
<imbrandon> SpamapS: i will likely have time over this weekend assuming this rpm stuff falls in line, if that will help
<SpamapS> imbrandon: sure if you can dig them out
<imbrandon> k
<m_3> SpamapS: so python-shelltoolbox needs to be built as a package and marked as a dep of charm-tools? re: https://code.launchpad.net/~yellow/charm-tools/trunk/+merge/101554 or is there something more going on here?
<SpamapS> m_3: right, well ,a dep of the python bit
<SpamapS> anyway, I'm out again
<SpamapS> be back to work on Monday
<m_3> SpamapS: great weekend!
<hazmat> SpamapS, have a good one
 * hazmat gathers a list of all the charms proof has a runtime error on
<m_3> hazmat: did hadoop-xxx work now?
<m_3> (oneiric and precise)
<hazmat> m_3, yup
<hazmat> m_3, thanks
<hazmat> that's not bad... charm proof only barfs completely on two charms (out of 360).
<hazmat> http://paste.ubuntu.com/1030745/
<hazmat> the terracotta one, though, juju is fine with in practice; it has an odd definition of its peer interface
<adam_g> is there some backdoor way to go about clearing the charm cache from an existing juju environment, so that juju deploy will always use the local:charm regardless of revision?
<benji> adam_g: I think you want "juju deploy --upgrade"
<adam_g> benji: will that work for a local charm that has an earlier revision # than whats currently cached/was last deployed?
<m_3> adam_g: http://bazaar.launchpad.net/~juju-jitsu/charmrunner/trunk/view/head:/charmrunner/snapshot.py clean_juju_state does some of this... it was written against the local provider though
<imbrandon> adam_g: or set a cron to rm it from /usr/lib/juju/* on the bootstrap node
<imbrandon> kinda hackish tho
<imbrandon> phh m_3 idea much better
<imbrandon> :)
<imbrandon> adam_g: iirc juju upgrade-charm --repository /usr/local/lib/charms/ local:mycharm
<imbrandon> will make the version auto inc
<imbrandon> err revision
<adam_g> hmm. thanks, ill mess around with all of the above
<imbrandon> actually tho you would need to drop the local: from that it wants the service name, but it should still get it from local if it was deployed from there
<hazmat> adam_g, --upgrade should do the trick
<hazmat> adam_g, it will increment the revision
<hazmat> if not snapshot script should do it
<hazmat> adam_g, actually one more way...
<hazmat> adam_g, upgrade --force
<adam_g> hazmat: it will increment the revision where? in the ./precise/foo/revision, or in the juju environment?
<hazmat> foo/revision afaicr
<adam_g> hmm
<hazmat> adam_g, what's the issue again?
<adam_g> i basically just need juju to ignore the local revision if at all possible, or reset it in its state
<hazmat> you have a rev 5 in the env, and rev 3 on disk?
<adam_g> hazmat: potentially, yeah
<hazmat> adam_g, whats the scenario that arises in?
<hazmat> adam_g, and why couldn't you just increment the revision?
<adam_g> currently MAAS only supports precise, i need to deploy and test services on both precise and quantal. to do this, i need to switch the installation between releases in cobbler. from juju's POV, everything is being deployed to precise.  i have two charm branches for each service (one for precise, one for quantal). when i deploy, i branch them into ./precise and 'juju deploy --repository=. local:foo'.  i'd like to not have to re-bootstrap every deploy
<hazmat> adam_g, they're different charms
<adam_g> hazmat: yeah
<hazmat> adam_g, the series is part of the charm name
<hazmat> oh..
<hazmat> gotcha
<adam_g> hazmat: in this case, both charms look the same
<adam_g> precise/nova-compute
<hazmat> adam_g, why not tell juju quantal/nova-compute ?
<adam_g> between deploys, i replace that charm with the quantal version
<hazmat> ie. why try to make two separate charms both pretend to be the precise?
<adam_g> hazmat: does 'deploy local:quantal/nova-compute' work? i thought it resolved the directory based on the series specified in environments.yaml
<hazmat> adam_g, it resolves the *default* series
<hazmat> adam_g, but you can specify any series when deploying
<hazmat> in the absence of one specified, it uses the default
<adam_g> hazmat: specify the series how? as an argument to deploy? either way, i think MAAS provider will refuse anything other than precise ATM?
<adam_g> unless specifying it at deploy time is independent of the provider
<bkerensa> =o
<hazmat> adam_g, its specified as part of the charm name so local:quantal/nova-compute should do it
<adam_g> hazmat: er, yea
<adam_g> Bad 'ubuntu-series' constraint 'quantal': MAAS currently only provisions machines running precise
<hazmat> adam_g, it needs provider support though
<hazmat> lame
<adam_g> so im back at square one. :)
<hazmat> yup
<hazmat> but with context its so much better ;-)
 * m_3 grimace
<hazmat> adam_g, so the snapshot script is what you want
<adam_g> hazmat: great. ill take a look after lunch
<gary_poster> hey SpamapS gmb mentioned you were wondering whether the python charm helpers would be helpful.  I'm hoping they still are.  I'm proposing that we write a Launchpad dev charm, so I'm selfishly interested, but also I thought we were all pleased that they were good tools for charmers in the future.
<gary_poster> maybe I misunderstood though
<hazmat> adam_g, its in pypi.. so its an easy_install charmrunner away..
<hazmat> also in lp:charmrunner
<adam_g> sweet, thanks kapil
<m_3> gary_poster: think they'll be generally useful... peeps were just balking at deps that weren't packaged
<gary_poster> m_3, cool thanks
<hazmat> hmm.. it does look like snapshot has some local provider specificity..
<m_3> gary_poster: so I think we'll package python-shelltoolbox or something like that and mark it a dep of charmhelpers-py
<hazmat> we don't abstract out removing files from storage...
<hazmat> :-(
<m_3> hazmat: shouldn't be too hard to extend it... it takes care of the zk part
<gary_poster> cool, m_3, yeah that's what I was hoping
<hazmat> i don't think that should be a problem since its killing the zk state for the charm
<gary_poster> thank you
<m_3> gary_poster: np... sorry it sat in the queue for so long!
<gary_poster> :-) will be very happy when it is all set up
<jimbaker> m_3, SpamapS, i just proposed lp:~jimbaker/juju-jitsu/watch-subcommand which adds a jitsu watch subcommand
<jimbaker> it's a superset of what can be done with polling juju status and parsing; in particular, one can wait for relation settings to be set (or unset). so that should be useful for testing. also it uses the Juju API where possible, otherwise ZK, so it's much lighter weight/instantaneous
<m_3> jimbaker: cool... thanks!
<jimbaker> m_3, one other cool thing is that multiple conditions can be specified, with all or any waiting. also being able to wait for --num-units of a service to be in a state, along with some set of relation settings, should hopefully prove useful
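The watch behavior jimbaker describes — waiting for all or any of several conditions, with polling as the fallback — can be sketched generically; the predicate/timeout interface below is illustrative, not the actual `jitsu watch` API:

```python
import time

def wait_for(conditions, mode="all", timeout=60.0, interval=0.1):
    """Poll a list of zero-argument predicates until `all` (or `any`)
    of them return True, or the timeout expires.

    Returns True on success, False on timeout -- a cheap stand-in
    for watching ZK/API state the way a watch subcommand would.
    """
    check = all if mode == "all" else any
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check(c() for c in conditions):
            return True
        time.sleep(interval)
    return False
```

In a test harness, each predicate would wrap something like "does service X have N units in the started state" or "is relation setting Y present".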
<hazmat> negronjl, cassandra charm exposes jmx as a 'cassandra' interface ? that's not right
<negronjl> hazmat: I'll look into it ... would you open a bug and assign it to me?  I'm swamped at the moment and don't want to forget about it.
<hazmat> negronjl, np
<negronjl> hazmat: thx
#juju 2012-06-09
<r3d64r> I've been trying to find out how they came up with the name juju, but still haven't found the answer. anyone in here know?
<imbrandon>  Juju is an African word for magic.
<imbrandon> r3d64r: ^^
<r3d64r> in America we have a magic johnson
<hazmat> SpamapS, http://jujucharms.com/tools/proof-errors
<SpamapS> hazmat: thats great, but, IIRC m_3 was tasked with doing that on charmtester
<hazmat> SpamapS, oh.. didn't know about that
<SpamapS> hazmat: we're also looking at giving each error/warning a symbolic name so they can have their own page
<hazmat> SpamapS, i was going to incorporate it directly on to each charm's page
<hazmat> on the right side bar, along with testing results from charmtester
<SpamapS> yeah that would be tha bomb
#juju 2012-06-10
<twobottux> aujuju: why is it that 'Ubuntu Server. A full deployment will require a minimum of 10 servers.'? <http://askubuntu.com/questions/149012/why-is-it-that-ubuntu-server-a-full-deployment-will-require-a-minimum-of-10-se>
<MarkDude> imbrandon, pingy
<imbrandon> MarkDude: heya
<imbrandon> MarkDude: i gots rpm for you today :)
<imbrandon> fought with it all day friday :)
<MarkDude> You ROCK sir
<imbrandon> heh, well i'm sure they can be much improved by a real rpm packager, but they function
<imbrandon> :)
<MarkDude> It will take me a bit to run by the folks I need to show it
<imbrandon> used a shiney fedora HPCloud instance to try it out
<MarkDude> No stress
<imbrandon> no worries, i was gonna toss it in a git repo on github, will that work for you ?
<MarkDude> one would have to assume that RPM is really a good thing to package, but, I digress
<MarkDude> Yes it would work great
<imbrandon> heh
<imbrandon> kk i'll do that then here in a few ( right in the middle of something ) then ping ya on irc or email if your afk the exact url for it ,but it will be under my username bholtsclaw e.g. http://github.com/bholtsclaw/some-proj-name
<MarkDude> maybe call it:Distros-holding-hands
<imbrandon> btw , talked to my next ex-wife lately ? heh
<imbrandon> hahah good name
<imbrandon> :)
<MarkDude> or We-areALLon-theSAMEteam
<imbrandon> :P
<MarkDude> or in a tip to Ubuntu's early name Project-noname
<imbrandon> noname.com
<imbrandon> i think that redirs to ubuntu.com now
 * imbrandon checks
<imbrandon> ahh it was http://www.no-name-yet.com/ but it dont
<imbrandon> 404
#juju 2013-06-03
<LarsN> I'm trying to get started with JuJu after watching Antonio's presentation at Texas Linux Fest.
<LarsN> when I follow the getting started guide https://juju.ubuntu.com/get-started/hp-cloud/
<LarsN> when I attempt to :~# juju bootstrap
<LarsN> I get the error:  2013-06-03 00:41:35,702 ERROR Unexpected 405: '{"message": {"response" : "Not Allowed" }}'
<LarsN> not being a yaml guy, would I need to escape special characters encapsulated inside " " ?
<davecheney> LarsN: which special characters are you talking about ?
<LarsN> ! as an example.
<davecheney> i'm guessing that is part of a password ... ?
<LarsN> yeah.
<davecheney> i prefer to pass my creds via the envirment
<davecheney> so export OS_USERNAME=...
<davecheney> OS_PASSWORD=...
<davecheney> rather than hard coding them in my environments.yaml
<davecheney> that will also probably avoid encoding issues
<LarsN> so in the environments.yaml (since I'm already doing the exports) is it a simple password: $OS_PASSWORD
<LarsN> or something similar?
<LarsN> I really should sit down and read a quick start on yaml.
<davecheney> I reckon "!&#$#%" < is how you escape strings in yaml
<davecheney> no, actually it's '
<LarsN> single quote.  that makes sense.
<LarsN> I don't know why I didn't just try that.
<LarsN> Thanks davecheney
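The single-quote rule davecheney lands on comes straight from YAML: inside a single-quoted scalar every character is literal except `'` itself, which is escaped by doubling it. A tiny helper illustrating the rule (hand-rolled for clarity; a YAML library would normally do this for you):

```python
def yaml_single_quote(value):
    """Wrap a string as a YAML single-quoted scalar.

    Characters like !, &, #, $ and % need no escaping inside single
    quotes; only a literal ' is escaped, by doubling it.
    """
    return "'" + value.replace("'", "''") + "'"
```

So a password like `!&#$%` becomes `password: '!&#$%'` in environments.yaml, with no further escaping needed.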
<AskUbuntu> juju wordpress deploy using lxc | http://askubuntu.com/q/303376
<jcastro> m_3: fyi you're on review this week
<jcastro> negronjl: you're on next week!
<m_3> jcastro: ack... thanks
<negronjl> jcastro: ack
<jamespage> adam_g, I landed all of the MP's you raised for OpenStack charms
<jamespage> all looks good to me
<jamespage> m_3, ^^ adam_g will be reviewing the outstanding openstack charm related MP's
<jamespage> I think they had more testing this cycle than in any other :-)
<m_3> jamespage: awesome... thanks!
<adam_g> jamespage, cool. will merge yours today. was gonna wait till you were back so they all get in together/one the same day
<sinzui> charmers, does anyone have time to review https://code.launchpad.net/~sinzui/charms/precise/mongodb/restore-from-dump/+merge/165408
<arosales> m_3, are you on point for charm reviews this week?
<m_3> arosales: yeah, I'm on the queue
<avoine> m_3: fyi, I know I have added/removed then re-added Gunicorn but this time it is ready for real :P
<avoine> to the queue
<m_3> avoine: ok, great!
<arosales> m_3, did sinzui also sync up with you on the mongdb merge?
<arosales> re https://code.launchpad.net/~sinzui/charms/precise/mongodb/restore-from-dump/+merge/165408
<sinzui> not yet
<m_3> arosales: I saw it in the queue
<arosales> m_3, ah thanks. I think sinzui was just making sure someone knew it was there, which it sounds like you do.
<sinzui> exactly.
#juju 2013-06-04
<nfoata> Hi everyone, does someone knows if it's possible to set the unit status from start to another one (eg: ready, pending, stopped etc.), if I want to remove just for a moment a unit of a service. Thanks
<nfoata> or more information about unit status and its modifications (events on them)
<marcoceppi> nfoata: you can't manually move a unit to a status without taking action agianst it (ie: actually remove the unit, etc)
<_mup_> Bug #1187392 was filed: local provider does not support peer relations <juju:New> <https://launchpad.net/bugs/1187392>
<AskUbuntu> This site http://uistage.jujucharms.com:8080/ doesn't work | http://askubuntu.com/q/303995
<AskUbuntu> agent-state: pending for over 20 minutes | http://askubuntu.com/q/304040
<avoine> do you think "apps-servers" should be the right category for the django charm?
<avoine> or just "applications"
<avoine> or maybe ['databases', 'app-servers'] since it needs a database
<jcastro> I am going to propose "platform" as one
<jcastro> since it's like something you would build your app on top of
<adam_g_> wedgwood, got a min re: charm-helpers organization?
<wedgwood> adam_g_: sure
<adam_g_> wedgwood, so i've started porting one of our bash charms to python and have been leaning on the new charm-helpers while i do
<adam_g_> i've got a bunch of new storage related stuff that i'll be needing in some of the openstack charms, but they are generic enough to go into core.host
<adam_g_> thoughts on moving core/host.py to core/host/__init__.py, and adding core/host/storage/{__init__.py, lvm.py, loopback.py} etc?
<adam_g_> possible doing similar things elsewhere?
<wedgwood> that sort of stuff sounds very Linux-y. I'd actually recommend putting it outside core
<wedgwood> I think core should be as portable as possible (although it needs work to get there)
<adam_g_> sounds reasonable. contrib/storage/linux/ ?
<adam_g_> of course i can just keep it within contrib/openstack/ until someone else wants to use it
<wedgwood> I think either of those are suitable if you're ok with it.
<wedgwood> I think it should come out of contrib, but we'll need to think about how we want to organize platform-specific stuff.
<wedgwood> I think it's OK to make a charmhelpers.storage that has some non-portable code.
<adam_g_> keeping it in contrib/ for now is okay. just trying to organize imports and tests atm
<adam_g_> wedgwood, also any chance this can merge? https://code.launchpad.net/~gandelman-a/charm-helpers/openstack_tests/+merge/166612
<wedgwood> adam_g_: wow, that diff is wacky. lemme grab a copy and take a look
<adam_g_> wedgwood, heh, its mostly just splitting openstack stuff  from contrib.hahelpers  into contrib.openstack
<adam_g_> plus some crappy tests
<wedgwood> adam_g_: I think I see now. You must have done lots of moving, but not with 'bzr mv' eh?
<adam_g_> wedgwood, exactly :P
<wedgwood> adam_g_: I'm afraid there's one change that you didn't pull. if you'll merge that and re-push I'll merge it to trunk
<wedgwood> adam_g_: it's a couple of lines in charmhelpers/core/hookenv.py
<adam_g_> wedgwood, hmm? i see no diff between hookenv.py in my branch and that in trunk?
<wedgwood> yeah? when I merge I get a regression
<wedgwood> adam_g_: bah, never mind that
<wedgwood> adam_g_: merged
<adam_g_> wedgwood, thanks
#juju 2013-06-05
<zirpu> is there a way to recover from a bootstrap node going away?  any HA for that? docs?
<SpamapS> zirpu: in the upcoming 2.0 HA will be built in via mongodb magic.
<SpamapS> zirpu: for 0.x You can make the Zookeeper HA but you have to do it manually.
<SpamapS> zirpu: and if the IP goes away you have to have a way to get to every box and update the IP.
<zirpu> SpamapS: thanks.
<zirpu> i'm using 1.11.x currently and was just trying to work out how to make it more resilient to losing the bootstrap node.
<zirpu> seems to me a zookeeper cluster of >3 nodes would be more resilient than mongodb.  i'll search the docs and 2.0 code.
<Campbell> Has anyone seen this error when trying to run juju bootstrap on a MAAS environment? error: file 'provider-state' not found
<Campbell> http://askubuntu.com/questions/288681/juju-with-maas-error-file-provider-state-not-found
<Campbell> not the only person with this problem
<jcastro> charm meeting in 10 minutes folks!
<jcastro> Campbell: I tacked a bounty on it
<jcastro> For anyone who wants to join in on the Google Hangout for today's charm meeting:
<jcastro> https://plus.google.com/hangouts/_/9db7982e06dd26d8c4690d33f54e230382f094dd?authuser=0&hl=en
<jcastro> or you can just follow along on http://ubuntuonair.com
<arosales_> jcastro, thanks
<arosales_> Pad and more info @ https://juju.ubuntu.com/community/weekly-charm-meeting/
<arosales_> pad @ http://pad.ubuntu.com/7mf2jvKXNa
<rick_h__> ouch, nick roughing things up
<arosales_> evilnickveitch, be nice to have a juju version selection like this in the docs
<arosales_> http://developer.android.com/reference/packages.html
<arosales_> so you get the recent up front, but you can select which version and see the same page be updated.
<evilnickveitch> arosales, yes, that would be nice!
<arosales> perhaps marcoceppi has some hints on how to accomplish this
<marcoceppi> arosales: evilnickveitch could seed it during the "build" process
<hazmat> Hi Campbell
<Campbell> Hello
<marcoceppi> evilnickveitch: where's the latest docs branch?
 * marcoceppi wants to get redirects ready
<evilnickveitch> marcoceppi, it should be on ~evilnick/juju/go-juju-docs
<marcoceppi> evilnickveitch: thanks
<jcastro> m_3: heya
<jcastro> the liferay guys just pinged me out of the blue
<jcastro> they made a charm and want to know how to submit it, I mailed them the instructions
<jhf> jcastro: hey this is me (James) from Liferay :)
<jhf> gonna submit it asap
<jhf> it was ridiculously easy to create, I should have done this months ago.
<jcastro> no worries, we can review it no problem.
<jcastro> I'll work up on a blog post too that we can run on planet and on the website
<jhf> I also installed juju-gui and 'shopped up a marketing pic
<jhf> http://twitpic.com/cvgtj1
<jhf> :)
<jcastro> yeah!
<jcastro> hey did you try sticking an haproxy on the front and adding liferay units?
<jhf> nope.. but I can try.. this is using lxc, will that blow up my machine (a mac running 13.04 via virtual box) :)
<jcastro> you don't want to go over a few.
<jcastro> but maybe m_3 can fire up more when he reviews it.
<jcastro> we'd like to have a juju vagrant backend at some point so you can do a container-like experience on OSX
<jhf> yeah that would be cool..
<jhf> btw jcastro I just pushed the liferay charm
<jhf> http://bazaar.launchpad.net/~james-falkner/charms/precise/liferay/trunk/files
<jhf> hope I didn't f it all up
<sarnold> jhf: some thoughts from the peanut gallery :) .. 'start' doesn't start it, but the db-relation-changed hook starts it; and since it uses a shell script, it looks like it won't restart automatically on host reboots?
<sarnold> jhf: .. and install doesn't do any verification of what it downloads; are there gpg signatures that you can check? (if not, can there be? :)
<jhf> sarnold: ya I was wondering about that - I am a very new juju charm developer, and was just following the example (which also uses an empty start hook (!))
<sarnold> jhf: oh :) hehe
<jhf> and no, it won't restart automatically on host reboot.  my plan there is to link this to tomcat eventually with a relation, so that tomcat can manage it.
<jhf> which is kinda weird, because now juju and tomcat manage the lifecycle of the charm.
<jhf> also nope; no downloads. I could check the signature.. would it be proper to bake the expected signature into the charm? that seems wrong because it would likely change often, as new releases come out.
<jhf> err..  that should have been "also nope; no signature checking on download" :)
<sarnold> jhf: baking in an sha1 sum of the tarball probably wouldn't be ideal, but if there's a small group of people who do releases, you could bake in their gpg keyids..
<jhf> on the lifecycle thing: it's weird because tomcat could "stop" or undeploy a web app, and juju wouldn't "know" that this happened, so it would think that the web app (potentially representing a juju service) was still running.
<sarnold> (incidentally, this is an area where I think the juju team could provide some Very Useful little utilities to do simple things like check gpg sigs on tarballs.. I'd even like to help make it happen, but with what time, you know? :)
<sarnold> jhf: that is a bit awkward.
<sarnold> jhf: but I think the juju user would probably want juju to manage tomcat.
<jhf> the other issue I was wondering.. reconfiguring a db relation is not exactly a lightweight thing most of the time. in Liferay's case, if you reconfigure it, and it triggers the db-relation-changed hook, I would have to stop and restart liferay for it to 'take effect' which is a pretty major lifecycle operation for a web app.  this is another reason why I didn't know what to do with the 'start' hook.  you can 'start' Liferay without configuring a db (it will
<jhf> default to an embedded hsql db).  But my concern is that if I put "real" start code into the start hook, that juju would install liferay, start it, then an admin would add the db relation, and juju would invoke the charm's db-relation-changed hook while Liferay was still initializing, and all hell would break loose.
<jhf> I mean, if Liferay takes 60s to come up, what if 3 db-relation-changed events occur during that time? do I have to pepper my hooks with "if you're busy, then queue and/or ignore this event" ?
<sarnold> jhf: aha :) that's what makes this cool -- you know that, and you design to make it work well. :) I'd have screwed that up, and people would have grumbled at me. :)
#juju 2013-06-06
<jcastro> jhf: sorry I was at dinner.
<jcastro> jhf: mind if I put your questions from your mail into the bug report?
<blues-man> hello
<marcoceppi> blues-man: hello o/
<blues-man> hi marcoceppi
<marcoceppi> blues-man: anything we can help you with?
<blues-man> marcoceppi, I would like to investigate orchestration in openstack and analyze the differences between heat and juju for a thesis
<blues-man> beyond the ubuntu dependencies, the theory behind the project
<blues-man> I'd like to understand how to deploy a multi-node terracotta juju with some HA techniques or components
<jhf> jcastro: yup, fine with me (inserting questions into bug report)
<jcastro> jhf: good morning! OK, m_3 is on mountain time so when he's around we'll start the review!
<jcastro> marcoceppi: I sent you a mail where I replied to jhf wrt. charm tools for sha checking
<jcastro> marcoceppi: We still have that convenience function in there somewhere don't we?
<marcoceppi> jcastro: yeah ch_get_file
<jhf> ok great, thanks for such quick responses :)  I'll be sure to update it today/tomorrow as needed.
<marcoceppi> jhf: o/
<bicyus> hello
<bicyus> ;-)
<bicyus> "juju deploy ubuntu"  on local MAAS--->> machine 1: instance-id: pending   forever
<bicyus> maas doesn't allocate nodes, where can i start to look for a failure?
<bicyus> any?
<bicyus> Can't allocate more than one node of my MAAS pool
<marcoceppi> bicyus: hey, not very good with maas but I'll see if I can help you out
<marcoceppi> Were you able to bootstrap?
<bicyus> thanks marcoceppi
<bicyus> i was able yes
<bicyus> but it always get one node
<marcoceppi> So I'm guessing you have more than one node "ready", etc?
<bicyus> yes 4 nodes
<bicyus> juju deploy mysql  ----> OK on machine: 0
<bicyus> juju deploy ceph ---> machine 1 --- pending
<bicyus> forever
<marcoceppi> Do the nodes show up as provisioned, commissioned, (or whatever the status for 'used' is) in maas?
<bicyus> on maas they are ready
<bicyus> machine 0 gets allocated too ...
<bicyus> but the machine 1 pending.... doesn't change anything on maas pool
<bicyus> it should allocate another node
<bicyus> but it doesn't
<bicyus> it would be great to have juju without maas
<bicyus> registering normaly installed servers
<bicyus> i think maas is too buggy.... :-(
<marcoceppi> I know a few of the #maas guys were answering questions about juju and maas in that room. It's been so long since I've used the two together I'm not sure what to say :\
<bicyus> ;-)
<bicyus> thanks marcoceppi, I appreciate
<bicyus> don't worry
<marcoceppi> No problem, best of luck
<bicyus> i'm going the good old mens way! ;-)
<bicyus> xD
<m_3> jcastro: yo... what're we reviewing?  liferay?
<jcastro> m_3: yeah, I sent you a mail, it would be awesome if you could review it today
<jcastro> m_3: jhf is the upstream, he's in the channel
 * m_3 waves to jhf
 * jhf waves
<jhf> please be gentle :) I'm new at this.
<m_3> jcastro: k, lemme bump mongodb merge to later since everybody's here for liferay now
<m_3> jcastro: we've really gotta get the time-in-the-queue display fixed
<m_3> jcastro: can I help with that?
<m_3> i.e., figure out what it has to show and I'll play with charm-tools later
<jcastro> time in queue looks like it's working to me
<jcastro> 108 minutes since he attached the branch
 * m_3 sees 372 days
<jcastro> m_3: CLI tool?
<m_3> `charm review-queue`
<m_3> yup
<jcastro> m_3: oh dude, they fixed the queue in the web UI, I don't think we fixed it in the CLI Juan wrote
<jcastro> http://jujucharms.com/review-queue
<m_3> well, the cli is what we actually use
<m_3> :)
<jcastro> is correct
<m_3> or just make it slurp the web and print it all pretty-like
<m_3> marcoceppi: where's the best place to point somebody for charm-helper bits to verify download hashes?
<marcoceppi> charm-tools/helpers/sh/net.sh
<m_3> marcoceppi: so is that near landing in lp:charm-helpers?
<marcoceppi> m_3: nowhere near. It needs to be ported to python first, then the "bash" interface needs to be written
<m_3> marcoceppi: ack, thanks!
<marcoceppi> m_3: err, I think the ppa packaging for old helpers is still broken...
 * marcoceppi double checks
<marcoceppi> m_3: nope, charm-helper-sh is still in ppa:juju/pkgs. He'll have to add it to his charm, then install the charm-helper-sh package to make use of /usr/share/charm-helper/sh/net.sh
<marcoceppi> m_3: finally, cc me on your reply since jorge forgot to :P
<jcastro> heh
<jcastro> hey so maybe just have him check the sha by hand until we sort out charm tools?
<marcoceppi> He can just copy the ch_get_file code in to his lib/common for now
<marcoceppi> it'd probably be the easiest way to ensure it just works (tm)
<jcastro> marcoceppi: m_3: last call for updating the etherpad from yesterday's meeting
<jcastro> before I push it to the list
<marcoceppi> jcastro: I'm good
<m_3> jcastro: post away
<m_3> jhf: jcastro: first review pass done: https://bugs.launchpad.net/charms/+bug/1006064
<m_3> looks great... just a couple of little things to clean up
<m_3> upgrades, cryptographically verifying downloads, etc
<jcastro> he had some questions in the emails wrt. the db hooks
<jcastro> which you can either post in the bug or respond via email
<m_3> jhf: please ping for any questions
<m_3> emails
<m_3> I'll dig
<jcastro> heh, it's that thing we're supposed to check
<m_3> what century are we living in?
 * m_3 loves email
<m_3> especially mailing lists
<jhf> ok, thanks guys for the review. I'll get on the fixes asap. and yeah, we struggle with email vs. non-email all the time in liferay.  I end up posting to forums, then emailing the same set of people I want to communicate with, because half of them refuse to subscribe to forums.
<m_3> jhf: ack
<m_3> jhf: so your basic idea of delaying startup until the db is related is sound... there're plenty of services that behave that way
<jhf> ok, my only concern is if the relation is set up / torn down quickly, Liferay (and probably many other services) can't react that quickly.
<m_3> jhf: your relation guard looks good too... bail (gracefully... exit0) if the other side isn't up yet
<m_3> jhf: it'll get called again once the other side's up
<marcoceppi> jhf: hooks don't execute asynchronously inside a charm. So if your hooks/start (startup.sh) blocks until complete, it won't get to any other hooks queued for that charm until it exits
<m_3> jhf: 'set up' is not a problem.... 'torn down' is
<m_3> jhf: but let's get the primary flow working, then work through removals
<jhf> tomcat's startup.sh doesn't block, unfortunately.
<m_3> jhf: that should be fine as long as it's not called until the juju part of the relation is good (and the configuration is written)
<m_3> which is what the relation guard does
<m_3> i.e., your [ -z "$user" ] && exit 0 bit
<m_3> the rest of the script ( w/ the tomcat startup ) won't be run until the db is there and happy
<m_3> jhf: this one issue is the real juju differentiator
<jhf> ok, sounds good.. so I'll fix it up then talk about removals (the concern being, if liferay+tomcat is happily running, and a db relation is broken, in order for it to "really" be broken, liferay has to immediately exit, reconfigure, and restart, otherwise it will fail all over the place trying to access the db that was just taken away).
<m_3> ack
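The guard pattern m_3 describes can be sketched as a tiny hook fragment. This is a hypothetical sketch, not the actual liferay hook: the `db_ready` helper and the key names are made up for illustration, while `relation-get` is the real juju hook tool.

```shell
#!/bin/sh
# Sketch of the relation guard discussed above. In a hypothetical
# db-relation-changed hook you would fetch the values with the real
# juju hook tool (user=$(relation-get user), etc.) and then:
#
#     db_ready "$user" "$database" || exit 0   # not an error: juju
#                                              # re-runs the hook when
#                                              # the remote data lands
#     write_config && start_tomcat             # names illustrative
#
# db_ready succeeds only once both settings are present.
db_ready() {
    [ -n "$1" ] && [ -n "$2" ]
}
```

Exiting 0 is the important part: the hook reports success and simply waits to be invoked again when the remote side publishes its settings.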
<jcastro> arosales: sigh, I suck, have the pad URL handy for yesterday's meeting?
<m_3> jhf: and we should add data um... continuity... to that workflow too :)... but first thing's first
<arosales> jcastro, http://pad.ubuntu.com/7mf2jvKXNa
<m_3> sinzui: ping... do you have any other mongodb tarballs handy for testing restore? (besides the embedded fixtures?)
<sinzui> I do!
<m_3> awesome... chinstrap?
<m_3> sinzui: also, the next part I'm gonna dig through is how this handles ordering wrt replset formation
<m_3> I'm a bit confused by that atm, but still poking around
<sinzui> m_3 this is a real world case http://people.canonical.com/~curtis/charmworld-20130425-184952.tar.gz
<m_3> sinzui: danke
<m_3> downloaded if you want to remove/clean that up
<m_3> sinzui: hit a bug with the 'nothing to restore' case
<m_3> in the MP
<sinzui> oh dear
<Campbell> Anyone seen an issue whereby they bootstrap juju on MAAS and everything seems to be going well in the install process but at a point, on the console of the machine being bootstrapped you get an "Installation Step Failed" message with a UI but the console fails to respond to any keystrokes etc?
<Campbell> I got to the bottom of my PXE install problem. The Dell M620s I am using have 2GB flash drives on them as well as the drives hanging off the RAID controller. Ubuntu was trying to install to the flash drives. Resolution was to disable the flash and it goes and installs to the HDDs.
<marcoceppi> hazmat: I noticed you keep referring to juju-deploy -W -T, but I can't find what capital W refers to, is it just the same as -w ?
<hazmat> doh
<hazmat> yah.. probably
<marcoceppi> cool, just checking
#juju 2013-06-07
<nfoata> Hi, after creating a service with several units 'juju deploy -n 3 mysql' (which creates 3 VMs), I put a different file on each of them (to keep their identity), then I remove two units (they go into the machine pool), add a unit to the service (machine coming from the pool), and at last I deploy wordpress (a new service), so one machine comes from the pool and the last one is created (pool empty)
<nfoata> I was surprised to find in the last service (wordpress) an old unit of mysql. Isn't this dangerous?
<nfoata> Indeed, if a hook does not clean up a unit well (logs, libs, etc.), it means that in the future we can have a service with different VMs
<nfoata> So, is there a way to create a pool for each service, or to specify which VM of the pool can be added to the service, or to force deploy to create a VM instead of using one from the pool?
<bbcmicrocomputer> nfoata: #993288
<bbcmicrocomputer> nfoata: in general, destroy-service won't remove the machine, so it will be reused upon next deploy
<bbcmicrocomputer> nfoata: generally you'd need to terminate said machine before deploying if you wanted a fresh machine
<bbcmicrocomputer> nfoata: although as the bug ticket says, you could deploy a null charm to a deallocated machine if you didn't want to destroy it
<bbcmicrocomputer> nfoata: presumably using constraints you may be able to force the deployment of a new machine even where others are in the pool
<nfoata> bbcmicrocomputer: thanks, so perfect, this problem is already known (bug ticket) and I will look at constraints to force the deployment of a new machine (for security, to be sure machines are the same)
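A hypothetical command sequence for bbcmicrocomputer's suggestions (the machine number and the constraint value are illustrative, not from the conversation):

```shell
# Free the idle machine so the next deploy allocates fresh hardware
# (machine number is hypothetical -- check `juju status` first):
juju terminate-machine 3
juju deploy wordpress

# Or nudge the provider toward a new machine with a constraint the
# pooled machine does not satisfy (illustrative value):
juju deploy --constraints "mem=4G" wordpress
```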
<jcastro> jhf: so I'm so looking into the future, but we could so totally ship this in the charm: http://blog.jelastic.com/2013/06/06/liferay-cluster/
<jhf> hah… yeah.. that blog post has generated some buzz in our community today
<m_3> awesome, that seems like the perfect next steps to pimping out the liferay charm
<jhf> pimping out :)
<jhf> is wayne brady gonna have to cluster a juju??
<robbiew> jhf: lol
<robbiew> nice one
<jamespage> mgz, around? did you ever get to the bottom of how to upload juju-core tools into an offline environment?
<jpds> Hello, can someone help me figure out why juju can't deploy an instance after the initial bootstrap?
<marcoceppi> jpds: Sure, what's going on?
<jpds> marcoceppi: It just shows 1: instance-id: pending.
<marcoceppi> jpds: what provider?
<jpds> marcoceppi: MAAS.
<marcoceppi> ah
<marcoceppi> jpds: someone else had a similar issue, where they could bootstrap but not deploy. I wasn't able to help them (haven't used maas in a long time)
<marcoceppi> There's a maas.log somewhere that might reveal more information, let me see if I can find where that log is
<jpds> marcoceppi: Nothing interesting there.
<marcoceppi> What version of juju are you using?
<jpds> marcoceppi: ppa:juju/pkgs.
<marcoceppi> juju or juju-core?
<jpds> marcoceppi: juju.
<marcoceppi> So, there's a provisioner log on the bootstrap node that's responsible for commissioning the machines; you can use juju ssh 0 to get to the bootstrap node. I believe the log is in /var/log/juju/ and it might provide more details as to where the provisioner failed
<marcoceppi> jpds: again, haven't used juju+maas in quite a long time, so just trying to shepherd you to where you can find info on why it failed
<jpds> juju.agents.provision@ERROR: Cannot get machine list
<jpds> Hmm.
<jpds> What would it be trying to reach?
<jpds> I can ping all the relevant IPs I can think of.
<marcoceppi> jpds: are the machines assigned to your maas user for provisioning, etc?
<marcoceppi> Could have just been a timeout, try another deployment to see if it also is listed as pending
<jpds> marcoceppi: I've been destroying the environment for hours.
<marcoceppi> jpds: okay, then that's not it, sorry!
<jpds> marcoceppi: And re-bootstrapping; and they are all under my user.
<marcoceppi> jpds: could you pastebin your entire log? I want to see if anything jumps out at me
<jpds> http://pastebin.ubuntu.com/5742654/
<marcoceppi> jpds: what do you have for your maas-server field in environments.yaml ?
<jpds> marcoceppi: maas-server: 'http://192.168.125.10/MAAS'
<marcoceppi> jpds: add the port in there, http://192.168.125.10:80/MAAS, destroy and re-bootstrap, try again
<marcoceppi> that integer error looks really familiar, I think that's the cause of it
<jpds> OK, reinstalling the system.
<jpds> marcoceppi: Well, that worked.
<marcoceppi> jpds: awesome. I'm going to update the docs with that caveat. I know you're not the first to spend too long trying to figure that out
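For reference, the fixed stanza would look something like this sketch; only the maas-server value comes from the conversation, the other keys are the usual placeholders:

```yaml
environments:
  maas:
    type: maas
    # the explicit :80 is the fix -- this juju version failed to
    # parse the port-less URL
    maas-server: 'http://192.168.125.10:80/MAAS'
    maas-oauth: '<your-maas-api-key>'
    admin-secret: '<your-admin-secret>'
    default-series: precise
```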
<jcastro> hey arosales
<jcastro> any word on new docs landing?
<arosales> jcastro, I think we are targeting end of month for them to officially be the standard.
<adam_g> wedgwood, around?
<wedgwood> adam_g: yep
<adam_g> wedgwood, nvm. was gonna modify core.apt_install() but decided against it. couldn't find any charms that are using it as-is. know of any?
<dpb1> HI -- anyone seen this error trying to bootstrap on openstack?  http://paste.ubuntu.com/5742835/
<marcoceppi> dpb1: is there a security group already named "juju-precise" in your account?
<marcoceppi> Possibly left over from a bad teardown
<dpb1> marcoceppi: so, apparently goose does not deal with the change of security group ids from id to uuid in grizzly (I think).
<ahasenack> it's never that simple
#juju 2013-06-08
<ram_> hello all
#juju 2014-06-02
<jose> marcoceppi: hey, what do you think of linking to juju's reddit on the topic?
<sebas5384> hey jose!
<jose> hello, sebas5384
<sebas5384> i'm doing some nasty things to access files in the containers inside a vbox hehe
<sebas5384> host > vbox > container
<sebas5384> with vagrant building the vbox, so i had to use something like sshfs
<sebas5384> so that solves my problem of working into the charm :P even using mac :P
<sebas5384> jose: do you know how to test a juju plugin made with go ?
<sebas5384> cmars: ping
<sebas5384> i did some modifications for this https://github.com/cmars/juju-nat/issues/5
<jose> nope, no idea
 * jose hasn't played around with juju plugins
<sebas5384> but i don't know how to test it
<sebas5384> hehehe
<sebas5384> gotcha
<sebas5384> :P
<jose> but https://www.youtube.com/watch?v=e96yhMcnHLc may be able to help
<sebas5384> hmm nice
<sebas5384> :)
<sebas5384> thanks jose
<jose> np :)
<sebas5384> jose: is there some community of juju'ers around south america?
<jose> sebas5384: erm... myself?
<sebas5384> like juju meetups and stuff
<sebas5384> hahaha
<sebas5384> other than you
<sebas5384> there are some guys from tsuru
<sebas5384> from the globo.com
<jose> not sure, maybe nxvl, who was part of the Juju team when it started
<sebas5384> hmm don't know them
<sebas5384> jose: register a channel juju-br :)
<sebas5384> let's see if i can raise some community over here in brasil
<jose> sebas5384: not sure if that would be on-topic, plus we would need to know who the group contacts for the juju namespace are
<jose> it's different on freenode :)
<sebas5384> jose: oh, didn't know there was some bureaucracy
<jose> yep
<sebas5384> jose: if you can help me, I'd appreciate it!
<jose> I'll ask around tomorrow
<sebas5384> thanks!
<jose> np :)
<khuss> Getting an error while deploying service and am not able to figure out the root cause
<khuss> This is the error i'm getting -  cannot run instances: gomaasapi: got error back from       server: 409 CONFLICT (No matching node is available.
<khuss> This started happening after I had to recommission some nodes.. after that I deleted nodes and recreated them
<Tribaal> hi all. Could an external process running on a juju deployed machine trigger a "config-changed" hook run? If so, how?
<Tribaal> (for the whole charm - not just the local machine)
<Mosibi> Hi all. I'm having a problem with the nova-compute charm. When i try to deploy it, MAAS returns a "No matching node is available" message
<Mosibi> I looked into this with some nice people from #maas and it looks like juju is setting some constraints that I am not seeing.
<Mosibi> Is there a possibility to show all the constraints juju is sending to the (MAAS) provider?
<jamespage> Mosibi, juju get-constraints  nova-compute
<jamespage> will show you any constraints that juju is providing
<Mosibi> jamespage: that returns an empty list.
<jamespage> Mosibi, no constraints then
<Mosibi> When I force a tag on a 'ready' node in MAAS and do a juju deploy --constraints tag=mytag, the deployment works.
<Mosibi> Thus somehow it looks like Juju is sending constraints to MAAS that I did not provision
<Mosibi> and when I give it a constraint, it is the only constraint and thus the deployment is started...
<Mosibi> jamespage: also I did not set constraints when I bootstrapped against MAAS
 * jamespage scratches his head
<Mosibi> :)
<jamespage> Mosibi, I'm wondering if its an implicit constraint
<jamespage> Mosibi, the servers you have are 64bit right?
<jamespage> and show up in MAAS as 64 bit
<Mosibi> jamespage: 64/amd64 indeed
 * jamespage scratches his head again
<Mosibi> jamespage: i am making a log with 'does not work' and 'works'
<Mosibi> The 'works' part should be deploying a compute node, that takes some time..
<Mosibi> jamespage: host is still building, but that works..
<Mosibi> So here is the 'command log': http://pastebin.com/raw.php?i=YRFJbZjT
<jamespage> Mosibi, hmmm
<jamespage> Mosibi, it looks like MAAS is not interpreting the fact that >= 20G was requested correctly - does the node show up in MAAS as having more than 20G?
<Mosibi> jamespage: yes
<Mosibi> maas maas tag nodes compute | grep memory
<Mosibi>         "memory": 26624,
<nuclearbob> I'm having trouble getting juju setup on canonistack, and I'd appreciate some guidance if anyone has some extra time
<jamespage> gnuoy, hows the neutron split out coming along?
<gnuoy> jamespage, good thanks, I still need to look at ha but I don't think that's too big a deal (plenty of precedent)
<jamespage> yeah
<jamespage> it's near liverpool street so we don't have to get too mad
<gnuoy> kk
<jcastro> sinzui, hey I believe you documented series in the docs, do you remember/know where it is?
<jcastro> for like setting "trusty" environment-wide
<jcastro> https://juju.ubuntu.com/docs/charms-environments.html#updating-running-environments
<jcastro> is this it?
<sinzui> jcastro, yes. It is also in the 1.18.1 release notes on that site
<jcastro> ta
<jamespage> niedbalski_: nice work on the fstab management code btw - I had one comment but apart from that ++
<Marek1211> my landscape-client charm stuck in dying state
<Marek1211> is there any way to get rid of it?
<tedg> Marek1211, I learned that last week, you need to resolve the errors with juju resolve.
<tedg> Had charms sitting around in dying states all over.
<tedg> Looking for someone to harass about the Owncloud charm not having SSL yet :-)
<tedg> jose, Could you set it to "Needs Review" ?
<Marek1211> don't have that function "Resolve" in this charm :(((
<tedg> Marek1211, Sorry, it's "resolved" -- juju help commands
<therealmarv> Hi there! Seems I did not log my last chat. Does anybody know if there is progress in fixing the vagrant precise vbox (it keeps napping forever)? I'm also willing to fix it myself… which is the best place for juju bugs?
<therealmarv> vagrant trusty vbox works but many things are not yet available for trusty.
<Marek1211> unfortunately it does not work. when I type juju resolved in the cli and hit TAB it shows all services except landscape-client...
<hackedbellini> hi guys
<hackedbellini> my juju units are spamming this error on their logs since the juju upgrade from 1.18.x to 1.19.2: http://pastebin.ubuntu.com/7573901/
<hackedbellini> anyone knows how I can solve this?
<hackedbellini> there is another problem, but I think it's related to this one. One of my units is in a failed state, but when I try "juju resolved" on it, it says it can't resolve because it's already marked as resolved
<lazypower-travel> therealmarv: for vagrant boxes? we don't have an issue tracker for it yet
<lazypower-travel> therealmarv: however, if you can email me your issue i'll make sure to follow up on it within the next couple of days. I'm out for travel today but will be home starting tomorrow and shouldn't have an issue tracking down a fix for you.
<therealmarv> @lazypower-travel what is your name? Do you work for Canonical? My name is Marvin Reimer btw. My problem is really easy to replicate. Just follow these instructions with precise 64 bit and the gui will not pop up: https://juju.ubuntu.com/docs/config-vagrant.html
<lazypower-travel> thedac: I do work for canonical.
<lazypower-travel> gah, therealmarv
<thedac> :)
<lazypower-travel> sorry dac :)
<lazypower-travel> therealmarv: I do work for canonical. I just spoke with another project maintainer and we should get an issue tracker setup shortly.
<therealmarv> ok great!
<therealmarv> Iâm trying to write a Charm for PyBossa (a Shuttleworth Foundation Project) but discovered some bugs. I also had problems with local precise (lxc) â¦ will try to replicate that problem.
<jcastro_> If we don't connect Juju to this then I consider us a failure: http://bt.tn/
<tedg> jcastro, Seems really expensive…
<jcastro> yeah
<natefinch> jcastro: pretty sure davecheney can whip one of those up for you from a raspberry pi for like half the price.
<mhall119> marcoceppi: please register on summit for uos-1406, I can't add you as a track lead in the system until you do: http://summit.ubuntu.com/uos-1406/registration/
<mhall119> arosales: ^^ you too please
<marcoceppi> mhall119: registered
<mhall119> thanks marcoceppi
<arosales> mhall119, will do
<arosales> mhall119, done
<mhall119> thanks arosales
<arosales> mhall119, thanks for the reminder
<mhall119> np
<jcastro> mhall119, marco is knee deep in reviews, I'm going to do his stuff for UOS
<mhall119> jcastro: thanks, really I need track leads to find other people to run sessions, not to run them yourselves
<jcastro> yeah I have a call out on our mailing list, etc.
<mhall119> thanks
<jose> tedg: I'm working on some idempotency bugs to get it merged
<tedg> jose, Ah, okay. Sounds good, I'm excited to use it!
<jose> tedg: I'm checking it right now to see if I can get it fixed now
<jose> cannot assure you that it's going to be finished today, though
 * jose just came back from a long Ingress field trip
<tedg> jose, I understand, thanks for working on it.
<jose> np :)
<rharper> is there a way to control how long juju status waits?  I'd like to set a specific time to wait on an environment
#juju 2014-06-03
<jose> mind some upvotes? http://www.reddit.com/r/Ubuntu/comments/275r00/do_you_like_cloud_and_automation_with_juju_check/
<designated> Can someone please explain to me how I would deploy a juju charm and specify different network interfaces for different things?  As an example, I have two bonded and vlan-tagged 10GbE NICs that I would like to use for openstack storage and host-to-host communication, but the charm doesn't seem to account for any variation.  Are we truly limited to a single network interface for everything unless we write custom charms?
<designated> I'm not finding anything in the documentation that addresses this situation.
<jkary1> Hello folks.  Anyone having issues with juju sync-tools?  I seem to be getting connection timeout issues…
<jkary1> Debug is telling me "cannot load index "https://streams.canonical.com/juju/tools/streams/v1/index.sjson": invalid URL", but curl works fine.
<designated> jkary1, good luck getting a response in here :(
<jose> jkary1: can you try again, please?
<jose> designated: maybe try in business hours? I find more people around 9am-5pm EST
<jose> (well, EDT)
<lazyPower> designated: that's purely dependent on the charm. If the charm doesn't support designating the interface to bind to, it's a de facto missing feature and should be filed.
<lazyPower> designated: Most of the charms that I've reviewed do not support that kind of a scenario, and this would be the first I've seen it come up in chat. However - with that being said, it's a valid use case and if you have need, this is a great area that you could contribute back to. If you need some help getting started I could take a look at the charm with you and lay out a road map on how I would tackle the implementation / review process.
<lazyPower> and +1 for jose's note about business hours. Most of the juju team is around during EDT office hours.
<s3an2> Hi, has anyone had any problems with the openstack vncproxy not getting installed - I would have expected this to be within the nova-cloud-controller charm?
<Mosibi> s3an2: yes..
<Mosibi> a colleague has changed the charm and can start testing it real soon.
<s3an2> Ok - So this is a known problem?
<Mosibi> If his code does what we want, he will submit it and then hopefully it will be committed.
<Mosibi> s3an2: there is a small workaround. You can put the needed configuration in config_flags in nova-cloud-controller and nova-compute
<Mosibi> Besides that, you need to install some extra packages.
<s3an2> cool, So I can install the packages on the nova-cloud-controller, and set the nova.conf options in config_flags as a work around - that sounds good.
<Mosibi> s3an2: config-flags: "vncserver_listen=<ip_addr_cloudctrl>,vncserver_proxyclient_address=<ip_addr_cloudctrl>,vnc_enabled=True,novncproxy_host=0.0.0.0,novncproxy_port=6080,novncproxy_base_url=http://<ip_addr_cloudctrl>:6080/vnc_auto.html"
<Mosibi> that's for the cloudcontroller
<Mosibi> and this one for the compute node
<Mosibi> config-flags: "vnc_enabled=True,vncserver_listen=0.0.0.0,novncproxy_base_url=http://<ip_addr_cloudctrl>:6080/vnc_auto.html,vncserver_proxyclient_address=<ip_addr_compute_node>"
<jkary1> Hello folks.  Anyone having issues with juju sync-tools?  I seem to be getting connection timeout issues…
<Mosibi> The problem is that ip_addr_compute_node is static....
<jkary1> Debug is telling me "cannot load index "https://streams.canonical.com/juju/tools/streams/v1/index.sjson": invalid URL", but curl works fine.
<Mosibi> s3an2: cloudcontroller: apt-get install nova-console nova-novncproxy
<Mosibi> s3an2: compute: apt-get install nova-novncproxy
<Mosibi> that's it :)
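Pulled together, Mosibi's workaround might look like this hypothetical transcript. The config-flags values and package names come from the conversation; the `<...>` placeholders must be replaced with real addresses, and `juju ssh` plus apt-get is just one way to get the extra packages installed:

```shell
# Set the VNC options on both charms (values from the snippets above):
juju set nova-cloud-controller \
    config-flags="vncserver_listen=<ip_addr_cloudctrl>,vncserver_proxyclient_address=<ip_addr_cloudctrl>,vnc_enabled=True,novncproxy_host=0.0.0.0,novncproxy_port=6080,novncproxy_base_url=http://<ip_addr_cloudctrl>:6080/vnc_auto.html"
juju set nova-compute \
    config-flags="vnc_enabled=True,vncserver_listen=0.0.0.0,novncproxy_base_url=http://<ip_addr_cloudctrl>:6080/vnc_auto.html,vncserver_proxyclient_address=<ip_addr_compute_node>"

# Install the extra packages on each node:
juju ssh nova-cloud-controller/0 'sudo apt-get install -y nova-console nova-novncproxy'
juju ssh nova-compute/0 'sudo apt-get install -y nova-novncproxy'
```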
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: bcsaller
<s3an2> cool, got it working :)
<gh0st_> Hello
<Mosibi> s3an2: nice!
<gh0st_> does someone know how to stop a hook from executing?
<marcoceppi> gh0st_: could you elaborate?
<gh0st_> i want to deploy mongo shard cluster
<alexpilotti> alexisb: ping
<alexisb> alexpilotti, join juju-dev
<lazyPower> gh0st_: whats juju doing that you're not expecting it to do? I don't understand the correlation between wanting to deploy a mongodb cluster and stopping hook execution.
<gh0st_> when I connect the charms between them (3 configsvr and 1 mongos) I have a hook error because the mongos instance tries to connect to the config servers but they're not ready yet
<lazyPower> gh0st_: ah, ok. Well you can't 'stop hook execution' per se - but I'm assuming your service is now in an 'error' state because it didn't receive the data it was expecting?
<gh0st_> yeah
<gh0st_> hmm, maybe not stopping, but pausing or something
<lazyPower> gh0st_: if it's in an 'error' state, it won't be re-executed until you tell it to re-run that hook or resolve it.
<lazyPower> if your other service is online, you should be able to do a juju resolved --retry unit/#  and it will attempt to re-execute the hook that failed
<hackedbellini> hi guys,
<hackedbellini> I'm having 2 problems here since juju upgraded from 1.18.x to 1.19.2 (btw, don't know why juju upgrade-juju did this since 1.19.2 is a non-stable release).
<hackedbellini> But well, all of my unit logs are being spammed with this: http://pastebin.ubuntu.com/7580562/
<hackedbellini> and one of my units is in an error state but when I try to "juju resolved" it, it gives me "ERROR cannot set resolved mode for unit "asyncweb/0": already resolved"
<gh0st_> yeah, that's true, but I want to know if there is any option to tell juju which instance it must configure first?
<marcoceppi> gh0st_: no, the charm should do that
<marcoceppi> so if it's not, then it may be a bug in the charm
<lazyPower> gh0st_: if you're using a bundle, juju deployer will parse the relationships and deploy dependent services, and once they have reached a ready state, it will then build the relationships between charms. But to marcoceppi's point, the charm should be robust enough to enter a wait loop while it's pending data.
<lazyPower> gh0st_: if you're seeing behavior that doesn't fit with that statement, I encourage you to file a bug against the MongoDB charm so the maintainer can take a look at it and potentially issue a patch.
<gh0st_> ok, I think I'll do that
<gh0st_> but i have another question..
<gh0st_> so how do I put the charm into a wait loop in code? do I need to create a special function for this or something?
<jamespage> gnuoy, if you have time - https://code.launchpad.net/~james-page/charm-helpers/juno-opening/+merge/221882
<lazyPower> gh0st_: relationships send data over the wire and you read them on the consuming charm. If the data is not present, you can do one of two things: a) return 0 so when it's complete the relationship-changed hook will re-execute, or b) sleep & loop until the value is present.
<lazyPower> gh0st_: and correct - the charm code itself would need to implement this behavior
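Option (b), the sleep-and-loop approach, can be sketched with a small helper. This is hypothetical: `wait_for_value` is an invented name, and its argument stands in for a real call such as `relation-get configsvr-host` inside a hook.

```shell
#!/bin/sh
# Sketch of lazyPower's option (b): poll until a relation value is
# present. "$1" is a command that prints the value; in a real hook
# you might call: wait_for_value 'relation-get configsvr-host'
wait_for_value() {
    while value=$($1 2>/dev/null); [ -z "$value" ]; do
        sleep 1    # don't busy-wait between polls
    done
    echo "$value"
}
```

Note the trade-off versus option (a): a sleep loop holds the hook open (and blocks the unit's hook queue), whereas returning 0 lets juju re-run the hook when the data actually changes.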
<gnuoy> jamespage, looking
<lazyPower> gh0st_: from what I remember the MongoDB charm is a pretty robust python based charm, and if you're familiar with python you may be able to implement and submit a patch. All community contributions are welcome!
<gh0st_> yeah, and I want to do this
<gh0st_> but for now I'm trying to figure out how to write this wait loop while the charm is waiting for data
<lazyPower> gh0st_: if you have charm-tools package installed, charm-get mongodb and start looking through the source for the relationship hooks.
<gnuoy> jamespage, merged
<gh0st_> ok, when a charm hook is in a waiting loop, is there an option to execute another hook to gather the data that the charm needs?
<lazyPower> gh0st_: it should be gathering the data it needs during the hook you've put in a wait loop
<jose> lazyPower: HAPPY BIRTHDAY, SIR!
<wwitzel3> cory_fu: just noticed you got bug 1314699 buttoned up last week, nice work :)
 * jose brings balloons and cake
<gh0st_> but what about when I put the charm in a waiting loop and it can't gather data, because all the charm needs is waiting on the config server on another instance deployed by juju?
<lazyPower> jose: thanks :)
<lazyPower> gh0st_: I think you're misunderstanding the workflow. When that data is propagated via relation-set - the relationship's config-changed hook is fired on the consuming service. It can then read that data. So this is an if / then logic block that says "if this relationship value is present, execute, otherwise return 0 and do nothing until we have the data"
<cory_fu> wwitzel3: It still needs solr and SCM support, but I'm quite happy with how clean the charm-helpers services framework made the charm.
<lazyPower> er, sorry I said config-changed and that's confusing. It's the relationship-changed hook. gh0st_
<gh0st_> sorry, i get it now
<lazyPower> no need to apologize :) We have to start somewhere
<hackedbellini> so, I found this bug (https://bugs.launchpad.net/juju-core/+bug/1175031) and tried to do what the first comment said, to restart some jujud services running inside the lxc. Now there's a panic on the unit log (http://pastebin.ubuntu.com/7580760/) and I still can't set it resolved (juju insists it's already resolved when it's not)
<gh0st_> so when relation-set is executed, the next hook executed is config-changed, is that correct?
<marcoceppi> gh0st_: no, relation-changed is executed
<marcoceppi> well *-relation-changed, where * is the relation that the data was changed on
<rbasak> sinzui: thank you for sorting out the 1.18.4 release for me. I hope to upload to Utopic tomorrow.
<rbasak> Then I can backport push it to Trusty.
<sinzui> rbasak, thank you very much
<rbasak> jamespage: ^^, just so I don't step on your toes.
<jamespage> rbasak, you are not :-)
<jamespage> thanks for the headsup
<jose> mbruzek: I'm about to be done with the idempotency issues, just one last manual test about to be ran :)
<jose> jcastro: hey, whatcha think about putting the reddit link on the channel topic?
<jcastro> sure
<mbruzek> jose that is awesome
* jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Want even more Juju? We're on Reddit! http://goo.gl/YfeTkX || Weekly Reviewers: bcsaller
<mbruzek> looking forward to it jose
* jcastro changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: bcsaller || News and stuff: http://reddit.com/r/juju
<gh0st_> ok, and when you execute relation-set you put in some info like unit, port, address. Is there any way to change that info when relation-changed is executed after relation-set?
<jose> ah, there ya go
<jose> gh0st_: relation-set is not meant to be a hook, it's a command, so you execute that inside your relation-changed
<gh0st_> oh, I get it, so finally: when relation-set is executed, it must be in the *-relation-changed hook, not in *-relation-joined?
<jose> gh0st_: you can put the command in any of the hooks, when you see it necessary put it there
<jose> so you can run it in *-relation-joined, and then in *-relation-changed
<gh0st_> and if I run it in *-relation-joined, it executes relation-changed, is that correct?
<gh0st_> and when I execute relation-set in the *-relation-changed hook, what will be executed next, relation-changed again?
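To make the exchange concrete, a hypothetical providing-side hook might look like the sketch below. `relation-set` and `unit-get` are real juju hook tools (only available inside a hook); the keys and values are illustrative.

```shell
#!/bin/sh
# Hypothetical db-relation-joined hook on the side providing the
# database (key names and values are made up for this sketch).
relation-set user=wordpress database=wpdb port=3306 \
    host="$(unit-get private-address)"
# Each relation-set that changes data triggers *-relation-changed on
# the remote units, where the new values are read with relation-get.
# So "changing the info later" is just calling relation-set again
# from a later hook; the remote side sees another -changed event.
```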
<cjohnston> noodles775: hey there.. avoine is working on converting the python-django charm to use ansible..
<cjohnston> he has some questions about a way to do a couple of the things that he is doing...
<noodles775> Hi there avoine! How far along are you?
<avoine> noodles775: I'm there
<cjohnston> It looks quite good so far... but there is still more python to be removed that he isn't quite sure how to do.
<avoine> the problem is that I have some extra variables that I would need in the ansible playbook
<avoine> so I was thinking maybe we could modify the module in charm-helpers to do that
<avoine> something like: apply_playbook(playbook, tags=Install, extra_vars={'my_custom_var': 1234})
<avoine> noodles775: what do you think?
<noodles775> avoine: Yes, something similar was requested by bloodearnest a few weeks ago. +1. But let me know your actual use-case, as you may be able to simply set the variable in your playbook for it to be available in all other tasks/templates etc.
<jose> gh0st_: I don't understand what you're trying to do
<avoine> noodles775: one example is that I need to do some  escaping like this: {{local_unit|dirname|replace("-","_")}}
<avoine> and that makes the playbook horrible
<noodles775> avoine: if it's DRY it wouldn't be horrible, would it? (I mean, if you add to your playbook's vars section 'myvar: {{ local_unit|dirname|replace("-","_") }}' and just use 'myvar' throughout your playbook?)
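A sketch of that suggestion as a playbook fragment; the variable name `myvar` and the task are illustrative, only the filter expression comes from the conversation:

```yaml
- hosts: localhost
  vars:
    # define the ugly expression once...
    myvar: "{{ local_unit | dirname | replace('-', '_') }}"
  tasks:
    # ...and reuse the clean name in every task/template
    - name: show the sanitized unit name
      debug: msg="unit is {{ myvar }}"
```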
<avoine> yeah that would do it
<avoine> but also the settings and python path combinations are quite complicated and I don't want to make a task for every combination
<avoine> like I'm doing now for the repos + branch combinations
<noodles775> avoine: can I see? Where's the branch?
<avoine> noodles775: here -> http://bazaar.launchpad.net/~patrick-hetu/charms/precise/python-django/charmhelpers/view/head:/playbooks/install.yaml
<noodles775> avoine: Just a heads-up, I'm currently working on some re-usable roles for charms for our internal use. I've not yet documented them at all, but will do in the next few days: https://github.com/absoludity/charm-ansible-roles
<noodles775> avoine: the wsgi-app role may be something you could reuse (if you wanted to), but I'll ping you about it in a week or so when I've got it documented/better tested.
 * noodles775 checks the branch.
<avoine> a re-usable role for vcs would do the trick
<noodles775> avoine: so in that install.yaml, a vars section defining repo_destination would get rid of a lot of the ugliness. I'm not sure how you can avoid separate tasks for separate vcs's though.
<noodles775> avoine: great work on the upgrade, it's looking really nice!
<avoine> noodles775: thanks
<avoine> noodles775: I'll give it a try at ansible only and I'll ping you back if that's too complicated
<noodles775> avoine: sounds good. I'll probably be gone, but will take a look first thing tomorrow (or may pop back later)
<avoine> noodles775: ok, btw if you could give your +1 to the Merge proposal that would be awesome (I can't wait to use it in juju-gui ;) )
<noodles775> avoine: I've not looked closely at your charm yet, but one thought is that you could separate those tasks into separate files (not playbooks, just tasks for inclusion) and then conditionally include them in your playbook depending on vcs and/or repos_branch (see the ansible docs)
<noodles775> avoine: will do, feel free to ping me when it's ready.
<avoine> ok
<jose> mbruzek: and it's all set now! :)
<mbruzek> meow?
<mbruzek> jose, owncloud?
<avoine> noodles775: thanks for the tip
<jose> mbruzek: yeah!
<mbruzek> or nyan cat?
<jose> owncloud
 * avoine go to lunch
<jose> damn, I have to fix nyancat too? /me shrugs
<mbruzek> just kidding with you Jose.
<jose> :P
<mbruzek> I will take a look at owncloud when I get a chance.
<jose> thank you!
<Egoist> Hello
<Egoist> I have a problem with charm hooks?
<designated> lazyPower, Thanks for the response.  When do you have some time to discuss charms supporting multiple network interfaces, to include bonded and vlan tagged NICs?
<Egoist> is there any one who can explain me?
<lazyPower> designated: I'll be 'free'ish around 4:00pm EDT if that works for you.
<designated> lazyPower, I'll be here.
<Egoist> i have a problem with the mongodb charm: when i want to connect the instances that should be config servers with the router, i get a problem with the hook
<Egoist> I have two instances and want to connect them. The problem is that the first instance wants to connect to the second, but the second has not finished configuring, so it refused the connection, and that leaves me with an error in the *-relation-changed hook
<Egoist> does someone know how to handle this?
<sebas5384> ahasenack: pt-br ?
<ahasenack> yes
<sebas5384> hey!
<sebas5384> I don't know anyone from Brazil who uses juju, apart from the tsuru guys haha
<sebas5384> that's great!
<sebas5384> a wip idea i'm working on, https://github.com/sebas5384/ansible-juju-local
<lazyPower> interesting sebas5384
<lazyPower> I'll be watching this as it develops.
<sebas5384> lazyPower: great!
<sebas5384> this one was only to prove a concept, now I'm going to modularize more
<lazyPower> sebas5384: my only thought on this - is that you're driving lxc with vagrant to drive juju, which is an added wrapper
<lazyPower> why not use charms?
<lazyPower> and cut out the vagrant portion?
<lazyPower> oh wait, i missed something. You're driving lxc inside of vagrant for mac users.
<lazyPower> carry on then
<sebas5384> lazyPower: because i'm using a mac, and other guys here are too
<lazyPower> sebas5384: i was brain bending around this a bit and think I'm going to write a wrapper for sending commands to the vagrant box via a juju plugin
<lazyPower> eg: juju vagrant deploy mycharm
<sebas5384> lazyPower: wow
<sebas5384> thats interesting
<lazyPower> that way you dont have to open an ssh connection inside the vagrant box, and it exposes a set of convenience methods for users consuming the vagrant boxes
<lazyPower> which will accomplish some of what this is doing.
<sebas5384> hmmm
<lazyPower> i haven't done anything beyond drink beer and think about it... but you'll see some prelim code land in the next week or two.
<sebas5384> hahaha
<sebas5384> if you want to talk more about that lazyPower maybe a hangout
<sebas5384> because that idea is great!
<sebas5384> something i would be interested in working on too
<lazyPower> sure
<lazyPower> Tomorrow evening work for you?
<sebas5384> lazyPower: anyway i'm planning to make some juju roles for ansible
<sebas5384> yeah of course
<sebas5384> but whats your time zone?
<lazyPower> brilliant.
<lazyPower> EDT
<lazyPower> sebas5384: whats your timezone?
<designated> lazyPower, you available?
<lazyPower> designated: shortly - i need to finish composing this email
<lazyPower> designated: Ok. So - where do we begin? You want to specify network interfaces for binding on which charm?
<designated> lazyPower, any of the openstack charms.  I want to have the ability to use separate interfaces for different functions, for example the mgmt network should be em1, the private and storage network should be two bonded 10Gbe interfaces, etc...
<lazyPower> jamespage: is this on the roadmap for the OStack charms? ^
<lazyPower> designated: let me validate this isn't already being worked on
<designated> lazyPower, thank you
<lazyPower> designated: without any response from an openstack charmer, my first thought is as follows. How do you determine which interfaces are used for which task as it stands now? Is this an arbitrary thing you do in your network, or do you follow a convention? In addition, is there an easy way to determine this programmatically?
<designated> lazyPower, all of the servers in the resource pool will be configured identically, so the charm just needs the ability to assign the bonded interface to certain tasks
<designated> lazyPower, instead of being limited to a single interface
<lazyPower> designated: In this instance, I would install the charm-tools package and 'charm get' the charm you wish to modify. Add a config parameter in config.yaml and modify the relevant hook(s) config-changed block and start testing the functionality.
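As a sketch of the first step lazyPower describes, the new config.yaml entry might look like this (the option name, default, and wording are hypothetical, not from any existing OpenStack charm):

```yaml
# config.yaml fragment: expose the interface choice to the operator
# (option name and default are hypothetical)
options:
  mgmt-interface:
    type: string
    default: eth0
    description: |
      Network interface to bind management traffic to. The charm
      assumes a single interface unless this is overridden.
```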
<lazyPower> once you've got something that accomplishes your goal, then submit it for Review process if you want it included in the upstream OpenStack charms.
<lazyPower> This sounds like it would be a great addition to the OpenStack ecosystem. I'm not sure if it's already being worked on, so before i wrote any code, I would suggest you mail the juju mailing list and see if any traction has been made around the topic.
<lazyPower> In terms of deployment and testing, I can help by 'sponsoring' your efforts and donating time to test / review your initial code prior to the official openstack charmer review
<designated> lazyPower, thank you
<lazyPower> but without an easy way to know 'this interface for public, this interface for private' since its kind of arbitrary between setups - the logic would need to be dependent on config options, and need to be added to the charm.
<lazyPower> and the charm, by default, should assume a single network interface so it works with most deployments - which it does now.
<lazyPower> designated: any time! We have a great community. Each submission counts :) And it sounds like you're working towards a highly optimized revision of the existing openstack experience, which would be great to see on top of our existing optimizations.
<lazyPower> designated: i'm normally around unless i'm attending a conference circuit or on holiday. Feel free to ping me anytime for Q/A & reviews.
<sebas5384> lazyPower: sorry!! was called to a meeting ¬¬
<sebas5384> lazyPower: my timezone is EDT+1
<lazyPower> sebas5384: ok. I can carve out some time around say, 2PM EDT for our Q/A?
<sebas5384> lazyPower: what about the beer?
<sebas5384> hehe
<lazyPower> I cant drink at that hour, but you're more than welcome to :)
<sebas5384> can be something more late, like 7pm ?
<sebas5384> hehe
<lazyPower> I don't see why not.
<sebas5384> sure then
<sebas5384> 7pm EDT time
#juju 2014-06-04
<jose> marcoceppi: around?
<jamespage_> lazyPower, designated: I do have a plan to work on that this cycle - we had some work done for a customer to enable this but I was not happy it was ready for general use.
<jamespage_> I have a proposed reference network topology and approach - basically you would be able to specify which network is used for each type:
<jamespage_> admin-network: 10.5.5.0/24
<jamespage_> for example
<jamespage_> and the charm will figure out which configured network interface should be used for internal/admin traffic
<jamespage_> there is already stuff in charm-helpers to discover network interfaces by cidr
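The idea jamespage_ references (pick the local interface whose address falls inside a configured network like `admin-network: 10.5.5.0/24`) can be sketched with just the stdlib; the function name and inputs here are mine, not the actual charm-helpers API:

```python
# Sketch: choose the local address that belongs to a configured CIDR,
# mirroring what charm-helpers' network discovery does.
# (Function name and inputs are illustrative, not the charm-helpers API.)
import ipaddress

def address_in_network(addresses, cidr):
    """Return the first address inside the given CIDR, or None."""
    network = ipaddress.ip_network(cidr)
    for addr in addresses:
        if ipaddress.ip_address(addr) in network:
            return addr
    return None

# e.g. a unit with two addresses and admin-network: 10.5.5.0/24
local_addrs = ["192.168.1.20", "10.5.5.7"]
print(address_in_network(local_addrs, "10.5.5.0/24"))  # -> 10.5.5.7
```

In a real charm the address list would come from enumerating the unit's interfaces; the charm only selects among addresses, since (as jamespage_ notes) configuring the NICs themselves is left to MAAS/juju.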
<jamespage_> I think it's still up to MAAS/juju to actually configure the nics... the charms will not do that
<Mosibi> designated lazyPower jamespage :: that's functionality we would also like to have in the openstack charms. We are working with the openstack charms and, for example, we have a VLAN interface that we want the tenant/neutron traffic to flow over
<Mosibi> Besides that, in our previous (pre-juju) deployment, we used vlan for the tenant networks. We would love to have that possibility in the charms.
<Mosibi> As i mentioned yesterday on this channel, a colleague started working on the charms (for some other things we need) and when he has working code, he will submit it
<jamespage> Mosibi, sounds good - I have a design in mind for a reference network architecture based on best practices I've seen
<jamespage> Mosibi, once I have it a little more polished I'll circulate it
<jamespage> Mosibi, juju + MAAS are due to grow first class support for managing network connectivity which will help a lot
<jamespage> Mosibi, configuring the charms themselves is not actually that hard - the work done before was implemented on a deployment with bonded 10G nics, with traffic separation via VLAN
<stub> Freshly bootstrapped local provider with 1.18.4, I get a happy machine 0, status etc. working. But I can't deploy a service. The new machine just hangs 'pending', and there are no logs.
<stub> Any hints on diagnosing this further?
<stub> The jujud process is regularly kicking in chewing a little CPU, but no idea what it is doing.
<stub> Looks like the machine is created, but ssh is failing. All my keys being refused, including ~/.juju/ssh/juju_id_rsa
<stub> machine-0: no kvm containers possible ?
<stub> So for those reading along at home, ufw had been enabled and my lxc stuff on 10.0.3.* was unable to talk to my main lxc IP, including the log messages.
<lazyPower> jamespage: thats awesome to hear. Thank you for the follow up
<lazyPower> stub: ah, good catch. I haven't heard of any UFW based issues in a bit. Looks like it's back to haunting around and stirring up trouble.
<lazyPower> stub: did you see jorge's fix for rsyslog-forwarder? https://code.launchpad.net/~niedbalski/charms/precise/rsyslog-forwarder/lp-1323627/+merge/221903
<stub> yeah. I was copying from the charm, as it seemed to be the only place the syslog interface was documented
<stub> I'd review it but I have no idea what the double @@ or lack thereof would do
<lazyPower> stub: i'm looking at it myself - it appears to just change the port on the destination
<lazyPower> from 10514 to 514
<lazyPower> niedbalski_: any light wrt the @@ in the config change proposed for rsyslog-forwarder?
<stub> lazyPower: https://bugs.launchpad.net/ubuntu/+source/ufw/+bug/573461 , fixed in utopic but trouble for me
<_mup_> Bug #573461: UFW blocks libvirt bridged traffic <amd64> <apport-bug> <lucid> <ufw (Ubuntu):Fix Released> <https://launchpad.net/bugs/573461>
<lazyPower> bizarre. Apparently I'm running the devel release
<lazyPower> i thought i was on 1.18.4, nope. 1.19.3
<niedbalski__> lazyPower, @host is udp, @@host is TCP; by default it now forwards over UDP (ref: http://www.rsyslog.com/doc/rsyslog_conf_examples.html)
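In rsyslog.conf terms, the two forwarding forms niedbalski__ mentions look like this (the host and port are placeholders):

```
# rsyslog forwarding actions: one @ forwards over UDP, two @@ over TCP
*.*  @logs.example.com:514
*.*  @@logs.example.com:514
```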
<lazyPower> ahhhhh ok.
<stub> I knew that.
<lazyPower> stub: haha :)
<lazyPower> that's amusing. I got bit by that earlier today too. knowing something but not knowing it.
<khuss> i am installing havana openstack on Ubuntu 12.04 using juju. How do I find the compatible ceph version?
<khuss> does a juju charm install the actual package or is it done outside the charm?
<lazyPower> khuss: the charm is responsible for installing any packages required to run the service.
<khuss> lazyPower:  the cs:precise/ceph charm is installing a very old version of ceph..
<khuss> lazyPower:  the charm is the latest though... i'm wondering how to upgrade ceph itself
<khuss> lazyPower: the readme doesn't say what version of ceph is installed by this charm
<lazyPower> khuss: it may be prudent to open a bug against the precise ceph charm to have the ceph version updated.
<lazyPower> I know the openstack charmers are really responsive on the bug tracker.
<khuss> lazyPower: is there a way to manually update the ceph version
<lazyPower> khuss: it's recommended that you use the charm to perform any upgrades, to keep support consistent. If a newer version of ceph breaks the charm functionality, the charm itself would need to be upgraded. I'm not that familiar with ceph, so it's hard for me to say what the implications are of just cowboying an update.
<khuss> lazyPower: are you aware about any irc channel for juju openstack charms
<lazyPower> khuss: they are in #juju
<khuss> lazyPower: if I can find them :)
<lazyPower> niedbalski_: approved LP221903
<elarson> would a charm be the correct unit to release an app? I say "app" in the sense that "services" deployed with juju might be mongo or hadoop, whereas the "app" would be an application that is used by the organization and is their code.
 * elarson is just starting with juju and trying to understand where it fits
<khuss> lazyPower: i've a charm in a weird state. there are no machines associated with it. juju destroy-service doesn't remove it so I'm not able to deploy the new charm. here is the output from juju status: ceph:     charm: cs:precise/ceph-26     exposed: false     life: dying     relations:       client:       - glance
<lazyPower> khuss: is there a dependent service that's related to ceph in an error state?
<lazyPower> if hook relations fail, the charm will be stuck in a dying state pending you resolving them.
<khuss> lazyPower: well there is a broken relationship but it is almost like a deadlock. Both services can't be killed
<lazyPower> khuss: have you tried resolving the service it's related to?
<lazyPower> juju resolved service/#
<lazyPower> that's kind of risky if you're in a production setup - it may leave behind state that the charm's departed hook would normally clean up. YMMV if you go this route.
<lazyPower> it would probably be better to debug-hooks into the service and re-execute the hook to determine why it's failing, if it's a production system.
<khuss> lazyPower: i tried resolved but it didn't work.. maybe I will do juju destroy-environment and start all over
<designated> jamespage, That sounds awesome.  You have already figured it all out or you'll be working on it?  I'm interested in whatever information you can provide.
<themonk> i am facing a problem with config-changed and relation-changed. i have a template with an ip variable that depends on relation-changed (it comes from the other service when the relation happens), but the same template has other variables that come from config.yaml. so if i change the config after relation-changed, the ip reverts to the default 'localhost' and the relation breaks. how do i fix this?
<themonk> lazyPower: hi
<themonk> marcoceppi: hi
<lazyPower> themonk: you need to set some kind of a sentinel value, like touch $CHARM_DIR/relation-lock, and check for the presence of that sentinel to determine whether the value should be updated.
<lazyPower> thats one method, but this makes me pause to ask: why are you exposing a configuration value that comes from a third party? Shouldn't this by definition be opinionated, to the point it's not exposed to the user and only updated via the relationship?
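The sentinel-file technique can be sketched like this; `$CHARM_DIR/relation-lock` follows lazyPower's example, while the helper names and the in-memory `state` dict are illustrative stand-ins for whatever the charm actually renders:

```python
# Sketch of the sentinel idea: once a relation has provided a value,
# a lock file stops config-changed from overwriting it.
# (Helper names and the state dict are illustrative.)
import os

CHARM_DIR = os.environ.get("CHARM_DIR", ".")
LOCK = os.path.join(CHARM_DIR, "relation-lock")

def set_from_relation(value, state):
    # relation-changed: record the value and drop the sentinel
    state["ip"] = value
    open(LOCK, "w").close()

def set_from_config(value, state):
    # config-changed: only apply the config default if no relation
    # has claimed the setting yet
    if not os.path.exists(LOCK):
        state["ip"] = value

state = {}
set_from_config("localhost", state)   # before the relation: config wins
set_from_relation("10.0.3.5", state)  # relation fires, sentinel dropped
set_from_config("localhost", state)   # a later config-changed is ignored
print(state["ip"])  # -> 10.0.3.5
```

As the follow-up conversation notes, this does mean config changes for that value stop taking effect after the relation is made, which is why the alternative is simply not exposing the value as config at all.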
<themonk> lazyPower: my problem is that relation-related variables and config-related variables are both in the same config file template :( i can't change it
<themonk> lazyPower: about the sentinel value method: that way, after the relation happens, i can't change config values, right?
<lazyPower> themonk: you're giving me a catch-22: your config variable is overriding the incoming value from a remote service. You either need to unexpose the configuration variable, or change the relationship variable. it's 'OK' to be opinionated in charm development.
<lazyPower> we encourage it, as it spoon-feeds the end user best practices for their deployments.
<lazyPower> but without further context, i can't comment - if you have a branch to look at i'll be more than happy to review what's happening and make a proper suggestion.
<james_w`> themonk: I think you need to store the relation variables in a file in relation-changed
<james_w`> and then generate the template using that file and the config values
<james_w`> and you can do that from both relation-changed and config-changed, and they both have all the values that they need
<james_w`> unless you are talking about the relation value being for the same thing as one of the config values
<james_w`> in which case it will be more work to get the behaviour you want
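The pattern james_w` describes can be sketched like this; the file name, keys, and template are illustrative, and in a real charm the relation values would come from relation-get:

```python
# Sketch of james_w's suggestion: relation-changed persists the relation
# settings to a file, and both hooks rebuild the config file from that
# file plus the charm config. (File names and keys are illustrative.)
import json
import os

STATE = "relation-settings.json"

def save_relation_settings(settings):
    # called from relation-changed with the values from relation-get
    with open(STATE, "w") as f:
        json.dump(settings, f)

def render_config(charm_config):
    # called from both relation-changed and config-changed
    relation = {}
    if os.path.exists(STATE):
        with open(STATE) as f:
            relation = json.load(f)
    # relation values win over config defaults, so a later
    # config-changed can no longer clobber the negotiated ip
    merged = dict(charm_config)
    merged.update(relation)
    return "host = {ip}\nport = {port}\n".format(**merged)

save_relation_settings({"ip": "10.0.3.5"})
print(render_config({"ip": "localhost", "port": 8080}))
```

This handles the overlap case too: when a relation value and a config value name the same thing, the merge order decides which one wins.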
<themonk> lazyPower: i cant show my code right now :(
<themonk> james_w`: i considered this option too, like making a json file :)
<Egoist> Hello
<Egoist> i have a question: where is the information stored about which hook is executed after the start hook?
<lazyPower> Egoist: depends on the next action that is triggered.  The typical hook flow during deployment:  install => config-changed => start -- afterwards, it's dependent on the action that is taken. It could be any *-relation-joined, config-changed, or upgrade-charm hooks that are fired.
<lazyPower> or stop, if the service is destroyed.
<Egoist> is there any information stored about what happens when you deploy more than 1 instance?
<Egoist> i mean something like this:
<Egoist> juju deploy mysql -n 3
<lazyPower> Egoist: after the traditional deployment hook sequence is executed, the peer-relationship hook will be triggered on all 3 instances.
<Egoist> is it possible to make changes in peer-relationship?
<lazyPower> can you refine that a bit more? as 'make changes' is kind of open and broad, and could mean many things.
<Egoist> i don't want to make a relationship between these mysql instances that I mentioned earlier
<Egoist> I just want to have 3 separate instances
<lazyPower> Egoist: to that end, it would be better to deploy them as juju deploy mysql "alias"
<lazyPower> they then become independent service units, without a peering relationship.
<Egoist> ok, i get it
<lazyPower> and can be independently scaled/managed
<Egoist> may I ask one more question
<lazyPower> sure
<Egoist> why does juju sometimes not have the relation in the list, even if relation-set was executed? Is it possible that relation-set returns false because a few instances try to execute this command at the same time?
<lazyPower> Egoist: do you have an example? i'm not sure what you're saying.
<lazyPower> s/saying/asking/
<Egoist> when i try to make a relation between two instances, each instance executes relation-get and it returns nothing
<gQuigs> I'm just curious and couldn't find backstory on the decision to move from lp to github, anyone know where it took place?
<marcoceppi> gQuigs: for juju-core?
<gQuigs> marcoceppi: yup
<lazyPower> Egoist: relation-get returning nothing means it wasn't set on the other side. During which hook is this call being executed? It should be the *-relation-changed hook and not joined, as joined is for pre-setup work like archiving an existing database provider or installing pre-req packages.
<marcoceppi> gQuigs: it's been decided for a few months. Discussion would have happened on juju-dev mailing list
<gQuigs> marcoceppi: yea, I've been looking over the mailing list... went back to oct 2013
<Egoist> ohh, ok, i get it. Thank you very much
<Egoist> :_
<Egoist> :)
<marcoceppi> gQuigs: ask in #juju-dev they should be able to enlighten
<gQuigs> marcoceppi: ok, thanks!
<jcastro> ... and all my stuff is stuck on pending today
<lazyPower> jcastro: stub ran into an issue last night with UFW blocking his deployments.
<lazyPower> have you disabled UFW to validate that's not the over-arching issue? his was specific to 1.18.4 i believe, as i'm on 1.19.x and haven't seen the same behavior.
<jcastro> my ufw is off
<khuss> lazyPower: I've filed a bug regarding the ceph issue we are seeing - https://bugs.launchpad.net/juju-core/+bug/1326466
<_mup_> Bug #1326466: Openstack install failing because of outdated Ceph charm? <juju-core:New> <https://launchpad.net/bugs/1326466>
<lazyPower> khuss: brilliant. thanks for the bug report!
<lazyPower> jcastro: boo :(
<ahasenack> $ charm get ntp
<ahasenack> (...)
<ahasenack> bzr: ERROR: Transport operation not possible: http does not support mkdir()
<ahasenack> really, no anonymous charm get?
<jcastro> git clone https://github.com/charms/ntp.git
<jcastro> :)
<jcastro> ahasenack, that sucks though, can you file a bug on charm-tools?
<ahasenack> ok
<ahasenack> jcastro: I would need the ntp charm to be available on trusty, what are the steps to get that done?
<jcastro> ahasenack, currently, ask marcoceppi to push it to trusty
<ahasenack> marcoceppi: hi, are you in a position to do that? ^^^
<marcoceppi> ahasenack: have you tested it on trusty?
<ahasenack> about to
<ahasenack> hitting issues with charm get, and now an overzealous firewall that doesn't allow connections to bazaar.lp.net
<ahasenack> marcoceppi: I can give both (ntp and ntpmaster) a shot and report back, then you can promote them to trusty if no blockers are found? Is that more or less the procedure?
<marcoceppi> writing tests for it is the fastest way to get it in to trusty
<ahasenack> oh, that
 * marcoceppi looks at the charm
<ahasenack> I'd say it's a bit lacking in the unit test area
<jcastro> ahasenack, yeah that's why we haven't mass-moved the charm store to trusty
<ahasenack> still, it says "recommended" in the charm store ;)
<jcastro> precise is recommended
<nooky> Hello, I've a question about the Bundle import using charms that are local, when I try to import the bundle I'm getting errors on invalid configs. For example: An error occurred while deploying the bundle: Invalid config charm precise/true-node-app-1 app_url=https://github.com/zarffg/prototype Invalid config charm precise/true-node-app-1 app_name=prototype Invalid config charm precise/true-node-app-1 app_port=3000
<jcastro> nooky, can you pastebin your bundle somewhere?
<nooky> jcastro: yes, wait a minute
<nooky> jcastro: http://pastebin.com/PPAdNe8P
<jcastro> I have  " " around my charm: local/blah stuff
<jcastro> I don't think that's your problem though
<jcastro> marcoceppi, ideas? ^
<nooky> jcastro: yes, I've tried with double quotes around the charm definition, but isn't working
<rick_h_> nooky: what is 'bundle import'? How are you importing?
<nooky> rick_h_: via juju gui
<rick_h_> nooky: the gui doesn't support deploying bundles with local charms currently
<nooky> oh
<nooky> rick_h_: juju deployer ?
<nooky> or quickstart ?
<rick_h_> nooky: yep
<nooky> fine
<nooky> let me check
<rick_h_> sorry, juju has an issue we need to correct for that to work properly from the GUI at the moment
<nooky> rick_h_: I've tried to deploy via juju-deployer and didn't work, the errors are the same that importing using juju-gui
<rick_h_> nooky: ok, well at least we now have an idea that it's something that will work once the issue is worked out.
<nooky> rick_h_: do i need to use double quotes for the charm and options definitions?
<rick_h_> nooky: you can try it. Not sure. It fussed about all three config options. Stuff like port shouldn't need the quotes, but it complained about that as well.
<rick_h_> do those config options cross-reference the charm itself exactly?
<nooky> rick_h_: look at this: https://bugs.launchpad.net/juju-deployer/+bug/1297940
<_mup_> Bug #1297940: deployer does not support a local: prefixed charm url.  <api> <charms> <state-server> <juju-core:Triaged> <juju-deployer:Confirmed for hatch> <https://launchpad.net/bugs/1297940>
<hazmat> nooky, its a different syntax
<hazmat> nooky, that deployer file you pasted is wrong in a few ways
<hazmat> nooky, s/charm: local:precise/true-varnish-0/charm: true-varnish
<hazmat> you can also point to github repos as the charm branch
<nooky> hazmat: ok, thanks, I'll try to replace the charm definition in the bundle file
<hazmat> nooky, charm: <charm_name> should do the trick
<nooky> great
<nooky> hazmat: using the way charm: <charm_name>, I'm getting  this error, 2014-06-04 20:06:58 Invalid charm specification true-varnish
<nooky> hazmat: Is possible that the errors are because I'm using a new environment and not the environment where I've exported the Bundle?
<hazmat> nooky, well with local charms you have to have access to them locally - they're not in the bundle
<hazmat> gui exports of local charms are not functional
<hazmat> nooky, i'd also toss a series: precise  right under envExport
<hazmat> you can specify branch: https://github.com/name/repo.git for the charms to pull them straight from vcs
<nooky> hazmat: ok ok
<nooky> so, can i define it this way? charm: https://github.com/name/repo.git
<marcoceppi> nooky: not quite
<marcoceppi> you would have to change charm to branch
<nooky> marcoceppi: ok, thanks
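Putting hazmat's and marcoceppi's pointers together, the deployer stanza might look roughly like this (the service name comes from nooky's paste and the repo URL is hazmat's placeholder; both are illustrative):

```yaml
# deployer bundle sketch: a local charm referenced by name, with the
# source pulled straight from a VCS branch instead of a charm: URL
envExport:
  series: precise
  services:
    true-varnish:
      charm: true-varnish
      branch: https://github.com/name/repo.git
      num_units: 1
```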
 * hazmat signs off
<l1l> a mysql charm that sets up blank databases with no tables
#juju 2014-06-05
<sebas5384> lazyPower: ping?
* jose changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: bcsaller || News and stuff: http://reddit.com/r/juju || VOTE FOR JUJU! http://goo.gl/60eBX2
<leotr> hi! we are going to buy one server (64 gb ram, dual processor) for virtualization purposes. Can juju be useful for us?
<Bidule> hi
<Bidule> amazing project !
<Bidule> really thanks for that
<rbasak> jamespage: I think the juju-core (in Utopic) dep8 tests are fetching tools from streams.canonical.com even though they're using the local provider. I wonder: if we want the archive to be as self-sufficient as possible, then can we eliminate that?
<rbasak> Is that what juju bootstrap --upload-tools does, and if so, do you think I should use that in the tests?
<rbasak> The way I see it, juju need not have any external dependencies in the local discover/test/development case.
<rbasak> (if that's achievable)
<jamespage> rbasak, --upload-tools is good I think
<mivtachyahu> hi all, when I spin up Ubuntu 12.04 machines for Azure in juju, all the host names (/etc/hostname etc.) are set to "default". What is the best way to automatically set that to be the public-address / dns-address?
<niedbalski__> hey bcsaller , please review https://code.launchpad.net/~niedbalski/charms/precise/rabbitmq-server/trunk/+merge/219909
<l1l> new mysql charm seems broken, it's creating empty databases with no tables
<avoine> l1l: your application should create the tables I think
<Egoist> could someone tell me how relation-list command work in juju?
<arosales> marcoceppi: Is there documentation somewhere for relation-ids and relation-list?
 * arosales also browsing https://github.com/juju/juju/tree/master/doc
<arosales> cory_fu ^
<arosales> I don't think we cover that in juju.ubuntu.com/docs . .  .
<arosales> cory_fu: mention here https://github.com/juju/juju/blob/master/doc/charms-in-action.txt which we also should have in our docs.
<marcoceppi> arosales: we do
<marcoceppi> we cover it in the docs
<marcoceppi> arosales: Egoist https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-list
<arosales> marcoceppi: ah even better
<arosales> marcoceppi: we desperately need better searching in the docs :-(
<arosales> cory_fu ^
<marcoceppi> arosales: we need actual searching, we don't have that yet
<arosales> good point, we should just do SCE for our sub pages
 * marcoceppi could implement pretty quickly
<Egoist> marcoceppi: why does relation-list sometimes not return all remote units in the current relation?
<marcoceppi> Egoist: because that remote unit isn't available at the time the hook started
<Egoist> marcoceppi: how do i check that a remote unit is available? Ping it, or something?
<marcoceppi> relation-list
<marcoceppi> if you have it listed in relation-list, then it's available
<Egoist> marcoceppi: but how will the charm know how many units should be listed?
<marcoceppi> Egoist: it doesn't know how many should, it knows which ones currently are
<mbruzek> Can someone tell me what the command `relation-ids` does and what it is good for?  https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-ids
<lazyPower> sebas5384: Hey!
<sebas5384> lazyPower: hey!
<lazyPower> sebas5384: really sorry about missing you last night. I forgot to put it on my calendar and totally spaced it off :(
<lazyPower> i'm a terrible person
<sebas5384> hehe thats ok, it happens :)
<sebas5384> next time we put in the agenda hehe
<lazyPower> agreed
<lazyPower> i felt pretty bad when i woke up this morning and went through my irc backlog to see you hanging out with no lazy in sight.
<lazyPower> oops
<lazyPower> hopefully you didnt cancel any plans to attend.
<sebas5384> no worries, if you can we can do it today
<sebas5384> i will be working on vagrant juju things with another colleague here at Taller, so we are going to brainstorm about that
<lazyPower> Sure
<lazyPower> I'll be around today until late, prepping for another project going live tomorrow.
<sebas5384> i'll ping you, and if you can, more than welcome to join us
<lazyPower> so I can host a hangout and order a pizza
<sebas5384> ooh nice!
<sebas5384> great!!
<cjohnston> can anyone help me investigate bug #1319947 please
<_mup_> Bug #1319947: LXC local provider fails to provision precise instances from a trusty host - take 2 <juju-core:Confirmed> <https://launchpad.net/bugs/1319947>
<jcastro> http://summit.ubuntu.com/uos-1406/track/devops/
<jcastro> please submit your sessions!
<avoine> noodles775: do you think that would be a good occasion to talk about juju+ansible roles? ^
<noodles775> avoine: yep, I've got a meeting proposed, which seems to be approved... just not in the schedule yet: http://summit.ubuntu.com/uos-1406/michael.nelson/meetings
<avoine> noodles775: ok , great!
<avoine> noodles775: btw I'll have something to show soon; do you want me to do a pull request or should I put it elsewhere?
<noodles775> avoine: A pull request would be great, if it's a shared role (or fixes/additions to the shared roles). Otherwise just pointing me at your repo/branch is fine too :)
 * noodles775 goes to put kids to bed.
<avoine> noodles775: ok, perfect
<cjohnston> jcastro: are you still unable to use juju local on trusty?
<jcastro> yeah my stuff is stuck on pending
<l1l> are constraints supposed to work when add-machine kvm/0 is used?
<l1l> I can't get a kvm to launch without the default settings (1core, 512meg, 8gb)
<dpb1> What is the right way to refresh lxc templates on juju local provider?  lxc-destroy --name <template> and just let juju refresh it?   Specifically the apt package cache gets out of date
<sebas5384> jcastro: remember we talked about having more than one juju local?
<sebas5384> jcastro: i did a poc to test that, https://github.com/sebas5384/ansible-juju-local
<sebas5384> and now i use that for my vagrant workflow
<jcastro> sebas5384, that looks awesome, sec, on the phone!
<sebas5384> jcastro: :)
<cjohnston> tvansteenburgh, marcoceppi, is there a way with amulet to load multiple deployer files?
<marcoceppi> cjohnston: no, not at the moment, what's the use case?
<cjohnston> marcoceppi: a true integration test...
<cjohnston> we have ~a dozen deployer files...
<marcoceppi> so you want to deploy all of those at once?
<hazmat> marcoceppi, what if the deployer files load other ones.. i.e. are you parsing it or passing it through?
<cjohnston> all or subsets
<tvansteenburgh> iirc amulet just takes the first target from the deployer file
<marcoceppi> hazmat: if you load multiple deployments, I'm not sure exactly what happens, I think it just resets the deployment, tvansteenburgh do you know?
 * marcoceppi is reminded about doing a non-amulet test, will have that for hazmat's modest proposal next week
<tgz> Hi all. Just starting my investigation into juju. I am wondering what happens if a node launched with juju goes down. Does juju monitor and relaunch a node?
<marcoceppi> tgz: so, it used to but we found that sometimes there's network latency in a cloud environment, and a node would appear offline but it wasn't, so it'd launch a duplicate and now you've got two nodes registered as the same
<marcoceppi> tgz: so, juju status will show agent-state as down (unavailable) and you can juju add-unit to add another unit of that service group
<marcoceppi> tgz: juju also is fully driven by a websocket API, so you could write tools that monitor the status and does this. You could also extend that same pattern to implement autoscaling etc
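The monitor-and-heal pattern marcoceppi sketches could be driven from `juju status` output; the JSON field names below follow juju 1.x status conventions but should be treated as assumptions, and the sample data is made up:

```python
# Sketch of a watchdog on top of juju status: find units whose agent
# reports down, then add a replacement unit for that service.
# (Status field names are assumed from juju 1.x; sample data is fake.)
import subprocess

def down_units(status):
    """Return [(service, unit)] pairs whose agent-state reports down."""
    down = []
    for svc, sdata in status.get("services", {}).items():
        for unit, udata in sdata.get("units", {}).items():
            if "down" in udata.get("agent-state", ""):
                down.append((svc, unit))
    return down

def heal(status):
    # for each service with a down unit, scale it back up
    for svc, _unit in down_units(status):
        subprocess.check_call(["juju", "add-unit", svc])

sample = {"services": {"mysql": {"units": {
    "mysql/0": {"agent-state": "started"},
    "mysql/1": {"agent-state": "down (started)"}}}}}
print(down_units(sample))  # -> [('mysql', 'mysql/1')]
```

A real version would feed `heal()` from `juju status --format=json` on a timer, and would want the debounce marcoceppi mentions so a transient network blip doesn't trigger a duplicate unit.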
<tgz> OK. Thanks for the reply. Does the agent-state have the same false positives issue?
<marcoceppi> tgz: it's susceptible to timeouts, I think the timeout is set to 2m
<tgz> Sounds good. Thank you very much. I appreciate your time.
<cory_fu> How much of an issue is interface name collision between charms?  That is, if two completely unrelated charms both use the same interface name but are never likely to be related, does the conflict matter?
<marcoceppi> ppetraki: hey, remember your relation-get out of band issue?
<marcoceppi> cory_fu: you should never have an interface that doesn't connect with its counterpart
<ppetraki> marcoceppi, yeah
<marcoceppi> cory_fu: if you have an interface, then it should connect
<marcoceppi> if it doesn't then one charm is using the wrong interface name
<marcoceppi> ppetraki: `relation-get -r engine:0 - nginx/0`
<marcoceppi> the - will get you all output
<ppetraki> marcoceppi, oh come on...
 * ppetraki adds to quirks list
<ppetraki> marcoceppi, thanks
<marcoceppi> np, I'll make a note to document it in the docs
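For reference, the `-` form marcoceppi quotes can be wrapped in a small helper; the relation id `engine:0` and unit `nginx/0` are just the examples from the conversation, and the helper name is ours.

```shell
# Build the relation-get invocation that dumps ALL keys ("-") published by a
# remote unit on a given relation. The resulting command is only runnable
# inside a hook or `juju debug-hooks` session, where the hook tools exist.
dump_relation_cmd() {
    # $1 = relation id (e.g. engine:0), $2 = remote unit (e.g. nginx/0)
    printf 'relation-get -r %s - %s' "$1" "$2"
}

# e.g. inside `juju debug-hooks nginx/0`:
#   $(dump_relation_cmd engine:0 nginx/0)
```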
<lazyPower> hey marcoceppi, i'm probably the last to know but did you know about this? http://paste.ubuntu.com/7597307/
<lazyPower> it even puts you in the failed hook context
<marcoceppi> lazyPower: dude, I wrote that plugin
<lazyPower> oh
 * lazyPower is the last to know then
<marcoceppi> https://github.com/juju/plugins/blob/master/juju-debug
<l1l> are constraints supposed to work when add-machine kvm:0 is used? I can't get a kvm to launch without the default settings (1core, 512meg, 8gb)
<thumper> l1l: yes
<l1l> hmm, must be broke then.. i've tried deploying with more CPU's and more ram, and it still used the default
<cory_fu> marcoceppi: I'm asking about the case when there's a conflict.  That is, two charms use the same interface name for different protocols.  Have we had to deal with resolving such a case before?
<marcoceppi> no, we haven't
<marcoceppi> charms when they go through review are generally checked to make sure there are no collisions
<cory_fu> That's going to get exponentially harder as time goes on
<cory_fu> But obviously we should look for interface names that may be too generic when reviewing
<marcoceppi> cory_fu: well, an interface registry/testing framework will make light of that work
<marcoceppi> cory_fu: well, we have a few generic interfaces already, and they're great! For example, monitors and local-monitors
<marcoceppi> super generic, but super flexible
<cory_fu> I'm coming at it from the case of having a set of charms that are likely to only ever talk to each other, and wondering how much I should worry about potential conflicts ahead of time
<designated> what about being able to specify the network interface used when deploying charms?  as an example, I would like to use two bonded 10Gbe interfaces as bond0 instead of the 1Gbe NICs in the servers.  Is there a way to do this?
<designated> or is there logic to use the network interface with the most available bandwidth?
<designated> marcoceppi, can you answer my question?
<marcoceppi> designated: juju is adding network topology to its constraints, but it's not available at the moment
<designated> marcoceppi, so there is currently no way to specify the use of one network interface over another when building relationships?
<marcoceppi> designated: not in the interface, but you could add it to your charm's configuration
<marcoceppi> designated: openstack does this, it allows you to specify the nic you want to configure your service to talk on
<designated> I'm deploying openstack on maas and I just want to tell the charms to use bond0 instead of the single 1Gbe interfaces.
<designated> is that fairly easy?
<marcoceppi> designated: most openstack charms have that option, let me see if I can find it
<marcoceppi> maybe not, I'm not well versed in the openstack charms
<marcoceppi> or, maybe it's part of neutron
<designated> i see where you can set things like HA interface or vip interface
<marcoceppi> right
<designated> but ceph for example has nothing about specifying an interface.  how will other openstack services talk to ceph nodes on my 10Gbe interfaces instead of the 1Gbe interfaces?
<marcoceppi> designated: well, I assume the 10Gbe are on a different network?
<designated> marcoceppi, yes they are
<marcoceppi> so if all of openstack has been configured for that network they should reach each other on those interfaces
<designated> but each node also has a single 1Gbe NIC on the same network used for maas
<designated> what's to stop them from building a relationship on the 1Gbe NICs?
<marcoceppi> I can see the issue, so it's advertising itself as on the MAAS network IP range
<marcoceppi> and not the bonded networking
<designated> right
<marcoceppi> This is where juju knowing the network topology will come in handy, I think setting the VIP on ceph, MySQL, etc will make it so the charm advertises itself on that range rather than the 1Gbe nics
<designated> i found this: https://lists.ubuntu.com/archives/juju/2014-January/003392.html which provides a workaround when deploying on openstack but doesn't mention deploying on maas
<marcoceppi> I believe that's how we overcame it in customer deployments
<b0c1> hi
<marcoceppi> designated: again, not as well versed in the openstack charms
<b0c1> I try to use juju in vagrant... but I have a little problem
<marcoceppi> jamespage and the ~openstack-charmers would be much more adept at answering, but most of them are European timezone
<b0c1> after the first start everything works, but if I restart the vagrant machine the internal routing (juju gui iptables command) not run
<marcoceppi> designated: https://launchpad.net/~openstack-charmers/+members
<marcoceppi> designated: you could also mail the juju mailing list about it, so those in other timezones with more knowledge can reply
<b0c1> so juju-gui accessible only inside the vagrant machine
<marcoceppi> juju@lists.ubuntu.com
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: bcsaller || News and stuff: http://reddit.com/r/juju
<designated> marcoceppi, thank you
<marcoceppi> designated: np, sorry I couldn't be of more assistance
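The charm-configuration workaround marcoceppi suggests would look roughly like this config.yaml fragment; the option name here is invented for illustration, not taken from any real charm.

```yaml
options:
  network-interface:
    type: string
    default: "bond0"
    description: |
      Interface whose address this service advertises to related units,
      instead of the address juju picks by default (e.g. the 1Gbe MAAS NIC).
```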
<lazyPower> b0c1: you'll want to suspend the virtual image instead of halting it.
<lazyPower> s/image/machine/
<lazyPower> when you halt it and fire it back up, the cloud-init routine to setup that host bridging doesn't get executed.
<b0c1> hmmm
<b0c1> lazyPower: thnx...
<b0c1> good to know...
<marcoceppi> lazyPower: is there any way to trap the vagrant halt command and warn the user before proceeding?
<lazyPower> marcoceppi: not that I'm aware of
<lazyPower> we could monkeypatch vagrant, but thats not a great idea. its subject to breakage and odd behavior down the road as vagrant updates.
<b0c1> lazyPower: only the routing not work after vagrant halt?
<lazyPower> b0c1: thats about it. Everything else is a single time setup.
<b0c1> why not write the cloud init script the routing into the rc.local ?
<b0c1> fast and simple way...
<marcoceppi> no, no monkeypatching
<lazyPower> marcoceppi: i didn't say it was a great idea, but you asked if there was any way.... thats all i've got :(
<marcoceppi> http://i.imgur.com/tnk4BBl.gif
<lazyPower> Really, we should probably add the port mapping to the Vagrantfile
<lazyPower> that way it will persist through a halt
<b0c1> ummm... the internal cloud init script will write the external vagrant file?
<b0c1> I think rc.local is much cleaner...
<lazyPower> b0c1: no, the port juju gui deploys to doesn't change. So adding it to the vagrantfile is the safe-bet.
<lazyPower> the cloud-init script is calling the vagrant api to set that host-guest mapping.
<b0c1> yeah but the juju-gui deployed into internal vagrant with different ip (maybe it's random?)
<b0c1> ohh it can?
<lazyPower> b0c1: actually it may even be the shell provisioner thats setting it up, i haven't looked at it in a couple of weeks.
<lazyPower> if you look in the box directory, you can see all of the provisioner statements.
<lazyPower> which should be something like $HOME/.vagrant/boxes
<b0c1> lazyPower: maybe I don't understand the problem... but the juju gui runs in a separate vagrant machine inside the vagrant machine...
<lazyPower> b0c1: there's a juju-gui redirector service that gets deployed on first run.
<lazyPower> when you watch the scrolling output of a first run, it explicitly calls out its setup declaration "deploying the juju gui redirector"
<lazyPower> and that's always bound to 6080 on the VM machine.
<b0c1> lazyPower: in this case I can access the internal vagrant juju-gui as localhost in the main vagrant machine
<lazyPower> i don't know if its in the upstart scripts to restart after a shutdown. I haven't gotten that far into it.
<lazyPower> but really, aside from not being able to access it via a port redirect - you can also use sshuttle to route to the gui as a work around. Just hit the IP of the juju-gui instance and you should be right back where you left off.
<b0c1> but after restart I can't access it, I can only access it when I connect directly to the internal vagrant ip
<b0c1> hmmm.... I never used sshuttle...
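lazyPower's "add the port mapping to the Vagrantfile" suggestion could be a fragment like the following; 6080 is the redirector port mentioned above, and the rest is standard Vagrant syntax (sketch only, untested against the juju vagrant box).

```ruby
# Vagrantfile fragment: forward the juju-gui redirector port on every boot,
# so a plain `vagrant halt` / `vagrant up` cycle keeps the GUI reachable
# without relying on the first-run provisioning step.
Vagrant.configure("2") do |config|
  config.vm.network "forwarded_port", guest: 6080, host: 6080
end
```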
<lazyPower> sebas5384: still kicking around/
<sebas538_> lazyPower: yeah! but i'm planning to leave early today :(
<sebas538_> here is +1h hehe
<lazyPower> sebas5384: all good my friend. Just poking you since we were talking about linking up today
<sebas5384> lazyPower: what about monday?
<lazyPower> I tell ya what sebas5384, with tomorrow being friday and nobody works late on friday - lets shoot for next week
<lazyPower> yeah
<lazyPower> Lets try for monday to sync on a vagrant plugin and cook up some awesome sauce
<sebas5384> lazyPower: that looks awesome for me!
<sebas5384> same bat time?
<sebas5384> 7pm EDT
<sebas5384> ?
<thumper> lazyPower: so about this pending python-django charm
<thumper> lazyPower: what's happening with that?
 * thumper wants to deploy django
<lazyPower> thumper: have a MP for me to look at and speak to?
<Egoist> why unit do not appear in relation-list even if relation-set was executed?
<Egoist> because machine is busy handling some kind of hook?
<marcoceppi> Egoist: the unit on the other end has to have successfully called relation-joined for that relation before the hook which you're calling relation-list in is executed
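In hook code, the usual way to cope with that ordering is to treat missing relation data as "not yet" rather than as an error; a minimal sketch (the helper name is ours):

```shell
# A -relation-changed hook may fire before the remote unit has joined and
# set its data, so an empty relation-get result just means "try again later".
ready() {
    # $1 = a value read via `relation-get ...` (may be empty)
    [ -n "$1" ]
}

# Hook sketch (only runnable inside a juju hook context):
#   addr=$(relation-get private-address)
#   ready "$addr" || exit 0   # exit cleanly; juju fires -changed again
#                             # once the remote unit sets its data
```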
<marcoceppi> Question about subordinates. If I add two subordinates to a service, and one calls open-port 80 and the other does a close-port 80 what happens?
<marcoceppi> assuming open was called first and closed called next
<marcoceppi> Does juju know not to close the port since it was opened by another charm?
<marcoceppi> btw, ppetraki that relation-set thing was in the docs this whole time
<marcoceppi> I went to update it and found it in there
<marcoceppi> https://juju.ubuntu.com/docs/authors-hook-environment.html#relation-get
<marcoceppi> what does the app-servers category entail?
<jose> http://www.reddit.com/r/Ubuntu/comments/27cfhw/ibm_app_throwdown_canonicals_juju_selected_as_top/chzmn1y
<jose> happy people is what Juju gets
<Egoist> marcoceppi: Sorry but i don't get it
<lazyPower> wooo!
<lazyPower> marcoceppi: depending on which subordinate executes last, that's the result you'll get.
<lazyPower> i dont think there's any kind of notion in juju of the open/close ports from one service locking it out to another.
<marcoceppi> lame
#juju 2014-06-06
<noodles775> jcastro: Hi! Could you schedule my "Simpler re-usable charms with ansible" session when you get a chance? Weds 14UTC would suite me best (looks like DevOps1 is free then): http://summit.ubuntu.com/uos-1406/michael.nelson/meetings
<thumper> hi stub, got a few minutes?
<stub> thumper: sure
 * thumper is trying to get the storage subordinate working
<thumper> stub: any idea what values I should put in the block-storage-broker config for endpoint?
<thumper> stub: using the default ec2
<thumper> us-east-1 according to the config
<thumper> s3.amazonaws.com?
<stub> thumper: I don't know EC2 sorry.
<thumper> hmm... ok
<thumper> stub: any idea what chad's irc nick is?
<thumper> stub: the default precise postgresql charm now supports the storage subordinate?
 * thumper tries the deployer config he set up
<stub> csmith isn't around, but our webops or landscape folks could help
<stub> thumper: yes, the default precise postgresql charm supports the storage subordinate (and only the storage subordinate)
<stub> trusty... not sure if that is live yet (in which case, I should push) or still being setup
<thumper> kk
<thumper> yeah, gone for precise
<thumper> likely to shoehorn everything onto machine 0 :-)
<thumper> hmm... endpoint is wrong
 * thumper sighs
<thumper> bugger
<natefinch> marcoceppi_: are you up this early?
<marcoceppi_> natefinch: I am!
<natefinch> marcoceppi_: cool
<marcoceppi_> (un)fortunately?
<marcoceppi_> what's up?
<natefinch> hah
<natefinch> I'm writing a charm!  WHich means I'll have a lot of questions :)
<natefinch> the charmtool outputs a README.ex .... if I make that README.md will it still work?
<marcoceppi_> Fantastic!
<marcoceppi_> natefinch: well, it "will" work but you'll get a bunch of proof errors about it being boilerplate
<marcoceppi_> also, for the sake of the rest of this conversation, what version of charm tools are you running? charm version should tell you
<natefinch> I just mean, if I change the extension it'll still show up.  I'll modify the contents
<marcoceppi_> natefinch: oh, yeah, that's all you need to do
<marcoceppi_> in fact, proof will complain if you have a .ex instead of a .md in your charm
<natefinch> ok
<natefinch> charm versoin says it's not a subcommand
<natefinch> version that is
<marcoceppi_> natefinch: you're not using the ppa version
<marcoceppi_> save yourself a lot of time
<marcoceppi_> add ppa:juju/stable
<marcoceppi_> install charm-tools from there
<marcoceppi_> the differences between 1.0.0 and 1.2.10 are huge
<marcoceppi_> including the readme.ex going from RST to MD
<marcoceppi_> so hopefully you haven't written too much code
<natefinch> marcoceppi_: barely any :)
<marcoceppi_> whew, I'd go so far as to say: once updated, just run charm create again
<marcoceppi_> you'll get like an icon template a better readme template, better metadata templates
<marcoceppi_> better boilerplates
<marcoceppi_> and better proof command
<marcoceppi_> basically, better everything
<natefinch> awesome,
<natefinch> I'll do that once apt-get update finishes
<marcoceppi_> \o/
<natefinch> I wish adding a ppa always ran apt-get update, at least to get the stuff from the ppa, sheesh
<natefinch> btw, is there a way to have different charms for different architectures?  I want to make the charm in Go, but you have to have a different executable for each architecture
<marcoceppi_> yeah, like do a partial apt-get update for that source to get the delta
<marcoceppi_> that's, no - not at the moment
<marcoceppi_> You could just compile the hooks during the install hook
<natefinch> yeah
<marcoceppi_> to make the runtime for the hooks at deploy time
<marcoceppi_> but that seems, heavy
<marcoceppi_> otherwise you'd have to package /hooks/$ARCH/<file> and just build a lightweight dispatcher for each in /hooks/
<natefinch> yeah, that's what I was thinking.  Also kinda heavy, since you'd need an executable per architecture
<marcoceppi_> but the execs wouldn't be very big
<marcoceppi_> it's a trade off, "JIT"-ish compilation, or possible missing archs
<natefinch> go execs are bigger than normal, like 4 megs generally
<marcoceppi_> could combine the two ideas
<natefinch> how bad is it to have an architecture-specific charm?
<marcoceppi_> pretty bad considering it doesn't exist yet
<natefinch> so.... here's the thing.  I'm writing a charm for discourse
<natefinch> discourse says it requires 64 bit ubuntu
<natefinch> marcoceppi_: https://github.com/discourse/discourse_docker
<marcoceppi_> well, it's a support requirement. I definitely have run discourse on i386
<natefinch> *nod* I know, just sayin'
<marcoceppi_> could do both, have hooks/$arch where arch is [amd64, i386] then have the install hook attempt to compile the src hooks if the arch isn't pre-loaded
<natefinch> marcoceppi_: that's not a bad idea
<marcoceppi_> so, you could have arch dependant but that's going to be sad news for people running power, arm, i386, etc
<natefinch> yep
<marcoceppi_> and we try to avoid as many sad pandas as possible
<marcoceppi_> can you compile go for, say i386 on an amd64 machine?
<natefinch> yes... cross compile works 100% awesome
<natefinch> except for power which requires gccgo
<marcoceppi_> interesting. So, you /could/ and I mean it'd be beat up pretty badly in review, have it just for amd64 and provide instructions or a simple make file for people to pull the charm, build the hooks in their arch of choice, then deploy local
<natefinch> not my fault juju keeps supporting all these wacky-ass platforms :)
<marcoceppi_> hah, well we strongly frown on provider specific charms, I think only one has made it past review
<marcoceppi_> so I can only imagine (well I can imagine, actually) we'd really beat up a charm that only supported an architecture
<natefinch> heh
<marcoceppi_> but, I wouldn't let that slow you down
<natefinch> yeah, but it would work on every provider!  And you're not supposed to care about the OS
<natefinch> just sayin' :)
 * marcoceppi_ places head on his hands, and looks dreamily in to the distance
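The hooks/$ARCH idea from the conversation could be sketched as a thin dispatcher; the layout, the build command, and the `hook_path` helper are all illustrative, not taken from an existing charm.

```shell
#!/bin/sh
# Dispatcher sketch for compiled (e.g. Go) hooks: exec a prebuilt per-arch
# binary if the charm ships one, otherwise fall back to building it ("JIT"
# style), per the trade-off discussed above.

hook_path() {
    # map (arch, hook name) -> expected prebuilt binary location
    printf 'hooks/%s/%s' "$1" "$2"
}

dispatch() {
    arch=$(dpkg --print-architecture 2>/dev/null || uname -m)
    bin=$(hook_path "$arch" "$(basename "$0")")
    if [ -x "$bin" ]; then
        exec "$bin" "$@"                  # shipped binary for this arch
    fi
    # arch the charm didn't pre-build: compile from bundled source instead
    go build -o "$bin" ./src/hooks && exec "$bin" "$@"
}

# Each file in hooks/ (install, start, config-changed, ...) would just call:
#   dispatch "$@"
```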
<natefinch> what is charm home?
<marcoceppi_> in relation to?
<natefinch> charm create
<natefinch> also, are there no help docs?  charm help create doesn't seem to do anything
<marcoceppi_> ah, that's just the directory you want the charm to be created in, for example `charm create foo /tmp/baz` will give you /tmp/baz/foo
<marcoceppi_> charm create -h
<natefinch> ahh
 * marcoceppi_ needs to make a help helper
<natefinch> I saw the -h under charm create usage, but it didn't say what -h was for, so I didn't realize it was for help :)
 * natefinch has chicken and egg problems
<natefinch> gah... I want a way to spit out the charm in *this* directory, not a subdirectory :/
<natefinch> apologies for the complain-a-thon :)
<marcoceppi_> omit the charmhome
<marcoceppi_> oh
<marcoceppi_> you mean like, mkdir -p /charm/trusty/my-charm; cd $_; charm create .
 * marcoceppi_ narrows eyes at the complain-a-thon
<natefinch> exactly... I expected charm create . to create it *here* not in a subdirectory.  If I wanted a subdirectory, I'd do charm create ./foo
 * natefinch does charm create ../
<natefinch> ...which doesn't work because the directory already exists
<marcoceppi_> bugs are "welcome"
<natefinch> marcoceppi_: point me the way :)
<marcoceppi_> though, I'm going to say this is more the operator cutting himself with paper
<marcoceppi_> natefinch: http://launchpad.net/charm-tools :D
<natefinch> I dunno... there's ought to be SOME way to dump a charm in the current directory
<marcoceppi_> I can add a --natefinch-really-wanted-this flag :P
<natefinch> it's like git clone ... git clone juju  creates a juju directory here, if you instead do git clone juju ., it dumps the repo in the current directory, and if you do git clone juju ./foo, it creates the foo subdirectory and dumps the files in there.
 * marcoceppi_ nods
<marcoceppi_> we can make charm create smarter
<natefinch> marcoceppi_: btw, I love the charm tools logo.  That's clever.
<marcoceppi_> natefinch: hah, thanks!
<natefinch> btw, what does this mean: Failed to find discourse in apt cache, creating an empty charm instead.
<marcoceppi_> natefinch: it attempts to look the charm name up in apt, if it finds it charm create will pre-fill metadata.yaml with some details
<marcoceppi_> it's something that was moved over from the original charm-tools. i'll probably drop it eventually because it just flat out doesn't work on mac or windows machines
<marcoceppi_> and it's not a huge value add
<natefinch> yeah.... I could see it being something that happens with a flag, like --apt or something, if you know that's what you want it to do, but I could see names that happen to match things in apt that would not be the correct behavior
<natefinch> the instructions to install docker involve piping the output of wget https://get.docker.io to sh... seems legit
<marcoceppi_> natefinch: yeah, I hate those. Same with RVM
<marcoceppi_> there's not much you can really do. Security is a thing of the past when you've got a `wget | sudo sh -` one liner
<natefinch> yep
<natefinch> le sigh
<lazyPower> security? thats still a thing?
<marcoceppi_> no, nooo...
 * marcoceppi_ pulls the wool over lazyPower's eyes
<lazyPower> ahhh ignorance is bliss
 * hazmat adds some bliss to morning coffee
<hazmat> natefinch, you can also grab from their deb repo..
<marcoceppi_> that's a way friendlier idea
<natefinch> hazmat: that sounds much nicer
<hazmat> natefinch yeah.. its wget | gpg --verify --sign | sudo sh - one liner ;-)
<lazyPower> haha
<lazyPower> #trolledhard
<mhshams_> hi, got some basic questions about juju, i hope here is the right place to ask.
<mhshams_> Does juju provides an API, (REST for example), so that one can use it from another application ?
<marcoceppi_> mhshams_: yes, it's a websocket
<mhshams_> @marcoceppi : where can I found some document or other info about it ?
<marcoceppi_> mhshams_: https://github.com/juju/juju/blob/master/doc/api.txt
<marcoceppi_> Not the friendliest of docs
<marcoceppi_> mhshams_: if you're using Python, there's a jujuclient python package which does all that ofr you
<mhshams_> @marcoceppi thanks for the hints.
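A hedged sketch of driving the API from Python via the jujuclient package marcoceppi mentions; the class and method names are recalled from the 1.x-era package and may differ between versions, and the endpoint and secret are placeholders.

```python
# Sketch only: requires a reachable juju state server plus the
# python-jujuclient package; names are assumptions, check your
# installed version's documentation.
from jujuclient import Environment

env = Environment("wss://<state-server>:17070")  # websocket API endpoint
env.login("<admin-secret>")                      # from environments.yaml
status = env.status()                            # roughly `juju status` data
print(status)
```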
<jamespage> someone was asking about the ceph charms and using different networks for client access vs cluster resyncs
<jamespage> I have some branches ready for review that implement this if anyone wants to test
<lazyPower> designated: ^
<marcoceppi_> jamespage: ah, awesome, yeah I told him to ask on the list
<marcoceppi_> oh good he's around!
<nuclearbob> can anybody here help me with something in the juju-tools branch?
<marcoceppi_> nuclearbob: juju-tools?
<nuclearbob> marcoceppi_, sorry, got distracted.  Can I use canonistack-sshuttle from a canonistack instance?  That seems to be causing me problems
<rbasak> jamespage: looks like there are about ~21 bugs for Trusty SRU 1.18.1 -> 1.18.4.
<rbasak> jamespage: do we want to individually do SRU paperwork for each of those then?
<designated> jamespage, where can i find those charms you wrote?
<jamespage> designated, one sec
<jamespage> designated, top three branches here - https://code.launchpad.net/~james-page
<jamespage> designated, I have further work to do to enable all things that relate to ceph to use the public network addresses
<jamespage> but that should give you a feel on what's possible
<jamespage> rbasak, I'd really hope we don't need to do that
<designated> jamespage, thank you
<marcoceppi_> jose: I need ubuntu on air stuff setup
<jose> marcoceppi_: I'm here now, what's up?
<lazyPower> jose: Troubleshooting I is on the charm school schedule today
<lazyPower> can we get UoA setup for it?
<jose> lazyPower: yeah, I know
<jose> erm, you want me to host the hangout?
<jose> sure, I can do that
<lazyPower> i know nothing of this UoA sorcery
<lazyPower> you are the holder of the magic beans on that project sir
 * mbruzek wants to learn
<jose> lemme grab a quick snack and I'll be around
<mbruzek> I want to learn how to do the UoA
<jose> you gotta have some secret codes for that ;)
<lazyPower> jose: we g2g?
<jose> lazyPower: I was having lunch, almost ready
<mbruzek> Ubuntu on Air
<mbruzek> With Jose !
<mbruzek> we need a url
<marcoceppi_> mbruzek: looks like lazyPower is doing this one solo ;)
<lazyPower> marcoceppi_: you got an invite
<marcoceppi_> lazyPower: can you pm me the URL, it's not ringing
<axisys> is there a view only url?
<lazyPower> axisys: ubuntuonair.com
<axisys> http://ubuntuonair.com/ is showing the one from yesterday
<axisys> I guess it will refresh when ready?
<lazyPower> You will need to refresh the page, but it will update with the current session when it's active.
<axisys> I see it now.. thanks
<mbruzek> marcoceppi_, are you OK?
<lazyPower> stream is live
<axisys> yep
<lazyPower> Feel free to ask questions!
<arosales> stream looks good
<axisys> I had to run this to make juju lxc work
<axisys> https://gist.githubusercontent.com/allomov/11373534/raw/bd430c2c92116340f2fa5d124b769c555946609d/solution.sh
<axisys> on Apr 28
<lazyPower>  axisys interesting. what was the use case that installing juju-local didn't resolve?
<axisys> right
<axisys> I am going to try again after this hangout
<axisys> with may be newer juju
<axisys> referring to this thread
<axisys> https://lists.ubuntu.com/archives/juju/2014-April/003749.html
<axisys> so what does bootstrap do? do you need to run bootstrap before doing any juju <environment> ?
<mbruzek> https://juju.ubuntu.com/docs/config-LXC.html
<axisys> would be nice if you could start hadoop locally
<axisys> I was never successful
<axisys> here is my thread from apr
<axisys> https://lists.ubuntu.com/archives/juju/2014-April/003737.html
<axisys> I have 8G ram
<axisys> and 1TB disk
<axisys> running trusty
<lazyPower> asanjar: axisys was asking about hadoop on local
<asanjar> axisys: how may I help
<jose> axisys: that's a fixed bug
<axisys> ok, I will try again.. thanks for your help
<axisys> good to know.. I use vagrant a lot
<lazyPower> Do any of our viewers have any burning issues they've run into that they would like to see how a ~charmer would debug?
<axisys> thanks a lot!
<lazyPower> Thanks everyone for attending. Great success of a charm school today.
<mbruzek> jose you still there?
<dpb1> Hi -- I'm trying to give a user permission to bootstrap and destroy a local env through sudo.  what command do I need to allow?
<roadmr> dpb1: hm, my tests show that for the initial bootstrap, juju basically does "sudo /bin/bash -s"; so maybe you'll have to allow bash invocation like this
<roadmr> dpb1: given that, the user will essentially be able to do anything as root, so you may as well give him access to everything I think :/
<dpb1> roadmr: :(
<roadmr> dpb1: I may be wrong, though, my investigation was a bit primitive
<dpb1> roadmr: ok, thanks.  I will probably go another route then. :0
<roadmr> dpb1: (I just did a juju bootstrap and looked in /var/log/auth.log to see which commands get run by sudo)
<dpb1> roadmr: thankfully the jenv files are portable (with some catches), so that works
<roadmr> dpb1: cool :) sorry I couldn't provide a nicer answer
<dpb1> roadmr: np, appreciate the help
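To make roadmr's finding concrete, the narrowest sudoers rule matching what the local provider runs would be something like the fragment below; as noted above it is still equivalent to unrestricted root, so it buys nothing over plain ALL.

```
# /etc/sudoers.d/juju-local (sketch; "deploy" is a placeholder username)
deploy ALL=(root) NOPASSWD: /bin/bash -s
```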
<themonk> where should i put lock file so that between restarts of lxc it will persist
<themonk> lazyPower: hi
#juju 2014-06-07
<Volka> hi
<Volka> does it have anything to do with juju island?
<jcw4> Volka: no
<Volka> too bad
<Volka> juju island is great
<jcw4> hehe
<Volka> have you been?
<jcw4> no.  sounds great though.
<Volka> https://en.wikipedia.org/wiki/Jeju_Province
<dweaver`> I have a problem with juju bootstrap: it fails with an invalid SSH key, using the same key I have uploaded to Launchpad.  It used to work, but now both EC2 and HPCloud fail with the same error.
<dweaver`> lazyPower, ping^
<lazyPower> dweaver`: juju uses the sshkeys in ~/.juju/ssh - can you juju ssh into the unit in question?
<lazyPower> oh wait, bootstrap?
<lazyPower> hang on let me get some coffee
<dweaver`> juju-core version 1.18.1
<lazyPower> dweaver`: can you show me the output from a bootstrap with debug output? Juju bootstrap --debug
<lazyPower> Make sure you scrub any keys, it may output sensitive info
<dweaver`> Aha!  Actually, when using juju bootstrap --debug it reports that the SSH key is invalid but then completes; without --debug it fails.
<dweaver`> I can work around it using --debug then
<lazyPower> thats... strange
<dweaver`> http://pastebin.ubuntu.com/7607387/
<lazyPower> dweaver`: can you file a bug on this?
<dweaver`> Shows the beginning of the output where it states invalid SSH key.
<lazyPower> I see the line complaining : 2014-06-07 11:44:53 WARNING juju.utils.ssh authorisedkeys.go:310
<dweaver`> Yes, I can file a bug on Monday when I am back at my desk, just trying to get the demos running for some presentations here at Tadhack
<lazyPower> brilliant
<lazyPower> Hows that going?
<dweaver`> lazyPower, going OK, just setting up orange box demo and EC2 and openstack demos of telco apps, still debugging, but we are getting there.
<lazyPower> dweaver`: are you going to be trying to get the metaswitch charms up on orangebox today?
<dweaver`> lazyPower, yes, going to try it when the guys here have finished the bundle.
<lazyPower> if so, you're in for a rough time. The clearwater- prefix charms all call the magic IP in AWS for metadata. I just ran into this on my MAAS cluster. You may need to ping matt to shim whatever values he's expecting from the AWS metadata.
<dweaver`> lazyPower, we are also seeing some issues with Juju defaulting to i386 on EC2
<dweaver`> But we are specifying amd64 as a constraint to get round it
<lazyPower> dweaver`: constraints: "arch=amd64"
<lazyPower> yeah, thats the proper method to get 64 bit hosts.
<dweaver`> has this behaviour changed recently? I thought the default was 64 bit
<lazyPower> not sure. I know its possible to set as an environment constraint so you dont have to specify it on every charm ever.
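The environment-wide constraint lazyPower refers to is set with the 1.x-era CLI roughly as follows (the charm name is a placeholder):

```shell
# Default for every later deploy in this environment:
juju set-constraints "arch=amd64"
# One-off, per service:
juju deploy --constraints "arch=amd64" cs:trusty/some-charm
# At bootstrap time, for machine 0 itself:
juju bootstrap --constraints "arch=amd64"
```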
<dweaver`> lazyPower, which version were you using to test, Matt reckons that issue was fixed last night, but if you deploy with the bundle specifying versions, it would have pulled the older version of the charm
<dweaver`> The latest bundle should be deploying the latest charm versions.
<lazyPower> dweaver`: I haven't used any of their bundles - it's with revision 4 from ~matt-williams-x branch
<lazyPower> specifically on clearwater-sprout
<dweaver`> He is just checking it
<lazyPower> if he wants additional eyes on it have him join #juju and i'll run through some test scenarios with him. I'm testing the latest release of DNSAAS that i just cut
<dweaver`> We haven't had any issues with DNSaaS, I found it deployed OK on both the OB and on EC2
<lazyPower> \o/
<dweaver`> Nice charm too.
<lazyPower> it had some issues on Thursday with zonefile corruption
<lazyPower> I've since resolved those and made it more fault tolerant
<lazyPower> + backups! - atomic file rewriting is scary
<lazyPower> thanks for the +1 dweaver`
<dweaver`> Matt's going to join the channel.
<dweaver`> lazyPower, We ran out of time and didn't get it working for the OB, but we got everything working on EC2 so we are good for now.
<dweaver`> And Matt had to leave for his flight home
<haxx0r> www.anon.fm
<designated> jamespage, for the ceph charm, I can see functions getting defined in host.py (def list_nics(nic_type)).  but I don't see this function being called anywhere else in the charm.  from what I understand when building relationships, the charm will just use the first NIC it finds.  How can I just statically assign this to a variable somewhere?
#juju 2014-06-08
<_thumper_> mhall119: around?
 * thumper is reading a charm written with the python charm helpers
<thumper> pretty nice...
<jcw4> thumper: which charm?
<thumper> looking at python-django right now
<thumper> to try and work out WTF is going on
<jcw4> heh
<thumper> but the charm helpers themselves are pretty nice
<jcw4> cool
<vorpalbunny> well that's frustrating
<thumper> no local charm support for the deployer
<thumper> hazmat: is that right still?
<thumper> hazmat: can't specify a local charm in a bundle description?
<thumper> grr...
<thumper> well this confused me for a while: unit_name = service_name()
<lazyPower> thumper: didja file a bug?
<thumper> lazyPower: no
<thumper> lazyPower: but I feel I may be forking and tweaking the python-django charm
<lazyPower> thumper: Thats the beauty of our system :) Use as little or as much of the charm as you need, remix to fit your purpose and contribute or hoard as you see fit.
<thumper> lazyPower: well...
<thumper> I feel there is a big deficiency in our system too
<lazyPower> do tell
<thumper> mainly in consistency, and usability of the charms
<lazyPower> how so?
<thumper> if you try and do anything non-vanilla
<thumper> you get stuck
<thumper> unless you read all the code
<thumper> which I have had to do
<thumper> not entirely sure if that is what we want
<lazyPower> I'm not sure what you mean - and thats probably why the new python-django charm isn't ack'd into the store.
<thumper> the charms just don't have enough documentation to understand them
<lazyPower> if you're talking about non-default settings.
<thumper> right
<lazyPower> Yeah, i'm combating that with the DNS charm - trying to raise the bar
<lazyPower> want to take a peek?
<thumper> lazyPower: I've talked with alexisb about my problems
<thumper> and we are going to bring it up with your group
<thumper> but I want to get things going and write stuff up first
<thumper> that way it is a positive contribution
<thumper> rather than just bitching :-)
<lazyPower> thumper: having a good list of your pain points to guide a revision of the review process would be welcome.
<thumper> I feel that others have probably done what I'm doing before, but it just isn't clear
<thumper> nor are there examples or samples to move forwards
<lazyPower> thumper: https://github.com/chuckbutler/dns-charm
<lazyPower> take a look through the readme and doc
<thumper> for example: a subordinate charm for python-django to provide the application
<thumper> I'm having to figure it out as I go
<thumper> and it is a big pile of trial and error
<lazyPower> thumper: I'm a fan of that architecture - is it not very well documented? or ...
<thumper> I know others would just not bother
<thumper> but I know the benefits I want from the system
<thumper> just not well documented
<lazyPower> a good example of that is the new Tomcat charm from mbruzek
<lazyPower> apps occupy space as subs, with tomcat providing just the app server
<thumper> lazyPower: for local charms, how do I upgrade them?
<thumper> where is the versioning?
<lazyPower> juju upgrade-charm --repository=$foopath charmname
<lazyPower> it auto-increments the REVISION
<thumper> ah...
<thumper> ok
<thumper> also, the source of python-django recommends using nginx as a reverse proxy in front of gunicorn
<thumper> but examples?
<thumper> nope
 * thumper goes back to making dinner
<lazyPower> thumper: until yesterday we didn't have an NGINX charm
<lazyPower> so there's no example to provide
<lazyPower> there was a general consensus among a few charmers that NGINX should be rolled into whatever framework charm you're deploying instead of deploying NGINX + <framework>
<lazyPower> marcoceppi wrote an alpha quality NGINX charm to bundle into a demo that I'm fairly certain he is planning on releasing in the coming weeks that will fit that bill pretty well
<thumper> hmm...
<thumper> ok, so perhaps I'll just deploy gunicorn at port 80 for now
<thumper> given that the site load will be very small to start with
<thumper> and look to fix later?
<thumper> although that feels wrong too
<thumper> as the static area really shouldn't go through python
<thumper> lazyPower: is nginx a subordinate?
<lazyPower> Negative
<thumper> hmm...
<lazyPower> Its a stand-alone
<thumper> so...
<thumper> how should I do it then?
 * thumper wonders
<lazyPower> I would imagine you would co-locate services. I'm not really familiar with how its put together
<thumper> also, what's the status of the new python-django charm?
<thumper> I have forked it locally
<thumper> to read and play
<lazyPower> I haven't looked at it since my last review.
<thumper> well, your review said approve
<thumper> I didn't look at the details though
<thumper> what's the process?
<lazyPower> Latest change: 2014-05-12 Charles Butler Patrick Hetu 2014-05-02 fix a wrong configuration variable name - is the commit message on python-django in the store. i'm looking for this MP you're referencing
<lazyPower> thumper: what you're referencing is upstream. it's what's provided by cs:precise/python-django
<lazyPower> cs:trusty/python-django
<lazyPower> rather
<lazyPower> http://bazaar.launchpad.net/~charmers/charms/trusty/python-django/trunk/files
<lazyPower> thumper: however, this looks like it was an oversight on my part. I went back through my review notes and I have an action item to follow up on - this repository doesn't share a common ancestor with the precise python-django charm - and I didn't make it back to wrapping that up. I don't know how to promulgate this over what's existing.
<lazyPower> complete failure on my part
<thumper> lazyPower: so is this new one going to be trusty or precise or both?
<lazyPower> It's in trusty - I need to defer the precise/both decision until I've spoken with other charmers
<thumper> lazyPower: it looks like I'm going to have six or seven charms for my setup
<thumper> ack
<lazyPower> that's not something i'm comfortable cowboying myself, as it has implications with existing deployments.
<thumper> so trusty/python-django is the new hotness?
<lazyPower> thats the story for today
<thumper> lazyPower: you can get a common ancestry easily enough
<lazyPower> it may change on monday after i've convened with ~charmers
<thumper> I think this will work:
<thumper> start with the new branch
<thumper> merge in the original
<thumper> bzr revert .
<thumper> which reverts the changes but not the merge
<thumper> then commit
<thumper> then you can merge that into the old charm
<thumper> shazam
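Spelled out as a command sequence (a sketch of the recipe above; branch paths are hypothetical):

```
cd new-python-django              # start with the new branch
bzr merge ../old-python-django    # merge in the original
bzr revert .                      # reverts the file changes but keeps the pending merge
bzr commit -m "merge old charm history without content changes"
# now this branch can be merged cleanly into the old charm
```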
<lazyPower> thumper: That's good info to have, but again it depends on whether it's going to break existing deployments using cs:precise/python-django - and that's something I need to re-validate.
 * thumper nods
<thumper> the behaviour needs to match, right?
<thumper> I can see that this needs careful testing
<lazyPower> exactly
<lazyPower> if it winds up breaking the existing story (which if i remember correctly it doesnt, but -- this is > 2 weeks old so...) then we broke deployments everywhere to anybody that upgrades
<lazyPower> and I don't want to poo on our existing users
 * thumper nods
 * thumper heads back to the kitchen
<lazyPower> I'd rather tell you to deploy trusty, and know it's good, vs assuming.
<thumper> lazyPower: hmm... due to me being cheap, I was wanting to put everything on one box
<thumper> not sure if postgresql is trusty yet or not
<thumper> easy way to check?
<lazyPower> Put it in your local trusty charm repo and deploy it.
<thumper> need block-storage-broker
<thumper> ah
<thumper> will try that
<thumper> but then I can't use deployer script...
<thumper> oh well
<thumper> bundle can't reference local can it?
 * thumper sees places for things to improve a lot
<lazyPower> You can push that LP and deploy from there
<lazyPower> Also I believe if you pass the full path to the local charm you can deploy local charms. I may be wrong though
<lazyPower> I know that works in deployer manifest format. Ppetraki built an offline archiver with that principle
<jamespage> designated, those branches introduce two new configuration options - ceph-public-network and ceph-cluster-network
<jamespage> you specify the network cidr and the charm figures out which configured network addresses to use
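As a concrete illustration of jamespage's answer, setting those two options might look like this in a config YAML (the CIDR values here are made up; use your own networks):

```yaml
# Hypothetical config passed via `juju deploy --config` or `juju set`
ceph:
  ceph-public-network: 10.10.0.0/24
  ceph-cluster-network: 10.20.0.0/24
```

The charm then picks whichever locally configured address falls inside each CIDR, rather than grabbing the first NIC it finds.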
<mhall119> thumper: off and on, what's up?
<Eegoist> why relation-get return None, even if unit is on relation-list?
#juju 2015-06-01
* lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here 16 April 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://charmproof.com || Unanswered Questions: http://goo.gl/dNj8CP
<jcastro> mgz: any word on that dreamhost issue?
<lazyPower> jcastro: whens our next office hours scheduled?
<lazyPower> I want to say June 19'th?
<jcastro> 18th
<jcastro> let me double check
* lazyPower changed the topic of #juju to: Welcome to Juju! || Office Hours, here 18 June 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://charmproof.com || Unanswered Questions: http://goo.gl/dNj8CP
<jcastro> lazyPower: confirmed, 18th
<lazyPower> ta
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Office Hours, here 18 June 2000UTC || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<Johncr1> Where can i define lxc bridge in juju charms for lxc ?
<lazyPower> Johncr1: I'm assuming this is for juju deploy --to lxc:unit?
<lazyPower> Johncr1: if thats the case, juju currently always assumes lxcbr0, even if you override the config on the unit itself to specify another bridge. The only known work-around is to edit the charm config on the unit and re-assign the networking bridge.
<lazyPower> sorry, by "edit the charm config on the unit" - i mean edit the lxc container config in /var/lib/lxc/<string>/config
<Johncr1> Yes and there is one more way.
<Johncr1> Create a bridge with the same name i.e. lxcbr0
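For reference, the container-side workaround lazyPower describes would be an edit like this in /var/lib/lxc/<container>/config (key names per lxc 1.x; the bridge name is an example):

```
# /var/lib/lxc/<container>/config
lxc.network.type = veth
lxc.network.link = br0   # point at your bridge instead of the default lxcbr0
```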
<thumper> lazyPower: ping
<jrwren> why does juju install cloud-utils metapackage on bootstrap?
<marcoceppi> jrwren: no idea, but probably go figure out some things about the cloud?
<jrwren> i see it in 1.22.1 changelog now.
#juju 2015-06-02
<lazyPower> thumper: pong
<lazyPower> thumper: i assume you found your answer by now? i saw a fairly lengthy thread with whit
<thumper> lazyPower: yeah, I have a way forward
<lathiat> Hi Folks.. is there any documentation/example charms I can look at for generic/recommended ways of supporting multiple install sources (e.g. package, install from source, install from a standard python source bundle, etc) .. a la the newer openstack charms
<jose> lathiat: hey, are you still around?
<Odd_Bloke> marcoceppi: Will https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/fix_cron_path/+merge/260696 address https://bugs.launchpad.net/charms/+source/ubuntu-repository-cache/+bug/1455649?
<mup> Bug #1455649: ubuntu-repository-cache: hard-coded cron path to juju-run is wrong for juju v1.24 <ubuntu-repository-cache (Juju Charms Collection):In Progress by daniel-thewatkins> <https://launchpad.net/bugs/1455649>
<Odd_Bloke> Does Juju ensure that only one hook can be running at a time?
<lathiat> jose: yeah
<marcoceppi> Odd_Bloke: yes, hooks are executed serially on each machine
<Odd_Bloke> marcoceppi: Thanks; that explains the behaviour I was seeing.
<nunutu> hi
<kingsman_> hi all
<kingsman_> i can't see the video :(
<Battaglin_> hi
<wakawaka_> hi
<wakawaka_> why am i on juju???
<wakawaka_> what is the Community Team Q&A irc channel?
<yoann54> BQ announced a convergence product too
<wakawaka_> guys.. ubuntu on air channel is #ubuntu-on-air
<Silviu> QUESTION: Why did you guys ditch the "Love the bottom edge" design direction from Ubuntu for phones? Ubuntu now has an edge that isn't used all that much, not to mention that there is no longer an exit option for apps.
<supereman16> Does anyone know if there is any way possible to manage juju services across environments?
<supereman16> Our business is trying to deploy a lot of our stuff on Amazon, but some stuff is required to be on bare metal services in our own vpc.
<supereman16> So I was wondering if we could manage these services with juju both over maas and amazon, but so far I haven't found anything saying that's possible
<supereman16> Anyone?
<med_> marcoceppi, ^ arosales ^ cross env juju?
<med_> http://curtis.hovey.name/2014/06/10/building-trans-cloud-environments-with-juju/
<arosales> med_, hello
<arosales> supereman16, at this time spanning services across different environment is a feature juju core is still working on
<med_> 'k
<arosales> supereman16, a good feature, just still being worked on.
<supereman16> Sad, any sort of eta?
<arosales> supereman16, to be clear juju can of course manage envX in maas and envY in amazon, but cross environment -- meaning in one envZ, service 1 in AWS related to service 2 in maas -- is still a feature in progress
<arosales> supereman16, not sure on an eta. I hope by  16.04
<supereman16> Ok. Yeah. That would be nice.
<arosales> for sure
<supereman16> med_, thanks for the link, looks interesting. I think I'll play around with that. :)
<med_> just a google result, not an endorsement supereman16
<supereman16> med_, but it looks interesting and I hadn't found it. Thanks anyway.
<med_> yw, caveat emptor
<Odd_Bloke> What's the process for getting a MP in to charm-helpers?
<Odd_Bloke> (Specifically: https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864)
<lazyPower> marcoceppi: correct me if i'm wrong, but i believe that they need to propose against charm-helpers/next right?
<lazyPower> Odd_Bloke: ^
<lazyPower> i'm pretty sure that's the baseline: propose against charm-helpers/next, and our devx maintainers will take a look during their review queue time slot
<Odd_Bloke> lazyPower: I can't find a likely looking charm-helpers/next.
<lazyPower> tvansteenburgh: ping
<lazyPower> Odd_Bloke: let me ping and see if i cant find a proper answer for you - sorry about my lack of knowledge here
<lazyPower> Odd_Bloke: but as it stands w/ that MP open, it should be on the docket regardless, looking more in terms for next time around
<Odd_Bloke> lazyPower: No worries; I'm EOD'ing anyway, so I'll read whatever is said tomorrow. :)
<tvansteenburgh> there is no next, propose against trunk
<Odd_Bloke> tvansteenburgh: trunk == lp:charm-helpers?
<tvansteenburgh> yes
<Odd_Bloke> tvansteenburgh: Thanks! :)
<Odd_Bloke> (So much for reading it tomorrow ;)
<tvansteenburgh> sure thing :)
<lazyPower> tvansteenburgh: thanks for following up on that.
<tvansteenburgh> np
<lazyPower> tvansteenburgh: is the /next thing only an idiom for openstack then?
<tvansteenburgh> yeah
 * tvansteenburgh wanders off to lunch
<lazyPower> Ok, I'm going crazy then :)
<beisner> marcoceppi,  so i'm trying to systematically compare series in a future-proof way in test writing.   ie. tests need to do more or do less, or do things slightly differently, depending on ubuntu release.
<marcoceppi> beisner: so just special cases where like if series > trusty do this?
<beisner> right
<marcoceppi> where series is the series of the deployment?
<beisner> right, not necessarily matching the charm series
<beisner> before pulling in distro_info python module usage, where I know we can do version comparisons, i'm wondering if amulet already has a way to do series comparison?
<beisner> as Wily just wrapped around past Warty, alphabetical ordering no longer means much
<beisner> fyi, distro_info.UbuntuDistroInfo.all() spits out the following, so that would be pretty easy to incorporate.  just don't want to reinvent a wheel.
<beisner> ['warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy', 'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric', 'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid', 'wily']
<marcoceppi> beisner: there's no notion of series based testing in amulet at all, but seems like a useful thing to have
<marcoceppi> esp when we move series out of Deployment() and instead use as an environment variable
<beisner> marcoceppi, in this case, i have a specific need for tests to behave differently >= vivid.   upstart vs systemd.
<beisner> ie.  on < vivid, we check system service status on each unit via   `status <service-name>` but on >= vivid, that needs to be `service <service-name> status`.
<coreycb> beisner, what about that enum in charm-helpers?
<beisner> so it's a pretty simple pivot point, and could be resolved easily now, just want to choose an approach that survives.
<coreycb> beisner, I was thinking about the last 2 functions here: http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/view/head:/charmhelpers/contrib/openstack/amulet/deployment.py
<beisner> coreycb, yep, looking at extending that.
<beisner> or rather, using the data in that dict
<coreycb> beisner, not sure how to future proof though without knowing names, so I think it'd need an update when we find out
<beisner> coreycb, distro_info.UbuntuDistroInfo.all() is maintained and always returns an ordered list
<coreycb> beisner, nice
<beisner> so for the purpose of      if series >= 'vivid':    the existing dict data could be inspected.
<beisner> and we'll likely have to maintain that UbuntuRelease:OpenstackRelease  table anyway
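The comparison beisner describes can be sketched by indexing into an ordered release list. This hard-codes the list printed above; real code would presumably pull it from the distro_info module or the charm-helpers dict instead:

```python
# Ordered oldest -> newest, as printed by distro_info above.
UBUNTU_RELEASES = [
    'warty', 'hoary', 'breezy', 'dapper', 'edgy', 'feisty', 'gutsy', 'hardy',
    'intrepid', 'jaunty', 'karmic', 'lucid', 'maverick', 'natty', 'oneiric',
    'precise', 'quantal', 'raring', 'saucy', 'trusty', 'utopic', 'vivid', 'wily',
]

def series_at_least(series, pivot):
    """True if `series` is the same as or newer than `pivot`."""
    return UBUNTU_RELEASES.index(series) >= UBUNTU_RELEASES.index(pivot)

def service_status_cmd(service, series):
    """The upstart-vs-systemd pivot from the discussion above."""
    if series_at_least(series, 'vivid'):
        return ['service', service, 'status']   # systemd (>= vivid)
    return ['status', service]                  # upstart (< vivid)

print(service_status_cmd('apache2', 'trusty'))  # ['status', 'apache2']
print(service_status_cmd('apache2', 'wily'))    # ['service', 'apache2', 'status']
```

Because the list is ordered, the comparison survives the Wily/Warty alphabet wrap-around.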
<arosales> marcoceppi, do you know if the mysql charm can do multi-master?
<arosales> marcoceppi, I think the current charm, https://jujucharms.com/mysql/trusty/25, only supports multi-slave but thought I would check here.
<lazyPower> arosales: it doesn't support multi-master in its current form
<arosales> lazyPower, ack
<arosales> and thanks
<lazyPower> np
 * lazyPower hattips
<stokachu> anyone running into issues with juju failing to create containers
<stokachu> http://paste.ubuntu.com/11530516/
<stokachu> this is with MAAS and ive hit this with both trusty and precise
<stokachu> it just dumps the entire script into the agent-state-info :\
<lazyPower> whoa, and this is on -stable
<lazyPower> stokachu: no thats a new one on me. I haven't been using with MAAS recently however.
<stokachu> yea this is the first ive seen it too, makes me wonder though b/c i was using the same bits yesterday with no problems
<stokachu> wondering if images were recently updated
<mahmoh> stokachu: everything worked for you the day before just fine?!
<stokachu> yea i ran it yesterday all day
<mahmoh> stokachu: it was broken for me yesterday after 4pm EST
<mahmoh> stokachu: so is this an lxc bug or juju bug, I'm guessing start with lxc?
<stokachu> im trying to find where the images are for lxc that juju uses
<stokachu> see if it was recently uploaded
<mahmoh> stokachu: since you're not using proxies in your env, it's actually more likely lxc
<stokachu> mahmoh: https://bugs.launchpad.net/juju-core/+bug/1417594
<mup> Bug #1417594: failure to retrieve the template to clone: lxc container with 1.22 beta2 <lxc> <oil> <juju-core:Fix Released by wallyworld> <juju-core 1.22:Fix Released by wallyworld> <https://launchpad.net/bugs/1417594>
<stokachu> says it was fixed though
<mahmoh> when where though?
 * mahmoh guesses 1.24
<stokachu> the milestone says 1.23-beta1
<mahmoh> reported back in Feb
<stokachu> so it should be included in the current juju
<mahmoh> stokachu: could you check on one of your problem nodes if cloud-init is installed? it should be; if not, that's the problem (a cloud-archive meta package problem)
<mahmoh> pls ^
<mahmoh> stokachu: my env is remote and a watch-me-type so I'm avoiding it until I have a fix
<stokachu> mahmoh: it doesn't even get that far
<stokachu> the template has cloud-init
<stokachu> but the cloning of that template is failing
<mahmoh> stokachu: but you can download the template where I cannot, right?  So maybe a slightly different problem
<stokachu> ah
<stokachu> yea it looks like it downloads the template but the actual lxc clone is failing
<mahmoh> I don't know which problem I'd rather have, you out of disk space by any chance?
<stokachu> nah im on 2T
<stokachu> i did run out of ips again :(
<stokachu> but fixed that
<mahmoh> lol
<mahmoh> IPs
<mahmoh> this isn't your problem is it: https://bugs.launchpad.net/lxc/+bug/1410876
<mup> Bug #1410876: Error executing lxc-clone: lxc_container: utils.c: mkdir_p 220 Not a directory - Could not destroy  snapshot %s - failed to allocate a pty; Insufficent
<mup> privileges to control  juju-trusty-lxc-template <lxc> <oil> <stakeholder-critical> <trusty> <juju-core:Triaged> <lxc:New> <https://launchpad.net/bugs/1410876>
<stokachu> doesnt look like it
<mahmoh> stokachu: this might be my problem: https://bugs.launchpad.net/lxc/+bug/1331920
<mup> Bug #1331920: keyserver workarounds in templates/lxc-download.in not accessible <lxc:New> <https://launchpad.net/bugs/1331920>
<stokachu> looks promising
#juju 2015-06-03
<marcoceppi> stokachu: I've seen this, well something similar, before
<redelmann> hello, I'm trying to remove a relation between a service and a subordinate in juju 1.23. Is it possible?
<redelmann> juju destroy-relation says: ERROR relation "logstash-agent:juju-info ubuntu:juju-info" not found
<redelmann> but juju status show a relation
<mahmoh> marcoceppi: any idea for a fix?
<marcoceppi> redelmann: can you show your juju status?
<marcoceppi> mahmoh: I'm not sure, but it is likely a proxy issue fwiu
<mahmoh> marcoceppi: yeah, I agree just unsure what; I think it's in Juju since the deployed node can now download the lxc templates manually as sudo, sudo lxc-create ... and it works fine but fails with juju ... --to lxc:0
<mahmoh> marcoceppi: I'll keep on it
<marcoceppi> mahmoh: have you tried juju retry-provisioning now?
<redelmann> marcoceppi, http://paste.ubuntu.com/11533810/
<mahmoh> marcoceppi: no, nothing changed: deployed env w/ lxc, lxc fails in juju, juju ssh # ... sudo lxc-create ... works
<redelmann> marcoceppi,  juju status is extra large, so i remove irrelevant parts
<marcoceppi> well, logstash doesn't apper to be on ubuntu, which service are you trying to remove it from? juju-gui? fail2ban?
<rick_h_> redelmann: all services have a relation with juju-info I believe. Any relation that starts with juju-* isn't exposed to users to modify/mess with
<marcoceppi> oh, duh, you also can't remove subordinates at this time.
<rick_h_> redelmann: https://jujucharms.com/docs/devel/authors-implicit-relations
<redelmann> marcoceppi, i deploy logstash to machine0.... big mistake
 * rick_h_ runs away now
<redelmann> rick_h_: yes, i read that, i just dont understand why
<redelmann> ok, thank marcoceppi and rick_h_ . Bed time.
<redelmann> good bye
<merkurus> what does juju mean - literary
<blr> merkurus: a juju is a an object of spiritual/magical significance, from west african religions.
<blr> one too many articles there...
<merkurus> blr : aah
<jam> wallyworld: I'm available now, we'll need to keep it pretty quick since I'm at EOD and my son needs some homework help
<jam> wallyworld: https://plus.google.com/hangouts/_/canonical.com/tanzanite ?
<wallyworld> ok
<pmatulis> if i want to add-unit will all the parameters of the existing service be used [ such as --constraints (tags) and --config (file) ]?  if so, any way to see that metadata of existing service?
<Odd_Bloke> I have a place where I'm catching an exception in a charm, I'd like to log about it (ideally including traceback); is there any good way of doing that?
<Odd_Bloke> I notice that hookenv.log is just a function (i.e. I can't do hookenv.log.exception('...') as with normal Python logging).
<marcoceppi> Odd_Bloke: you could use the logging library, stdout is captured as DEBUG level output in the juju log for that service
<Odd_Bloke> Ah, shiny.
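A minimal sketch of marcoceppi's suggestion (the logger name and function are illustrative): point the stdlib logging module at stdout, so tracebacks land in the unit's juju log at DEBUG level, and use `Logger.exception` to get the traceback for free.

```python
import logging
import sys
import traceback

# stdout from a hook is captured by juju as DEBUG-level output in the unit log,
# so routing the logging module there makes tracebacks show up in `juju debug-log`.
logging.basicConfig(stream=sys.stdout, level=logging.DEBUG,
                    format="%(levelname)s %(message)s")
log = logging.getLogger("my-charm")  # illustrative name

def risky_operation():
    raise RuntimeError("boom")

last_traceback = None
try:
    risky_operation()
except Exception:
    last_traceback = traceback.format_exc()  # full traceback as a string
    log.exception("risky_operation failed")  # logs the message plus the traceback
```

This sidesteps the fact that `hookenv.log` is a plain function with no `.exception()` method.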
<nodtkn> juju add-machine  ssh:root@junor1-1 returns "ERROR error checking if provisioned: subprocess encountered error code 1"
<nodtkn> with --debug I see that it creates the ubuntu user but then fails when checking to see if the machine is already provisioned
<marcoceppi> nodtkn: was this machine at one point provisioned?
<nodtkn> marcoceppi: maybe, but I don't think so
<marcoceppi> nodtkn: if you log on to that machine, what does `initctl list | grep juju` show?
<nodtkn> marcoceppi: It does not show anything.... I asked around and found out that it has been provisioned before
<nodtkn> do I need to reinstall or is there a easy way to convince it that it is no longer provisioned
<marcoceppi> nodtkn: you may just need to do some clean up
<marcoceppi> nodtkn: is there a /var/lib/juju directory?
<Makyo> Trying to deploy a local charm (having successfully deployed it earlier today), and getting "ERROR invalid character 'w' looking for beginning of value" with no further info.  Is there a way to dig into that a bit more? 1.24-beta5-utopic-amd64
<marcoceppi> Makyo: check metadata.yaml or actions.yaml or config.yaml for typos
<nodtkn> marcoceppi: /var/lib/juju does not exist
<Makyo> marcoceppi, Everything looks good.  Like I said, I deployed this morning fine, and now I'm getting that error. bzr diff/stat show no changes.
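That Go-flavoured parse error usually means a stray character in one of the charm's YAML files; a quick local sanity check (assuming PyYAML is installed; file contents here are made up) is:

```python
import yaml

def check_yaml(text):
    """Return None if the YAML parses, else the parser's error message."""
    try:
        yaml.safe_load(text)
        return None
    except yaml.YAMLError as err:
        return str(err)

good = "name: mycharm\nsummary: demo\n"
bad = "name: mycharm\n  summary: bad-indent\n"  # broken indentation

print(check_yaml(good))
print(check_yaml(bad))
```

Running the same check over metadata.yaml, config.yaml, and actions.yaml pins down which file (and line) juju is choking on.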
<mbruzek> marcoceppi: Is there a way to wait until a charm has started or juju is idle after a deploy?
<marcoceppi> mbruzek: inside a charm or from cmdline?
<mbruzek> marcoceppi from command line
<mbruzek> juju deploy ubuntu
<marcoceppi> mbruzek: if you use 1.24, yes
<mbruzek> wait_until_started
<mbruzek> lay it on me man
<marcoceppi> does it have to be bash? python would be faster
<mbruzek> python is fine
<mbruzek> amulet.Deployment.wait()?
<marcoceppi> mbruzek: no, that's the old way
<marcoceppi> I've got a better way if you're using 1.24
<kwmonroe> juju deploy ubuntu && sleep 36000 && do_work
<kwmonroe> should be g2g mbruzek
<kwmonroe> hey friends, can i kick off a quickstart deployment, juju switch to another substrate in a new terminal, and kick off another?
<marcoceppi> kwmonroe mbruzek https://gist.github.com/marcoceppi/002ce8122f49a15c0153
<mbruzek> marcoceppi: my computer froze up just before your answer, I got the link.
<mbruzek> marcoceppi: thanks
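The gist linked above isn't reproduced here, but the idea can be sketched as a poll over `juju status --format=json` output. The parsing below assumes the 1.x status shape (services → units → agent-state); a real script would shell out to juju and sleep between tries:

```python
import json

def all_units_started(status_json):
    """True when every unit in a 1.x-style `juju status --format=json`
    document reports agent-state 'started'."""
    status = json.loads(status_json)
    for service in status.get("services", {}).values():
        for unit in service.get("units", {}).values():
            if unit.get("agent-state") != "started":
                return False
    return True

# Hypothetical status document: one service still pending.
sample = json.dumps({
    "services": {
        "ubuntu": {"units": {"ubuntu/0": {"agent-state": "started"}}},
        "mysql":  {"units": {"mysql/0":  {"agent-state": "pending"}}},
    }
})
print(all_units_started(sample))  # False
```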
<kwmonroe> to answer my own question, yes, yes you can juju switch from another terminal and deploy to multiple substrates.  good on juju!
<denverjoel> I installed juju-local on ubuntu 14.04.2 LTS. followed the directions on https://jujucharms.com/docs/stable/getting-started. when i get to the part about testing the wordpress / mysql setup, nothing pops up on the browser, or curling on the host machine, or while SSH'd into the juju machine.
<denverjoel> what else can i check to see if something is failing?
<kwmonroe> denverjoel: try running 'juju debug-log'.  that should show you what the units in your setup are doing.. perhaps still installing packages?
<denverjoel> kwmonroe: this is my log http://pastebin.com/tR1ws0P8
<denverjoel> it must be the wordpress charm, because deploying an nginx charm works perfectly fine
<denverjoel> now i can access nginx via curl but still not externally.  i have ufw enabled and tried disabled as well.  there are iptables rules in place but all are set to accept.
<thumper> lazyPower: hey, did you see my email?
<kwmonroe> denverjoel: try 'juju status' and take a look at the wordpress service.. does it say "started" in that output?  and/or have opened ports in that output?
<kwmonroe> also denverjoel, what version of juju are you using (juju version)?
<Bialogs> Is there a way to have multiple configurations attached to the same Juju charm?
<kwmonroe> denverjoel:  double check that you did 'juju expose wordpress'.. juju status should show exposed: true in the output..
<kwmonroe> Bialogs: you can deploy the same charm with different configs, if that's what you're after.. juju deploy --config=<config.yaml> <charm>
<denverjoel> kwmonroe: it is exposed but no open ports
<Bialogs> kwmonroe: No way to do that in the GUI?
<kwmonroe> ok denverjoel, and navigating to (or curling) the public_address for the wordpress service doesn't get you anything? (perhaps open ports arent listed in your version of juju)
<denverjoel> kwmonroe: curling inside the machine, outside of it: to localhost and the private IP doesn't yield anything.
<lazyPower> Bialogs: are you saying you want 2 of the same service stood up with different configurations?
<lazyPower> Bialogs: if thats the case, you will need to give them sep. service names, eg: juju deploy --config foo.yaml cs:trusty/mediawiki mediawiki-north
<lazyPower> and  juju deploy --config bar.yaml cs:trusty/mediawiki mediawiki-south
<Bialogs> lazyPower: that's exactly what I'm looking for
<lazyPower> or once you've deployed the 2 services, edit the configs in the GUI - the key indicator is same service, different names for the service.
<lazyPower> service representations in juju essentially group the configuration for each unit that comprises the service
<lazyPower> so if you scale mediawiki-north, it inherits the config of the service for mw-north, same rule applies for mw-south
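As a concrete (hypothetical) illustration of the above, a bundle can carry both service names with their own options while pointing at the same charm; the option names here are illustrative:

```yaml
# Hypothetical bundle fragment: one charm, two services, two configs.
services:
  mediawiki-north:
    charm: cs:trusty/mediawiki
    options:
      name: "North wiki"
  mediawiki-south:
    charm: cs:trusty/mediawiki
    options:
      name: "South wiki"
```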
<denverjoel> how do i go about opening up ports?
<kwmonroe> denverjoel: juju expose wordpress should have done that for you.  are you doing your deployment locally (lxc)?  or with a cloud provider?
<denverjoel> local
<kwmonroe> hmm. in that case, open ports shouldn't even matter as you should be able to access container ports from your host machine willy nilly.
<denverjoel> im removing wordpress and mysql and re-deploying.
<kwmonroe> ack denverjoel, i'm gonna run through those steps locally too and see if i can repro.
<denverjoel> kwmonroe: immediately after the wordpress service started i curled and got 502 Bad Gateway. I retried curl and got nothing at all.
<denverjoel> strange, that was all without exposing the service. after exposing, i get no response from curl
<kwmonroe> ah, i think i know what's up denverjoel.. curl <private-ip> doesn't return anything because the index of http://<private-ip> just php includes some other stuffs.  so i bet it's working, there's just nothing for curl to see there.. can you try "w3m <private-ip>" or lynx or elinks or even a real browser if you can?
<kwmonroe> for me, chrome'ing to 10.0.3.x shows the wordpress install wizard
<kwmonroe> but curl'ing that gives me nada.
<kwmonroe> that's without juju expose too, fwiw.
<Syed_A> hey guys, how do i remove a unit which is stuck in error state. juju remove-unit seems not to have any effect.
<thumper> Syed_A: you can move it out of error by going 'juju resolved unit-name/0' or whatever the unit name is
<thumper> then it should end up getting removed
<Syed_A> thumper: Thanks! It worked :)
<Syed_A> can i deploy heat in HA mode with juju ?
 * thumper shrugs
<bleepbloop> jamespage: you around?
<bleepbloop> I am having an issue with config-changed failing for the nova-compute charm when trying to change virt-type, would anyone have any experience with this?
<Bialogs> What's the difference between gray and green connection lines in Juju
#juju 2015-06-04
<kwmonroe> Bialogs: if you're using the gui, i think grey means the relation has been tentatively requested.. it goes green once you commit that request and the command gets sent to juju.  that's my best guess, but #juju-gui would have the authoritative answer.
<rick_h_> kwmonroe: Bialogs correct
<rick_h_> kwmonroe: Bialogs I think that the relation lines for subordinates are also grey though and deemphasized
<thumper> lazyPower: I don't suppose you are working late into the night again are you?
<jose> in his birthday?
<jose> (actually, I was going to ask the same question :P)
<suchvenu> Hi, I am an IBM charmer . I am getting the following error while trying to create a bug in Launchpad
<suchvenu> Oops!  Sorry, something just went wrong in Launchpad.  We've recorded what happened, and we'll fix it as soon as possible. Apologies for the inconvenience.  (Error ID: OOPS-fa7f47217944bd43836662fe22cb1415)
<suchvenu> Can someone help with this
<ludvigx> Has any of you guys ever experienced that the juju cli tools become unresponsive ? returning nothing. Only command that returns anything is juju version
<andyjeffries> Does anyone know of a good blog post/document on how to run your own Juju charms store?  We want private charms that aren't publicly accessible (we may open source them later, but we'd like that to be an explicit action, we may not if they are completely custom to our systems).
<marcoceppi> ludvigx: yes, typically means it can't connect to the environment you're trying to talk to/it thinks it's still bootstrapped (if it's not)
<marcoceppi> andyjeffries: you'd have to just create a local charms repository https://jujucharms.com/docs/stable/charms-deploying#deploying-from-a-local-repository
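The local-repository approach linked above amounts to a directory tree keyed by series, deployed with the local: scheme (paths and charm name here are examples):

```
~/charms/
└── trusty/
    └── mycharm/
        ├── metadata.yaml
        ├── config.yaml
        └── hooks/

juju deploy --repository=~/charms local:trusty/mycharm
```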
<ludvigx> marcoceppi : hmm, it was a functioning environment before. So possibly a problem on the bootstrap node ?
<marcoceppi> ludvigx: it's possible,  best to run `juju status --debug --show-log`
<marcoceppi> let that run until it completes, which may take a while
<andyjeffries> marcoceppi: OK, so we couldn't use the Juju GUI to deploy charms from a local repository on our development machines?  Or we could, but we'd put the local repository on our Juju server and then would run the juju command only from that server (because our development machines wouldn't have the local repository on it)?
<debochat> I am trying to install "libgcc1:i386" through my Juju install hook but the command is failing with the below error:
<debochat> 2015-05-26 08:59:23 INFO unit.wxs/0.install logger.go:40 E: Unable to locate package libgcc1
<andyjeffries> I saw there was a repo on GitHub https://github.com/juju/charmstore and thought there'd be a doc/blog somewhere on how to run this for a local/private/hosted install.
<debochat> but the same command works when trying manually
<marcoceppi> andyjeffries: you *can* deploy local charms to the GUI, by dragging and dropping zips of them, but it's not a perfect experience
<debochat> my juju env is local
<debochat> and working on ubuntu 14.04
<marcoceppi> andyjeffries: you can also run a charmstore locally, but juju has no way of knowing how to point to it, so you'd have to poison DNS to get that to work. Also, charmstore ingests its data from launchpad atm, so even if you did get it running still not perfect because you'd have to have a private launchpad running IIRC
<andyjeffries> marcoceppi: Sorry, I may be confusing terms (I'm new to Juju and OpenStack).  We want to publish our charms to a local (inside our network) charm store, so that later they can be searched for in the same way as the rest of the charms.  We'd probably want to point our Juju server to our local charm store server and then we'd fork/clone any public charms we wanted to use into there (unless there's an easy way of having Juju use multiple stores and
<andyjeffries> clearly differentiate them).  Clearer?
<andyjeffries> marcoceppi: That's great, thank you.  So this isn't a use-case that's ready to run, it's a bit of hackery...
<marcoceppi> andyjeffries: so that use case is coming, it's just not a clean story now. I'd say in the next 2-3 months that will be much stronger, running a private to you charm store
<marcoceppi> debochat: that's odd, maybe run apt-get update before your install command?
<andyjeffries> marcoceppi: Thank you.  We're going to be launching our env after that point, but will need a hacky solution until then - we'll work on that ourselves then (unless the work in making it a clean story is a public repo that we can test/contribute to?)
<debochat> marcoceppi: you mean, include the code - apt-get update before calling apt-get install in my install hook?
<marcoceppi> andyjeffries: you'd have to pop into #juju-dev to figure out that, I don't think it's yet been worked on
<marcoceppi> debochat: yup
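The fix being agreed on amounts to refreshing the package index before installing. A minimal sketch of the command order an install hook would run (the package name is taken from the log above; the helper function is illustrative, not a charm API):

```python
# Commands an install hook should run, in order: refresh the apt
# package index first so i386 packages resolve, then install.
def install_commands(package):
    return [
        ["apt-get", "update"],
        ["apt-get", "install", "-y", package],
    ]

cmds = install_commands("libgcc1:i386")
```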
<andyjeffries> marcoceppi: OK, thanks very much for the pointers :-)
<andyjeffries> Appreciate it!
<debochat> marcoceppi: many thanks, will try that.
<schkovich> how to avoid downtime during upgrade-charm? i diligently tested everything in staging environment but still upgrading all units in production at once makes me feel uncomfortable.
<marcoceppi> schkovich: we're working on ways to model blue-green upgrades in juju, though it's not entirely possible at the moment
<schkovich> marcoceppi: define entirely possible ;)
<schkovich> marcoceppi: is it possible to some degree?
<schkovich> marcoceppi: the only idea that i have at the moment is to define a new environment but that is kind of way to complicated since there is a number of charms running and i need just to upgrade one
<marcoceppi> schkovich: you could deploy the new charm under a different name
<marcoceppi> lets say the service is mysql
<marcoceppi> so you already have juju deploy mysql
<marcoceppi> if you do juju deploy mysql mysql-green
<marcoceppi> then do all the relations, you can then just "failover" to the green one
<marcoceppi> and destroy the previous one
<schkovich> marcoceppi: okay, does make sense, in that case i could deploy charm to each server one by one.
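The steps marcoceppi outlines can be written down as a command sequence. A sketch (service names follow his mysql example; the `wordpress` relation target is illustrative, and `destroy-service` is the juju 1.x command):

```python
# Blue-green style swap: deploy the new charm under a second service
# name, re-create its relations, fail over, then retire the old service.
def blue_green_commands(charm, old="mysql", new="mysql-green"):
    return [
        "juju deploy {} {}".format(charm, new),        # new service alongside old
        "juju add-relation {} wordpress".format(new),  # re-create relations (illustrative)
        "juju destroy-service {}".format(old),         # retire the old one after failover
    ]

cmds = blue_green_commands("local:trusty/mysql")
```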
<Odd_Bloke> jose: Thanks for your review, sorry for missing all the conflicts; I'll look at it today.
<Odd_Bloke> I would have commented there, but LP isn't letting me. ;.;
<schkovich> marcoceppi: i have local charm store trust/x-charm, will creating symlinks x-charm-green and x-charm-red work? eg will i be able to say juju deploy x-charm-green or juju deploy x-charm-red?
<Odd_Bloke> jose: Or, alternatively, it was waiting so you could get the same comment 5 times. ¬.¬
<marcoceppi> schkovich: no, all you need to do is type `juju deploy local:trusty/x-charm x-charm-green`
<marcoceppi> it'll deploy x-charm but call it x-charm-green to avoid collisions
<schkovich> marcoceppi: awesome! i was not aware of that. is it documented?
<schkovich> marcoceppi: it's like charm alias
<marcoceppi> schkovich: it really should be, doesn't appear to be. I'll fix that
<marcoceppi> `juju help deploy` covers it, very sparsely too :\
<schkovich> marcoceppi: <service name>, if omitted, will be derived from <charm name>. this one?
<marcoceppi> schkovich: yes
<schkovich> marcoceppi: i read that line so many times but it never came to my mind in which way could it be useful :)
<schkovich> marcoceppi: thanks a lot!
<web> wow active this morning.  :)
<schkovich> marcoceppi: juju deploy local:trusty/charm-x charm-x-green --to y is not working. :( i am still getting the old version of charm
<schkovich> marcoceppi: perhaps i should include revision number as well?
<stub> If the leader updates the leader settings, is it possible that other units will see the changes first in any hook? Or is the first place they will see the changes guaranteed to be the leader-settings-changed hook?
<stub> fwereade: Do you know off the top of your head?
<Odd_Bloke> aisrael: I don't really know the charm MP process; are your +1s enough for me to merge things myself?
<tvansteenburgh> Odd_Bloke: only a member of ~charmers can do the merge
<fwereade> stub, it's not guaranteed to wait until leader-settings-changed
<fwereade> stub, similarly to relation settings -- you may see values from the future that render future change hooks redundant
<jose> Odd_Bloke: can you link me to that MP that has a +1 so I can take a quick look?
<jose> Odd_Bloke: also, LP may be weird sometimes, don't forget it works with karma :P
<Odd_Bloke> jose: But I've been a good boy! :p
<jose> hehe
<Odd_Bloke> jose: You mean https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/update_charm-helpers/+merge/260956 ?
<jose> oh, I'm looking into that one right now
<Odd_Bloke> (I don't mean to brag, but I have _several_ MPs with +1s ;) )
<jose> check your email
<Odd_Bloke> jose: \o/
<Odd_Bloke> jose: Thanks! :)
<jose> Odd_Bloke: thanks to you for the fixes - greatly appreciated!
<Odd_Bloke> :)
<Odd_Bloke> jose: Will those changes now have to pass CI to get published?  (If there's a link to a doc about the process, I'm happy to read up on it :)
<jose> Odd_Bloke: when you create an MP, it goes into the review queue (review.juju.solutions). the CI instances catch up and process the tests, and then we review (accept/reject) based on what we see and the CI results
<jose> and then we merge, which is the final step if it's all good
<stub> fwereade: Ta. It may have been a pedantic question, but it actually affects my design :)
<arosales> any folks knowledgable on how the "charm collection" is set up in LP?
<stub> arosales: It's modelled as a distribution, like Ubuntu or Debian.
<arosales> specifically why I see things like https://bugs.launchpad.net/charms/+source/platform-lsf/
<arosales> and https://bugs.launchpad.net/charms/+source/crate
<arosales> stub these aren't recommended charms, but  I still see their URLs in LP. However, no code attached.
<arosales> I am guessing when folks push to a personal name space it still shows up in the charm collection but doesn't attach the code branch...?
<stub> arosales: I think every sourcepackage name has a URL in that space. https://bugs.launchpad.net/charms/+source/pytz works for example
<stub> arosales: #launchpad might know more
<arosales> stub, hmm thanks
<marcoceppi> arosales: stub is correct
<marcoceppi> every source package has a place holder in charms
<marcoceppi> but only charmers can link the source package alias to the branch
<arosales> marcoceppi, why isn't the source code branch attached?
<marcoceppi> because there is no source code for it
<arosales> ah, so its an alias
<arosales> and I can't submit a bug against it either correct
<arosales> ie I couldn't submit a bug against https://bugs.launchpad.net/charms/+source/pytz
<marcoceppi> arosales: you probably could tbh
<arosales> marcoceppi, I get the following error
<arosales> "platform-lsf" does not exist in Juju Charms Collection. Please choose a different package. If you're unsure, please select "I don't know"
<arosales> https://bugs.launchpad.net/charms/+source/platform-lsf/+filebug
<marcoceppi> arosales: oh, well there ya go
<arosales> so I was just trying to figure that piece out
<arosales> it looks like IBMers are actually getting an oops from LP when trying to do this though
<arosales> so I was trying to reproduce that oops
 * arosales can't reproduce the oops, but I get an error message back from LP at least
<upper--> hi guys, is it possible to reference variables inside an amulet sentry 'run'?
<upper--> e.g. self.unit.run('grep ', somevariable, '/etc/whatever')
<mbruzek> upper--:  how do you mean?  You can build the run command *with* variables.
<mbruzek> string = 'grep {0} /etc/whatever'.format(variable)
<mbruzek> self.unit.run(string)
<mbruzek> upper does that help?
<upper--> yep cheers
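To pin down mbruzek's point: `run` takes a single string, so interpolate the variable into the command first rather than passing it as extra arguments. A self-contained sketch of building that command (the amulet sentry call itself is commented out, since it only works inside a deployment test):

```python
somevariable = "pattern"  # illustrative value

# Build the whole shell command as one string; amulet's
# self.unit.run() expects a single command string, not fragments.
command = "grep {0} /etc/whatever".format(somevariable)
# self.unit.run(command)  # would execute the command on the unit under test
```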
#juju 2015-06-05
<lazyPower> thumper: pong
<thumper> lazyPower: hey, just back from the gym, will shower then get in touch :)
<Mmike> Hi, guys. 'juju expose', how is that implemented? What should happen when I do 'juju expose'? I'm testing on local provider, maas provider and openstack provider and doing 'juju expose' for a service really does nothing.
<rick_h_> Mmike: normally expose will poke a firewall rule in the security settings for the service
<rick_h_> Mmike: in the lxc provider there's not one so it's already open
<rick_h_> Mmike: but on ec2, for instance, it would make that port open over the public network vs just the internal ec2 private one
<Mmike> rick_h_: what about maas and/or openstack?
<rick_h_> Mmike: so I'm not 100% sure but since I don't think they run a firewall ootb on the machines brought up they don't have the same need either
<Mmike> rick_h_: ack, thnx for the info
<rick_h_> np
<jose> tvansteenburgh: sorry about that, it's been a long time without charming. missed my bzr add :)
<jose> will fix as soon as I'm back from university
<tvansteenburgh> jose: yeah, i figured, np
<jose> also, re: tests, good idea, will write some
<tvansteenburgh> +1
<jrwren> Where are the notes and links from the last juju office hours?
<jrwren> https://youtu.be/L4isOfzr8c4?t=36m00s jcastro mentions including status update and marcoceppi mentions a raleigh (or ralley) charm as example. Where is this example?
<marcoceppi> jrwren: https://github.com/juju-solutions/rally
<marcoceppi> jrwren: the upates were emailed to the list
<marcoceppi> jrwren: https://lists.ubuntu.com/archives/juju/2015-May/005353.html
<jrwren> marcoceppi: thanks.
<jrwren> marcoceppi: i wonder if I'm not on that list.
<marcoceppi> jrwren: you should probably join it
<jrwren> I'm on it. That email fell through the cracks.
<jrwren> Thanks again.
<jcastro> someone want to check this one out?
<jcastro> http://askubuntu.com/questions/630983/docker-in-kilo-with-nova-compute-juju
#juju 2015-06-06
<styles> Hey guys I'm trying to install JuJu on 14.04 LTS and I'm running into issues. Created new user, gave them sudoers, ran juju quickstart -i .. generated everything. First issue was it threw "undefined symbol: cgmanager_get_pid_cgroup_abs_sync" assumed it was some LXC thing .. upgraded system .. but now it's just stuck at retrieving the environment status
<styles> (been there for a few minutes)
<marcoceppi> styles: that's...weird
<marcoceppi> what version of juju are you using?
<styles> latest
#juju 2015-06-07
<firl> Anyone have some resources on how to submit a patch for a charm that has been failing ?
<jrwren> firl: branch it on launchpad.net and submit a merge proposal
<firl> thanks jrwren
#juju 2016-06-06
<jamespag`> gnuoy`, https://review.openstack.org/#/c/323264/ if you please :-)
<gnuoy`> jamespag`, good morning to you too :)
<jamespag`> morning
<jamespage> that's better
<jamespage> ta
<autonomouse> Hi, I wonder if anyone can help me out a bit here please: I'm trying to use juju with LXD seeing as though juju 2.0 no longer has a local provider. However, I've clearly done something wrong as I keep hitting ERROR invalid config: can't connect to the local LXD server: Response was missing `api_compat` - if anyone has any ideas why this is happening, could you let me know? thx
<tasdomas> lazyPower, ping?
<tasdomas> anyone have experience with the relation side of charms.reactive ?
<tasdomas> lazyPower, ping?
<tinwood> beisner, I've got the tempest charm working with charms.openstack now: https://github.com/openstack-charmers/charm-tempest/pull/8
<beisner> tinwood, excellent!  (however, we need that dev to occur @ https://github.com/openstack/charm-tempest)
<tinwood> hmm, beisner, when did it move?
<tinwood> beisner, and more to the point, why is the old one still there?  Very confusing.
<beisner> tinwood, Thu.  that was sort of my main topic of convo last wk ;-)    and yep, we need to retire the old repo.
<tinwood> beisner, I didn't realise they had shifted over yet.  Okay, I'll drop the PR there, and retry it on the other one (plus try to sync it, if needed).
<beisner> tinwood, much appreciated
<beisner> gnuoy, jamespage - we got the openstack-charmers/hacluster sync'd into the gerrit repo on Fri.    I think we'll have some maint/housekeeping @ LP and openstack-charmers GH, yah?    also as tinwood points out, with tempest also moved into gerrit, we should rm the repo from openstack-charmers.  just lmk if/what you want me to tackle on that.
<tinwood> beisner, okay, on the new one: https://github.com/openstack/charm-tempest/pull/1
<tinwood> beisner, or should this now be in gerrit?
<beisner> tinwood, it's gerrit.
<tinwood> beisner, so it's a git review one now?
<beisner> tinwood, indeed
 * tinwood sigh
<gnuoy> beisner, thanks for getting the charm repo populated, how did you manage it?
<lazyPower> tasdomas o/ hey sorry i didnt see the pings this morning. What can I help you with?
<beisner> gnuoy, yw, happy to do it.   mea culpas and begging in openstack-infra ;-)   only infra-root members can do it.  one dev pushed back telling us to just do 1 big commit.  but persistence...
<beisner> and ellipsis
<gnuoy> beisner, cool, well thank you so much for getting it done. It would have been a shame to lose the commit history
<beisner> gnuoy, indeed.  welcome so much ;-)
<tasdomas> lazyPower - so I'm still working on relation departure handling with charms.reactive
<tasdomas> lazyPower - and I think there's a subtle bug in the conversation code
<tasdomas> lazyPower - it's impossible to get remote data from a relation that is being departed from
<tasdomas> even though the remote information is still available via the command line tools
<lazyPower> really? conversation.get_remote() fails when juju relation-get works?
<tasdomas> lazyPower, yes
<tasdomas> lazyPower I think it's because get_remote relies on the list of units in the relation
<lazyPower> oo we def. need to take a look at that. Do you mind filing a bug against charms.reactive? https://github.com/juju-solutions/charms.reactive
<tasdomas> lazyPower - will do
<tasdomas> lazyPower - another question - what could be the reason for charms.reactive trying to pull in relation information from relations that have already been departed from?
<lazyPower> i'm going to have to tap cory_fu_ to answer that one. I'm not super familiar with that area of the code
<cory_fu_> tasdomas, lazyPower: The first thing that jumps to mind is if an interface layer uses the -broken hook, it will often get states set on conversations attached to a remote-unit of None, which will then always act like it's set and can't be removed
<tasdomas> cory_fu - so is the proper approach then to ignore -broken hooks in the interface layer?
<cory_fu_> Yes.  They don't give you any useful information in reactive anyway.  They fire when all related units are gone, which is easy to detect in reactive by the lack of {relation_name}.joined state
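cory_fu_'s rule of thumb can be modeled without charms.reactive itself: keep a `.joined` state per remote unit, clear it in `-departed`, and treat "all related units are gone" as the absence of any `.joined` state. A toy, library-free simulation of that bookkeeping (the unit names are illustrative):

```python
# Toy model of reactive relation states: the set of remote units that
# currently hold the '{relation_name}.joined' state.
joined = set()

def on_joined(unit):
    joined.add(unit)        # -joined hook: set the state on this unit

def on_departed(unit):
    joined.discard(unit)    # -departed hook: clear it for just the departing unit

def all_gone():
    # Equivalent of @when_not('cluster.joined'): no unit holds the state.
    # This is exactly what -broken would have signalled, without that hook.
    return not joined

on_joined("etcd/0")
on_joined("etcd/1")
on_departed("etcd/0")
assert not all_gone()       # one peer still related
on_departed("etcd/1")
```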
<lazyPower> oo cory_fu_ - thats a problematic revelation
<lazyPower> https://github.com/juju-solutions/interface-etcd/pull/5/files
<lazyPower> i was unable to discern the departing unit without using that
<lazyPower> is there an alternative pattern i can use?
<cory_fu_> lazyPower: Just change "broken" there to "departed"
<lazyPower> ok so departed only runs on the departing unit?
<cory_fu_> Correct
<lazyPower> i'm pretty sure i had this flipped and it tanked, caused every unit to self unregister
<cory_fu_> I sent an email about this to the Juju list a while back, but clearly I need to push it more.  Broken is broken in reactive.
<tinwood> beisner, for tempest charm: https://review.openstack.org/325966 Move files to new layered location
<beisner> tinwood, [testenv:pep8] needs to be the tox enviro name for lint
<beisner> if we are to keep this in line with other openstack projects and other os-charms
<tinwood> beisner, kk
<lazyPower> cory_fu_ - yeah i just flipped it from broken to departed and i get behavior i absolutely dont want
<lazyPower> it nukes everything but the leader
<beisner> gnuoy, tinwood - fyi transitory os-charmer hacluster and tempest gh repos deleted.
<tinwood> beisner, yay!  Thanks, it will help to reduce my confusion!
<gQuigs> hi there, I'm trying to get sosreport ready for Juju 2.0 and otherwise improve out automated logging plugin
<gQuigs> I want to capture juju debug-log -n 1000, but the syntax appears to be materially different between juju 1 and 2
<gQuigs> for juju 2.0 - juju debug-log -T -n 1000
<gQuigs> for juju 1.25 - juju debug-log -n 1000
<gQuigs> and if you do with -T on 1.25 it fails, if you do without -T on 2.0 it will hang indefinitely
 * gQuigs really doesn't want to have to special case versions;
<marcoceppi> gQuigs: you're going to pretty much have to, 2.0 is a backwards breaking release
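Since the versions do have to be special-cased, the damage can at least be kept to one small shim. A sketch that picks the arguments per major version (flag names taken from gQuigs' examples above; the helper is illustrative):

```python
# Pick debug-log arguments per major version: juju 2.x needs -T to
# avoid tailing forever, while juju 1.x rejects that flag.
def debug_log_args(version, lines=1000):
    major = int(version.split(".")[0])
    args = ["juju", "debug-log", "-n", str(lines)]
    if major >= 2:
        args.insert(2, "-T")
    return args
```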
<anita_> trying to install JUJU 2.0 as nonroot
<gQuigs> marcoceppi: this is the first change I've ran into that actually breaks anything
<anita_> but getting error
<marcoceppi> gQuigs: wait until you run `juju get` ;)
<marcoceppi> err, now juju get-config
<gQuigs> marcoceppi: could the tail be only when run in an interactive session?
<gQuigs> running commands that just don't work is fine.. it's the hanging indefinitely that really gets us
<gQuigs> so we can go both juju get and juju get-config and not care the output failed
<marcoceppi> gQuigs: open a bug, not a core developer so i can't say for sure
<gQuigs> marcoceppi: will do, thanks
<anita_> when installing juju 2.0 getting error as "ERROR unable to contact api server after 1 attempts: cannot load cookies: open /home/charm/.go-cookies: permission denied"
<gQuigs> reported - https://bugs.launchpad.net/juju/+bug/1589581
<mup> Bug #1589581: Consistant basic use of debug-log between 1.25 and 2.0 <pyjuju:New> <https://launchpad.net/bugs/1589581>
<gQuigs> bah humbug
<gQuigs> there we go
<gQuigs> I think I ask this every time I report a bug.. can we close the old project https://launchpad.net/juju?
<lazyPower> anita_ - sounds like you may have run juju with sudo at some point and logged in?
<lazyPower> interestingly enough my .go-cookies is owned by root with permissions: 0600
<anita_> lazyPower_:I followed the document juju2.0
<anita_> But I logged in as root and then su - <nonroot user>
<lazyPower> anita_ https://jujucharms.com/docs/devel/getting-started
<anita_> Hmm for me -rw------- 1 root  root       5 Jun  6 02:49 .go-cookies
<anita_> Ok I will follow this
* lazyPower changed the topic of #juju to: || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta8 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<anita_> I did exactly like this
<anita_> somewhere i missed something?
<lazyPower> my suggestion would be to remove the .go-cookies and re-login
<anita_> Ok
<anita_> let me try that way
<anita_> just to see when i changed the owner of .go-cookies, bootstrap is successful
<anita_> any idea how to delete a controller
<anita_> juju destroy-controller?
<lazyPower> yep
<anita_> ok
<lazyPower> juju help commands will lend a hand from here
<anita_> k
<anita_> I delete .go-cookies and re-login as nonroot after i destroy default controller
<arosales> is anyone not seeing the navigation toolbar @ https://jujucharms.com/docs/devel/getting-started
<anita_> now its successful
<lazyPower> arosales confirmed, i am not seeing the nav bar
<rick_h_> arosales: yes it's erroring in the JS
<lazyPower> anita_ great :)
<arosales> lazyPower: rick_h_ thanks for confirming
<arosales> rick_h_: need me to file a bug, or are folks already working on it?
<anita_> Then the same step i should follow as mentioned in doc
<rick_h_> arosales: I'd file a bug and see if either something in docs broke things or the GUI folks did
<anita_> before bootstrap, i will delete the .go-cookies and re-login?
<anita_> is it a correct method?
<arosales> rick_h_: will do, thanks
<anita_> also i have done one more extra step compared to doc, i.e : sudo chgrp lxd /var/lib/lxd/unix.socket
<anita_> is this two steps correct?
<anita_> lazyPower_: please confirm if the above two steps that followed for nonroot juju2.0 installation is valid? one is sudo chgrp lxd /var/lib/lxd/unix.socket and then deleted .go-cookies and relogin as nonroot
<bdx> how's it going everyone? happy monday!
<lazyPower> anita_ i'm not sure where that .go-cookies came from. so its hard for me to say. I would say thats a valid working fix for the error you encountered, but i dont think its required every time you setup juju, no.
<bdx> does anyone know the status of cross-model-relations?
<anita_> ok thanks a lot
<bdx> rick_h_: ^
<rick_h_> bdx: on this roadmap's cycle of work
<bdx> rick_h_: nice! thats exciting!
<bdx> thx
<arosales> bdx: hello and happy monday indeed. It seems summer is upon us as well
<magicaltrout> summer is relative
<bdx> magicaltrout: ha, is it raining for you today?
<magicaltrout> june gloom
<magicaltrout> i left london last week in rain, i arrive in san diego for the week and its grey-ish
<bdx> arosales: yea .... Portland winters last a long, dark, rainy 9 months .... so pumped for the sun
<bdx> its great for programming though
<lazyPower> magicaltrout *hattip* glad you enjoyed
<magicaltrout> indeed lazyPower had it on a few times now
<beisner> marcoceppi - curious:  i've got a "top" layer which charm-proofs ok -- but what are the intents and expectations for charm-proof wrt layers/interfaces?
<marcoceppi> beisner: proof is really only designed to run against the built layer
<marcoceppi> we've not really gotten a layer/interface proof yet
<arosales> bdx: I hear you're getting lots of sun as of late
 * marcoceppi opens a feature for it on charm-tools
<beisner> marcoceppi, ack thx.  well fwiw, proof passes on an unbuilt layer currently, which may actually be a bad thing?
<marcoceppi> beisner: not really
<marcoceppi> but not all layers are expected to pass proof, though I  suppose top layers are
<marcoceppi> beisner: but intermediate layers may not have a complete metadata.yaml, for example
<beisner> marcoceppi, indeed
<skay> is there a juju command to generate dot files?
<skay> aha, in a side channel I learned of https://code.launchpad.net/juju-viz
<lazyPower> That looks neat skay, what does it do?
<skay> lazyPower: I think it will generate a dot file with a graph showing a juju deployment, but I haven't tried it out yet, just got the url just now
<skay> lazyPower: I am working some changes to a charm and a mojo spec that will remove a website from our environment, and having a diagram to show what the current state is will make it easier for people
<skay> lazyPower: I have a doc, but it has lots of words. no pictures. generating a dot file will really help
<skay> (I am not great at using drawing programs)
<lazyPower> i completely understand skay  :)
<lazyPower> they are tedious at best
<lazyPower> have you thought about generating a bundle/export and sending it through svg.juju.solutions?
<skay> aha, in that repo in bin there is juju-dotty.py for the curious. it takes output from juju status
<skay> lazyPower: no. I am not familiar with how to do that
<lazyPower> we use the raw bundle files in merges to visualize our prs.  eg https://github.com/juju-solutions/bundle-beats-core/pull/1
<lazyPower> pardon the unwieldy url - but the markdown in the pr comment shows the magic:  http://svg.juju.solutions/?bundle-file=https://raw.githubusercontent.com/juju-solutions/bundle-beats-core/cc14520c94ed69b29c667c3d59d189ce3a6166ee/bundle.yaml
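The URL lazyPower pastes is just the service endpoint plus a `bundle-file` query parameter pointing at a publicly reachable raw bundle. A sketch of constructing it (the example raw URL is illustrative):

```python
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

def svg_url(bundle_file):
    # svg.juju.solutions fetches the bundle itself, so it needs a
    # publicly reachable raw URL to the bundle.yaml.
    return "http://svg.juju.solutions/?" + urlencode({"bundle-file": bundle_file})

url = svg_url("https://example.com/bundle.yaml")  # illustrative raw URL
```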
<cory_fu_> Anyone else seeing the side-bar missing on https://jujucharms.com/docs/devel/getting-started
<lazyPower> i suppose this may not be as  useful, i think the api requires a url to curl to fetch the bundle
<lazyPower> cory_fu_ - there's an open bug about it
<cory_fu_> Ah, ok
<lazyPower> cory_fu_ https://github.com/juju/docs/issues/1141
<lazyPower> cory_fu_ if you have a sec i'm curious if there's another way i can get the unit thats actually leaving the relationship other than using the -broken hook as its broken.
<cory_fu_> -broken will never give you a departing unit because it's only called after every single remote unit is gone
<cory_fu_> What you actually want is the -departed hook, which behaves exactly like you're thinking -broken does
<lazyPower> i have code that may be side-effecting into working
<skay> lazyPower: wow that is pretty
<cory_fu_> That is, within the -departed hook, conv.units (or hookenv.remote_unit()) will contain the unit that is leaving
<cory_fu_> And if you add a state to the conversation, it will only apply to the unit that is departing
<cory_fu_> lazyPower: ^
<skay> lazyPower: /me searches. https://github.com/marcoceppi/svg.juju.solutions yes? looks like it
<lazyPower> cory_fu_  do you have an example? https://github.com/chuckbutler/interface-etcd/blob/c58568c4ffaa9099deea8f20199d9ec501f5aeff/peers.py  -- this conversation is scope unit
 * skay reads readme and chuckles. hehe
<cory_fu_> kwmonroe: I did see that bdx said "working on a complete fix now" in https://github.com/jamesbeedy/layer-puppet-agent/issues/2 -- I mentioned that during daily and said that the commit he referenced before that looks to me like it should resolve the issue, so I'm not sure what "complete solution" is required
<cory_fu_> kwmonroe: I'd say go ahead and test the current fix
<kwmonroe> cool
<cory_fu_> lazyPower: In your etcd peers.py, the only thing you should need to do is change -relation-broken on line 28 to -relation-departed.  It should function correctly with just that change, though you should also add a "dismiss" method that removes that state once it's been processed.  Let me find you an example where we use the same pattern
<cory_fu_> lazyPower: https://github.com/juju-solutions/interface-mapred-slave/blob/9493fab49a447a317b276523595e40df064299d5/requires.py#L31
<cory_fu_> lazyPower: And it's used in the charm (now in upstream!) at https://github.com/apache/bigtop/blob/master/bigtop-packages/src/charm/hadoop/layer-hadoop-resourcemanager/reactive/resourcemanager.py#L155
<lazyPower> cory_fu_ - this is different behavior in peers i think. Every unit is getting the .departed state set
<lazyPower> its not scoped between just the departing unit and the leader. every peer is getting that state and in this case telling itself ot unregister and terminate the application.
<lazyPower> when i switch from -broken to  -departed
<cory_fu_> lazyPower: You are incorrect
<cory_fu_> :)
<lazyPower> cory_fu_ - hang on i'll gather logs and make my case
<cory_fu_> https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L315
<cory_fu_> lazyPower: That handler will match if *any* unit has the departed state.  I think what you actually want there is @when_not('cluster.joined')
<icey> charm build seems to not recognize / use the series: key in the metadata.yaml
<lazyPower> well for one
<lazyPower> i love the .dismiss method on the interface
<lazyPower> i'm clearly doing interface state handling wrong
<cory_fu_> lazyPower: Ok, I think I see the problem
<cory_fu_> You're using the pattern of setting a state on join and then immediately removing it.  You want to do that because you want to track the individual unit that is new, which is understandable.  However, because you're not leaving a state set on the unit, you have no way of tracking when there are still units remaining
<cory_fu_> So you're trying to use -broken to detect when all units are gone.  Also understandable.  However, that doesn't really play well with reactive because when -broken fires, there's no longer any remote units to be participating in conversations.  You end up setting a state on a None unit conversation, which will cause problems for you later because you can never remove it
<lazyPower> also a good point
<cory_fu_> lazyPower: What I would recommend is to have a .joined state that you don't remove in provide_cluster_details whose purpose is to keep track of all units that are still joined.  You then remove it in -departed, and you can detect "no remaining peers" by @when_not('cluster.joined')
<cory_fu_> You can keep the .joining state to make it easy to tell which unit is new.  But I'm not sure what the .declare_self state is for
<cory_fu_> Can you explain that?
<lazyPower> cory_fu_ - sure, the leader acts as the single unit we care about during turn up. It performs a single node bootstrap and waits for peers. When a peer knows its about to come online, it has to talk to the leader first. declare_self triggers the peer saying "Hey i'm here with this detail" and the leader responds with the static initial cluster configuration.
<cory_fu_> lazyPower: Also, to clarify why I said you were incorrect before.  The state does not get attached to every peer; rather, it gets attached to a single peer and that is sufficient to trigger the @when('cluster.departed') test.  Does that match up with what you were seeing in your logs?
<lazyPower> this was the source of a lot of headaches before, so this may need additional love in the layer, but peers were racing, and static configuration requires staggered start.
<lazyPower> so you have to gate based on direct communication with the etcd application running on the leader
<cory_fu_> Hrm.  This is all symptomatic of the failure of the conversation model.  It's too divergent from how people are used to thinking about relations and is causing more confusion than clarity.  :(
<lazyPower> i'll give it another go based on our findings here. There's a goldmine of feedback here
<cory_fu_> In the next pass, I want to make the underlying process more explicit by dropping the idea of "conversations".  A unit joins the relation, so you attach a state to it.  Handlers match states and the relation classes group all units that are in that state and let you operate on them individually or as a group.  (That's basically how it works now, but we tried to abstract some of that out in the Conversation model and ended up making
<cory_fu_>  it more opaque)
<lazyPower> from what i understood of that, it makes sense :)
<lazyPower> or, alternatively, we can do some better docs and maybe some example interface seminars for these use cases
<hatch> has anyone ever got the error "ERROR Get https://.... x509: certificate has expired or is not yet valid" when bootstrapping lxd (with master)
<lazyPower> our current interface docs were primarily cut/paste and remixed by me, so there's some gaps in there
<lazyPower> hatch not with the lxd provider no, i have with juju stable releases and the local provider
<hatch> lazyPower: do you recall what you did to fix it?
<arosales> devel side bar is back
<arosales> thanks matthelmke and evilnickveitch :-)
<matthelmke> :)
<arosales> https://jujucharms.com/docs/devel/ that is if folks were wondering about the context
<lazyPower> hatch https://bugs.launchpad.net/juju-core/+bug/1245550 - ooolllddd bug
<mup> Bug #1245550: ERROR TLS handshake failed: EOF waiting for stateserver <bootstrap> <ui> <juju-core:Fix Released> <https://launchpad.net/bugs/1245550>
<hatch> oh heh
<hatch> hmm
<hatch> lazyPower: do you know where this certificate is stored? So that I can check the date?
<hatch> or maybe delete it :)
<lazyPower> not with juju2, check the $JUJU_DATA Directory
<lazyPower> $home/.local/share/juju
<hatch> btw $JUJU_DATA isn't defined
<hatch> :)
<lazyPower> its in the variable reference guide
<lazyPower> hatch https://jujucharms.com/docs/devel/reference-environment-variables right thurrrr at the top
<hatch> ahh maybe because it's built and not installed
<marcoceppi> skay: there's a charm for that too, as well as http://svg.juju.solutions
<skay> marcoceppi: I didn't realize that was a real url I could use. I did a search and found your repo and was trying to install it
<skay> spiffy
<marcoceppi> skay: oh yeah, it's just a web service
<skay> marcoceppi: I started down a rabbit hole of finding jujusvg
<marcoceppi> skay: you can, with juju 2.0-beta8, just `juju deploy cs:~marcoceppi/xenial/charm-svg` (https://jujucharms.com/u/marcoceppi/charm-svg/3)
<marcoceppi> skay: it was such a pain in the ass to setup, I ended up writing a charm for it
<skay> should I be able to generate svg based on juju status --format yaml?
<marcoceppi> skay: not really, though you should be able to
<marcoceppi> skay: it'd be a nice feature
<skay> marcoceppi: it complained about icons and didn't generate an svg. I didn't want to go down a rabbithole on that if it wasn't intended that way
<skay> marcoceppi: yeah, it would be neat
<marcoceppi> skay: it expects a v3 bundle
<marcoceppi> skay: are these local charms?
<skay> marcoceppi: private ones
<skay> marcoceppi: some are private, some public
<marcoceppi> skay: all the private ones won't show up
 * skay nods
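Since charm-svg expects a v3 bundle rather than status output, one hedged way to get what skay wanted is to reshape the parsed `juju status --format yaml` services section into a v3-style bundle mapping first. The field names below are assumptions about the parsed YAML, and as noted above, private/local charms still won't render because the store has no icons for them.

```python
# Hypothetical helper: reshape parsed `juju status` services into a
# v3-bundle-style 'services' mapping that an svg renderer could accept.
# Field names ('charm', 'units') are assumptions, not a verified schema.

def status_to_bundle_services(status_services):
    """Map parsed-status service entries to a v3-ish bundle services dict."""
    return {
        name: {'charm': svc['charm'], 'num_units': len(svc.get('units', {}))}
        for name, svc in status_services.items()
    }

status_services = {
    'wordpress': {'charm': 'cs:trusty/wordpress-5',
                  'units': {'wordpress/0': {}}},
}
bundle = {'services': status_to_bundle_services(status_services)}
```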
<bdx> cory_fu, kwmonroe: sup
<kwmonroe> yo yo bdx
<bdx> hey! I've a few things to run by you
<bdx> concerning layer-puppet
<bdx> specifically puppet-agent
<kwmonroe> sure thing bdx
<bdx> kwmonroe, cory_fu, firstly -> http://apt.puppetlabs.com/
<kwmonroe> fwiw, i think layer-puppet should go away in favor of puppet-agent as the base layer.. and a new layer-puppet be a charm that includes puppet-agent and is like a deployable puppet service.
<bdx> what I'm running into is that puppet3 and puppet4 are not both supported equally on a per-release basis
<bdx> kwmonroe: entirely, thats how I've been using it .... I'm running into complications though
<bdx> e.g. xenial isn't supported for puppet3, but it is for puppet4
<kwmonroe> ugh, lame
<bdx> the puppet3 debs are the puppetlabs-release-<ubuntu version>.deb
<bdx> found here apt.puppetlabs.com
<bdx> it looks like puppet4 is packaged for all releases since precise
<bdx> kwmonroe, I guess my question is, should I make 'puppet-version' config default to puppet4
<bdx> and then throw a conditional for puppet3 that xenial isn't supported?
<bdx> or should I throw away everything puppet3
<bdx> and make the charm only puppet4
<bdx> ?
<admcleod> bdx: we need 3 for bigtop
<bdx> darn
<bdx> admcleod: mind if I enquire as to why?
<admcleod> bdx: it doesnt work with 4 :}
<kwmonroe> bdx: what's the puppet4 package name?  i don't see stuff like puppet_4.x for trusty: http://apt.puppetlabs.com/pool/trusty/main/p/puppet/
<kwmonroe> or am i looking in the wrong place?
<bdx> kwmonroe
<bdx> kwmonroe: just go to the base url
<bdx> anything with 'puppetlabs-release-pc1-<release>.deb' is puppet4
<bdx> admcleod: how does bigtop currently use puppet-agent/puppet-master
<bdx> puppet3 -> puppetlabs-release-<release>.deb
<bdx> puppet4 -> puppetlabs-release-pc1-<release>.deb
<admcleod> bdx: its masterless - i cant remember the exact issue, let me test it again now just in case i was wrong
<kwmonroe> bdx: i don't see a big deal with defaulting to 4 as long as there's a config opt to specify 3 if we need it.
<bdx> admcleod, kwmonroe: I think masterless and masterful puppet should be separate layers/charms .... the deps are entirely different, so are the implementation and use case
<bdx> kwmonroe, ok, just  add a conditional to check for xenial/vivid/wily for puppet3?
<admcleod> bdx: well. we dont need any deps for masterless, we just need to install the package. so ideally 'if puppet master not defined, dont configure it - assume its masterless'
<bdx> admcleod, kwmonroe: should I just set status to 'blocked' if ubuntu_release is xenial/vivid/wily and puppet-version => 3?
<kwmonroe> bdx: i think you mean == 3 there
<bdx> totally my bad
<bdx> yea
<kwmonroe> bdx: you can either be nice and detect that series won't have the package available, or you can just let apt fail and be like "you should have read the readme where i talked about puppet3 only available < vivid"
<bdx> gotcha
<kwmonroe> bdx: because who knows -- maybe somebody has a xenial ppa with puppet3 that they have configured as an install_source.. we shouldn't block them just in case puppet3/xenial is available in their configured env.
<bdx> totally
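The gating bdx and kwmonroe settle on (default to puppet4, only block puppet3 where the series can't get it) could be sketched like this. The deb names follow the pattern bdx quotes; the puppet3 series set is an assumption for illustration, and per kwmonroe a real charm might skip the check entirely when an install_source override is configured.

```python
# Hedged sketch of per-series puppet version gating. PUPPET3_SERIES is
# an assumed availability list (the discussion says xenial/vivid/wily
# lack puppet3), not verified against apt.puppetlabs.com.

PUPPET3_SERIES = {'precise', 'trusty'}

def release_deb(puppet_version, series):
    """Return the puppetlabs release deb name, or None (-> blocked status)."""
    if puppet_version == 3:
        if series not in PUPPET3_SERIES:
            return None  # charm would set status 'blocked' here
        return 'puppetlabs-release-{}.deb'.format(series)
    # puppet4 ships via the PC1 repo debs for all releases since precise
    return 'puppetlabs-release-pc1-{}.deb'.format(series)
```

A config default of `puppet-version: 4` plus this check gives the "be nice and detect it" behavior rather than letting apt fail.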
<cory_fu_> I'd like clarification on why bigtop doesn't work with puppet 4.  Are there significant syntax changes that make the bigtop manifests not work?
<cory_fu_> admcleod: ^
<admcleod> cory_fu_: testing
<bdx> kwmonroe: thats the other thing .... because of the ubuntu release in the deb url, the functionality of layer-apt is limited e.g. install_sources config can't be used :-(
<kwmonroe> so bdx, i like the idea of a single puppet layer that can work in master/less mode, but i don't know enough to know how disjoint those 2 modes are.  if it's totally separate code paths depending on the config, that's one thing.. but if it's master = masterless + some, then i think we can still have a single puppet layer.
<bdx> kwmonroe: here's the interesting part
<bdx> kwmonroe: puppet-agent for puppet3 depends on puppet AND puppet-common
<bdx> puppet-agent for puppet4 depends on neither
<bdx> kwmonroe, the dependency for masterless puppet is included in the default archives
<beisner> thedac, can you land this?  testing ci/push flow on hacluster post-move.  https://review.openstack.org/#/c/325478/
<mattrae> hi, with juju 2.0 is it possible to use a bundle with stages like was supported with juju-deployer. 'juju deploy bundle.yaml stage1' isn't working for me in this example https://pastebin.canonical.com/158108/
<beisner> thedac, also this one plz & thx:  https://review.openstack.org/#/c/324795
<rick_h_> mattrae: no, the built in bundle support is basic and does not do that.
<bdx> kwmonroe: the masterless puppet dep doesn't change between 3 and 4, unless you add the deb sources for puppet3, which just gives you the full set of puppet releases
<rick_h_> mattrae: the deployer was updated to work with 2.0 if you need it
<mattrae> rick_h_: ahh cool, thanks!
<bdx> but adding the puppet4 deb has no effect on masterless puppet deps, and as it looks to me masterless puppet3 doesn't need deb sources added either
<admcleod> bdx: cory_fu_ hrm looks like it might just be a sys path problem
<bdx> admcleod, cory_fu, kwmonroe: cloud-init doesn't have support for puppet4 yet ....
<cory_fu_> Why does cloud-init need to support puppet4?
<admcleod> bdx: cory_fu_ kwmonroe right, puppet3 installs /usr/bin/puppet, 4 is /opt/puppetlabs/bin/puppet which isnt added to juju sys path so the hook fails
<bdx> I was just saying ... a lot of people use cloud-init to automate puppet getting provisioned on their infra
<bdx> totally
<cory_fu_> Really?  I wasn't aware that cloud-init had any tie-ins with puppet, but then, I don't know that much about cloud-init
<bdx> theres a cloud-init puppet module
<cory_fu_> I see
<bdx> it allows me to spinup instances and have them auto puppet from my puppet master
<bdx> very usefull
<bdx> it's use should be replaced by layer-puppet
<bdx> for my infra at least
<bdx> useful*
<admcleod> bdx: cory_fu_ kwmonroe /opt/puppetlabs/bin is added to /etc/environment but i guess its not re-sourced after its added. so hooks fail, and juju run fails.
<bdx> yeah, I think the cloud-init puppet module needs a similar fix
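The failure admcleod traced is a PATH problem: puppet4 installs to /opt/puppetlabs/bin, /etc/environment gains that entry, but running hooks never re-source it. One hedged workaround sketch is for the charm to patch its own environment before shelling out; the paths are the ones from the discussion, not a verified fix.

```python
# Sketch: make /opt/puppetlabs/bin visible to hook code without relying
# on /etc/environment being re-sourced. Illustrative only.
import os

def ensure_on_path(directory, path):
    """Prepend `directory` to a PATH string unless it's already present."""
    if directory in path.split(os.pathsep):
        return path
    return directory + os.pathsep + path

# e.g. early in an install hook, before calling `puppet agent`:
os.environ['PATH'] = ensure_on_path('/opt/puppetlabs/bin',
                                    os.environ.get('PATH', ''))
```

A symlink into /usr/bin is the other option tried above, but as noted it then trips over the missing /etc/puppet/hiera.yaml.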
<beisner> a drive-by o/ howdy -> bdx
<bdx> beisner! you got me excited last week with all that multi-hypervisor support talk
<bdx> beisner: by the way, who is heading that initiative, you?
<admcleod> bdx: cory_fu_ kwmonroe created a symlink but it fails because theres no /etc/puppet/hiera.yaml. this will need more testing and i have to EOD, can pick it up again tomorrow morning
<beisner> bdx! yo
<bdx> admcleod: sweet, I'll have pushed some new up by the am, I'll ping you tomo
<admcleod> bdx: AFAIR, there are quite a few breaking changes between 3 and 4 - not just the config but also how relaxed it is about broken syntax and other issues. i seem to recall it not being a simple upgrade
<beisner> bdx, the collective 'we' os-charmers really.  we've got test bundles and a readme documenting the process.
<bdx> admcleod: yea, its not, we just underwent the upgrade here at dhc .... huge refactoring to get our puppetstack code base to puppet4
<bdx> beisner: where is this happening?
<admcleod> bdx: yeah ive mostly repressed the experience :}
<beisner> bdx, PoC @ http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/README-multihypervisor.md   which exercises nova kvm and nova lxd in a single deploy, firing up both types of instances.
<bdx> beisner: has juju the comprehension of multiple image uuid's now, or is that part of the WIP?
<beisner> bdx that example is completely independent of juju features (and currently only really validated @ 1.25.x, but i see no reason 2.x would be an issue, known bugs notwithstanding)
<beisner> bdx, but:  i believe there are some juju features growing to support the flip side of that.  ie. juju deploying on top of that multi-hv cloud.   right now, i've only validated that the cloud itself listens and responds appropriately to `nova boot` type of operations.
<bdx> nice
<bdx> awesome
<beisner> bdx, so tldr:  glance images are tagged with hypervisor types, then when a user nova boots one of those images, it 'just works'
<bdx> beisner: nice, so you are just adding '--property hypervisor_type', or is there other config?
<beisner> that's the one bdx
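The selection beisner's tl;dr implies can be shown with plain dict stand-ins (not the glance/nova APIs): images carry a hypervisor_type property, and booting a tagged image only targets hosts of that type.

```python
# Illustrative sketch of hypervisor-tagged image selection in a
# multi-hypervisor cloud. Dicts stand in for glance image records.

def images_for_hypervisor(images, hv_type):
    """Pick image names whose hypervisor_type property matches."""
    return [img['name'] for img in images
            if img.get('properties', {}).get('hypervisor_type') == hv_type]

images = [
    {'name': 'xenial-kvm', 'properties': {'hypervisor_type': 'qemu'}},
    {'name': 'xenial-lxd', 'properties': {'hypervisor_type': 'lxd'}},
]
```

With that tagging in place, a `nova boot` of the lxd image lands on an lxd compute host and "just works", as described above.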
<beisner> bdx, worth having a look at the test bundle though too
<bdx> yea, I am now
<bdx> thats a good repo
<bdx> great*
<bdx> excellent examples
<beisner> bdx, that's where openstack bundles start life.  then we test, generally automate, and selectively publish once everyone's convinced.  ;-)   just bear in mind anything in there is subject to breakage as that's where we iterate and dev things.
<bdx> entirely
<bdx> beisner: I've been running openstack on lxd all on one node, I have it currently supporting a copy of all of our staging infra .... its f***ing awesome .... check it
<bdx> http://imghub.org/image/U3D4
<bdx> http://imghub.org/image/UvyQ
<bdx> http://imghub.org/image/UD10
<bdx> beisner: lxd openstack bundle + dvr + vlan tenant nets .... bet I'm the first :-)
<bdx> 10x 128GB Samsung 850 Pros under the raidz :-)
<bdx> http://imghub.org/image/UnxF
<beisner> bdx, sweet!
<beisner> hi marcoceppi, any known issues with md rendering @ charm store?  i've got this [1] which renders a code block all in one line (i believe unexpectedly) [2].
<beisner> [1] https://raw.githubusercontent.com/openstack/charm-tempest/master/README.md
<beisner> [2] https://jujucharms.com/u/openstack-charmers-next/tempest/xenial/0
<beisner> ie. i've used 3 diff md tools and they all render it as two lines in the code block
<arosales> beisner: ya it is known issue, let me see if I can find the bug link
<beisner> arosales, ack thx for the info.  i'm not too worried about it.  mainly wanted to confirm that it's a known thing vs. adjusting the .md content specifically for the cs:.
<arosales> beisner: https://github.com/CanonicalLtd/jujucharms.com/issues/276
<arosales> beisner: thanks for confirming it is a known issue or if a bug needed to be filed
<jhobbs> https://mail.google.com/mail/u/1/#inbox
<jhobbs> oops
<mattrae> hi, i'm deploying services to lxc containers using juju 2.0 beta8 and maas 2.0 beta6. i see the containers are getting an interface bridged to physical interfaces that are configured. one issue i'm seeing is even though i have a default gateway configured on one of the networks, its not getting added to the container. https://pastebin.canonical.com/158125/
<mattrae> when i add the default gateway manually its working. i'm wondering what i'm missing to have the default gateway added
<arosales> mattrae: hmm, interesting, not sure what the correct answer here is. Suggest if no one else chimes in to give the juju mailing list a try and possibly get more eyes on it
<bdx> mattrae: have you configured the maas space to use the gateway?
<bdx> mattrae: I'm sure you've already seen this, but the bottom half of this page might hook you up if not -> https://maas.ubuntu.com/docs2.0/rack-configuration.html
<gugpe> I need to debug a service that is stuck in a "pending" state. How do I do that?
<gugpe> Specifically I'm trying to deploy a custom (local) charm I wrote.
<gugpe> I have all the start-up lifecycle hooks "install config-changed start", but nothing in them.
<arosales> gugpe: is the unit or the machine stuck in pending
<arosales> gugpe: I am assuming you have a machine but the unit for the given charm[service] is stuck in pending, correct?
<arosales> gugpe: also what version of juju 1.x or 2.x?
<gugpe> Correct, the lxc machine is pending
<gugpe> 2.0-beta8-xenial-amd64
<gugpe> Both machine and unit.
<gugpe> sudo juju deploy /home/wurde/GitLab/charm-gitlabomnibus
<gugpe> that's the command I'm sending.
<gugpe> What's the best way to debug this?
<bdx> gugpe: don't be executing this as root for a start
<bdx> gugpe: there is a debug flag '--debug' I thing
<bdx> think
<gugpe> Ok. I needed to set ownership of controllers.yaml. I'll look for the --debug flag.
<gugpe> The juju status message: "allocating: Waiting for agent initialization to finish"
<gugpe> I'm running on xenial series
#juju 2016-06-07
<lazyPower> gugpe - how long has the unit sat in pending?
<lazyPower> gugpe - what series charm are you deploying?
<gugpe> infinite
<gugpe> xenial
<gugpe> ive noticed progress
<gugpe> I disabled ufw
<gugpe> things seem to be rolling now
<mattrae> thanks arosales and bdx for the suggestions :) i'll see if that page has what i need or i'll try the juju mailing list
<arosales> mattrae: that was all bdx, but if you are still stuck be sure to mail the list cc maas list as well
<arosales> gugpe: sorry, missed your reply, reading back-scroll now
<gugpe> confirmed
<gugpe> ufw was blocking juju agent
<arosales> gugpe: ok, if the machine is pending then the issue is with the provider, not your machine
<gugpe> the provider is localhost lxd
<lazyPower> gugpe - ah yeah thats in the docs :)
<lazyPower> tricky one to debug at first too
<arosales> gugpe: if this was your first time using local provider the initial deploy can take some time as it needs to download the cloud image
<arosales> hmm seems ufw got you resolved
<arosales> lazyPower: is that in the docs?
<gugpe> It wasn't a bootstrap. I actually ran into this earlier.
<arosales> ok
<gugpe> That's kind of how I came to this conclusion. I didn't see it in the docs though.
<lazyPower> i'm full of lies, i swore i saw that in the getting started guide though
<arosales> I don't see it here, and that is most likely where it should live
<arosales> but I haven't had to do this on stock Xenial cloud image
<arosales> perhaps desktop and cloud are different though . . .
<arosales> hmmm, there was an askubuntu question on this sometime ago
<arosales> folks weren't really happy about disabling ufw, which is understandable
<gugpe> right, read that SO post. I suspect the better way is to add rules to ufw.
<gugpe> ufw rules to enable juju agent locally operate over lxdbr0
<arosales> or juju should do this on the user's behalf on install, but be very transparent about it
<lazyPower> arosales i filed a bug about it https://github.com/juju/docs/issues/1142
<gugpe> I'd be fine either way. Better docs, or do this as default.
<arosales> lazyPower: well I am not sure we should just tell folks to disable their firewall
<lazyPower> i think the package should be adjusted to apply those ufw rules on install
<arosales> lazyPower: I think we can do this a little more tastefully with juju core
<arosales> lazyPower: ya seems juju core should be more knowledgeable here, and also observable
<lazyPower> but until that happens, a doc callout to investigate UFW rules/disable is better than the nothing we have right now.
<arosales> signal to the user that this firewall rule is getting updated
<lazyPower> my humble opinion :)
<arosales> lazyPower: gugpe: I am activating ufw on my system for a test to see if I can reproduce. If so I'll file a bug
<arosales> lazyPower: thats reasonable. I'll comment in the issue with the lp bug
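Along the lines of gugpe's suggestion (rules instead of disabling the firewall), a small helper could build the ufw allow rules for the lxd bridge. The two-rule set here is an assumption for illustration, not a verified minimal ruleset for the juju agent.

```python
# Hedged sketch: build ufw commands to allow juju <-> lxd traffic on
# the bridge, rather than `ufw disable`. Interface name from the
# discussion; the ruleset is illustrative, not validated.

def ufw_bridge_rules(bridge='lxdbr0'):
    """Commands to allow all traffic in/out on a bridge interface."""
    return ['ufw allow in on {}'.format(bridge),
            'ufw allow out on {}'.format(bridge)]
```

These would be run (as root) at package install time if juju were to handle it on the user's behalf, with the change surfaced to the user.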
<arosales> lazyPower: I thought there was a command to get leader on a service
<arosales> lazyPower: did I imagine that command?
<lazyPower> arosales is-leader
<lazyPower> juju run --service foobar 'is-leader' --format=yaml
<lazyPower> i've been hacking around this with my more recent work
<arosales> ah
<arosales> juju help commands | grep leader
<arosales> didn't return anything for me
<lazyPower> so, what i propose is this
<lazyPower> https://www.evernote.com/l/AX6rlgwUROtLw7DCTvObHDxZ3s3gRfL4MjwB/image.png
<arosales> gugpe: btw have you seen https://jujucharms.com/gitlab/precise/5
<gugpe> arosales: thanks. I have seen the existing gitlab charms.
<gugpe> as with most things my requirements are more specialized.
<gugpe> I'm actually deploying the gitlab omnibus package with mattermost chat server and what not.
<lazyPower> gugpe - highly recommended you look into layers then
<lazyPower> a lot of that work has been done for you, such as deploying mattermost
<gugpe> I'll publish when I can. I'll look at layers.
<lazyPower> cmars wrote an excellent mattermost layer
<arosales> gugpe: ah good to hear
<cmars> lazyPower, thanks. gugpe, one word of caution, the charm only works with juju 2.0
<bdx> gugpe: the gitlab solution is legit
<arosales> lazyPower:  I like it. The swarm bits currently say something like, "Swarm leader running "
<cmars> because it uses resources
<cmars> mattermost, that is
<bdx> gugpe: you can mod it post deploy to your specialized config
<lazyPower> arosales - yeah thats an older pattern. it was a full takeover of potentially useful health information.
<lazyPower> i want leader status regardless of the actual message coming along on the pipeline. it's a transparent way for an admin to know this unit is special
<cmars> gugpe, it could be adapted to install from mattermost releases on github, just something to watch for if you try installing the latest cs:~cmars/mattermost
<arosales> lazyPower: would this only be for leaders or would non-leaders also have this message
<arosales> I would prefer to only see this on applications which have leaders
<lazyPower> arosales non-leaders get no special messaging additions. they get plain status output of whatever you sent through.
<lazyPower> oh sure, if you mean something like say.. wordpress that has no real leader doing coordination
<arosales> yup
 * lazyPower nods
<lazyPower> i intend this to be useful for charms making use of the feature
<arosales> you putting that into core?
<lazyPower> I have a little patch method at the bottom of my layer
<lazyPower> this could be adjusted in charm-helpers, or in charms.* namespace. I dont think it needs to go in core
<lazyPower> considering its more up to the author to determine if the leader is special
<arosales> lazyPower: gotcha
<arosales> lazyPower: if your still around I am looking at https://github.com/juju-solutions/bundle-observable-swarm/blob/master/README.md
<arosales> the scp of creds needs to come from the leader
<lazyPower> arosales - there should be swarm credentials on any swarm node
<arosales> so we could do something like juju run --service swarm "is-leader" --format=yaml | grep -A 1 True | awk '{ print $2 }'
<arosales> but not sure how elegant that is
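A sturdier take on the grep/awk pipeline above: `juju run --service swarm 'is-leader' --format=yaml` yields one result per unit, so parse that and pick the unit whose stdout is True. The results are modeled here as an already-parsed list of dicts, and the field names (`UnitId`, `Stdout`) are assumptions about the YAML shape.

```python
# Hedged sketch: find the leader unit from parsed `juju run ... is-leader`
# output. Field names are assumed, not a verified juju schema.

def find_leader(results):
    """Return the unit whose is-leader output was True, else None."""
    for entry in results:
        if entry.get('Stdout', '').strip() == 'True':
            return entry['UnitId']
    return None

results = [{'UnitId': 'swarm/0', 'Stdout': 'False\n'},
           {'UnitId': 'swarm/1', 'Stdout': 'True\n'}]
```

That unit is then the one to scp the swarm credentials from.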
<arosales> lazyPower: not in my testing
<lazyPower> if its not currently it will be soon
<arosales> oh
<lazyPower> i have a branch that has that enabled. i'm sorry i'm so far behind
<arosales> ok then. I'll just file a bug as a reminder :-)
<arosales> lazyPower: thanks
<lazyPower> delivering on beats + etcd has been a very labor intensive spike. I'm about to the point where I can switch feet and finish that last little bit, and get my bundles ready for the store.
<arosales> *fixing* beats + ects has been labor intensive :-)
<arosales> etcd, that is
<lazyPower> packetbeat code is ready (i think) - it needs all the project meta and it'll be ready for the bundle as well
<lazyPower> dockerbeat is lagging, i need to ping the maintainer about -stable release and it will need the same as pb, then its g2g
<arosales> thanks for working on that
<kjackal> admcleod: have you ever used Mahout?
<admcleod> kjackal: no
<admcleod> kjackal: why do you ask?
<kjackal> I am trying to figure out what the valid use-cases are so that I make sure the Mahout library is installed in the right places
<admcleod> kjackal: are you doing it as a subordinate?
<kjackal> admcleod: yes
<admcleod> kjackal: well you can use it with mapreduce, spark, flink.. i think you only need it on the unit you're executing the job from
<kjackal> I think so too, but then again there is this mahout shell
<admcleod> kjackal: i have a cluster running, ill install it on one slave and see what happens
<kjackal> thanks
<admcleod> kjackal: for MR it only needs to be on the unit which is submitting the job, as the libs are bundled as part of the MR job and distributed to the slaves. for spark/flink im not sure but would assume the same
<kjackal> so, yes. Since it is essentially a library you can deliver it in a "fat" jar everywhere
<kjackal> But you can always add it to your classpath, right?
<admcleod> kjackal: yes. i think we only need it on the client
<admcleod> kjackal: the only difference adding it to any other units will make is that if it is a slave/worker the job may potentially begin to execute slightly (a few seconds) faster, if its a non-slave/worker you will be able to submit jobs from that unit
<admcleod> kjackal: .. or its own unit, i.e. pig
<kjackal> I will try to follow https://github.com/hixiaoxi/hixiaoxi.github.io/wiki/Installing-and-Testing-Mahout and see what happens
<admcleod> kjackal: yeah thats more or less what i just did
<kjackal> nice thanks!
<gnuoy> tinwood, https://github.com/openstack-charmers/charms.openstack/pull/8 is up for review if you have some time.
<tinwood> gnuoy, yep, I'll take a look.
<gnuoy> tinwood, thanks
<gnuoy> tinwood, thanks for the review
<gnuoy> jamespage, I have a pull request to add ha support to the charms_openstack module. Do you want to give it the once over? Otherwise I'll ask tinwood to hit the button.
<gnuoy> https://github.com/openstack-charmers/charms.openstack/pull/8
<jcastro> mbruzek: lazyPower: wrt. the view code discussion on the list, I am having a hard time finding the source to the elasticsearch charm
<mbruzek> jcastro: if it is recommended you can get the source using "charm pull"
<mbruzek> jcastro:  I am in meeting right now, but I can help later
<jcastro> ack
<lazyPower> jcastro - it still lives in onlineservices-charmers namespace
<lazyPower> https://launchpad.net/~onlineservices-charmers/charms/trusty/elasticsearch/trunk
<jcastro> hey lazyPower, nice work fixing that one.
<lazyPower> o/ i do what i can
<bryan_att> gnuoy: ping
<gnuoy> bryan_att, hi
<bryan_att> gnuoy: I tried your script, and ran into an error - the error is "ERROR cannot add service "mysql": service already exists" (since I just deployed OPNFV via the JOID installer)
<gnuoy> bryan_att, that should be ignorable
<bryan_att> gnuoy: ok, then the next error is "WARNING failed to load charm at "/home/ubuntu/save/joid/trusty/congress": open /home/ubuntu/save/joid/trusty/congress/metadata.yaml: no such file or directory"
<bryan_att> gnuoy: maybe related to the earlier error "build: Unable to locate layer:openstack-api. Do you need to set LAYER_PATH?" ?
<bryan_att> gnuoy: also not sure what the function of the http_proxy setting is (using the script from http://paste.ubuntu.com/16952298/)
<gnuoy> bryan_att, yeah, it'll be the http_proxy causing the problem, that's specific to my env. Try removing the "export http_proxy=..." line and rerunning
<bryan_att> gnuoy: I removed the http_proxy setting and it got farther; note also the keystone charm did not work as there is also a keystone service defined. I'm watching now to see how far it gets
<gnuoy> bryan_att, keystone will be a problem because the version already deployed does not have the congress fix.
<bryan_att> gnuoy: how do I get the JOID installer to include your keystone patch? (ping: narindergupta)
<narindergupta> bryan_att: you can destroy service and redeploy
<gnuoy> narindergupta, JOID will redeploy the same (wrong) version of keystone won't it? (I haven't used joid)
<tinwood> gnuoy, is charmhelpers source only on bzr or is it on git now too?
<gnuoy> narindergupta, where does JOID pick up the keystone from ? the charm store? If so we could run JOID and then do and a juju upgrade-charm --switch to upgrade to an updated local version
<gnuoy> tinwood, just bzr I believe
<tinwood> gnuoy, I thought so.  Now I just have to find that page I found for you re: bzr specs for pip
<jamespage> beisner, btw I added some newton and 'branch' targets to oct
<jamespage> for newton at least
<narindergupta> gnuoy: i am using git location from openstack
<gnuoy> narindergupta, is it straight forward for bryan_att to update joid to pick up the charm from an alternative repo?
<narindergupta> gnuoy: i git clone first then use charm: local:trusty/keystone
<narindergupta> he has to change the fetch.sh file
<gnuoy> narindergupta, fantastic, that should be simple
<narindergupta> he has to change fetch_charm.sh and give the git clone for bzr command
<narindergupta> to download then rest will fall through
<gnuoy> narindergupta, I'd like him to try using 'git@github.com:gnuoy/charm-keystone.git' . Is there more to is than just changing git@github.com:openstack/charm-keystone.git to git@github.com:gnuoy/charm-keystone.git in fetch.sh ?
<beisner> jamespage, ack thx
<narindergupta> gnuoy: this is file for opendaylight https://gerrit.opnfv.org/gerrit/gitweb?p=joid.git;a=blob;f=ci/odl/fetch-charms.sh
<narindergupta> gnuoy: bryan_att: please change ./joid/ci/odl/fetch-charm.sh
<narindergupta> and search for keystone and change the location of keystone as suggested by the gnuoy
<gnuoy> which is: git@github.com:gnuoy/charm-keystone.git
<narindergupta> gnuoy: currently i am using: git clone https://github.com/openstack/charm-keystone.git $distro/keystone
<bryan_att> gnuoy: the repo reference doesn't work for me. I had to change it to "https://github.com/gnuoy/charm-keystone.git"
<narindergupta> bryan_att: thats correct bryan_att
<gnuoy> bryan_att, yep, sounds good.
<bryan_att> narindergupta: if I'm deploying for "nosdn", do I need to modify the file in the ODL folder or somewhere else?
<narindergupta> it should be ./nosdn/ folder
<bryan_att> ok, trying now
<jamespage> gnuoy, can I get your opinion on a keystone charm thing
<jamespage> gnuoy, ?
<gnuoy> jamespage, sure
<jamespage> gnuoy, okies
<jamespage> gnuoy, so the newton keystone package ships a site config called 'keystone'
<jamespage> gnuoy, and the charm enables one called wsgi-keystone
<jamespage> gnuoy, should I rework the charm to overwrite the package provided one, or just disable the package provided one and keep with the wsgi-keystone named version?
<jamespage> my tendency is to the former and make that the standard going forward
<gnuoy> jamespage, agreed
<jamespage> gnuoy, https://review.openstack.org/#/c/326597/
<jamespage> gnuoy, nope that does not work...
 * jamespage tries again
<cholcombe> jamespage, my first stab at this rgw race: https://gist.github.com/cholcombe973/2a6601456cd0ae1e6612695776b7e5a9  what do you think?
<bdx> cory_fu: is there any functional difference between https://github.com/jamesbeedy/layer-puppet-agent/blob/7f84fcdcb8c615c5de91f0f94163cde64c4f550d/reactive/puppet_agent.py#L95-L104 and https://github.com/jamesbeedy/layer-puppet-agent/blob/master/reactive/puppet_agent.py#L110-120
<bdx> cory_fu: I guess I'm unfamiliar with what that is doing .... is PuppetConfigs.puppet_active(p) over p.puppet_active() just a style thing?
<bdx> I feel like it has deeper implication though
<bdx> admcleod, cory_fu: I want to revert to the p.puppet_active() syntax for consistency if it makes no difference to you
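To bdx's question: in Python there is no functional difference. Calling the function through the class with the instance as the first argument is exactly the bound-method call; the choice is purely stylistic. The class and method names below just mirror the linked layer code, with a made-up attribute for demonstration.

```python
# Demonstration that ClassName.method(instance) == instance.method():
# Python bound methods are just the class function with the instance
# passed as the first argument. Names mirror the layer under discussion;
# the _active attribute is hypothetical.

class PuppetConfigs:
    def __init__(self, active=True):
        self._active = active

    def puppet_active(self):
        return self._active

p = PuppetConfigs()
# Identical results -- the difference is only style/consistency:
same = PuppetConfigs.puppet_active(p) == p.puppet_active()
```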
<rick_h_> cargonza: do you have the release schedule for OS handy?
<bac> jujucharms.com is down and the issue is being investigated.  This will affect new deploys for charms from the charmstore.
* bac changed the topic of #juju to: jujucharms.com currently is off-line || Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta8 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<cargonza> rick_h_ Ubuntu OS release?
<rick_h_> cargonza: charm release schedule
<rick_h_> cargonza: e.g. what's the target for the 16.07 release?
<cargonza> 16.07 charms is end of July - 7/22 from our last discussions
<rick_h_> k
<petevg> I've got a testing question: in juju 2.0 beta8, I'm still getting an error about environments.yaml being missing when I run "juju test". Is there a different method that I can use to run amulet tests right now? (I can unpack everything in the wheelhouse into a virtualenv, and just run the tests manually, or with nose, but that gets annoyingly clunky fairly quickly ...)
<bac> jujucharms.com is back
<magicaltrout> \o/
<lazyPower> petevg - we in ~containers tend to use tip of the testing tooling. We're using python-jujuclient and juju-deployer from tvansteenburgh's ppa, and we're using bundletester to execute the tests
<lazyPower> petevg - fyi, this is also included in charmbox:devel if you're into that sort of thing. You should be able to pick up charmbox, point it at a cloud, cd into the charms dir and then kick off bundletester
<petevg> lazyPower: got it. Thank you.
<skay> squeeee I got an easter egg from a typo. :D
<skay> it's gone in the new version though
<skay> I'm gonna typo everything now
<mramm> skay:  :)
<ejat> hi
<ejat> ERROR autorest:WithErrorUnlessStatusCode POST <-- im getting this while trying to connect to azure
<stokachu> cherylj: ^ seen this before?
<ejat> /oauth2/token?api-version=1.0 failed with 400 Bad Request
<ejat> ERROR autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-xxxx-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request
<cherylj> no, I've never seen that before.  Is that juju 2.0?
<stokachu> yea beta8
<ejat> PS C:\Users\Lenovo> juju --version
<ejat> 2.0-beta8-win10-i386
<cherylj> ejat: are you getting this during bootstrap?
<ejat> cherylj: yups
<cherylj> ejat: can you run bootstrap with --debug and pastebin the output?
<ejat> cherylj: http://paste.ubuntu.com/17098969/
<cherylj> ejat: okay, let me poke around a bit, see what I can figure out
<ejat> im checking my subscription id .. and authentication with azure cli
 * ejat done .. double check my subscription id n tenant id 
<ejat> its working fine if i tried to bootstrap in aws
<ejat> cherylj: managed to poke someone?
<cherylj> ejat: I haven't gotten much time with it yet, but one of the more knowledgeable folks wrt azure is going to be coming online soon
<cherylj> that would be axw
<ejat> okie cherylj thanks so much
<cherylj> ejat: are you using a user/pass authentication for azure?
<ejat> auth-type : userpass
<cherylj> ok
<cherylj> ejat: can I ask you to open a bug for your issue?  It'll help me pass it on to someone
<ejat> under juju @ juju-core
<cherylj> yeah, here:  https://bugs.launchpad.net/juju-core
<ejat> https://bugs.launchpad.net/juju-core/+bug/1590172
<mup> Bug #1590172: ERROR cmd supercommand.go:448 autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-xxxx-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request <juju-core:New> <https://launchpad.net/bugs/1590172>
<bryan_att> gnuoy: ping
#juju 2016-06-08
<arosales> lazyPower: is https://jujucharms.com/observable-kubernetes fixed for the etcd issues
<lazyPower> negative
<arosales> ok
<arosales> I won't recommend that as a demo atm, but as soon as it is ready look out :-)
<gulu> I want to manually expose a port, how do I do that?
<gulu> My initial guess was to run a `juju run --machine 0 "open-port 3000"`
<gulu> but that failed "unknown command open-port"
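The failure is expected: `open-port` is a hook tool, and hook tools are only available in a unit's hook context. `juju run --machine` gives a plain machine shell, while `juju run --unit` runs inside the unit's hook environment. A sketch (the unit name is illustrative, and the commands are guarded so they only echo when juju isn't installed):

```shell
run_or_echo() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi; }

# fails: machine context has no hook tools such as open-port
run_or_echo juju run --machine 0 "open-port 3000"

# works: unit context provides the hook environment (unit name is illustrative)
run_or_echo juju run --unit myapp/0 "open-port 3000"

# the port is still only reachable once the service is exposed
run_or_echo juju expose myapp
```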
<Andrew_jedi> Guys, I am deploying the rabbitmq-server charm on trusty, but instead I am seeing the charm for xenial getting deployed?
<kakakka> Here's a mind bender. Why would the HOME var not be found inside an install hook?
<kakakka> `expand_path': couldn't find HOME environment
<Andrew_jedi> jamespage: ^^
<kakakka> Andrew_jedi: you can check if charm has explicitly set a series in config
<Andrew_jedi> kakakka: Nope, no series
<kakakka> you can also manually set when deploying
<kakakka> you can also check your model config to see if a 'default' is set
<kakakka> are you deploying from xenial?
<Andrew_jedi> kakakka: Nope, trusty
<Andrew_jedi> kakakka: Where is the model config?
<kakakka> what juju version are you running?
<kakakka> juju get-model-config
<kakakka> for 2.0-beta8-xenial-amd64
<Andrew_jedi> kakakka: 1.25.5-trusty-amd64
<kakakka> search docs then, i dunno 1.25
<kakakka> you could try juju get-model-config
<kakakka> the version jump between 1.x and 2.x is staggering
<kakakka> lots of rough edges and changes going on here in jujuland
<kakakka> Trying to gem install rails in my charm and getting a "couldn't find HOME environment"
<kakakka> The rails gem seems to want to `File.expand_path('~/.railsrc')` and that's causing problems
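Hooks run as root without a login shell, so `$HOME` can be unset and anything calling `File.expand_path('~/.railsrc')` blows up. Exporting `HOME` before invoking gem is a common workaround; this is a sketch, and `/root` plus the gem command line are assumptions for a root-run install hook:

```shell
# Hooks lack a login environment, so set HOME explicitly before tools that
# expand '~' paths.  /root is an assumption for a root-run install hook.
export HOME="${HOME:-/root}"

# deterministic demonstration: a child process sees the value we hand it
child_home=$(env HOME=/root sh -c 'echo "$HOME"')
echo "child sees HOME=$child_home"

# in a real install hook you would then run, e.g.:
#   gem install rails --no-document
```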
<jamespage> Andrew_jedi, you can force the version by deploying cs:trusty/rabbitmq-server
<jamespage> Andrew_jedi, when you say 'on trusty' do you mean from a trusty client?
<Andrew_jedi> jamespage: I just did that after reading some docs and it worked.
<jamespage> Andrew_jedi, the charm store will be making a decision if you just do juju deploy rabbitmq-server as to which series is provided and used - I think that has switched to xenial as default now
<Andrew_jedi> jamespage: ohhh, thanks for this info. Is there a way i can override the default?
<jamespage> Andrew_jedi, yes you can set the default-series value for the environment
<jamespage> default-series: trusty
<jamespage> for example
<Andrew_jedi> jamespage: Perfect, that's what i was looking for
<Andrew_jedi> :)
<jamespage> Andrew_jedi, yw
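In Juju 1.x the default series jamespage mentions is set per environment in `environments.yaml`; a sketch, where the environment name and type are illustrative:

```yaml
environments:
  maas-prod:          # environment name is illustrative
    type: maas
    default-series: trusty
```

On a running 1.25 environment the same setting can, as far as I know, also be changed with `juju set-env default-series=trusty`.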
<firl> jamespage: for Bug 1590257, should I have just checked on the mailing list or here before posting it?
<mup> Bug #1590257: ceilometer version has bug, can you increment verison <ceilometer (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1590257>
<jamespage> gnuoy, changed my mind on approach for https://review.openstack.org/#/c/326597/
<jamespage> having two files named keystone.conf that the charm manages is not helpful or functional
<nagyz> hi guys
<nagyz> according to the official documentation, 'juju charm get' should work - however, it tells me 'unrecognized command'. 'charm-get' also doesn't work after installing charm-tools.
<nagyz> this is with the latest stable ppa.
<nagyz> is the documentation wrong or am I doing something wrong? :)
<nagyz> there is also a pull-source which is not mentioned in the 'deploying charms offline' document
<ejat> anyone can help me with bug 1590172
<mup> Bug #1590172: ERROR cmd supercommand.go:448 autorest:WithErrorUnlessStatusCode POST https://login.microsoftonline.com/fb30bf07-xxxx-xxxx-xxxx-02ef08680fb9/oauth2/token?api-version=1.0 failed with 400 Bad Request <juju-core:New> <https://launchpad.net/bugs/1590172>
<gnuoy> jamespage, I'm looking into a bug in nova-cc that landscape have found, and I seem to be seeing the charm attempt to start a nova service; it reports 'start/pre-start', but the logs in /var/log/{upstart,nova}/* have an earlier timestamp and the service is not running. Have you seen anything like that? It looks like the upstart pre-start is running but not the main script block. I can't see anything wrong with the pre-start bit fwiw
<piao> What's the password for the default 'ubuntu' account created by Juju?
<jamespage> piao, it does not have one - the only way to access is via "juju ssh"
<jamespage> using the ssh keys generated as part of deployment (or the local ssh keys)
<piao> So I'm using juju ssh to log in, but I can't use systemctl to modify systemd?
<piao> Ah, I get that it's using ssh certs now.
<mgz> if you can login as ubuntu, it has passwordless sudo
<piao> ok ty
<piao> perfect, that worked
<yaz1234> hi
<yaz_10_36_> hi im having some issues with trying to run `juju quickstart mediawiki-single`
<yaz_10_36_> it says The program 'juju' can be found in the following packages:
<yaz_10_36_> it recommends the juju-2.0 package
<gnuoy> jamespage, nm, figured it out
<mgz> yaz_10_36_: quickstart is a 1.X thing only, so you'd want juju-1.25 (or you can just not use quickstart)
<yaz_10_36_> yeah i was following the guide at https://jujucharms.com/get-started and 1.25 is installed
<yaz_10_36_> if possible I was hoping to try it out, before having to read a lot of docs
<mgz> yaz_10_36_: so, you can install juju-1-default which will alias juju-1 back to juju, or you can type juju-1 quickstart ....
<yaz_10_36_> thanks, ill try that out
<yaz_10_36_> btw should i use juju-2.0 for production?
<mgz> yaz_10_36_: not quite yet, in the final betas, but shortly
<yaz_10_36_> ok I guess i'll keep an eye on it then
<upcatch> For my local charm the icon.svg doesn't want to render in the juju gui.
<Mmike> ubuntu@juju-1-18:~$ juju deploy ubuntu
<Mmike> ERROR cannot download charm "cs:trusty/ubuntu-0": bad SHA256 of "/var/lib/juju/charmcache/cs_3a_trusty_2f_ubuntu-0.charm"
<Mmike> Hi, lads
<Mmike> do you know why I'm getting this when trying to deploy ubuntu charm from any juju version less than 1.24?
<tasdomas> charms.reactive: what could cause a hook handler for a -departed hook in an interface layer to be executed in the context of a -broken hook?
<kjackal> kwmonroe: got a minute? I need to ask you about bundles. Is it possible to call an action after the bundle is set up?
<kjackal> I am asking this because the apache-spark charm starts in standalone mode and it is not sending jobs to yarn in the spark-zeppelin and spark-notebook bundles we have
<kjackal> kwmonroe: Also! Good morning :)
<jcastro> http://askubuntu.com/questions/774928/openstack-autopilot-deployed-new-machine-with-juju-not-showing-in-horizon-dashbo
<jcastro> http://askubuntu.com/questions/778234/unable-to-resize-openstack-instance
<jcastro> any help for these from openstackers?
<rick_h_> cargonza: ^
<jcastro> http://askubuntu.com/questions/779636/juju-bootstrap-wrong-maas-api
<jcastro> also not sure if this one is fixed in beta8?
<jcastro> I am unsure of maas/juju connectivity lately
<kwmonroe> yo kjackal - are you asking about calling actions from a bundle test?
<kjackal> kwmonroe: not from a bundle test but from a bundle.yaml itself
<rick_h_> jcastro: it's out from the feature flag and tells the API version automatically I thuoght
<kwmonroe> kjackal: for the spark problem, if the charm is in standalone mode, it won't send jobs to yarn.  the yaml needs to deploy spark in yarn mode in those bundles.
<jcastro> rick_h_: yeah I am unsure enough to not answer, heh
<rick_h_> jcastro: will prod for more details and we'll look into it
<kwmonroe> no kjackal, you can't call an action from a bundle yaml.
<jcastro> actually, cargonza if the fellas have time to go over some of the openstack tagged questions they're starting to pile up
<jcastro> stokachu: got a few conjure questions in there as well
<stokachu> jcastro: wehre?
<jcastro> http://askubuntu.com/questions/780831/conjure-up-io-openstack-lxd
<kjackal> kwmonroe: hm.... so we have a couple of options then. a) update the readme to say that spark is in standalone mode and you need to call an action to go to yarn mode b) fork & promulgate a spark version that just has the patch to deploy to network-restricted environments
<stokachu> jcastro: thanks taking a look
<jcastro> http://askubuntu.com/questions/781009/ubuntu16-04-conjure-up-opentsack-how-long-it-takes-to-conjuring-up-openstack
<rick_h_> jcastro: please feel free to shoot off emails on things you find to the lead of the team involved. cc me please
<kjackal> kwmonroe: basically b) is not an option, just a mental exercise :)
<jcastro> stokachu: hover over the juju tag and subscribe, it will send you an occasional summary email
<jcastro> rick_h_: if we could have people subscribe to the tag that would be lovely
<jcastro> though I know lots of core devs do already so it's likely just a time thing
<stokachu> jcastro: thanks Im also subscribed to ones tagged 'conjure-up' as well
<jcastro> excellent
<rick_h_> jcastro: understand, but appreciate you helping to poke people
<jcastro> stokachu: Lemme fix up the formatting on these
<kwmonroe> kjackal: i don't understand the problem.  spark changes exe mode on config, right?
<kjackal> kwmonroe: config? I thought it was an action, let me check this (obviously don't remember the code I wrote....)
<kjackal> kwmonroe: you are right! (as always!)
<kjackal> we said that yesterday!
<kwmonroe> :)
<jcastro> stokachu: the conjure-up tag wasn't existing, it exists now, you might need to resubscribe
<kwmonroe> kjackal: i bet you got it mixed with upgrade.. upgrade is the action, but changing spark config mode is config... so you do have control over spark's execution mode in a bundle.yaml.
<stokachu> jcastro: thanks!
<kjackal> kwmonroe: now I need to see how we specify config variables in a bundle.yaml
<kjackal> do we do this anywhere else?
<jcastro> stokachu: you can fill in stuff here and submit it: http://askubuntu.com/tags/conjure-up/info
<jcastro> links to the site, etc.
<stokachu> jcastro: sweet, ill get this updated
<jcastro> I just found a truckload more questions for you, I'll start retagging them, prepare thyself
<kwmonroe> sure do kjackal, check mariadb's config in the sql bundle: https://github.com/juju-solutions/bundle-apache-analytics-sql/blob/master/bundle.yaml#L38
<kjackal> kwmonroe: awesome, thank you
<kwmonroe> np
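Mirroring the mariadb example kwmonroe links, charm config in a bundle goes under each service's `options` key. A sketch for the spark case; the charm URL and option name/value are assumptions:

```yaml
services:
  spark:
    charm: cs:trusty/apache-spark    # charm URL is illustrative
    num_units: 1
    options:
      spark_execution_mode: yarn-client   # option name/value are assumptions
```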
<jcastro> stokachu: http://askubuntu.com/questions/772551/how-do-i-join-a-new-node-to-conjure-open-stack-mitaka
<jcastro> Mark explained the "what you should do" but I think adding an answer with the actual commands, etc would help that out
<cargonza> jcastro, I'll check it out.
<jcastro> http://askubuntu.com/questions/780072/how-do-i-make-juju-request-machines-and-provision-them-in-parallel
<jcastro> I could have sworn we did this by default
<mgz> jcastro: yeah, we make the api calls sequentially, but certainly shouldn't block on commissioning before we make the next call
<cargonza> jcastro: is it allowed to change some tags that are not openstack related? just a quick glance and there are MAAS items which doesn't relate directly to openstack.
<mgz> jcastro: I haven't done add-unit -n 36 on maas, but we do lots of bundle deploys where the machines are visibly deployed in parallel
<jcastro> cargonza: yeah if it's maas you can change the tag from openstack to maas
<jcastro> or add a maas tag if it applies to both
<cargonza> ok thx u
<jcastro> cargonza: for many of them they will have juju, openstack, and maas because usually the asker isn't sure exactly where in the stack their issue is
<cargonza> yup I saw one that is directly MAAS deployment related. I'll make sure I don't have a fast trigger on the edits.
<jcastro> all your edits will go into a queue anyway until you have enough reputation to make direct edits
<gnuoy> jamespage, have got a sec to look over https://github.com/gnuoy/interface-openstack-ha ?
<jamespage> gnuoy, looks generally OK but I think we've been using the .connected state to signal a joined event, rather than having a .joined state
<gnuoy> jamespage, ack, will fix
<jamespage> gnuoy, icey: hey for reference I think we need to start prefixing anything being targetted to openstack namespace with charm
<jamespage> so charm-interface-xxx
<jamespage> charm-layer-xxx
<marcoceppi> stokachu: hey, haproxy is failing on autopilot deploy, is this a known issue?
<jamespage> hmm we should also think about bug tracking as well...
<jamespage> gnuoy, beisner, cargonza, coreycb, thedac, tinwood, dosaboy, wolsen: btw its fine for us to start using the openstack-dev ML for charm development discussion now
<jamespage> I emailed the openstack-charmers ML to that effect
<tinwood> jamespage, kk
<cargonza> ok. cool
<beisner> jamespage, i like it.  replied to one earlier.
<jamespage> beisner, i saw
<jamespage> beisner, any idea how we go about getting an #openstack-charms irc channel?
<beisner> jamespage, ha i was just typing a suggestion here to that effect ...
<beisner> i'd like to have that channel, and can look into what it'd take to add it.  we added one for the now defunct tailgate group last yr.  just some infra coord iirc.
<beisner> aha, https://git.openstack.org/cgit/openstack-infra/project-config/tree/gerritbot/channels.yaml
<stokachu> marcoceppi: running openstack-install or conjure-up?
<marcoceppi> stokachu: openstack-install
<gnuoy> jamespage, I've made the change you suggested. Am I right in thinking: "looks generally OK" <=> "I LOVE it! Get it forked into github.com/openstack-charmers right now!"
<jamespage> +1
<gnuoy> ta
<mgz> gnuoy: forked into?
<stokachu> marcoceppi: this isn't a known issue to me, maybe dpb1_  knows more?
<gnuoy> mgz, Create a fork of the repo in the openstack-charmers namespace
<gnuoy> Anyone mind if I update the topic? I think jujucharms.com is fine now
<mgz> I guess that makes sense, strange turn of phrase as it's generally forked-from
<gnuoy> forgiveness rather than permission blah
* gnuoy changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta8 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<gnuoy> mgz, not in my head
<dpb1_> marcoceppi: will need more details. :)
<dpb1_> got a pastebin or something?
<beisner> zul, o-c-t change merged.  thx!
<marcoceppi> dpb1_: http://paste.ubuntu.com/17119672
<dpb1_> marcoceppi: a bit more please?  unit: haproxy/0: machine: 0/lxc/0 agent-state: error details: hook failed: "install"  also, a paste of juju status would be good.
<marcoceppi> dpb1_: this is using openstack-install, I don't know how to even get that
<dpb1_> export JUJU_HOME=~/.cloud-install/juju
<dpb1_> juju status
<dpb1_> my go-to answer: http://askubuntu.com/questions/606422
<marcoceppi> dpb1_: it's an error with dns/apt resolutions
<marcoceppi> stokachu: this is a problem with openstack-install, it's setting apt-http-proxy to http://<MAAS-IP>:5240:8000
<stokachu> marcoceppi: thanks we'll take a look and get it fixed
<stokachu> marcoceppi: yea we fixed that lemme push a new build out to the stable ppa
<marcoceppi> stokachu: cool, ppa so I can install it onthe orangebox?
<stokachu> marcoceppi: yea ill kick off a build in just a sec
<marcoceppi> stokachu: link to ppa?
<stokachu> marcoceppi: ill push it here: ppa:cloud-installer/stable
<marcoceppi> stokachu: thanks
<stokachu> marcoceppi: waiting for the build to complete now, https://code.launchpad.net/~cloud-installer/+archive/ubuntu/stable/+recipebuild/1155753
<marcoceppi> stokachu: ack, ta
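The root cause marcoceppi found is a proxy URL with two port segments (`http://<MAAS-IP>:5240:8000`). A pure-shell sanity check like this sketch would flag it; it is illustrative only and deliberately ignores IPv6 literal hosts:

```shell
# A proxy URL should have at most one ":port" after the host.
# (This naive check would false-positive on IPv6 literals like http://[::1]:80.)
is_valid_proxy() {
  case "$1" in
    http://*:*:*) return 1 ;;   # host:port:port -> malformed
    http://*)     return 0 ;;
    *)            return 1 ;;
  esac
}

is_valid_proxy "http://10.0.0.1:5240:8000" && echo valid || echo malformed  # malformed
is_valid_proxy "http://10.0.0.1:8000"      && echo valid || echo malformed  # valid
```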
<Prabakaran> Hello Team,  I have a small doubt about the relation part of layered charm development. Usually we use an interface layer to carry a value exposed by one charm to another charm (where both charms are layered charms).
<Prabakaran> For example, I have a non-layered charm which exposes a port number, and a layered consuming charm which needs to make use of that port number. Could you please advise how to use the port number of the non-layered charm in the layered consuming charm?
<beisner> os-charmers:  new charm pushes will now result in a repo-info file in the published charm artifact.   https://jujucharms.com/tempest/  --> https://api.jujucharms.com/charmstore/v5/tempest/archive/repo-info
<beisner> dpb1_, fyi^ will be helpful in tying cs revno to the repo and commit level of that published charm.
<beisner> also, fyi, that applies only to os-charms in the git/gerrit flow.
<kwmonroe> cory_fu: bcsaller:  will you scan this and tell me if my expectations are wrong or if this is a bug in charm build?  tl;dr, i expect "middle" layers to override config values from "bottom" layers: http://paste.ubuntu.com/17121725/
<cory_fu> kwmonroe: You shouldn't repeat layer:basic in every layer
<cory_fu> kwmonroe: Your includes should be more like this: http://pastebin.ubuntu.com/17121896/
<cory_fu> Otherwise, layer:basic will get re-applied on top of layer:bottom and layer:middle
<cory_fu> er, on top of layer:bottom, but not layer:middle
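The include chain cory_fu suggests, sketched as three `layer.yaml` files (layer names are the hypothetical ones from kwmonroe's paste); only the bottom layer names `layer:basic`:

```yaml
# bottom/layer.yaml
includes: ['layer:basic']

# middle/layer.yaml
includes: ['layer:bottom']

# top/layer.yaml
includes: ['layer:middle']
```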
<kwmonroe> ok, duly noted.  that doesn't seem to change the fact that the bottom config value is stuck in my top layer.
<cory_fu> Still looking.  That just jumped out at me
<bcsaller> there might be a legit bug in there :)
<dpb1_> beisner++
<kwmonroe> good lookin out cory_fu.  i was just saying that didn't affect this particular outcome.
<cory_fu> Hrm.  Actually, it looks like it already de-dupes the basic layer, so my comment is moot
<cory_fu> kwmonroe: Can you run it again with -lDEBUG?
<kwmonroe> cory_fu: http://paste.ubuntu.com/17122059/
<cory_fu> Hrm.  I don't know why I thought that would be helpful.
<kwmonroe> :)
<beisner> does anyone know what determines the "trusty" part when charm building?  ie.  i get  ./build/trusty/my-built-charm but there is nothing declared as trusty in the charm that i see.
<bcsaller> beisner: bitrot :)
<bcsaller> defaults that are no longer needed or apply
<beisner> bcsaller, aha.  would ./build/my-built-thing be safe in some future release?
<bcsaller> with multi series charms and no JUJU_REPOSITORY needed, yeah, we can change how that works
<beisner> bcsaller, i mean i'm fine with just always hitting that trusty dir for now.  i'm about to inject a build step in our commit -> push -> publish flow and didn't want to unexpectedly hit a variable in that dirname.
<beisner> oh!   -o   i can be explicit, yah?
<bcsaller> I think that currently would still create a series dir under it
<beisner> bcsaller, ack it does
<cory_fu> kwmonroe, bcsaller: I have a fix, but I don't understand how this didn't break long before now
<cory_fu> Oh, I think this was a regression from 87c26dc4
<marcoceppi> stokachu: http://paste.ubuntu.com/17122635
<marcoceppi> getting closer
<cory_fu> kwmonroe: https://github.com/juju/charm-tools/issues/218  I'm creating a patch now
<stokachu> marcoceppi: yea we broke it trying to fix another exception bubbling up, https://github.com/Ubuntu-Solutions-Engineering/openstack-installer/commit/1419917de3801340712fe52ca817c24c38685e63
<stokachu> marcoceppi: will get it fixed
<kwmonroe> aww - thanks cory_fu!
<cory_fu> kwmonroe, bcsaller (and marcoceppi): https://github.com/juju/charm-tools/pull/219
<marcoceppi> stokachu: any eta on that fix?
<marcoceppi> stokachu: or another way to get autopilot running?
<marcoceppi> like, if I just deploy the bundle directly?
<bcsaller> cory_fu: thanks
<stokachu> marcoceppi: you can deploy the bundle directly if you want
<stokachu> marcoceppi: i was going to take a look at the issue in the next hour
<lazyPower> cory_fu - is it possible to list and iterate over converations for another relation than the one thats in context of the interface module?
<lazyPower> eg: in provides.py - iterate over the list of peers
<lazyPower> or do i want to pre-fetch this and pass it into the method?
<cory_fu> You definitely don't want to be interacting with a different relation than the one for your interface class, because it breaks the encapsulation of the interface protocol
<cory_fu> The peer interface layer should provide a method to get the data that you're interested in, and you should then pass that in to the other interface class
<lazyPower> cory_fu - or i can iterate each member of the provides, and build the connection string on the client. I think was looking at the problem from the inside out.
<lazyPower> or do it the way you recommended because i cant break my interface data model
<lazyPower> ok i think that sorts it then
<beisner> when building from a top layer, do we expect its README.md to survive the build process and exist in the resultant built artifact?
<cholcombe> beisner, yes i believe so
<cholcombe> beisner, i've forgotten to add a README.md before and i get something from the lower layers
<beisner> marcoceppi, https://github.com/juju/charm-tools/issues/220
<marcoceppi> beisner: oh wow, it's just straight up getting deleted
<marcoceppi> that's really freaking odd
<beisner> yah no can proof ;-)
<beisner> marcoceppi, it's my day :-)   also losing dotfiles in charm push.  that's less critical for us.  but here's that:  https://github.com/juju/charmstore-client/issues/72
<beisner> marcoceppi, the build issue is pretty hi prio for us.  lmk if there are any suggestions or questions.  thx!
#juju 2016-06-09
<gnuoy> tinwood, got a sec for https://github.com/openstack-charmers/charms.openstack/pull/10 ?
<tinwood> gnuoy, lemme take a look.
<gnuoy> ta
<tinwood> gnuoy, done.
<gnuoy> tinwood, thanks
<gnuoy> tinwood, updated
<tinwood> gnuoy, looks good to me +1
<gnuoy> tinwood, can you hit the button?
<tinwood> gnuoy, I don't know - I'll give it a go.
<tinwood> gnuoy, sadly not.  I don't have write access to the openstack-charmers repo. I probably need to be elected to a group?
<tinwood> gnuoy, thanks for the invite :)
<beisner> o/
<gnuoy> narindergupta, https://review.openstack.org/#/c/327638/
<shruthima> hi kwmonroe , I have tried to deploy the IBM-IM charm that you have proposed for merge request but facing some issues..Please can we have a discussion for 5-10 min on these..
<shruthima> hi mbruzek/kwmonroe , I have tried to deploy the IBM-IM charm that you proposed for merge request but facing some issues..Please can we have a discussion for 5-10 min on these..
<mbruzek> Sure
<mbruzek> shruthima: What error did you see?
<shruthima> actually when we deploy it is not asking for the license to accept
<shruthima> and it is going to unknown state
<mbruzek> shruthima: please run "juju list-agreements" and pastebin the output
<shruthima> root@ptcvm2:~# juju list-agreements Press return to select a default value. Username:  (it is asking for username and password)
<shruthima> when we use launchpad id it is showing incorrect email/password
<mbruzek> shruthima: what launchpad id are you using?
<shruthima> mbruzek:ERROR failed to list user agreements: cannot get discharge from "https://api.jujucharms.com/identity/v1/discharger": cannot start interactive session: cannot get token: Provided email/password is not correct.
<shruthima> salmavar
<mbruzek> https://launchpad.net/~salmavar shows the user is "Shruthima" is the email wrong on this page?
<beisner> hi bcsaller, interested in your take on this behavior we're seeing:  https://github.com/juju/charm-tools/issues/220    tldr;  while trying to automate charm build in openstack ci, we're seeing lower layer ignores impact higher layer files.
<shruthima> mbruzek: username is Shruthima .. email is salmavar
<mbruzek> beisner: bcsaller is on the west coast, and it is 6 am there. I would not expect him in yet.
<mbruzek> beisner: sorry 7am
<beisner> mbruzek, ah thx.  :)  do you have any ideas on that^?
<mbruzek> beisner: I saw this bug come in, no idea why the dot files don't make it to the built charm. Ben told us we could look at "Tactics" to handle different kinds of files. We can include Tactics files in our layers that will govern how to copy, compose specific file types. Look at the code regarding Tactics
<beisner> mbruzek, actually the dotfile is a separate issue
<beisner> mbruzek, that dotfile (hidden files) issue is @ https://github.com/juju/charm/issues/201
<beisner> https://github.com/juju/charm-tools/issues/220 is what we're blocked on
<mbruzek> beisner: Ah I see
<mbruzek> shruthima: what version of juju do you have? "juju version"
<shruthima> juju 2.0
<mbruzek> shruthima: What specific version? For example I am on 2.0-beta8-xenial-amd64
<mbruzek> beisner: perhaps you can overwrite that ignore statement in the base layer. I wouldn't know how to do that (of course) but there might be a way
<shruthima> 2.0-beta7-xenial-amd64
<beisner> mbruzek, i think the expected behavior is that each layer should be able to declare its own set of ignores, and layers above it should operate only on their own declared ignores.
<mbruzek> beisner: Yeah that makes sense, but something tells me Ben had thought of this, and you can remove ignores or overwrite them somehow.
<shruthima> mbruzek: Now I am able to authenticate; juju list-agreements is showing []
<mbruzek> shruthima: What commands did you run to get to that state?
<shruthima> i was checking juju status
<shruthima> and I am checking juju debug-log; it is not even getting to reactive
<jcastro> So the jenkins charm, on the store page points to launchpad, but the branch itself lives upstream
<jcastro> are we manually syncing these because the one from the store seems old?
<mbruzek> shruthima: Can you deploy ibm-im now? Does it ask for terms?
<marcoceppi> beisner: dot files were a known issue, I thought they had sorted it though
<shruthima> yup after authentication i have tried it is not asking for terms
<mbruzek> marcoceppi: His current problem is the README.md not making it to the final charm
<shruthima> i have used juju deploy /root/charms/trusty/ibm-im  --series trusty --resource ibm_im_installer=/root/repo/agent.installer.linux.gtk.x86_64_1.8.3000.20150606_0047.zip
<marcoceppi> mbruzek: I'm aware, but it might not be a build issue
<mbruzek> shruthima: pastebin the juju list-agreements command after you authenticate
<beisner> marcoceppi, so the hidden dot file thing is an issue, but not a current blocker.  our blocker is that lower layer ignore declarations are causing higher layers to drop files (it appears).
<marcoceppi> beisner: right, which makes sense in our current design but might not be the end goal.
<marcoceppi> beisner: why even bother having the ignore in the first place? since it'll just be overwritten with the top layer
<shruthima> mbruzek: root@ptcvm2:~/charms/trusty# juju list-agreements []
<shruthima> it is displaying []
<beisner> marcoceppi, this seems inert (just a readme file), but we also need to ignore unit tests from lower layers so they don't make it into higher layers or the built charm.
<marcoceppi> beisner: that's a bigger issue
<beisner> marcoceppi, it is.
<shruthima> mruzek: http://paste.ubuntu.com/17144275/
<mbruzek> shruthima: Ah!
<marcoceppi> beisner: going to work on a fix today for 2.1.3
<marcoceppi> err, 2.14
<marcoceppi> 2.1.4
<beisner> marcoceppi, rock on.  much appreciated :)
<shruthima> mbruzek: juju debug-log http://paste.ubuntu.com/17144408/
<shruthima> actually I am not understanding where it is getting stuck, because the deploy is going smoothly but the installation is not happening; even reactive is not getting called, according to the logs I have checked
<shruthima> the terms are not displaying though, could you please suggest what may be the reason?
<mbruzek> shruthima: Paste bin a juju status
<gnuoy> jamespage, https://review.openstack.org/#/c/327638/
<shruthima> mbruzek : http://paste.ubuntu.com/17144275/
<mbruzek> shruthima: The reason you didn't get prompted to agree to the terms is because you are deploying the local charm. Terms is only prompted by charms deployed from the charm store.
<mbruzek> shruthima: If you reset that environment and juju deploy cs:~kwmonroe/trusty/ibm-im  you should be prompted for the terms
<shruthima> is there any way to check local charms with terms
<shruthima> ok il check
<kwmonroe> shruthima: it doesn't make much sense to enforce a term with a local charm, because if you have the charm locally, you could just edit metadata.yaml and remove the "terms" key.
<cmars> shruthima, distribution of a charm through the charmstore is gated by capturing user agreement to terms
<kwmonroe> so terms are only enforced when deploying from the charm store
<jamespage> gnuoy, two nits
<kwmonroe> heh cmars, i was just about to make you jump in :)
<cmars> shruthima, we can't effectively gate distribution if that distribution is happening out of band with a local charm deploy
<cmars> shruthima, nor would we have any legal basis for knowing who's agreed to what terms for a given charm deployment
<cmars> shruthima, i recommend publishing the charm to the charmstore, possibly with access permissions restricted if it's not ready for general public consumption
<shruthima> yes i agree it wont make sense for local charms but i mean in terms of testing juju terms before deploying charms
<shruthima> thanks for making me clear
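The distinction above can be sketched as follows; `juju agree` and the term name are assumptions based on the juju 2.0 beta terms workflow, and the commands are guarded so they only echo when juju isn't installed:

```shell
run_or_echo() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "+ $*"; fi; }

# a local-path deploy never prompts for terms; a store deploy does
run_or_echo juju deploy cs:~kwmonroe/trusty/ibm-im

# agreements can be listed, and accepted explicitly (term name is illustrative)
run_or_echo juju list-agreements
run_or_echo juju agree ibm-im/1
```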
<gnuoy> jamespage, nits swatted
<cory_fu> bcsaller: Can you take a look at beisner's issue above (https://github.com/juju/charm-tools/issues/220)?  I'm not really familiar with the intended behavior of "ignores"
<beisner> hi cory_fu, bcsaller - fyi marcoceppi is also looking.  i think this is where that behavior change was introduced:  https://github.com/juju/charm-tools/issues/85
<cory_fu> From a quick glance at the code, though, the ignore property uses rget (https://github.com/juju/charm-tools/blob/master/charmtools/build/config.py#L90) which seems pretty explicit that ignores are combined from all layers before being handled, rather than being per-layer like the issue expects.
<marcoceppi> cory_fu: yes, but I think the issue is valid
 * marcoceppi has a suggestion he's writing in the bug
<cory_fu> beisner: I'm not sure.  From the implementation of the ignores property, it seems like it was that way from the beginning.  I do think your interpretation of how it should work is better, though
<bcsaller> maybe it should work that way, sure, but having to ignore a file you intend/expect to override in a later layer is also a little much since overwrite is the default behavior
<cory_fu> Although, there's also some ambiguity in whether an ignore in a given layer should ignore that file from only the current layer (and allow the same file from a lower layer to come through), or if it should block the file from only lower layers (i.e., preventing merges with the file as defined in the current layer), or block it from the current layer and below, having no file be given to higher layers
<bcsaller> like a base layer shouldn't have to say ignore: readme
<bcsaller> thats just a dead chicken
<cory_fu> That's also a good point
<beisner> bcsaller, we would like to ignore the unit_tests dir.  that's a little more concrete example than a readme file.
<marcoceppi> cory_fu: https://github.com/juju/charm-tools/issues/220#issuecomment-224941776
<marcoceppi> bcsaller: ^^
<cory_fu> That removes the ambiguity, I like it
<cory_fu> Do we need to change the key name, though?  Why not just change the behavior of "ignore"?
<marcoceppi> cory_fu: because how would I distinguish between ignoring on my layer and ignoring on layers below me?
<marcoceppi> I suppose if the key is a string vs a dict
<marcoceppi> but then it's a list of things
<cory_fu> Yeah, I guess I'm saying the current behavior isn't really useful, and I don't think it's used much at all, so just change it to be a map and re-use the existing key name
<cory_fu> I guess if we want to be backwards compatible, we can detect list vs map
<marcoceppi> cory_fu: that's fine, I suppose we could create a tactic that rewrote the key if it was a list?
<marcoceppi> and gave a warning
<cory_fu> Perhaps, though I think the value might be processed before a tactic has a chance to rewrite it
<marcoceppi> cory_fu: good point
<marcoceppi> so just do it transparently to the user?
<cory_fu> Yeah.  I don't think that feature is even documented anywhere.  It's certainly not in https://github.com/juju/charm-tools/blob/master/doc/source/build.md
<bcsaller> only apply ignores to layers previous to itself? then clear the list
<bcsaller> maybe
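cory_fu's backwards-compatible idea (detect list vs. map under the existing key) could look roughly like the following sketch; the per-layer map semantics here are an assumption based on the discussion, not charm-tools' actual implementation:

```python
def normalize_ignores(value):
    """Normalize a layer.yaml "ignore" value: accept the legacy list form
    as well as a hypothetical per-layer map, returning a uniform mapping
    of layer name -> list of globs ("*" meaning "all lower layers")."""
    if isinstance(value, list):
        # legacy form, e.g. ignore: [unit_tests] -- applies everywhere below
        return {"*": list(value)}
    if isinstance(value, dict):
        # proposed per-layer form, e.g. ignore: {basic: [unit_tests]}
        return {layer: list(globs) for layer, globs in value.items()}
    raise TypeError("ignore must be a list or a map, got %r" % (value,))
```

This keeps old `ignore:` lists working transparently, as discussed above, without a separate key name.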
<kwmonroe> admcleod: you were right about zeppelin getting trigger happy.  it does indeed fail because it thinks hadoop is ready earlier than it actually is.  i didn't notice it before because it does eventually become ready.. is this similar to what you saw? http://paste.ubuntu.com/17146568/
<beisner> bcsaller, that seems like a useful and simple default behavior.   i like marcoceppi 's idea of having finer granularity with a different thing - but that can be a different thing, yah?
<beisner> bcsaller, well <= self actually
<beisner> strike -1 prev statement ;-)
<kwmonroe> anyone (kjackal?) have thoughts on bundle naming conventions?  for big data specifically, i'm thinking <engine>-<task>-<highlight>, like hadoop-analytics-hive, or spark-processing-kafka, or ignite-visualization-zeppelin
<beisner> coreycb, some overdue housekeeping along with a cleaner way to split legacy (precise):  https://code.launchpad.net/~1chb1n/openstack-charm-testing/pxc-vs-mysql-vs-precise-config-options/+merge/296963     validating @ osci (will retrigger your precise proposed test run as one of those steps of validation).
<coreycb> beisner, thanks!
<beisner> coreycb, yw
<marcoceppi> kwmonroe: why not the other way? <engine>-<highlight>-<task> ?
<marcoceppi> spark-kafka-processing
<marcoceppi> the <task> in the middle feels weird
<beisner> thedac can you also do the honors on this? - (fyi: jamespage tinwood ) need to unignore things for now to proceed with automating things.   https://github.com/openstack-charmers/charm-layer-openstack/pull/7
<thedac> sure
<beisner> thedac, thanks again
<thedac> beisner: merged
<beisner> off to the races then, thx thedac :)
<kwmonroe> ack marcoceppi, thx.  <task> may be superflous too.. like if you already know what spark and kafka are, you probably don't need somebody to say "processing" or whatever
<kwmonroe> yes cory_fu, i misspelled superfluous ^ get over it
<marcoceppi> cory_fu: I've got questions
<marlinc> I'm currently trying to run OpenStack in LXD containers (just to try it out). I'm currently running into the following error: RuntimeError: Exit code: 1; Stdin: ; Stdout: ; Stderr: mount --make-shared /var/run/netns failed: Permission denied
<marlinc> I'm not sure how to allow the LXC container to create that mount point
<cory_fu> marcoceppi: Sorry, I was out for a bit.  What question do you have?
<cory_fu> kwmonroe, petevg, kjackal, admcleod: Any objection to me creating a Jira and a patch for the Spark charm layer?
<petevg> cory_fu: sounds good to me.
<kwmonroe> +1 cory_fu
<cory_fu> kwmonroe: For https://github.com/juju-solutions/layer-openjdk/issues/5 do we need to be worried about the non-zero exit code?  The test failure doesn't look like it was actually choking on the output
<cory_fu> Also, if the grep does fail, maybe we should report something more useful than "Unexpected return code"
<kwmonroe> hmph, you're right cory_fu.. we weren't testing the output for anything, so the ssh host warning shouldn't have mattered.
<kwmonroe> i wonder if the rest of that output was "warning...java not found"
<cory_fu> I don't actually even understand why the command exited non-zero.
<cory_fu> kwmonroe: Actually, looking at the code, that print dumps all of the output, so it doesn't look like the `java -version` command even generated any output
<kwmonroe> yeah, agreed
<cmars> cherylj, didn't see test timeouts due to npipe listener Close, but I also forgot to capture stderr. However, I did find some other test failures -- which I also saw when testing in my own KVM. Opened LP:#1590947
<mup> Bug #1590947: TestCertificateUpdateWorkerUpdatesCertificate failures on windows <juju-core:New> <https://launchpad.net/bugs/1590947>
<cmars> running again, this time with stderr redirected properly...
<xilet_> So, I can't find much documentation on it so far, can anyone explain how expose is supposed to work? (Trying to expose openstack-dashboard)
<cherylj> thanks, cmars
<cory_fu> xilet_: It should be pretty straightforward.  The charm has to call `open-port <port>` to open a port, which would then be listed in `juju status`.  If you call `juju expose <service>` on that service, then you should be able to connect to that unit's public address (also listed in juju status) on any of the ports the charm opened.
<xilet_> Yeah, I see exposed: yes, but no public addresses listed
<xilet_> I am using 2.0-beta7-xenial-amd64
<arosales> xilet_: are you using maas or lxd as the cloud?
<xilet_> http://pastebin.com/StVS70z5  [for juju status]
<xilet_> lxd
<cory_fu> xilet_: The public-address for openstack-dashboard/0 is 10.125.232.72
<cory_fu> xilet_: It's listed under [Units]
<arosales> xilet_: what I see also
<arosales> and exposed is true
<arosales> under [Services]
<arosales> IP under [Units]
<xilet_> Right, the question I had is what does it actually 'do', because it was on that IP before I exposed it
<cory_fu> And it has ports 80 and 443 open, so you should be able to go to http://10.125.232.72/ or https://10.125.232.72/
<xilet_> some of the documentation mentioned doing the firewall rules to make it publicly available
<cory_fu> xilet_: It was on that IP before, but the firewall rules blocked all external traffic to it
<xilet_> ah ok, so you need to manually set up a bridge to the network to actually make expose work?
<cory_fu> You shouldn't.  The lxd provider should manage the bridge for you
 * cory_fu hasn't used the lxd provider, though.
<magicaltrout> expose has no effect on lxd local
<xilet_> Ok, because right now nothing else on the general network (10.19.40.0/24) can reach that (10.125.232.0/24) subnet
<magicaltrout> but you will have to do some funky bridging on ports for stuff on that box which you want making public
<cory_fu> magicaltrout: I know that's true with the 1.25 local provider, but I thought it was needed for 2.0 lxd provider
<cory_fu> If that's true, it seems like a bug in the lxd provider
<magicaltrout> dunno then, I run beta7 and I do some IPtables natting to get local services exposed on the box
<magicaltrout> the networking for LXD is on a 10.x subnet
<magicaltrout> which isn't available to external processes
<magicaltrout> of course i could have just missed something in a release note or similar
<cory_fu> magicaltrout: Oh, are you talking about getting lxd provided services to be accessible from outside the machine that it's bootstrapped on?  I was assuming access directly from the machine that did the bootstrap
<magicaltrout> ah no
<magicaltrout> that is certainly fine
<magicaltrout> but if you have a remote box running lxd local, and want that service "exposed"
<magicaltrout> you do some iptables natting
<magicaltrout> but my understanding is "expose" in juju does nothing in lxd local because there is no firewall unlike AWS etc
<xilet_> Yeah, that is what I am trying to accomplish: remote server, everything running contained on that system, wanting direct https access without a ssh tunnel to reach it from other systems
<magicaltrout> yeah you'll need to do some natting xilet_
<magicaltrout> that said, i just slapped openvpn on the box to make my life easier
<magicaltrout> cory_fu: demoed some juju big data stuff to the JPL guys this week, they loved it
<xilet> magicaltrout: thanks, I just wanted to make sure I was not missing something obvious.
<magicaltrout> not that i'm aware of xilet
<magicaltrout> xilet: https://www.digitalocean.com/community/tutorials/getting-started-with-lxc-on-an-ubuntu-13-04-vps
<magicaltrout> that was my reference
<magicaltrout> the iptables call down the bottom
<magicaltrout> there may be other/better ways
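The natting magicaltrout describes boils down to DNAT rules on the host's external interface. Here's a small helper that only builds the commands for review (the interface name and container address are examples matching the paste above, not anything juju generates; run the output with sudo at your own risk):

```python
def dnat_rules(host_if, container_ip, ports):
    """Build iptables DNAT rules that forward traffic arriving on the
    host's external interface to an LXD container's address, one rule
    per TCP port.  Returns the commands as strings for review."""
    return [
        ("iptables -t nat -A PREROUTING -i {iface} -p tcp --dport {port} "
         "-j DNAT --to-destination {ip}:{port}").format(
            iface=host_if, ip=container_ip, port=p)
        for p in ports
    ]

# e.g. dnat_rules("eth0", "10.125.232.72", [80, 443]) for the
# openstack-dashboard unit discussed above
```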
<xilet> Thanks it is a start. I was thinking of being really lazy and setting up apache proxies
<cory_fu> magicaltrout: Awesome.  :)  Did they have any specific feedback?
<magicaltrout> mostly stuff like "this is freaking awesome, look how quickly all that stuff is configured" ;)
<magicaltrout> the usual
<magicaltrout> i've sent a few of them jcastro 's charmer summit mailshot
<magicaltrout> as they're in the same place I'm hoping to drag a few along
<xilet> magicaltrout: worked like a charm, thank you!
<cory_fu> That would be awesome
<magicaltrout> no problem xilet
<magicaltrout> cory_fu: https://github.com/SciSpark/SciSpark they make a lot of use of scispark at JPL
<magicaltrout> I'll see if we can get them to build a charm for it, as they stand up scispark, hadoop and ipython/zeppelin stuff
<xilet> so one more stupid question, again on 2.0beta7 of openstack, there was never a prompt for setting a user account for horizon,  keystone user-list does show an admin user, if I just reset the password for that one user account is that the best way to gain access or did I miss a default password somewhere?
<xilet> Nevermind, did the smart thing and just added a new user.
<xilet> Sorry for the set of openstack questions, but if any of you have it working, how do you attach cinder to a lvm group local to the physical machine? The host can see it fine.
#juju 2016-06-10
<admcleod> kwmonroe: yep, that's similar
<jamespage> lukasa, thanks for approving the switch to apache 2.0
<lukasa> jamespage: No problem. =)
<jamespage> beisner, https://code.launchpad.net/~james-page/charms/trusty/mongodb/ch-resync-newton/+merge/297046
<beisner> jamespage, ack, will watch for tests to complete.
<xilet> I am running into issues connecting the cinder container to local block devices, was curious if anyone has that working. (beta7)
<beisner> jamespage, merged
<jamespage> xilet, is that with cinder/iscsi?
<jamespage> and with which provider? LXD/local or MAAS?
<beisner> jamespage, that's landed
<xilet> lxd
<xilet> and honestly I am not sure with iscsi or not,  I got the services fixed on the cinder container but I still can't see any of the volume groups passed through from the host
<xilet> (I am new to juju and cinder so I may be missing something fundamental)
<jamespage> beisner, also https://code.launchpad.net/~james-page/openstack-charm-testing/ext-port-configuration/+merge/297035
<jamespage> xilet, atm it's not possible to consume block devices from within a container in the way I think you are trying to do
<jamespage> xilet, so the only option at the moment are cinder backends that are 100% userspace - like ceph for example
<jamespage> xilet, so for cinder/iscsi (the default) you have to run the cinder-volume service outside of containers...
<xilet> Ahh ok
<jamespage> api and scheduler can be run in container still
<jamespage> xilet, the cinder charm supports this type of split - its a config option
<jamespage> so you can deploy it twice - once for schedule/api in lxd container, once for volume on hardware
<xilet> Is there harm in leaving the container volume service running?
<jamespage> xilet, well for ceph it's still required - it just manages the ceph backend rather than trying to do stuff locally with lvm + iscsi
<xilet> Thanks, I can dig through the docs, just curious if you know offhand, is there is a decent guide out there about adding additional duplicated services to openstack?
<xilet> Ah ok, there is one already.
<jamespage> rick_h_, marcoceppi: is there some sort of flag to ensure that test usage of the charm store does not get registered into the stats? we're about to switch our amulet testing to use cs:xxxxx but I don't want to bloat out the stats.
<rick_h_> rogpeppe1: do you know the flag and how it would work from a bundle, cli, and the querystring? ^
<rogpeppe1> rick_h_, jamespage: yeah, one mo, i'll check
<rogpeppe1> jamespage, rick_h_: make the request with &stats=0
<rogpeppe1> jamespage: ah, but is this using the charm tool?
<rick_h_> rogpeppe1: right how would we get that through the tooling, bundle format so a bundle deploy, juju deploy, etc don't count
<rogpeppe1> rick_h_: currently I don't think there's a way to do that
<rogpeppe1> rick_h_: we could support an environment variable to disable stats, i guess
<rick_h_> rogpeppe1: yea, something like that
<rogpeppe1> rick_h_: i'm not entirely sure we want to enable anyone to do it, but i'm not sure there's an alternative
<rick_h_> rogpeppe1: taking this to the other channel to include uros
<rick_h_> jamespage: how much of this is juju1 vs juju2? (is any of it juju2?)
<jamespage> rick_h_, well right now its both :-)
<rick_h_> jamespage: ok, yea just thinking of the patch we need and going ruh roh...prolly need it on juju1  as well
<rick_h_> jamespage: so I'd say go ahead please and we'll work with the teams to get you a way to set it, but it's on the API but not exposed through clients atm :(
<rogpeppe1> jamespage: looks like you can disable stats by including a "test-mode: true" field in the model config
<jamespage> beisner, hey - for the switch to cs I'm going to deal with the fact that some other charms don't support all series, by testing with the closest match
<jamespage> rogpeppe1, I thought there was something like that!
<jamespage> beisner, ^^ we'll need to turn that on!
<beisner> jamespage, so, we'll need to have the amulet helper add &stats=0 to the cs: charm url;  and do the same when we update mojo/o-c-t bundles, yah?
<jamespage> beisner, no we can just set this in the environments.yaml for the overall environment
<jamespage> test-mode: true
<beisner> jamespage, ok i missed something in backscroll then
<beisner> boom right there it is :)
<beisner> jamespage, ok i can add that to all of our enviros
<jamespage> sounds like a plan
<beisner> jamespage, ok test-mode is true everywhere
<jamespage> \o/
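For reference, the setting jamespage mentions goes in the environment's stanza; the environment name and type below are illustrative, only the `test-mode` key comes from the discussion:

```yaml
# environments.yaml (Juju 1.x) -- test-mode: true excludes this
# environment's charm store requests from download statistics
environments:
  osci-test:          # illustrative name
    type: openstack   # illustrative provider
    test-mode: true
```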
<rick_h_> awesome beisner
<beisner> jamespage, thx for the ext port fixup (merged)
<jamespage> beisner, np
<beisner> jamespage, rick_h_ - curious, since juju-deployer (and amulet + bundletester) fetch the charm ahead of deploy, will that model config be effective?
<rick_h_> beisner: no, that only effects calls to juju client itself
<beisner> yep makes sense rick_h_
<rick_h_> beisner: if other clients fetch the charm they need to update the query string for getting the archive
<beisner> jamespage, seems like a juju-deployer test-mode flag needs to grow, and be plumbed upward to amulet, bundletester and mojo.
<beisner> that, or we hackily munge the heck out of things on the fly
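Such a deployer flag would essentially just add the `&stats=0` parameter rogpeppe1 mentioned when deployer fetches charm archives itself. A rough sketch of the URL rewrite (function name and structure are ours, not juju-deployer's):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_stats_disabled(archive_url):
    """Append stats=0 to a charm store archive URL so the download
    is excluded from the store's usage statistics, preserving any
    query parameters already present."""
    parts = urlparse(archive_url)
    query = dict(parse_qsl(parts.query))
    query["stats"] = "0"
    return urlunparse(parts._replace(query=urlencode(query)))
```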
<beisner> rick_h_, jamespage - 100% of os-charm testing currently has src ip of our company's known wan IPs.  do you know if we are already excluding ourselves from cs stats?
<kjackal> petevg: Hey are you around?
<petevg> kjackal: I am. What's up?
<kjackal> Just saw you comment on the failing test
<kjackal> how do you run the test?
<petevg> With bundletester.
<petevg> The exact invocation: bundletester -t ~/Code/charms/trusty/kafka
<rick_h_> beisner: no, we're not removing those from stats, just the long running juju usage numbers
<petevg> It actually seems to successfully deploy -- I have a kafka server running that is not in an errored state.
<petevg> kjackal: I suspect that there's a timing issue in the test -- maybe it gets past the point where it has that message too quickly?
<rick_h_> beisner: I'm of two minds of just excluding ourselves, might be coming time to do that though
<kjackal> petevg: did you change this line before running the test: https://github.com/juju-solutions/bigtop/blob/kafka/bigtop-packages/src/charm/kafka/layer-kafka/tests/01-deploy.py#L18
<kjackal>  ?
<beisner> rick_h_, yah ditto.  there are deploys happening for real production workloads, so that may not be best.  it'd be an easy big hammer though ;-)
<petevg> kjackal: I did not.
<kjackal> petevg: So let me explain what might be going wrong
<kjackal> petevg: this test says, go to the store and fetch the kafka charm that should be the same as cs:trusty/kafka
<kjackal> so this test is supposed to run when we have the charm in the right place
<kjackal> if we are to test this right now we should change it to something like local:trusty/kafka
<kjackal> so that it picks-up the charm you are building/reviewing....
<petevg> kjackal: Hmmm ... bundletester seems to be smart enough to just go and grab the charm locally if you pass the local path to -t.
<kjackal> it does? Nice, didnt know that
<kjackal> ok then it is a real error, I will take a look
<petevg> Cool.
<petevg> Now I know why the team has been giving me funny looks when I've been talking about running bundletester from a locally built charm -- maybe it's not widespread knowledge that it works :-)
<kjackal> :)
<kjackal> pgrep of java in a kafka deployment gives this output: https://pastebin.canonical.com/158511/
<jamespage> beisner, https://code.launchpad.net/~james-page/charm-helpers/amulet-switch-to-cs/+merge/297060
<beisner> jamespage, that looks like it will do the trick.   did you see the convo ^ re: test-mode only applying to juju proper cmds, and not juju-deployerisms (amulet, mojo)?
<jamespage> beisner, I've tested precise/trusty/xenial with with that in keystone - raised https://review.openstack.org/#/c/328305/ to test in full
<jamespage> beisner, hmm does deployer collect the charms itself?
<beisner> jamespage, awesome i was just thinking:  we should sync this and push a thing or two through the pipeline.  :)  good deal
<beisner> jamespage, yes i believe so
<beisner> jamespage, at least for all non cs: it does.
<jamespage> beisner, it downloads cs ones directly to a local cache dir
<beisner> jamespage, seems like a juju-deployer test-mode flag needs to grow, and be plumbed upward to amulet, bundletester and mojo.
<beisner> <beisner> that, or we hackily munge the heck out of things on the fly
 * beisner refers to self in the third person ;-)
<beisner> so glad it's friday
<beisner> jamespage, but since we're in control of that cs url in the amulet helper, we could also address it there and fix one test-mode path
<jamespage> beisner, oh - btw I poked at a few instances I got doing manual testing that did not bootstrap as juju units
<jamespage> beisner, I did a rebuild on the existing machines, and post rebuild they got metadata and joined OK
<jamespage> which is frustrating
<jamespage> so its some sort of race between the instance booting and metadata being present/provided.
<beisner> jamespage, interesting.  any clues in timing on serverstack hosts?
<beisner> (logs on hosts i mean of course)
<jamespage> beisner, no, waiting to hit another so I can log dive at the same time
<beisner> jamespage, can you review / button poke @?:  https://github.com/openstack-charmers/bot-control/pull/1    ... fyi: https://github.com/openstack-charmers/bot-control/blob/charm-what/tools/README.charm-what.md
<jamespage> stokachu, does conjure-up have a KVM option for the all-in-one under LXD deployment?
<stokachu> jamespage: not by default, i wanted to make openstack-base conjure-up enabled which would provide that
<jamespage> beisner, lgtm
<jamespage> stokachu, also does the LXD deployment include cinder atm?
<stokachu> jamespage: yea
<jamespage> stokachu, that's not going to be functional with nova-lxd
<jamespage> nova-lxd does not support persistent block storage options atm
<jamespage> as lxd can't do it securely just yet
<kjackal> petevg: I cannot confirm the error you see in kafka . This is what my test run got me https://pastebin.canonical.com/158519/
<stokachu> jamespage: this is the bundle im using https://github.com/battlemidget/openstack-novalxd/blob/master/bundle.yaml
<stokachu> jamespage: should i remove cinder?
<jamespage> yes
<stokachu> jamespage: ok
<petevg> @kjackal: Hmmm ... I'll run the tests again, and do some more digging.
<stokachu> jamespage: https://github.com/battlemidget/openstack-novalxd/tree/master/conjure i was going to propose something similar for openstack-base
<kjackal> petevg: I will redo the test again
<beisner> thx jamespage
<petevg> Hopefully, we are not looking at something unique to my machine.
<jamespage> stokachu, I think you adapted the nova-lxd bundle from my original kvm one right?
<stokachu> jamespage: yea
<kjackal> petevg: it is strange that your test did run.... but the assertion failed....
<kjackal> i suspect some race condition there.... let me add a wait for the Ready message
<aisrael> tvansteenburgh: Do you know anything about limitations in testing juju storage via amulet?
<tvansteenburgh> aisrael: nope
<tvansteenburgh> aisrael: that doesn't mean there aren't some though, i just don't know. haven't really done anything with storage
<aisrael> tvansteenburgh: Thanks. I'll get some clarification on what they mean
<aisrael> tvansteenburgh: This is the issue: https://github.com/juju/amulet/issues/112
<tvansteenburgh1> marcoceppi: ^
<aisrael> tvansteenburgh: An unrelated amulet/bundletester question. If a test should be skipped because some requirement isn't met (env variables, running on lxc, etc), is there a way to tell bundletester to consider the test skipped vs. failing it?
<tvansteenburgh> marcoceppi: shall i carve out a little time for that?
<marcoceppi> tvansteenburgh: yes
<tvansteenburgh> aisrael: bt doesn't have the notion of skip, only pass/fail. you can control which tests are run with the tests key in tests.yaml though
<aisrael> tvansteenburgh: ack, thanks
<tvansteenburgh> aisrael: feel free to open a bug for the skip feature though, if you want
<aisrael> tvansteenburgh: will do!
<jamespage> icey, I will look in detail at charms.storage but can you add the testr/tox configuration as per https://github.com/openstack-charmers/charms.openstack
<jamespage> and drop all the legacy makefile stuff
<jamespage> icey, this will ease migration under /openstack
<icey> jamespage: I can work on that; first time intentionally using tox :-P
<jamespage> icey, essential for migration to openstack infra
<jamespage> icey, we also need to get set up with publishing these modules to pypi and doing proper releases; some of that we should defer until we migrate as there is a lot of automation we can leverage there...
<marcoceppi> cory_fu: I have a crazy idea wrt tactics
<marcoceppi> nvm
<cory_fu> That was quick
<aisrael> tvansteenburgh1: Another bundletester question. If a charm requires a kernel module be loaded (local provider/lxd, so must be loaded on host), is that an option? There's nothing explicit in tests.yaml, but what about a makefile target to do it?
<marcoceppi> cory_fu: I wanted to do something crazy, but then I thought about it and changed my mind
<cory_fu> Indeed
<tvansteenburgh1> aisrael: sure
<tvansteenburgh> aisrael: you can run whatever code you want, either in a make target or in a test itself
<aisrael> tvansteenburgh: k, thanks. It may not be an issue if we're able to skip certain environments (like local)
<icey> jamespage: build is passing, using the tox settings and travis settings from charms.openstack :)
<jamespage> \o/
<thedac> DNS HA charm helpers ready for review https://code.launchpad.net/~thedac/charm-helpers/dnsha/+merge/297009
<rick_h_> beisner: can I bug you a second on connecting to an openstack please?
<beisner> howdy rick_h_
<rick_h_> beisner: I'm poking at someone running a hosted openstack cloud and trying to figure out how I'd set up juju config to talk to it
<rick_h_> They've got me creating an application and having an application key and a secret key, sound familiar at all?
<beisner> rick_h_, generally you'll need the keystone endpoint address of their cloud, plus your tenant username, password and region name.
<natefinch> can anyone give me a simple example of a bundle that deploys units to lxc containers?
<rick_h_> natefinch: not off the top of my head but built one with the machine view on demo.jujucharms.com and exported it here: https://pastebin.canonical.com/158543/
<rick_h_> just made up something
<cherylj> natefinch: here's one I had for a bug a while ago:  http://paste.ubuntu.com/17180954/
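Since the pastes above may not be reachable, a minimal sketch of such a bundle (charm names and placement are illustrative; this era's `lxc:` placement directive is spelled `lxd:` in later Juju releases):

```yaml
# one unit on the bare machine, one in an LXC container on that same machine
services:
  wordpress:
    charm: cs:trusty/wordpress
    num_units: 1
    to: ["0"]
  mysql:
    charm: cs:trusty/mysql
    num_units: 1
    to: ["lxc:0"]
machines:
  "0":
    series: trusty
```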
<mattrae> hi, is it possible to upgrade-charm, going from the charm store to a local charm? it doesn't seem to work to specify --path https://pastebin.canonical.com/158545/
<natefinch> rick_h_, cherylj: thanks.
<rick_h_> mattrae: normally you have to upgrade-charm --switch
<rick_h_> mattrae: that will let you go from one charmstore url to another.../me hasn't tested if that lets you go to local?
<mattrae> thanks rick_h_ yeah looks like for a local charm you may need to use --path rather than --switch.. according to the help --path and --switch are mutually exclusive, so i thought it may not work
<cory_fu> Oh, hey, kwmonroe.  I forgot that I got the bundle_deploy added to bundletester.  That was to fix an issue we had in one of our bundles, to give us more control over the initial deploy.  I forget what the issue was, though.
<cory_fu> However, we could use that to manage the deployer timeout ourselves
<kwmonroe> cory_fu: i dont recall what you mean by 'bundle_deploy added to bundletester'
<kwmonroe> oh, yes i do
<cory_fu> kwmonroe: https://github.com/juju-solutions/bundletester#testsyaml
<kwmonroe> https://github.com/juju-solutions/bundletester/issues/43
<cory_fu> Yeah, that
<kwmonroe> why the heck aren't we using that?
<cory_fu> Ok, so it was to fix something where we needed to reset the env.  Hrm.  Wish I could remember what the failure was
<icey> jamespage: the hardening one has me stumped, it's now using tox but still no module named apt: https://travis-ci.org/ChrisMacNaughton/charms.hardening
<cory_fu> Can someone remind me how to run bundletester with a bundle from the store?
<cory_fu> Ah, never mind.  I remembered how to read
<cory_fu> Though the --help output on BT could be better (I would never have guessed TESTDIR is what I wanted)
<cory_fu> kwmonroe: Hey, how is BT not failing every time on hadoop-processing given that it doesn't contain a Makefile?
<kwmonroe> cory_fu: what happens if you run bt from a local copy of the bundle?
<kwmonroe> so like, bundletester -vF -c bundle.yaml
<kwmonroe> also cory_fu, i don't see any of our bundles with Makefiles
<kwmonroe> exhibit a: https://jujucharms.com/realtime-syslog-analytics/, b: https://jujucharms.com/apache-analytics-sql/, etc
<cory_fu> Right.  I'm trying to figure out why I'm getting this: http://pastebin.ubuntu.com/17188103/
<kwmonroe> dunno cory_fu, i always run like this: ubuntu@a5d019db9619:~/charms/bigtop/bigtop-deploy/juju/hadoop-processing$ bundletester -Fvl DEBUG -t tests
<kwmonroe> i'm guessing cwr does too
<cory_fu> kwmonroe: cwr runs it from the store, but it uses bundle:hadoop-processing instead of cs:hadoop-processing
<cory_fu> Oh, drat.  That didn't actually fix it
<cory_fu> kwmonroe: Apparently, running bundletester with Juju 2.0 hits that issue but 1.25 does not
#juju 2016-06-11
<bdx1> is there a xenial wordpress bundle being passed around?
#juju 2016-06-12
<redelmann> hi there, I'm writing a charm with resources and started wondering: can I set a default resource file?
<redelmann> so that if I deploy without setting any resource, some default file will be uploaded
<redelmann> and later, only if the user wants, they can change the resource file
<redelmann> is that possible?
#juju 2017-06-05
<Catalys> Whenever I do rebuild in Horizon, the OS doesn't seem to change or anything. Is this a known bug?
<rick_h> lazyPower: ping when you're up and about please
<lazyPower> rick_h: ping
<rick_h> lazyPower: dude, how goes?
<lazyPower> rick_h: i'm not sure, still working on the first cup of coffee :)
<rick_h> lazyPower: can I steal 5min? I'll bring the energy, just finishing my first cup :)
<lazyPower> sure 1 sec, i migrated laptops this weekend, let me get signed into work services
<rick_h> oooh, upgrade time?
<lazyPower> its your old carbon ;)
<rick_h> lol, awesome
<rick_h> that thing is getting around :P
<rick_h> lazyPower: I'm chillin in https://hangouts.google.com/hangouts/_/canonical.com/rick?authuser=1 when you get it worked out
<magicaltrout> I was so happy to get rid of my old carbon
<lazyPower> oh magicaltrout ?
<magicaltrout> kids picked the keys off and you need new ones to replace them :)
<magicaltrout> plus the screen res was pretty woeful ;)
<lazyPower> the 1440 rez did throw me at first
<lazyPower> but with i3 things scale pretty well
<lazyPower> i3 is actually what brought me back to it from my crapple.
<rick_h> I really liked it but it was too big when I was traveling like crazy
<magicaltrout> Model name:            Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
<magicaltrout> i got a dell instead
<rick_h> of course I went too far with surface and that didn't last too long either
<rick_h> magicaltrout: xps 13?
<magicaltrout> 15
<rick_h> biiiig
<magicaltrout> its attached to 2 massive monitors, so in comparison its pretty small ;)
<magicaltrout> the thunderbolt port to drive 2 hdmi outputs is pretty cool
<rick_h> nice
<magicaltrout> its also nice to not be on those embedded intel gfx chips as well
<magicaltrout> not that i do much stuff thats 3d but they suck reasonably
<lazyPower> yeah, i can understand that magicaltrout
<lazyPower> my primary goal was to have a rig powerful enough to do the day-to-day and also not destroy my back when travelling
<lazyPower> its a little underpowered in terms of being able to run a hangout AND run a deployment in lxd, but overall it does a decent job if i'm not really pushing it. And at the price point i picked it up at, it was almost a no brainer.
<magicaltrout> i used to cycle to and from work 9 years ago with my laptop
<magicaltrout> which was a 15inch Toshiba brick
<magicaltrout> it used to weigh a lot
<magicaltrout> and I had a 15 mile each way ride over the yorkshire dales
<magicaltrout> you just need to man up lazyPower ;)
<lazyPower> i'll pass on the alpha goading :)
<magicaltrout> ha
<lazyPower> workouts are for the gym, not commutes
<magicaltrout> i dunno, used to like feeling awake when i got to work compared to 90 mins on public transport
<lazyPower> magicaltrout: this is why i walk every day after my morning meetings
<lazyPower> I've been graphing this with google fit, which isn't a bad dashboard system for your health metrics
<magicaltrout> from your office to the bathroom?
<lazyPower> clever, :) only if thats a 2.3 - 3 mile hike
<magicaltrout> nice
<lazyPower> trying to turn it around broski
<lazyPower> i've been a lazy butt a bit too long
<vlad_> Hey folks... quick question probably a simple answer.  If I'm deploying charms to a maas cluster... how much of the maas network configuration is exposed to those charms or does it override those previous settings completely?
<vlad_> In this context I'm deploying the openstack base bundle to a set of maas nodes
<rick_h> vlad_: so when you deploy you determine how the network is required. It doesn't really alter the network so much as leverage what's setup in maas as that's a hardwired setup.
<rick_h> vlad_: you leverage spaces to control what containers on the machines are using which devices and such
<vlad_> rick_h: First off thanks a ton! Secondly for example let's say I have a vlan in a space on a fabric with vlan tag 2030 and 10.203.2.0/24 as the CIDR. If I configure my juju charm to use that CIDR it will know to use the vlan?
<vlad_> The specific charm configuration option only allows the use of a CIDR that's why I'm asking
<rick_h> vlad_: so when you deploy are you specifying any binding of to a space name?
<rick_h> vlad_: e.g. https://jujucharms.com/docs/2.1/network-spaces
<vlad_> rick_h: The machines in question I have bound to the space, but I didn't see any configuration options in doing so from within the juju charm itself.
<rick_h> vlad_: right, normally it's part of the deploy command or the bundle description.
<rick_h> vlad_: so that Juju knows "this application needs to request (from MAAS) a machine with interfaces on these spaces."
<rick_h> vlad_: and then Juju can tell the charm "you've been asked to bind to space X" and it can adjust appropriately based on that request
<vlad_> rick_h: Ok cool makes sense I think I'm doing it right then. Thanks again for your help!
<rick_h> vlad_: k, let me know how it goes
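To make rick_h's point concrete, a bundle can declare endpoint-to-space bindings per application. A minimal sketch, where the space names (`default-space`, `storage-space`) are hypothetical and must already exist in MAAS:

```yaml
# Hypothetical bundle fragment: endpoint-to-space bindings.
applications:
  ceph-osd:
    charm: cs:ceph-osd
    num_units: 3
    bindings:
      "": default-space       # default space for all endpoints
      public: storage-space   # bind the "public" endpoint explicitly
```

The deploy-command equivalent is `juju deploy ceph-osd --bind "public=storage-space"`.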
<bdx> good monday morning!
<bdx> is there a preferred method for adding the JAAS controller?
<bdx> other than `juju register jimm.jujucharms.com` ?
<rick_h> bdx: in 2.2 it's juju login
 * rick_h looks for release notes stuff
<bdx> oh nice, but JAAS isn't there yet, so it's still `juju register jimm.jujucharms.com`?
<rick_h> bdx: you can s/register/login
<rick_h> bdx: and that'll add it
<rick_h> bdx: that's the new way in 2.2
<bdx> rick_h: sweet! thx!
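A sketch of the two forms discussed, using the JAAS controller endpoint from the conversation:

```shell
# Juju 2.0/2.1:
juju register jimm.jujucharms.com
# Juju 2.2+ replaces register with login for this:
juju login jimm.jujucharms.com
```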
<Budgie^Smore> o/ juju world
<lazyPower> \o Budgie^Smore
#juju 2017-06-06
<kjackal> Good morning Juju World!
<Naz> Hi, I am not able to deploy a service to an LXD directly?
<Naz> In facts:     juju deploy ./mediawiki --series xenial --to lxd,  the following log is:  machine-34: 12:39:41 WARNING juju.provisioner incomplete DNS config found, discovering host's DNS config machine-34: 12:40:13 WARNING juju.provisioner failed to start instance (Failed to change ownership of: /var/lib/lxd/containers/juju-8aa58b-34-lxd-0/rootfs), retrying in 10s (2 more attempts) machine-34: 12:40:24 WARNING juju.provisioner incomplete DN
<Naz> # cd /var/lib/lxd/containers/juju-8aa58b-34 (NOT -LXD-0, NO SUCH FOLDER) root@HP:/var/lib/lxd/containers/juju-8aa58b-34# ls -l ----> drwxr-xr-x 22 231072 231072   22 May 18 15:43 rootfs
<Naz> Owner and group are 231072
<Naz> JUJU STATUS ==> 34/lxd/0 STATE=  down                     INST ID pending         Series xenial
<Naz> I want to deploy to LXD as I don't want to spin off more than 4 machine, I have 12 services currently. So, for example 3 LXD per machine to have the 12 over 4 machines
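For reference, the container-placement syntax Naz is after looks like this (machine numbers are illustrative and must already exist in the model):

```shell
juju deploy ./mediawiki --series xenial --to lxd:0   # new container on machine 0
juju add-unit mediawiki --to lxd:1                   # another container on machine 1
```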
<bloodearnest> hello folks. I just noticed the 'Notes' column on juju status. How does one add notes?
<bloodearnest> are they annotations? Or can I set them from the charm?
<Naz> I see this is already reported as bug at : https://bugs.launchpad.net/juju/+bug/1614329
<mup> Bug #1614329: Cannot deploy charm to new lxd container on machine: permissions error <lxd> <juju:Triaged> <https://launchpad.net/bugs/1614329>
<Naz> WOW :)
<Naz> at the same second, we posted this, My friend MUP :)
<lazyPower> Naz: :) that's fun, but mup is a bot and just expands bug ids
<lazyPower> bloodearnest: i dont think theres a way to set them from within a charm. I could be incorrect, but to my knowledge we dont have any support for that. you can set status messages via status_set() in the charmhelpers.core.hookenv package.
<Naz> @LazyPower :) Really Interesting, I thought he is human :) How could he tie it to the bug? I used Google :)
<lazyPower> Naz: you had a launchpad link with a launchpad id :) mup knows launchpad very intimately
<bloodearnest> lazyPower, sure, we do that, and application_version_set(), but the Notes thing was new to me
<lazyPower> bloodearnest: if we're missing a route to set data, lmk and i'll see about getting something pushed at charmhelpers
<Naz> Ok, so, he just echo-ed the contents of the bug id that I pushed. Nice job anyway ...
<Naz> @LazyPower, Anastasia (anastasia-macmood) wrote on 2017-02-17: #3 Removing 2.1 milestone as we will not be addressing this issue in 2.1. Changed in juju: milestone: 2.1-rc2 → none. DOES THIS mean that it is postponed till 2.2?
<lazyPower> Naz: that seems to be the case, possibly later depending on the roadmap. they will re-target the bug at which release should receive the fix.
<Naz> Ok, Thanks for the clarification, In fact, Cheryl Jennings (cherylj) wrote on 2016-04-20: 	#1  There is an explicit check to determine whether or not a machine can host containers. When a machine agent starts up, it checks to see if the machine is running in a container, and if it is, that machine cannot host containers.
<vlad_> Hey everyone I'm back for more questions. I've got a yaml bundle of my openstack config that I want to use. However, when I try to deploy the yaml config it errors out complaining about lxd containers that don't exist. Is there an easy way for me to add the creation of these containers to my config? (I started out by exporting this from a bundle in the gui)
<admcleod> vlad_: can you pastebin your bundle?
<vlad_> admcleod: I don't know for sure if I can due to it being for work. However, it's just a configured version of the openstack-base bundle. I.e. I've added IP information and some other secrets (via the UI), and then exported it for easier iterating and development. Does that help?
<vlad_> Also if pastebin is necessary I can try to take time to remove sensitive information.
<admcleod> vlad_: well, at the very least, it would be useful just to see the block where it tries to deploy one of these lxd containers its complaining about, and the specific error message
<vlad_> admcleod: Yes I definitely think that makes sense. I actually just figured it out. I had to modify the bundle slightly, and that modification didn't make it into the export though other stuff did. Essentially it was referencing the creation of an lxd container on a machine that wasn't defined
<vlad_> admcleod: Thanks for your time!
<admcleod> vlad_: no worries, glad you got it working :)
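The fix vlad_ describes comes down to keeping the bundle's `machines:` section in sync with its placements: every `lxd:N` directive must refer to a declared machine. A minimal sketch (charm, series, and machine number are illustrative):

```yaml
machines:
  "0":
    series: xenial
applications:
  mysql:
    charm: cs:mysql
    num_units: 1
    to: ["lxd:0"]   # valid only because machine "0" is declared above
```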
<Budgie^Smore> o/ juju world
#juju 2017-06-07
<jac_cplane> does anyone have a juju 2.x work around for centos not supporting python3
<jac_cplane> it seems some of the juju libs require python3, which isn't part of the repos for centos7
<kjackal> Good morning Juju World!
<Mmike> Hello! Is there a way to configure the size of the 'logs' database? Currently it is 4GB, but I'd like to shrink that, if possible?
<anrah_> Is there a way to include non-PyPI packages in wheelhouse.txt?
<anrah_> I mean that I created a python package and I would like to include it in a charm, but not (yet) publish it on PyPI
<SimonKLB> anrah_: the wheelhouse.txt is very similar to requirements.txt, you can add your own local paths or paths to a git repo
<anrah_> Thanks, Noticed also that :)
<SimonKLB> anrah_: for example, i have this in a charm's wheelhouse.txt "-e git://github.com/simonklb/aiosmtpd.git@merged#egg=aiosmtpd"
<SimonKLB> great :)
<lazyPower> icey: ping
<icey> morning lazyPower
<lazyPower> hey there, can you refresh my memory on the ceph-osd osd-devices config string?
<lazyPower> does that *have* to point to a /dev device or can it just be a filepath?
<icey> lazyPower: it can be a directory, although there are some practical caveats
<icey> basically around using the same disk for multiple purposes, so the directory use case better serves for development, but can be used for example with bcache devices
<lazyPower> yep
<lazyPower> we're working on some demoware and i forgot how osd-devices config string worked, that unblocked me :) thanks
<icey> glad to help lazyPower
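A sketch of the two forms of osd-devices icey describes (the device and directory paths are illustrative):

```shell
# Whole block device (typical production use):
juju config ceph-osd osd-devices='/dev/sdb'
# Directory on an existing filesystem (handy for development):
juju config ceph-osd osd-devices='/srv/ceph-osd'
```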
<bdx> hello all, can I apply constraints to juju deployed lxd instances via `--constraints`?
<bdx> I'm trying it out a few different ways and not seeing the lxd profile modified to any degree
<bdx> for example `juju deploy ubuntu --to lxd:0 --constraints "mem=1G"`
<bdx> I thought ^ was a thing
<bdx> possibly I'm using incorrectly
<rick_h> bdx: so it was something getting worked on. lxd now supports enforcing those and kvm was updated to respect them
<rick_h> bdx: I don't think the lxd side was landed
<rick_h> bdx: the 2.2 release notes call out kvm supporting them but no mention of lxd so looks like it didn't get in there
<vlad_> Hey guys quick question. I'm deploying a bundle of ceph-mon and ceph-osd charms from the store. In their CIDR network definitions, do they need me to provide the actual static IPs or are they capable of DHCP?
<rick_h> 14min juju show warning: https://www.youtube.com/watch?v=7Tqg3Hnkq2U
<rick_h> lazyPower: marcoceppi bdx kwmonroe jrwren and more ^
<vlad_> rick_h: Hey you got a second to chat?
<rick_h> vlad_: a sec, doing a live stream in 10
<vlad_> rick_h: Is that the youtube link? If so I'd be down to watch
<rick_h> vlad_: I'm not sure on the CIDR/DHCP bit on the cephmon/node charms. Maybe cholcombe has the details there.
<rick_h> vlad_: yes, that link should go live once we hit go on the stream
<cholcombe> vlad_: we don't do anything different than this: http://docs.ceph.com/docs/hammer/rados/configuration/network-config-ref/.  We just pass the settings along
<cholcombe> vlad_: take a look at the ceph networks section
<vlad_> cholocombe: Thank you
<vlad_> rick_h: Thanks to you as well! Good luck on the stream I'll watch
<rick_h> https://hangouts.google.com/hangouts/_/y5xturh7gffpzedw7ob6hzpijme for those joining in (lazyPower bdx kwmonroe jrwren etc)
<rick_h> marcoceppi: you around for it today?
<lazyPower> rick_h: skipping this week, will catch you on the next one
<marcoceppi> sure, i think I can join
<rick_h> lazyPower: k
<rick_h> lazyPower: you get to be the star next one?
<vlad_> cholocombe: Hey ok cool that document totally cleared it up. If I'm deploying a set of them in a bundle I need to provide an array of addresses then correct?
<cholcombe> vlad_: are you deploying to metal?
<lazyPower> rick_h: uhhh
<vlad_> cholocombe: Yeah I'm deploying to metal. I don't think the array idea I had makes any sense
<vlad_> cholocombe: I've got all my networks set up on MaaS. When the deploy kicks off it spits out errors trying to get addresses for my services immediately, but keeps deploying. It gets IPs for all the physical machines and ends up failing around the ceph area
<rick_h> marcoceppi: https://hangouts.google.com/hangouts/_/hjkwxg4fvjb3rhvfwoqswejggue
<rick_h> for folks watching heading to https://www.youtube.com/watch?v=Ii1Ax4HgAP0
<rick_h> I'll have to see if I can download/stitch them together later
<cholcombe> vlad_: you can have ceph-mon/osd gather their ip info from the juju network spaces if you want
<cholcombe> it's in the docs: https://jujucharms.com/ceph-mon/ under network support
<vlad_> cholocombe: Is there any way to define spaces in a bundle?
<cholcombe> vlad_: i don't think so.  i think you define them on maas?
<cholcombe> i'm not entirely sure
<vlad_> cholocombe: Yeah at first I didn't even try and use spaces outside of MaaS, and I just realized that juju has picked up my space from MaaS, and I guess I've architected this a bit wrong from that standpoint
<vlad_> Is there any way to remove a space once it has been added? I saw no mention of this in the docs
<rick_h> cholcombe: vlad_ yes, the spaces must exist in the underlying provider first
<rick_h> cholcombe: vlad_ and then you tell the bundle to leverage them and request that the right unit is on the right machine with access to the right spaces
<cholcombe> rick_h: thanks
<vlad_> rick_h: Is there a delete space command for when I've created a space in juju that doesn't exist on the provider?
<rick_h> vlad_: hmm, checking "juju help commands" there's a reload-spaces but not seeing a remove-space :(
<Budgie^Smore> are we having fun yet?
<lazyPower> Budgie^Smore: with 2.2 on the horizon this week according to rick, you bet :)
<Budgie^Smore> lazyPower so he is being particularly bossy this week then? ;-)
<lazyPower> not even close  (not to me anyway)
<lazyPower> he gave me leeway
<Budgie^Smore> hehehe nice :)
<Budgie^Smore> so just when will it hit JAAS is the real question ;-)
<rick_h> Should hit really tight on release
<Budgie^Smore> coolio :)
<Budgie^Smore> Finally started building a preseed file for my "cluster in a box" idea
#juju 2017-06-08
<kjackal> Good morning Juju World!
<tvansteenburgh> rick_h: just watched juju show from yesterday. following up on the k8s hook failure that whizzed by
<tvansteenburgh> rick_h: got any logs from that?
<stub> Anyone else seen this failure with charm build? AttributeError: 'CommentedMap' object has no attribute 'append'
<stub> Full traceback at http://pastebin.ubuntu.com/24807862/ , buried in tactics stuff.
<rick_h> tvansteenburgh: no but I can repro later today.
<tvansteenburgh> rick_h: that would be great, thanks
<Zic> lazyPower: hi, interesting reading :p https://medium.com/@ktsakalozos/federated-k8s-with-on-prem-clusters-and-juju-wip-5d2022379818
<tvansteenburgh> kjackal: ^
<tvansteenburgh> Zic: kjackal wrote that (obligatory introductions here)
<kjackal> Have you tried federation Zic?
<Zic> kjackal: not, but I was waiting for this feature to be bundled as part of CDK :)
<Zic> I need it for one of our customers which is on multi-region hosting (baremetal & cloud)
<Zic> s/not/no/
<kjackal> Zic: you can read on the blog that it mostly works, but has its rough edges. We will try federation again with the 1.7 release
<Zic> yup, I saw the limitation concerning NodePort, and my customer uses them a lot
<Zic> but it's cool to see some material-reading about this
<Zic> (to prepare myself with kube-fed, and to see this is possible even if the solution is not built into CDK for now)
<rick_h> tvansteenburgh: first two deploys isn't hitting the issue. I'm going to run through a few more but not sure atm.
<tvansteenburgh> rick_h: okay
<rahworkx> hello all, was wondering if anyone can point me to some documentation on how to install a specific version of ruby within a charm?
<lazyPower> rahworkx: you could potentially use something like rbenv to get that specific version of ruby, however it has the byproduct of requiring build tools.
<lazyPower> rahworkx: i would probably invest time in snapping up the application, and making that my go-to route for packaging the runtime env w/ the application.
<lazyPower> that way it's nice and portable in and outside of a charm
<lazyPower> or if you're not opposed, i wrote layer-docker to ease shipping docker images in charms. and you could stuff it in a container and ship it that way :)
<rahworkx> lazyPower: Thanks, I like the snap option because it's the newer solution. However I will probably go with the docker solution to help with porting over some older images.
<lazyPower> rahworkx: are you familiar with reactive charming?
<lazyPower> rahworkx: here's layer-docker https://github.com/juju-solutions/layer-docker   and layer-snap respectively https://github.com/stub42/layer-snap
<rahworkx> lazyPower: learning reactive charming now... I will check them out...
<tvansteenburgh> rick_h: i just saw an etcd install hook error that was caused by enable-os-upgrade and enable-os-refresh-update being set to false, so the machine had an out-of-date snapd that didn't recognize the --classic flag to `snap install` - wonder if that might be what happened to you
<rick_h> tvansteenburgh: doubtful. I don't have that set...
<rick_h> tvansteenburgh: and 5 good installs so far :/ so not sure
<tvansteenburgh> rick_h: okay
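The model settings tvansteenburgh mentions can be inspected and restored like this (a sketch; with both set to false, cloud-init skips the package refresh that would have updated snapd on the new machine):

```shell
# Inspect current values:
juju model-config enable-os-refresh-update
juju model-config enable-os-upgrade
# Restore the defaults so provisioning refreshes packages (including snapd):
juju model-config enable-os-refresh-update=true enable-os-upgrade=true
```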
#juju 2017-06-09
<kjackal> Good morning Juju World!
<bdx> rick_h: you blew up the JAAS hack!
<bdx> :)
<rick_h> bdx: who what me?
<bdx> - ha
<rick_h> bdx: it was a cool hack and had me finding out Juju didn't work the way I thought it did. You *blew* my mind dude
<jrwren> +1
<jrwren> it is great to have that better understanding now.
<bdx> hehe, nice! yea!!
<jac_cplane> is there a different channel for Charm - as in charm build command or is this the group
<jcromer>  I am having issues bootstrapping a vm based controller on MAAS.  Is this the right place to ask questions?
<jac_cplane> I'm having a problem with CHARM Build --series centos7  - which brings in python files written for ubuntu.    Who should I talk to?
<tvansteenburgh> jac_cplane: you're in the right place
<tvansteenburgh> jac_cplane: https://lists.ubuntu.com/archives/juju/2017-May/thread.html - search for centos on that page
<tvansteenburgh> jac_cplane: specifically, this one might help: https://lists.ubuntu.com/archives/juju/2017-May/009053.html
<jac_cplane> Yes - i went through this.  and We already have a charm that deploys perfectly with centos7.   the problem is when I try to get the charm ready for the charm store.  Charm push .    or charm build    adds basic.py  which includes  apt-get commands
<thedac> jcromer: would you mind adding your bootstrap debug pastebin here. We'll figure out who we can get to help shortly
<jcromer> thedac: Sure
<jcromer> https://pastebin.com/Hd89pz4E
<thedac> rick_h: who on the juju team can we get to help out with a bootstrap problem for jcromer? ^^^
<rick_h> thedac: /me reads backlog sec
<thedac> rick_h: the bootstrap debug just stops at fetching tools. No error.
<rick_h> thedac: I see, I assume that's hung there for a while? Is the instance coming up in maas and can it reach streams for the agent? hmm, why are we using 2.0.2?
<rick_h> thedac: jcromer I'd suggest poking at the log it mentions there: Logging to /var/log/cloud-init-output.log on the bootstrap machine
<rick_h> and see if anything is there
<tvansteenburgh> jac_cplane: https://github.com/lenovo/workload-solution/wiki/python2-charm#basicpy documents how fengxia worked around that
<rick_h> thedac: jcromer and 2.0.2 is fishy as 2.0.4 is latest on the 2.0.x series and we should at least be on 2.1.3 for anything coming up unless we're testing something old?
<thedac> rick_h: long story on the 2.0.2. But the bootstrap node is in a broken state so he can't get to that log
<rick_h> thedac: he can't ssh to it directly?
<jcromer> thedac: i can update juju
<jcromer> i am in
<rick_h> thedac: jcromer so I'm not sure. The machine came up enough for cloud-init to kick off and so if maas is setup with keys I'd expect to be able to connect with those
<jcromer> sorry, had been attempting to use juju ssh
<jcromer> direct ssh works
<rick_h> thedac: jcromer and I'd see if there's something in the logs, I'd think there's something in the network going out
<rick_h> thedac: jcromer which is causing it to fail.
<jcromer> so what version of juju do you recommend that i update to?
<rick_h> jcromer: right, so getting the agents is getting the binary that is juju so until that's done you're not in juju land
<rick_h> jcromer: 2.1.3 is the latest stable release
<rick_h> jcromer: if you have to be on 2.0.x per thedac's long story then 2.0.4 so it has the security fix that was released a weekish ago
<jcromer> i don't have to stay on 2.0.x
<rick_h> jcromer: k, then yea 2.1.3 and from here I'd debug why it isn't getting the agent download via sitting on the machine provisioned and checking out if it can reach streams and if there's anything in the cloud-init log there
<jcromer> ok, i will update juju, attempt bootstrap again and then check logs if i still see issues
<jcromer> thanks, for the help and information
<rick_h> jcromer: definitely, unfortunately a lot of the team is traveling and hitting weekend time so it might be a little bit quiet but I'll try to keep an eye out for your next steps
<jcromer> thanks, greatly appreciated
<rick_h> jcromer: I strongly assume that there's something like an apt proxy sitting there that needs to be set up for ssh, or an egress network filter, or something fishy causing the cloud-init script to hang trying to get the juju binary from the url it's telling you in the logs there.
<rick_h> but that's just my own experiences I seem to hit when I get something that looks like this.
<jac_cplane> - ok -- the last work-around looks promising.   do you know if this will be interfered with if I do a charm push cs: ?
<jac_cplane> will layer/basic get overwritten when put into the charm store
<tvansteenburgh> jac_cplane: charm push should work fine
<tvansteenburgh> jac_cplane: you would want to charm build, then make local edits to basic.py, then push
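Put together, the workflow tvansteenburgh suggests looks roughly like this (the output directory and editor are illustrative; `charm build` output location depends on your setup):

```shell
charm build --series centos7                         # assemble layers into the built charm
vi ./builds/oracle-12c/lib/charms/layer/basic.py     # apply the local basic.py fix
charm push ./builds/oracle-12c cs:~cplane-team/oracle-12c
```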
<Budgie^Smore> o/ juju world
<lazyPower> \o Budgie^Smore
<lazyPower> whats news budge
<Budgie^Smore> hating on bash exit 1 status code right now
<Budgie^Smore> for some reason I can't get my preseed file to echo a line into a new file :-/
<lazyPower> booger :\
<jrwren> you mean like echo "a line" > a_new_file ?
<Budgie^Smore> jrwren yeah
<Budgie^Smore> I am putting in-target in front of the line too
<vlad_> Hey guys quick question. I'm deploying the openstack base bundle via juju onto a maas set of managed instances. Do I need to manage the dhcp of the servers and containers themselves from maas or can juju manage this? Or should I have something else managing DHCP for all those networks? Thanks in advance!
#juju 2017-06-10
<jac_cplane> how do I push a centos7 charm to the charm store?   when I push it only the trusty charm shows up.
<jac_cplane> when I look at the charm in the charm store https://jujucharms.com/u/cplane-team/oracle-12c/
<jac_cplane> it shows juju deploy cs:~cplane-team/trusty/oracle-12c-4  instead of juju deploy cs:~cplane-team/centos7/oracle-12c-7
<jac_cplane> charm pull cs:~cplane-team/centos7/oracle-12c-7  works as expected.
<jac_cplane> what is the difference between charm release and charm [push, publish]?  Does release just provide meta-data for a type like "stable"?  or is there something more it does
<lazyPower> jac_cplane: thats exactly it. push just pushes the object. release sets the charm revision on a channel
<lazyPower> jac_cplane: https://jujucharms.com/docs/stable/authors-charm-store#entities-explained covers this in detail
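A sketch of the split lazyPower describes, using the charm IDs from this conversation (the revision number is an example):

```shell
# push: upload a new revision to the store's unpublished channel
charm push . cs:~cplane-team/centos7/oracle-12c
# release: make a specific revision visible on a channel
charm release cs:~cplane-team/centos7/oracle-12c-7 --channel stable
```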
<jac_cplane> Hi I'm struggling to get a Centos Charm to work - following lenovo centos fixes http://fengxia.co.s3-website-us-east-1.amazonaws.com/python2%20charm.html.
<jac_cplane> is there any other resources ?
<jac_cplane> does the juju team have any way to support centos ?
<tvansteenburgh> jac_cplane: your best bet is to ask (either here or on the mailing list) specific questions about specific errors or problems that you are encountering
<tvansteenburgh> jac_cplane: mailing list is better in my opinion - more people will see it
<jac_cplane> can you point me to the mailing list?  thanks
<jac_cplane> running into this problem
<jac_cplane> Traceback (most recent call last):
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected   File "/var/lib/juju/agents/unit-rac-master-0/charm/hooks/leader-elected", line 10, in <module>
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected     basic.init_config_states()
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected   File "/var/lib/juju/agents/unit-rac-master-0/charm/lib/charms/layer/basic.py", line 221, in init_config_states
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected     from charms.reactive import set_state
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected   File "/usr/lib/python2.7/site-packages/charms/reactive/__init__.py", line 20, in <module>
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected     from .bus import set_state  # noqa
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected   File "/usr/lib/python2.7/site-packages/charms/reactive/bus.py", line 27, in <module>
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected     from charmhelpers.cli import cmdline
<jac_cplane> unit-rac-master-0: 13:09:42 INFO unit.rac-master/0.leader-elected ImportError: No module named cli
<jac_cplane> if i ssh to the machine and type python - i can manually import that module
<jac_cplane> I've printed out the path here
<jac_cplane> ['/var/lib/juju/agents/unit-rac-master-0/charm/hooks', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib/python2.7/site-packages', '/var/lib/juju/agents/unit-rac-master-0/charm/lib', '/var/lib/juju/agents/unit-rac-master-0/charm/lib']
<tvansteenburgh> jac_cplane: juju@lists.ubuntu.com
<jac_cplane> Python 2.7.5 (default, Nov  6 2016, 00:28:07)
<jac_cplane> [GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
<jac_cplane> Type "help", "copyright", "credits" or "license" for more information.
<jac_cplane> >>> import os
<jac_cplane> >>> import sys
<jac_cplane> >>> print(sys.path)
<jac_cplane> ['', '/usr/lib64/python27.zip', '/usr/lib64/python2.7', '/usr/lib64/python2.7/plat-linux2', '/usr/lib64/python2.7/lib-tk', '/usr/lib64/python2.7/lib-old', '/usr/lib64/python2.7/lib-dynload', '/usr/lib64/python2.7/site-packages', '/usr/lib/python2.7/site-packages']
<jac_cplane> >>> from charmhelper.cli import cmdline
<jac_cplane> Traceback (most recent call last):
<jac_cplane>   File "<stdin>", line 1, in <module>
<jac_cplane> ImportError: No module named charmhelper.cli
<tvansteenburgh> charmhelpers
<jac_cplane> right. one sec
<jac_cplane> > from charmhelpers.cli import cmdline
<jac_cplane> >>>
<jac_cplane> yes - no problem.  I've done this before.  sorry - going on little sleep here
<tvansteenburgh> so that's if you ssh to the unit i assume?
<jac_cplane> yes - ssh into the unit.  as root
<jac_cplane> seems there is a routine to reload the python runtime.  I wonder if that needs to be changed
<jac_cplane> this module runs before this section of code is executed: `def reload_interpreter(python):`
<jac_cplane> I'm guessing that some env is not set properly
<tvansteenburgh> jac_cplane: have you used the `juju debug-hooks` command yet?
<tvansteenburgh> juju debug-hooks rac-master/0
<jac_cplane> well - i can tell you that the charm works properly before I do a charm build
<jac_cplane> ok - let me try that
<tvansteenburgh> once the tmux session starts, open your leader-elected hook file and put a pdb break point in it, save it and exit
<tvansteenburgh> then execute the hook directly to debug it
<jac_cplane> ok - will do
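The debug session tvansteenburgh outlines, roughly (the unit and hook names are from the conversation):

```shell
juju debug-hooks rac-master/0 leader-elected
# Inside the tmux session that opens when the hook fires:
#   1. edit hooks/leader-elected and insert: import pdb; pdb.set_trace()
#   2. run the hook directly to hit the breakpoint:
./hooks/leader-elected
```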
<tvansteenburgh> is your charm source in github or somewhere i can look at it?
<jac_cplane> yes - https://github.com/cplane-networks/dvnd-juju/tree/master/oracle-12c
<jac_cplane> we can run the charm locally in a bundle using the ../oracle-12c syntax, but it breaks when I do a charm build --series centos7 to push it to the charm store
<jac_cplane> trying to find the path of least resistance: just provide the local charm to my customers, or fight to put it in the charm store.
<tvansteenburgh> jac_cplane: the error is from a rac-master charm - where's the source for that?
<jac_cplane> source is here https://github.com/cplane-networks/dvnd-juju/tree/master/oracle-12c
<jac_cplane> do you think if i just eliminate the reactive/oracle-12c.py then it won't bring in all of the reactive frameworks.    I believe the problem is in the reactive framework changes for python2 on centos7
<jac_cplane> If I just comment that out in the charm will it pass?
<tvansteenburgh> jac_cplane: if you're not using reactive then you don't even need charm build
<jac_cplane> oh - awesome
<jac_cplane> we are only using reactive to display when the charm is installed
<jac_cplane> but it seems that reactive is not supported on centos.
<jac_cplane> hence this mess
<jac_cplane> let me try to strip that and push it to the charm store.
<jac_cplane> otherwise - I'll come back and see if we can debug this.
<jac_cplane> deleting the relation seems to have fixed the problem.  will push to the charm store and rerun test.  thanks
#juju 2017-06-11
<thomi> Hello #juju - I have a single lxd controller running on my laptop, with 4 applications (5 units total) running. This morning I find that /usr/lib/juju/mongo3.2/bin/mongod is using 13GB of RAM. This seems excessive to me. Is there something I can do to trim that down?
<thomi> I thought perhaps restarting mongod might work, but I don't see a systemd service file for it, so I guess it's managed by jujud itself.
<thomi> veebers: perhaps you have any ideas?
<veebers> thomi: hey, wow yeah that seems excessive. mongodb is known as something else for systemd I think, juju-mongodb?
<thomi> veebers: this should be on my host, or on the bootstrap node?
<veebers> thomi: which version of juju are you using?
<thomi> 2.0.2-zesty-amd64
<veebers> thomi: on the bootstrap node, your host wouldn't know about mongo (as it's installed in the lxd container)
<thomi> I don't see any systemd control files with 'mongo' in their name. Not on the host nor the bootstrap node
<veebers> huh
<veebers> I might be wrong then, let me check something out
<thomi> I might not understand how lxd works, but I don't see 'mongod' in the process list in the container, only on the host
<veebers> thomi: you only have a single lxd container?
<thomi> no, there's a bunch, but most of them are for the applications deployed
<veebers> thomi: ok, you'll want to be on the controller machine
<thomi> I'm not familiar with that terminology. What does 'controller machine' mean, exactly?
<thomi> oh! I see now, this has changed from juju1
<veebers> thomi: ah right, sorry yes I'm so used to thinking in juju2 terms
<thomi> I was assuming that 'juju ssh 0' got me into the bootstrap node, but I guess it doesn't
<thomi> So what's the easiest way with juju 2 to ssh into the controller containers?
<veebers> thomi: ah right, juju ssh -m controller 0 should do it
<thomi> ahh
<veebers> you may need juju ssh -m <controller name>:controller 0
<veebers> (Each controller has a controller model)
<thomi> hmm, ok, thanks. That helped. Thanks!
<veebers> thomi: nw, let me know how it goes
#juju 2018-06-04
<myrat> wassup guys
<TheAbsentOne> Hey everyone, I need some help from a pro charmer. I have a charm (A) that uses the pgsql interface to communicate with the postgres charm (P). I now want A to be able to change the postgres pg_hba.conf (I want to add an entry). What is the best way to do this? No postgres library allows me to edit this file it seems :/ Is it possible for a charm to change a file on a remote charm?
<TheAbsentOne> I should probably ping stub for that question. Is it possible for another charm (so not the postgresql) to edit the pg_hba.conf? So no manual (ssh) steps are needed?
<stub> TheAbsentOne: No. If a subordinate charm did make a change, the main PostgreSQL charm would stomp on it. There is the extra_pg_auth  charm config option, but charms can't set their own or other's charm config.
<stub> I think you will need to connect your charms to pgbouncer rather than directly to PostgreSQL. pgbouncer doesn't do IP address checks, so if a charm leaks credentials to another charm, the other charm will also be able to connect via pgbouncer using them.
<TheAbsentOne> stub: and is there a way (without looking at postgres) for a charm to edit a config file remotely? That would solve it too. I'm not sure how pgbouncer works, I'll look into it later today
<gnuoy> cory_fu, I think https://bugs.launchpad.net/juju/+bug/1738614/comments/10 is the issue with libjuju ci
<mup> Bug #1738614: LXD pool already exists <lxd-provider> <storage> <juju:Incomplete by ecjones> <https://launchpad.net/bugs/1738614>
<TheAbsentOne> stub: could you also tell me how it actually works. I connect to pgbouncer as if it were the postgresql charm through the pgsql interface; I use set_database to request a database and then what? What happens in terms of authentication? What do the pgbouncer and postgres charms actually do?
<stub> TheAbsentOne: A subordinate charm is cohosted, so can edit all local config files. Charms need to cooperate to do this though, or they will stomp on each other. A remote charm cannot edit anything on a unit it is related to - that would be a security hole. The remote charm can only ask on the relation for some operation to occur, which requires a protocol that supports it.
<stub> TheAbsentOne: The pgbouncer charm is a proxy, and to your clients just looks like PostgreSQL. It talks the same protocol to the clients.
<stub> TheAbsentOne: The difference is that pgbouncer does not support IP address restrictions.
<TheAbsentOne> stub: So that means the postgres user that is created when the set_database request occurs can access the database from any host then?
<stub> TheAbsentOne: So if your unit is related to PostgreSQL, only your unit (and subordinates) can use the credentials it hands out. But if your unit is related to pgbouncer, any unit can use the credentials it hands out.
<stub> yes
<stub> A security limitation, but a helpful one in your case.
<TheAbsentOne> Ah I see, I'm gonna test it out, a huge thanks stub! And maybe (it's a very small maybe) I will pull your repo and try to implement the feature request but I doubt I will succeed :P
<TheAbsentOne> correct it's not the optimal solution, the feature request on the interface would be the perfect scenario!
<TheAbsentOne> thanks again sir stub!
<stub> Sure. I've been buried in Cassandra, and won't be back on PostgreSQL for a bit (I've got automatic failover half done, so that is up next)
<stub> There is now a published Cassandra interface, if it was you who was asking the other week.
<TheAbsentOne> oh, would you mind sharing your cassandra repo? Or is it not online yet? You created an interface?
<TheAbsentOne> yeah x)
<stub> interface:cassandra now works in your layer.yaml, per https://github.com/stub42/interface-cassandra
<TheAbsentOne> I might try to add cassandra support to my charm then as well, good stuff
<stub> https://git.launchpad.net/cqlsh-charm/tree/ is a simple charm that uses it
<stub> (cs:~cassandra-charmers/cqlsh )
<TheAbsentOne> awesome, I'll look into it after I fix postgres and mysql!
<cory_fu> jamespage: You around?  I've been requested to facilitate a charm-helpers release for the goal_state helper.
<TheAbsentOne> I want to install this on a charm: https://github.com/PyMySQL/PyMySQL, since it's installable through pip I thought adding a wheelhouse.txt with the name (pymysql) was enough. What am I forgetting? :/
<kwmonroe> TheAbsentOne: that should be it.  charm build after you update the wheelhouse.txt should stick that in the output ./wheelhouse directory, which gets installed at charm deploy time.
<TheAbsentOne> kwmonroe: it seems no archive was created in the wheelhouse dir, any idea how I can solve this?
<TheAbsentOne> kwmonroe: nvm, I have no explanation, but I redid a build and it is fine. I think I didn't put the wheelhouse.txt where it belonged or something
<TheAbsentOne> thanks man!
<kwmonroe> np, glad it's there now
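To recap the mechanism discussed above: a layered charm's wheelhouse.txt is a pip-style requirements file, and `charm build` downloads each entry into the output ./wheelhouse directory, which is installed at deploy time. A minimal sketch for the pymysql case (location next to layer.yaml is an assumption from the conversation):

```
# wheelhouse.txt (charm root, alongside layer.yaml)
pymysql
```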
<TheAbsentOne> kwmonroe: what was the best/easiest way to fetch the IP address of the charm that I (another charm) have a relation with?
<kwmonroe> TheAbsentOne: if you're dealing with a python charm that includes charmhelpers, use the hookenv network_get: https://github.com/juju/charm-helpers/blob/master/charmhelpers/core/hookenv.py#L1157
<kwmonroe> TheAbsentOne: here's a sample: https://git.launchpad.net/postgresql-charm/tree/reactive/postgresql/client.py#n350
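Assuming the dict shape returned by charmhelpers' `hookenv.network_get` (a mapping containing an `ingress-addresses` list), a minimal sketch of pulling out the first ingress address; the helper name and sample data are illustrative, not from a real deployment:

```python
# Sketch: extract the first ingress address from a network_get result.
# Assumes the dict shape charmhelpers' hookenv.network_get returns.

def first_ingress_address(network_info):
    """Return the first ingress address, or None if absent."""
    addresses = network_info.get("ingress-addresses") or []
    return addresses[0] if addresses else None

# In a real hook you would do something like:
#   from charmhelpers.core import hookenv
#   info = hookenv.network_get("db", relation_id=hookenv.relation_id())
#   ip = first_ingress_address(info)
sample = {
    "bind-addresses": [{"interface-name": "eth0"}],
    "ingress-addresses": ["10.0.0.42"],
}
print(first_ingress_address(sample))
```

This matches the `ip["ingress-addresses"][0]` indexing TheAbsentOne tries later, with a guard for the missing-key case.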
<Guest25856> hi
<TheAbsentOne> perfect kwmonroe gonna try now, thanks!
<TheAbsentOne> stub: I tried pgbouncer but I end up with the same thing. "Unable to connect to PostgreSQL server: FATAL: pg_hba.conf rejects connection for host..." Not sure what I'm missing here
<TheAbsentOne> also kwmonroe I'm not getting it to work buddy; ip = hookenv.network_get(endpoint) ; ip["ingress-addresses"][0] for the actual ip right? Or am I messing things up? :/
<TheAbsentOne> welp, that'll have to wait a couple of hours, gonna catch some Zzzz's
#juju 2018-06-05
<TheAbsentOne> Okay, I'm gonna try this here too guys, maybe a genius, as you all are, can help me. So I'm having the same issue for mysql as for postgres. I want to edit privileges so the credentials I (correctly) receive also allow a completely different host to use them and connect to the database.
<TheAbsentOne> For mysql I had the idea to use pymysql (a library) and connect from within my charm (that is directly connected to mysql). I can connect and use the database perfectly. So I thought I'm gonna run a grant all privileges query from within this charm. I noticed however that the created user doesn't have grant privileges.
<TheAbsentOne> This means that I cannot edit the newly created user to allow him to access from hosts other than the directly connected charm. Hence I'm stuck, does anyone know a workaround?
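The approach TheAbsentOne describes (running a GRANT from inside the directly connected charm via pymysql) would look roughly like the sketch below; `build_grant` is a hypothetical helper, and as noted in the conversation the statement fails because the relation-created user lacks the GRANT OPTION:

```python
# Hypothetical helper: build the GRANT statement for widening host access.
# Running it only succeeds if the connected user holds the GRANT OPTION,
# which the mysql charm's generated users do not.

def build_grant(user, host, database):
    """Return a GRANT giving `user`@`host` full access to `database`."""
    # Identifiers are interpolated for illustration only; real code should
    # validate them rather than trusting relation data blindly.
    return "GRANT ALL PRIVILEGES ON `%s`.* TO '%s'@'%s'" % (database, user, host)

# With pymysql (credentials assumed to come from the relation):
#   import pymysql
#   conn = pymysql.connect(host=db_host, user=db_user, password=db_pass)
#   with conn.cursor() as cur:
#       cur.execute(build_grant(db_user, "10.0.0.99", db_name))
print(build_grant("appuser", "10.0.0.99", "appdb"))
```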
<stub> TheAbsentOne: Either the pgbouncer charm is broken, or you are attempting to connect to the PostgreSQL server directly rather than via the pgbouncer proxy. If the '...' in 'rejects connection for host ...' is the pgbouncer IP address, then the charm is broken.
<TheAbsentOne> stub: when trying to connect with adminer from my charm (directly connected to pgbouncer) to pgbouncer I receive: "Unable to connect to PostgreSQL server: could not connect to server: Connection refused Is the server running on host "10.10.139.97" and accepting TCP/IP connections on port 5432?"
<TheAbsentOne> the portnumber I received however was "6432" unsure if that is the issue
<stub> pgbouncer by default runs on a different port, so if you use a different port than what you were given it will not work.
<TheAbsentOne> a netstat shows it listens on 6432, so it might be adminer here
<TheAbsentOne> stub it was the port, I didn't check it my bad! Just like you said it works from my far away charm as well! A huge thanks man, I'm glad this means I have at least postgres working now <3
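The fix above (use the relation-provided port, pgbouncer's 6432, instead of assuming PostgreSQL's default 5432) can be sketched as a helper that builds the connection string strictly from relation data. The field names assume the pgsql interface and the helper itself is hypothetical:

```python
# Sketch: build a libpq-style connection string from relation data,
# always honouring the port the remote charm handed out (e.g. pgbouncer's
# 6432) rather than hard-coding PostgreSQL's default 5432.

REQUIRED = ("host", "port", "dbname", "user", "password")

def conn_string(rel):
    """Assemble a connection string; fail loudly if relation data is incomplete."""
    missing = [k for k in REQUIRED if not rel.get(k)]
    if missing:
        raise ValueError("relation data incomplete: %s" % ", ".join(missing))
    return ("host={host} port={port} dbname={dbname} "
            "user={user} password={password}").format(**rel)

rel = {"host": "10.10.139.97", "port": "6432", "dbname": "mydb",
       "user": "u", "password": "p"}
print(conn_string(rel))
```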
<TheAbsentOne> You don't have, by any chance, an idea to work around my mysql issue?
<stub> I'm not at all familiar with mysql, and only vaguely with the charm. I think it has an administrative as well as a standard relation endpoint? You might need to use the administrative one if you need more privs
<stub> Devs like their apps to bootstrap their own schema, but DBAs and security analysts like the apps to run with restricted permissions and setup to be done from an isolated system with the higher level of privs.
<TheAbsentOne> yeah I completely understand and I think it's a good security measure stub. I'm using the mysql-shared interface; I'm gonna try with the mysql-root interface now
<TheAbsentOne> I'm just a bit stumped by the sql part, stub. I thought it would be possible for a user to grant himself privileges to add more host support, but it seems that's impossible :/
<TheAbsentOne> anyways big thanks man!
<cablekevin> Question, i can't find a known bug regarding Juju and MAAS (in particular
<cablekevin>  file: "github.com/juju/gomaasapi/controller.go", line 885). The controller forces it to use a wrong URL to get the MAAS version.
<cablekevin> Output of JuJu: https://pastebin.com/VJKHittk
<cablekevin> Pardon me for my crappy copy/paste
<cablekevin> Bootstrapping juju doesn't really want to work for me because of this issue
<cablekevin> The request URL is clearly not correct
<rick_h_> cablekevin: is that the url in your cloud setup? You did an add-cloud for MAAS and told it where the MAAS was? It should verify it can contact the MAAS as part of that add-cloud walk through
<cablekevin> hi rick_h_
<cablekevin> the result of juju show-cloud is: endpoint: http://192.168.188.2/MAAS
<cablekevin> and that is valid
<cablekevin> but what i see is that the endpoint "/MAAS/api/2.0/version" does not work and "/MAAS/api/version" works
<cablekevin> However when juju is bootstrapping i cannot specify that specific endpoint (not that i want to do so)
<rick_h_> cablekevin: what version of Juju is this?
<cablekevin> Maas: 2.3.3 and Juju: 2.3.8
<cablekevin> On Xenial
<rick_h_> cablekevin: hmm http://maas.mydomain/MAAS/api/2.0/version/ does work for me on my MAAS
<rick_h_> cablekevin: it should work with the trailing /
<rick_h_> cablekevin: which is in the log you pasted
<cablekevin> hmmmm let me check
<cablekevin> i can confirm that the trailing "/" makes it work
<rick_h_> cablekevin: so I read the log as saying the juju client in question is having a hard time reaching the IP address, rather than the URL being incorrect
<cablekevin> you are correct
<cablekevin> it's weird, it should be correct
<cablekevin> and it should work actually
<cablekevin> i think my internal/external addresses are being mixed up
<cablekevin> thanks for pointing me in another direction other than what i was gazing at
<rick_h_> cablekevin: np, let me know if we can help in any way. Appreciate you pushing through it.
<cablekevin> it's working now :)
<cablekevin> learned a lot by getting it done the hard way
<cablekevin> really appreciate it rick_h_
<cory_fu_> jamespage: You around?
<jamespage> cory_fu_: yeah sorry sound was muted - missed your ping!
<rick_h_> cablekevin: awesome, glad you got it running
<cory_fu_> jamespage: No worries.  You up for cutting a charmhelpers release?  Or any objection to me doing it?
<jamespage> cory_fu_: please feel free!
<cory_fu_> jamespage: +1  Thanks
<cory_fu_> jamespage: Was going to call this 0.19.0 to follow semver, since goal_state is a new feature, but it's a pretty small release, so just wanted to make sure you didn't prefer 0.18.12
<jamespage> cory_fu_: lets go with 0.19.0
<cory_fu_> jamespage, kwmonroe: 0.19.0 is out
<kwmonroe> +100
<kwmonroe> thx cory_fu_
<kwmonroe> 9 for you
<TheAbsentOne> hey kwmonroe, you don't have, by any chance, a minimal working example for your mysql-root interface that requests a database as well? I'm kinda confused with the documentation :s
<ItsAllRigh> Anyone able to help out with a 'space binding' issue with deployments?
<KingJ> Silly question... can Juju's controller be in a different cloud to the machines it orchestrates? I want to deploy my Juju controller on AWS/GKE etc, but actually deploy to bare-metal machines locally via MAAS.
<wpk> KingJ: Single model needs to be on the same cloud, controller can control multiple models on different clouds. As long as there's connectivity between controller and machines in the model.
<KingJ> So the initial model needs to be in the controller's cloud, but subsequent models that are created can be targeted to any cloud so long as there is connectivity between the controller in cloud A and machines in cloud B? What connectivity is needed - just Machines->Juju plus Juju->Cloud API or does Juju need to be able to reach the machines directly?
<wpk> IIRC you are correct (regarding the connectivity)
<KingJ> Great, thanks for the info :)
<rick_h_> KingJ: no, you have to have the controller on the same cloud so it's using the same provider but can do different regions of that cloud
<rick_h_> KingJ: JAAS provides a cross-public-cloud controller but doesn't work with MAAS, since that's got a lot of requirements to make sure your private network/etc in MAAS can reach out
<rick_h_> and howdy wpk :)
#juju 2018-06-06
<wpk> rick_h_: hm, I was sure this was possible, dunno why. But now, looking at add-model help - yep, it states that you can't have multiple clouds on one controller
<TheAbsentOne> Does the layer-index repo on github have (guaranteed) the most recent interface and charm layers?
<stub> TheAbsentOne: That is what charm build uses to map interface:foo to a repo, so yes
<TheAbsentOne> allright, thx stub!
<kwmonroe> TheAbsentOne: you asked about mysql-root yesterday.  there's really not much different between mysql and mysql-root interfaces -- the latter simply does a more permissive grant.  as an example of a charm that connects to mysql, have a look here: https://github.com/juju-solutions/layer-ubuntu-devenv/blob/master/reactive/ubuntu-devenv.py#L81
<kwmonroe> TheAbsentOne: ubuntu-devenv doesn't actually do anything with the db -- but it does print out the host, db, passwords, etc from the connection string (impl'd here btw: https://github.com/johnsca/juju-relation-mysql/blob/master/requires.py#L40)
<ejat> anyone can assist me with this : https://paste.ubuntu.com/p/nQYZbxvj8G/
<ejat> i think someone has file the bug here https://bugs.launchpad.net/juju/+bug/1761577
<mup> Bug #1761577: Juju does not automatically use the correct default instance type for certain Azure data-centers <azure-provider> <cdk> <cpe-onsite> <k8s> <kubernetes> <juju:Triaged> <https://launchpad.net/bugs/1761577>
<TheAbsentOne> hey kwmonroe thanks man! I was really confused as there is no request function for a database and I read somewhere (but I guess that was wrong) that the mysql-root interface didn't in fact create a database. Really confusing me
<TheAbsentOne> I haven't found the time to further experiment with it, but I would love to get it to work of course. Hopefully tomorrow!
<TheAbsentOne> So if I get it right, as soon as you do add-relation the charm requests a database over mysql-root, right? kwmonroe
<TheAbsentOne> also your second link is the mysql interface right? Not the mysql-root interface?
<kwmonroe> oh yeah TheAbsentOne!  the admin (mysql-root) interface doesn't create a db:  https://github.com/marcoceppi/charm-mysql/blob/master/hooks/db-relation-joined#L70
<kwmonroe> that must be on purpose, but i don't remember why.  maybe it's because we want to do all mysqly things over the mysql interface, and the only time you would ever use mysql-root is for granting privs to the created user.
<kwmonroe> TheAbsentOne: so as soon as the db-relation-joined hook fires (which happens when you add a db relation -- NOT db-admin), the db gets created on line 71 in the above link.
<TheAbsentOne> kwmonroe: hmm it's really annoying tbh x) If I use the mysql or mysql-shared interface I can properly create a database, but I end up with a user that cannot grant himself privileges so that he can access from other hosts as well. And if I use the mysql-root interface I can't properly create a database with a custom database name. :/ I'm kinda stuck
<TheAbsentOne> I tried running sql queries with pymysql to grant privileges to the received user (so that he could connect from other hosts), but that is not possible, as he doesn't have the grant option (if I understand it correctly)
<le_g> Hi everyone !
<le_g> I'm using juju and conjure-up for the first time and I'm trying to deploy a staging k8s cluster on aws, but it fails when I try to deploy with a "juju.errors.JujuAPIError: cannot select controller: no matching controllers found" although i've selected the saas controller on the previous step
<le_g> Has anyone encountered such an error ?
<TheAbsentOne> what docs did you follow, le_g? Might be useful information for the experts here
<le_g> TheAbsentOne it was this : https://jujucharms.com/kubernetes-core/bundle/337
<TheAbsentOne> le_g: and did you install conjure-up and juju via snap? The problem is probably that your controller isn't bootstrapped correctly, or an issue with credentials/provider
<le_g> I tried to install it with conjure-up
<le_g> It seems to work when I create a controller, but I should be using the saas one, which would save me a useless aws machine
<TheAbsentOne> I'm sorry, I don't think I'm able to help you :/ I'm sure others might help you!
<bdx> Ie_g: `juju login jaas`
<bdx> on that machine
<bdx> make sure you can see the jaas controller locally in `juju controllers`, then conjure-up should pick it up
#juju 2018-06-07
<enrico__> hello, I am Enrico and I am new with juju! I come after having installed a working MAAS. I would now like to use juju; I installed juju and configured a cloud in the same maas rack/region controller vm, but when I bootstrap the cloud and it has to install a cloud controller I get this error: ERROR failed to bootstrap model: cannot start bootstrap instance in any availability zone (default, juju)     Please can you help me to debug and figure out what's wrong?
<enrico__> I tried to follow this guide https://docs.openstack.org/charm-deployment-guide/latest/install-juju.html
<rick_h_> enrico__: try the bootstrap with --debug on there and then pastebin what it says
<enrico__> rick_h_ here pls https://pastebin.com/VcfcWpkC
<enrico__> @rick_h_: any idea ?
<kwmonroe> enrico__: do you have a node with 3.5g in your default maas zone?
<enrico__> kwmonroe: no... two 4G nodes, but nothing changes even if I specify juju bootstrap --constraints mem=4G
<kwmonroe> enrico__: yeah, the mem constraint is a minimum, so a machine with 4G will satisfy the ask for 3.5. no need to change anything there.
<jam> manadart: https://bugs.launchpad.net/bugs/1773018
<mup> Bug #1773018: ERROR: juju.worker exited: setting up container dependencies on host machine: could not find unused subnet <juju:Incomplete> <https://launchpad.net/bugs/1773018>
<manadart> Ta.
<thumper> babbageclunk: morning
<jam> babbageclunk: https://www.mongodb.com/presentations/replication-election-and-consensus-algorithm-refinements-for-mongodb-3-2
<jam> babbageclunk: seems to say "inspired by raft"
#juju 2018-06-08
<enrico_> hello.. small problem:  No available machine matches constraints: [('zone', ['default']), ('agent_name', ['f565ba24-62c2-412c-8c75-f56eb9211342']), ('mem', ['3584'])] (resolved to "mem=3584.0 zone=default")
<enrico_> this when I try to bootstrap
<enrico_> does anyone know how to fix it?
<enrico_> I have 1 maas controller (1 cpu 1Gb ram) and 2 nodes (20 cpu and 4 Gb ram)
<enrico_> please help
<pmatulis> enrico_, how can you have a controller when you say you cannot bootstrap?
<enrico_> pmatulis it is a maas controller, not a juju controller
<pmatulis> enrico_, ok
<pmatulis> enrico_, maybe submit the precise command you're using
<enrico_> pmatulis right now I got juju to bootstrap a controller... the problem was that the nodes where I was attempting to bootstrap were in state 'deployed' in MAAS, but for juju to bootstrap it needs the nodes in state 'ready'. Anyway, now from the juju gui I can see the juju controller of the maas environment, but it does not list all the nodes displayed in MAAS, no matter which state (ready or deployed)
<enrico_> pmatulis do you have any idea how to list all nodes of the cloud into juju ?
<pmatulis> enrico_, typically juju becomes aware of machines when it either adds a machine (add-machine command) or deploys a charm (deploy or add-unit command)
<pmatulis> enrico_, i don't use the juju gui. maybe try going through the documentation which emphasises the CLI commands
<enrico_> pmatulis so in order to deploy an application using juju, the machine has to be without an OS?
<pmatulis> enrico_, yes, 'deploy' command provisions the OS, installs & configures the application
<pmatulis> enrico_, in the case of a non-MAAS backing cloud, say AWS, it requests a cloud instance to be created
<pmatulis> enrico_, it would be great if you could go through the Juju docs and provide feedback via a docs issue [1] if something is unclear
<pmatulis> [1]: https://github.com/juju/docs/issues/new
<pmatulis> https://docs.jujucharms.com/
<enrico_> pmatulis thank you
<pmatulis> welcome
<stickupkid> hml: https://github.com/juju/juju/pull/8813
<hml> stickupkid: looking
<stickupkid> hml: it was missing --testcache, when we ran the command. This PR fixes that
<hml> stickupkid: sweet
<hml> stickupkid: is there a way to make it quieter?  my screen scrolled by with crap
#juju 2018-06-09
<KingJ> What's the best way to upgrade my controller/model from 2.4-beta3 to 2.4-rc1? If I run juju upgrade-model -m controller or juju upgrade-juju I just get 'no upgrades available' returned. I've already updated the client to 2.4-rc1
<KingJ> aha juju upgrade-juju --agent-stream=devel seems to be the magic command
#juju 2018-06-10
<KingJ> What's the replacement for juju set? The ceph-osd charm documentation mentions that I can use juju set to add additional OSD devices to a unit but juju set is no longer a recognised command in 2.x
<evilnick> KingJ, `juju config`, see the docs! https://docs.jujucharms.com/2.3/en/charms-config
<KingJ> evilnick: From what I saw of that though, I can only use that to edit the application's config, not an individual unit's config?
<KingJ> Ah hold on. I've just found the juju add-storage command. This might be what I need - as I need to attach different disks to each ceph-osd unit.
<evilnick> Kingj, yeah, you can attach storage to a unit with `add-storage`. Sorry, I didn't see what it was you were trying to do
<KingJ> Yeah, my bad - I didn't explain what I was trying to do too well
<KingJ> Aha, i've now run in to https://bugs.launchpad.net/juju/+bug/1691694 . Going to have to think about the best way to work around this.
<mup> Bug #1691694: Support for MAAS storage binding in bundles <bundles> <cpe-onsite> <new-york> <juju:Triaged> <https://launchpad.net/bugs/1691694>
<KingJ> Likely avoid defining the machines in the bundle, ensure that ceph-osd is the first defined entity in the bundle, and deploy to all machines.
#juju 2020-06-01
<hpidcock> correct
<hpidcock> also there are only 119 tests in that package
<hpidcock> the 1 failed is a non test
<wallyworld> could be teardown then
<hpidcock> "applicationSuite.TestSetApplicationStatus"
<hpidcock> some sort of race, calling with -check.vv causes it to not fail
<wallyworld> hmmm, ok, i'll see if anything jumps out
<hpidcock> wallyworld: found it
<hpidcock> func (s *applicationSuite) SetUpTest(c *gc.C) { shouldn't be using c from the SetUpTest
<hpidcock> for apiCaller
<wallyworld> good catch, i can fix as a driveby. that gc.C scoping issue sucks
<hpidcock> wallyworld: awesome thanks
<hpidcock> wallyworld: prepping for 2.8 release, I think its time to release the juju-db snap
<hpidcock> any objections?
<hpidcock> wallyworld: https://github.com/juju/juju/commit/8d1435b9f241c23a33f81bc09d6683373910c9d9#r39571648
<wallyworld> hpidcock: oh ffs, can't believe i did that, sigh
<wallyworld> no objections to releasing juju-db snap
<hpidcock> not your fault, the merge job doesn't run linters
<hpidcock> adding linters to merge job so we can't land bad code
<wallyworld> at least it's not fatal and it's only for testing
<wallyworld> worst that can happen is a developer won't see the testing they want
<hpidcock> small pr https://github.com/juju/juju/pull/11649
<tlm> looking hpidcock
<kelvinliu> wallyworld: could u take a look at this PR for the eks provider? https://github.com/juju/juju/pull/11624
<wallyworld> kelvinliu: looking
<kelvinliu> ty
<wallyworld> kelvinliu: seems ok to me, ty
<kelvinliu> wallyworld: ty and the other 2 PRs, plz
<wallyworld> oh, sorry, looking
<kelvinliu> nws
<wallyworld> kelvinliu: done
<kelvinliu> ty
<jamespage> o/
<manadart> stickupkid achilleasa: https://github.com/juju/juju/pull/11650
<stub> Has juju 2.8 stopped automatically invoking the config-changed hook after install ?
<stub> nope, framework didn't create the hooks/config-changed stub :-/
 * stub updates for dispatch
<rick_h> stub:  that happened in 2.7 I believe
<rick_h> stub:  oh nvm, not after install, after something else
<rick_h> guild just a heads up giumaas should be all released and thanks for all the help and support last week
<petevg> rick_h: cool cool. Thank you for all the work on the team's behalf last week :-)
 * rick_h slips back into the shadows
<rick_h> petevg:  same to you!
<rick_h> petevg:  only used that bundle about 12 different times in different ways, solid one. Useful functional test one, in my experience
<petevg> Nice! Glad it was useful :-)
<petevg> Props to the OpenStack team for the bundle. I just stole one of their test bundles and modified it a bit.
<stickupkid> sigh TLS issues
<pmatulis> how do i see the available revisions of a charm?
<hpidcock> re-review please https://github.com/juju/juju/pull/11649
<tlm> i can take a look
<hpidcock> tlm its just manadart's changes verbatim
<tlm> roger
<tlm> approved hpidcock but may need mandart to approve as well by the look of the github interface
<hpidcock> tlm: I just dismissed his review so shouldn't block landing
<tlm> getting so many random test failures in CI hpidcock :|
<wallyworld> hpidcock: https://github.com/juju/juju/pull/11653
<hpidcock> wallyworld: looking
<hpidcock> wallyworld: one comment
<wallyworld> ta
#juju 2020-06-02
<hpidcock> wallyworld: also a bug now that you are checking for errors, invalid JUJU_AGENT_TESTING_OPTIONS env value "": expected "key=value", got ""
<josephillips> hey
<josephillips> how i can know when a charm is upgraded
<josephillips> ?
<pmatulis> josephillips, when the revision number changes. it can be seen in juju status output. i guess you need to know at what revision the charm is at first
<josephillips> yep
<josephillips> is for one reason everything was working fine
<josephillips> today i found openstack is not working correctly
<josephillips> when i'm trying to perform a resize i'm getting an error: failed to connect using the ip of nova-cloud-controller with the port of placement
<josephillips> and i want to make sure was a update that broke something
<josephillips> keystoneauth1.exceptions.connection.ConnectFailure: Unable to establish connection to http://172.19.1.26:8778/resource_providers?in_tree=4abf4202-4da9-47e0-a5e5-70bbe5200265: HTTPConnectionPool(host='172.19.1.26', port=8778): Max retries exceeded with url: /resource_providers?in_tree=4abf4202-4da9-47e0-a5e5-70bbe5200265 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at
<josephillips> 0x7f9afc060320>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',)) |
<josephillips> im getting this error
<josephillips> the problem is the ip there is for nova-cloud-controller but with the placement port
<hpidcock> review please https://github.com/juju/juju/pull/11654
<tlm> looking hpidcock
<tlm> I was already working on this :)
<hpidcock> oh sorry, for some reason I thought this was a different one
<tlm> this is to patch the systemctl calls during tests ?
<tlm> looks like it is
<manadart> achilleasa: I made a slight change to the behaviour of https://github.com/juju/juju/pull/11638, if you have time for another look.
<achilleasa> manadart: looking in a few min
<manadart> stickupkid: Can you tick this one for 2.8? https://github.com/juju/juju/pull/11659
<achilleasa> stickupkid: https://github.com/juju/juju/pull/11660
<seffyroff> hey folks :)
<seffyroff> anyone know which channel maas is in these days? #maas doesn't seem to be joinable?
<hml> seffyroff: have you tried https://discourse.maas.io/t/welcome-to-discourse/8?  a lot of conversations are moving there
<seffyroff> ah ok, sure.  I am on there, but I miss the IRC-ness
<hml> seffyroff:  okay, let me ask if there is an alternative
<seffyroff> much obliged sir!
<seffyroff> the discourse board is fine tbh, I'll make myself a nuisance there
<seffyroff> Appreciate the info :)
<hml> seffyroff: there isn't a #maas on freenode at this time, nor a replacement.  the directive is to use discourse.
<seffyroff> gotcha, thanks so much for your help :)
#juju 2020-06-03
<kelvinliu> wallyworld: got this pr for the oci image fetching thing, could u take a look? thanks https://github.com/juju/juju/pull/11662
<wallyworld> yup, looking
<wallyworld> kelvinliu: can we log a message saying that microk8s was not ready if we reach the timeout?
<wallyworld> also, can we call a microk8s command to see if the microk8s snap is still coming up, so we can maybe know to wait a little longer
<kelvinliu> wallyworld: it's because microk8s needs root to run
<kelvinliu> no point to wait in this case
<wallyworld> kelvinliu: are you saying that microk8s would have been broken even if juju were not there?
<kelvinliu> https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/actions/common/utils.sh#L7 $USER not being set was the error
<wallyworld> ok. as an optimisation later, would be good for the juju snap hook to check if the user is in the microk8s group at least, so we avoid waiting for 30s unnecessarily
<kelvinliu> microk8s wasn't broken, it's just that it wasn't accessible for non-root users at that time
<wallyworld> gotcha
<kelvinliu> we are not waiting for 30s
<wallyworld> ah right it exits immediately
<kelvinliu> we exit 0 immediately if we got any errors
<kelvinliu> yeah
<kelvinliu> we just exit immediately if we got any errors
<kelvinliu> not only permission error
<wallyworld> lgtm, with a suggestion for a message
<kelvinliu> ty
<stub> Is it a bug or by design that I can't deploy a charm with no hooks/*, just a dispatch script in $CHARM_DIR ?
<stub> Or am I misinterpreting how dispatch works in 2.8 ?
<thumper> stub: it is a bug
<thumper> a bug that I thought we had caught already
<stub> Think it was rc3
<stub> I can reproduce if you want
<thumper> stub: if it is in rc3, it is in .0
<stub> thumper: reproduced with rc3
<stub> want a bug?
<thumper> stub: please
<stub> https://bugs.launchpad.net/juju/+bug/1881839
<mup> Bug #1881839: dispatch-only charms cannot be deployed, hooks/ directory required <juju:New> <https://launchpad.net/bugs/1881839>
<thumper> ta
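For context on the bug stub filed: under Juju 2.8's dispatch mechanism, the agent runs a single `./dispatch` executable with `JUJU_DISPATCH_PATH` identifying the event (e.g. "hooks/install"). A rough standalone sketch of how a dispatch-only entry point might route events; the handler names are illustrative, not Juju API:

```python
#!/usr/bin/env python3
# Sketch: route a Juju 2.8 dispatch invocation to a handler based on
# JUJU_DISPATCH_PATH (e.g. "hooks/config-changed" -> "config-changed").
import os

def route(dispatch_path, handlers):
    """Return the handler for a dispatch path, or None if unhandled."""
    hook = dispatch_path.rpartition("/")[2]
    return handlers.get(hook)

if __name__ == "__main__":
    handlers = {
        "install": lambda: print("installing"),
        "config-changed": lambda: print("reconfiguring"),
    }
    handler = route(os.environ.get("JUJU_DISPATCH_PATH", ""), handlers)
    if handler:
        handler()
```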
<thumper> hpidcock: if you get a moment, I'd like your thoughts on https://github.com/juju/juju/pull/11660
<hpidcock> thumper: I'm not sure what tests were failing
<hpidcock> we really need to move to ecdsa
<hpidcock> key generation is fast enough + very secure
<hpidcock> but that is really only possible in 2.8 with tlm's changes to pki
<thumper> hmm
<thumper> a very simple bug fix https://github.com/juju/juju/pull/11665 for a race test failure
<stickupkid> manadart, you around?
<manadart> Yeah, but I need 20 mins or so before I can do a HO.
<stickupkid> manadart, I need to nip out anyway for an early lunch, but the reason we don't land in the correct space with --bind is because we pick the wrong AZ
<stickupkid> manadart, in the StartInstanceParams AvailabilityZone the selection is wrong, just wanted to pick your brains about potentials of why
<stickupkid> manadart, the code doesn't look like it's changed from what I can see, so obviously missing something
<manadart> stickupkid: But using constraints works. Mechanism for using bind should be the same. Anyway, I'll look and we can chat in a bit.
<stickupkid> manadart, that's what I would have thought, but clearly not
<stickupkid> manadart, ping
<manadart> stickupkid: Pong.
<stickupkid> daily mc daily
<stickupkid> https://github.com/CanonicalLtd/juju-qa-jenkins/pull/456
<Eryn_1983_FL> hey guys
<Eryn_1983_FL> how would i figure out why a node failed commission
<Eryn_1983_FL> what/where are the logs
<stickupkid> juju status <- will tell you the over all issue
<stickupkid> juju debug-log --replay --no-tail <- will give you the logs for the model
<stickupkid> juju debug-log --replay --no-tail -m controller <- will give the logs for the controller
<stickupkid> Eryn_1983_FL, ^
<Eryn_1983_FL> ok
<Eryn_1983_FL> well i think its cert issues and ovn issues,
<Eryn_1983_FL> fun times..
<Eryn_1983_FL> is Ubuntu 20.04 LTS good for openstack
<stickupkid> manadart, https://github.com/juju/juju/pull/11667
<stickupkid> that cleans up the INFAN networking
#juju 2020-06-04
<wallyworld> thumper: whenever.... https://github.com/juju/juju/pull/11668
 * thumper looks
 * thumper looks briefly, and decides to look in more detail later
<Chipaca> 'juju status' (with 2.7.6 installed from snap) freezes apparently forever, is there a trick to it?
<stickupkid> Chipaca, that normally happens if it can't communicate with the controller
<stickupkid> Chipaca, there are other times when this can happen also as well, but that would be where I'd look first
<Chipaca> stickupkid: it's just a local thing talking to an lxd, does that make a difference?
<stickupkid> Chipaca, same scenario. What does `lxc list` say
<Chipaca> stickupkid: so, this started happening yesterday, and as this is only used by me for testing stuff, i purged juju and lxd and reinstalled; 'lxc list' lists nothing, but 'juju bootstrap localhost overlord' tells me 'ERROR controller "overlord" already exists'
 * Chipaca is probably breaking all kinds of stuff in his ignorance
<stickupkid> Chipaca, so you don't have a controller, but juju thinks there still is one. You have two options... clean up the ~/.local/share/juju yaml files or run `juju controllers --format=json | jq -r ".controllers | keys[]" | xargs -I% juju kill-controller -y -t 0 %`
<stickupkid> it assumes you have `jq` installed.
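A sketch of what that pipeline extracts, run against a hypothetical sample of `juju controllers --format=json` output (the real output carries many more fields; only the top-level `controllers` map is assumed here):

```shell
# Hypothetical sample of `juju controllers --format=json` output;
# only the shape of the top-level "controllers" map matters.
sample='{"controllers":{"overlord":{"uuid":"aaaa"},"test":{"uuid":"bbbb"}}}'

# jq pulls out each controller name (keys are emitted sorted);
# xargs then runs `juju kill-controller -y -t 0 <name>` once per name.
echo "$sample" | jq -r '.controllers | keys[]'
```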
<Chipaca> why would the juju snap keep state outside of ~/snap :-/
<Chipaca> sigh
<stickupkid> --classic
<stickupkid> Chipaca, we don't support confinement yet, we do have it on our roadmap... priorities I guess ;-)
<Chipaca> i know, but you're just making it harder for yourselves :)
<Chipaca> anyway, thanks, that's exactly the problem
<Chipaca> i'll nuke that and start afresh \o/
<Chipaca> stickupkid: thanks!
<wallyworld> stickupkid: juju unregister
<wallyworld> will get rid of an orphaned controller entry
<wallyworld> no direct editing of controllers.yaml needed
<stickupkid> wallyworld, haha, knew there was another command
<Chipaca> FWIW the jq pipeline didn't work so I went for removing ~/.local/share/juju
<Chipaca> Unable to open API: open connection timed out
<Chipaca> WARNING unknown config field "lxd-snap-channel"
<Chipaca> ERROR getting controller environ: getting environ using bootstrap config from client store: Get https://10.160.185.1:8443/1.0: Unable to connect to: 10.160.185.1:8443
<Chipaca> ~
<stickupkid> achilleasa, this is new --> WARNING unknown config field "lxd-snap-channel" - when trying to remove a controller
<achilleasa> stickupkid: that's a side-effect of how we access these settings in the workers. TLDR: we pop them off the map when accessed and complain about remaining entries. We do that here (https://github.com/juju/juju/blob/develop/worker/provisioner/container_initialisation.go#L207) for this field but this (https://github.com/juju/juju/blob/develop/container/lxd/manager.go#L78) doesn't, hence the warning.
<achilleasa> we could add a dummy pop in the latter file to silence the warning but I didn't add it as part of my commit because it looked weird. We can do that as a drive-by fix in any future PR
<stickupkid> manadart, ping
<manadart> stickupkid: Pong; just starting a call.
<stickupkid> manadart, ping when you're free
<manadart> stickupkid: I'm in Daily.
<Eryn_1983_FL> hey guys
<achilleasa> manadart: what is the current status on spaces and manual machines?
<Eryn_1983_FL> whats the default username/password for maas and juju?
<Eryn_1983_FL> for ssh
<Eryn_1983_FL> im locked out atm
<manadart> achilleasa: Currently unsupported; enabling it is on this cycle's list of work. It just requires populating subnets from the machine NICs.
<Eryn_1983_FL> manadart:  if i got into rescue mode can i get to a console?
<manadart> Eryn_1983_FL: There's no default. Juju sets up its own key in authorized_keys, which allows `juju ssh <machine-id>`.
<Eryn_1983_FL> i dont have ssh to juju
<manadart> Or `juju ssh <unit>` for that matter.
<manadart> Eryn_1983_FL: You have no client connectivity to your Juju controller?
<Eryn_1983_FL> i got the web gui
<Eryn_1983_FL> just dont understand why my keys are not working
<Eryn_1983_FL> time to reinstall again
<achilleasa> quick CR https://github.com/juju/utils/pull/312 anyone? needed to fix the broken tests on develop
<stickupkid> manadart, https://bugs.launchpad.net/juju/+bug/1882097
<mup> Bug #1882097: Needless brute forcing with finding AZ in space topology in the provisioner task <juju:Triaged> <https://launchpad.net/bugs/1882097>
<manadart> stickupkid: excellent
<achilleasa> Can someone review and run the "straightforward" QA steps for https://github.com/juju/juju/pull/11670?
<stickupkid> achilleasa, I'm stealing your PR for now (sorry for the noise)
<achilleasa> no worries. I want to force-push a fix for one of the tests. Let me know when you need a new !!build!! in there ;-)
<stickupkid> achilleasa, do it whenever, I'm breaking stuff atm
<pmatulis> has something changed in 2.8 that may affect deploying bundles? seems slower going through all bundle lines
<pmatulis> really slower
<thumper> pmatulis: yes, the multiwatcher code was refactored, and it does slightly more now
<thumper> pmatulis: I'm currently looking into it to see what can be done
<thumper> this is the basis of a watcher that bundle deploys use
<pmatulis> thumper, ok thanks. what else does it do? more validation?
<thumper> pmatulis: there shouldn't be a measurable difference between 2.7 and 2.8. If there is, we need to work out why
<thumper> pmatulis: if you were able to show how different it is, that would be very helpful
<thumper> https://github.com/juju/collections/pull/4
<pmatulis> thumper, ok, will do
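A minimal way to produce the numbers thumper asked for: time the same bundle deploy against a 2.7 and a 2.8 controller and compare. A sketch, with `sleep 1` standing in for the real (hypothetical) `juju deploy ./bundle.yaml` invocation:

```shell
# Timing harness sketch; swap `sleep 1` for the actual deploy command,
# e.g. `juju deploy ./bundle.yaml`, run once per controller version.
start=$(date +%s)
sleep 1   # stand-in for: juju deploy ./bundle.yaml
end=$(date +%s)
echo "deploy took $((end - start))s"
```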
#juju 2020-06-05
<wallyworld_> thumper: here's the forward port https://github.com/juju/juju/pull/11672
<wallyworld_> thumper: re bug 1882025, i don't think it's unreasonable to have "to" in bundles have the same semantics as "--to" in deploy
<mup> Bug #1882025: Specific machine placement by hostname in bundle <juju:New> <https://launchpad.net/bugs/1882025>
<tlm> wallyworld_: got 5 min for HO ?
<wallyworld_> tlm: just need a couple of minutes
<tlm> no rush
<wallyworld_> tlm: jump in standup?
<tlm> k
<wallyworld_> stickupkid: if i land a facade bump for a facade used by the CLI, with a brand new method added, will libjuju break? I am hoping that the new method simply won't be usable initially
<stickupkid> wallyworld_, nah
<stickupkid> wallyworld_, it does version negotiation now, go wild
<stickupkid> unless you're using a REALLY old version of libjuju, where the fix is upgrade libjuju
<wallyworld_> stickupkid: ta, i was hoping that was the case
<wallyworld_> stickupkid: we landed all the 2.7 patches except the dep update as there was an issue with the s3 copy of the deps
<stickupkid> wallyworld_, ah, ok
<wallyworld_> didn't look too closely as there was/is a bit of a backlog
<wallyworld_> and other prs got landed anyway
<stickupkid> I'll see if I can get a chance today
#juju 2020-06-07
<hpidcock> wallyworld: https://github.com/juju/juju/pull/11675 + https://github.com/juju/testing/pull/151
