#juju 2012-07-16
<SpamapS> hazmat: right, any juju after "reboot" support cannot spawn nodes for a juju before reboot support.
<SpamapS> hazmat: so distro support is incompatible in oneiric
<vladas> hello, can someone help me?
<vladas> I'm trying to deploy openstack on vm's and got weird error.
<vladas> I have deployed all charms successfully, except nova-volume and openstack-dashboard
<vladas> I'm getting these errors:
<vladas> Error processing 'cs:precise/nova-volume': entry not found
<vladas> Error processing 'cs:precise/openstack-dashboard': entry not found
<vladas> Any ideas?
<hazmat> vladas, those charms are not in the store
<hazmat> vladas, they need to be deployed from bzr
<vladas> Oh, i see. Thanks, will try that.
<SpamapS> gooooooooooood morning juju town!
<mramm> gooood morning to you!  (sorry for the reduction in O's)
<SpamapS> Its like an echo.. it can't sustain.. ;)
<marcoceppi> gd morning
<SpamapS> marcoceppi: done with wordpress? Are you done yet? How about now? done..?
<SpamapS> ;)
<marcoceppi> no yes! nope not yet
<marcoceppi> it's in bzr now though
<SpamapS> Ugh that reminds me I need to finish nagging the last of the un-maintained charms
<koolheadd17> SpamapS, met someone at cls12 who is working on newest owncloud charm
<SpamapS> koolheadd17: oh?
<SpamapS> koolheadd17: did you hand it off to them or they just wanted to make it better?
<koolheadd17> SpamapS, he wanted to make it better
<SpamapS> jimbaker: jitsu --help looks *amazing* now btw. THANK YOU
<SpamapS> koolheadd17: *awesome*
<koolheadd17> he is working on integrating nfs based support too i guess bkerensa knows him
<SpamapS> cool
<koolheadd17> i have few charms in baking state though :)
<SpamapS> cool
<negronjl> 'morning all
<SpamapS> I'm working on making nagios more awesome
<koolheadd17> SpamapS, waoo :)
<SpamapS> and then hopefully collectd too
<koolheadd17> SpamapS, is someone working on Monit?
<SpamapS> koolheadd17: https://bugs.launchpad.net/charms/  ... look there
<koolheadd17> k
<koolheadd17> SpamapS, so jcastro google doc list and all is primitive now
<SpamapS> koolheadd17: it has been dead for months now
<SpamapS> it should have been deleted if it hasn't been already
<koolheadd17> ejat, around
<ejat> koolheadd17 : yups ..
<koolheadd17> ejat, are you coming to china for openstack asia pacific event
<ejat> hmmm not sure yet …
<ejat> how about u ? confirm ?
<koolheadd17> not yet, need to talk to my manager on the same and visa might be difficult
<koolheadd17> ejat, you should definitely go there
 * ejat in the midst of changing employment … so could not give a say yet .. but hopefully …
<koolheadd17> ejat, ol
<koolheadd17> k
<ejat> SpamapS : is there any tutorial to deploy OS using juju after installing via MAAS ?
 * ejat means for production environment .. 
<ejat> or how to wiki .. .
<SpamapS> ejat: yes there's something on wiki.ubuntu.com
<ejat> https://help.ubuntu.com/community/UbuntuCloudInfrastructure <-- ok thanks
<jimbaker> SpamapS, thanks. over at oscon with m_3 working on demo stuff
<SpamapS> jimbaker: woot
<bkerensa> m_3: so the dependency should not be tight according to the upstream author so I removed the ppa portion
<hazmat> jimbaker, m_3 rackspace is a fail btw
<hazmat> jimbaker, m_3 they implement their own version of credential passing
<hazmat> instead of keystone v2, even fixing that there are some other issues
<hazmat> jimbaker, m_3 i'd go with ec2 or hpcloud
<m_3> hazmat: I love you man... `jitsu export | jitsu import -enewenv -` worked
<m_3> bkerensa: thanks
<negronjl> SpamapS: got a chance to look further re: haproxy-overhauled ?
<SpamapS> negronjl: I want to wrap up this nagios thing I did, and then I'll dive back into it
<negronjl> SpamapS: you hate me :)
<SpamapS> hazmat: wtf?! no keystone?
<SpamapS> negronjl: because you're beautiful
<_mup_> Bug #1025382 was filed: Add a generic constraint "persistent-root" <juju:New> < https://launchpad.net/bugs/1025382 >
<hazmat> SpamapS, its a bastardized keystone
<hazmat> SpamapS, mostly its just different credential passing
<SpamapS> hazmat: ahh the fun of apache licensing
<hazmat> SpamapS, openstack is a kit for building your own snowflakes ;-)
<SpamapS> "Lets create a single place to dump code that nobody runs"
<SpamapS> btw.. 'juju status | ccze -A' is purty
<SpamapS> we should like, build in --color to juju cli
<hazmat> SpamapS, don't even get me started on how broken rspace is about their client tools.. nova client has had broken packages for months, and their swift client doesn't work with their custom keystone either.
<SpamapS> 'started' is green, 'error' is red..
<hazmat> nice
 * hazmat should file a bug on status improvements
<SpamapS> hostnames are blue
<SpamapS> and known things are purple.. which is weird.. nrpe and mysql are purple.. but nagios is not
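The `juju status | ccze -A` trick, and the `--color` idea above, amount to wrapping known state words in ANSI escapes. A minimal, hypothetical sketch of that mapping (this is not part of the juju CLI):

```python
# Hypothetical sketch of a `juju status --color` filter: wrap known
# unit states in ANSI colour codes, roughly what `ccze -A` does.
GREEN, RED, RESET = "\033[32m", "\033[31m", "\033[0m"

def colorize_status(line):
    """Highlight states in one `juju status` line: started=green, error=red."""
    for word, color in (("started", GREEN), ("error", RED)):
        line = line.replace(word, color + word + RESET)
    return line
```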
<SpamapS> wow
<SpamapS> jitsu export is.. like, everything I've been looking for
<SpamapS> ^5's hazmat
<hazmat> SpamapS, :-)
<SpamapS> That should be like, feature #1 to go into goju
<hazmat> SpamapS, i haven't pimped it out to folks yet, cause people hates features..
<hazmat> but perhaps you guys can
<SpamapS> Its enough to prove the concept is valid
<SpamapS> we can move forward from there once we have a permanent home for the feature :)
<hazmat> SpamapS, this is pretty much the  exact syntax from the austin/lxc sprint ml discussion
<hazmat> bcsaller, is the repairs branch ready for review?
<bcsaller> hazmat: since we are not supporting restarts I was just testing changes that use start-ephemeral to see if its faster for people
<hazmat> bcsaller, pls stop adding additional features to the bug fix, its much, much more important to release the fix for the original bug than add extraneous things.
<hazmat> moreover most of those things are inappropriate for an SRU
<hazmat> and at the very least need to have separate bugs/documentation
<bcsaller> hey, before that bug you were the one that had me working on this ;)
<hazmat> bcsaller, i wanted a fix for the bug, not for any of the other things that are in that branch
<hazmat> some of those things can definitely go into an SRU, but they're still more appropriate and better documented with separate branches and bugs.
<hazmat> and specifically for the download feedback, that was in the context of informing of downloads when doing cloud-images, not tying the existing ma into desktop-notifications.
<hazmat> feels much more like pending stuff that's been discussed and you decided to put into this branch, but it was never part of the expectation for the bug in question
<SpamapS> a good SRU btw, would be to *just* remove the 'start on' from the upstart jobs
 * hazmat nods
<SpamapS> so you can land whatever fix you want
<SpamapS> Once its fixed in quantal, I can do a much simpler patch for SRU :)
<hazmat> actually for the upstart jobs, its better to just not have them if it doesn't survive restart and just fork the relevant processes.
<hazmat> rather than dropping files into sys dirs that are irrelevant
<SpamapS> yes that would be great
<SpamapS> tho its nice to be able to easily restart them
<SpamapS> hazmat: upstart has a notion of user jobs
<SpamapS> but I think they're off by default IIRC
<SpamapS> hrm, getting lots of tracebacks when running relation-get -r$relid in an upgrade-charm hook
<SpamapS> http://paste.ubuntu.com/1095430/
<SpamapS> Ok, I *think* I have working nagios+nrpe in a generic fashion...
<SpamapS> to the point where this is all thats needed to add monitoring to wordpress:
<SpamapS> http://paste.ubuntu.com/1095441/
<james_w> cool
<SpamapS> james_w: I haven't forgotten custom nagios plugin support either :)
<james_w> :-)
<SpamapS> james_w: lp:~clint-fewbar/charms/precise/nrpe/trunk ... but.. I'd wait for my README updates and blog post.. there's a *LOT* there :)
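The pastes above aren't reproduced here, but the shape of a subordinate charm like nrpe is set in its metadata.yaml. A rough, unverified sketch (interface and field names are assumptions, not copied from the actual charm):

```yaml
# Rough sketch only -- values are illustrative, not from the real nrpe charm.
name: nrpe
subordinate: true
requires:
  general-info:
    interface: juju-info
    scope: container   # deploys the subordinate into the principal's container
provides:
  monitors:
    interface: monitors   # the relation a nagios service would consume
```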
<SpamapS> time for lunch
<dpb___> is there a limit to the number of lxc clients that juju could effectively start?
<SpamapS> dpb___: ZooKeeper keeps everything in RAM.. and each one needs at least 400MB of disk space...
<SpamapS> dpb___: so, yes, your box will crumble well before any logical limits are reached
<dpb___> oh, nice.
<SpamapS> dpb___: I've done 7 at a time before.. it made my SSD feel slow. :)
<SpamapS> All that dpkg :-P
<dpb___> ya, I just tried to spin up 10 and its zookeeper is throwing errors in debug-log
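The 400MB-per-container figure cited above makes the local-provider math easy to sketch (a back-of-envelope helper; 400MB is the rough minimum from the conversation, not a hard limit):

```python
MIN_DISK_MB = 400  # rough per-container minimum cited above, not a hard limit

def local_footprint_mb(n_containers, per_container_mb=MIN_DISK_MB):
    """Back-of-envelope disk estimate for n LXC units on the local provider."""
    return n_containers * per_container_mb

# 10 containers would need at least ~4GB of disk, before ZooKeeper's
# keep-everything-in-RAM behaviour is even considered.
```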
<dpb___> new one for me: 2012-07-16 22:08:28,265 ERROR Invalid Remote Path provider-state
<imbrandon> SpamapS: even the local ZK keeps a full FS and not just pointers to it ? so ... in LXC a single NRPE deploy would be 1.2GB of disk just to store the charm
<imbrandon> ... that seems wrong.
<dpb___> ok, for future reference, data-dir cannot do ~ substitution.  :)
<imbrandon> ( also seems wrong there is 400MB of data in the charm ... )
<SpamapS> imbrandon: no it doesn't keep *EVERYTHING* in it
<imbrandon> actually more than 2.4GB ... 400 in the local:repo ~/.juju/charmname 400 more and then ZK bootstrap lxc has 400mb on the zk fs and then it deploys to another lxc that adds 400 in /var/lib/juju/service/* and finally another in /mnt or wherever its running from the final 400mb...
<SpamapS> imbrandon: it keeps everything that is in the ZK tree in RAM
<imbrandon> right i'm just saying a 400mb charm is 2.4gb deployed in LXC
<imbrandon> seems ... off
<imbrandon> SpamapS: btw got me a shiney new fileserver powered on and in the spare bedroom ( my in-home DC , lol )
<imbrandon> now i have enough disks spun up and in the right manner that i can migrate things from 4 other machines I have around doing various things all to that one + any minor crons etc they ran, and I've already purchased a rsync.net account just to offsite backup that one box GREATLY simplifying by @home setup that has become a frankenstein over the years ... one server, running minimal services with 12tb of raided storage and a offsite backup ready to g
<imbrandon> BUT the major win out of the whole thing , and what caused me to finally do it this weekend ? heheh in-house MAAS with OpenStack on 5 nodes ( got a lil laptop to use as the controller ) with Gigabit speeds
<imbrandon> woot woot
<imbrandon> :)
<imbrandon> hoping by Wed or so I'll have everything copied into place and verified etc etc + the boxes reprovisioned fo sum fun hehh
<imbrandon> ( i keep off site backups of family members data too so i cant afford to goof it up , lol )
<SpamapS> imbrandon: where do you get 2.4gb from 400MB?
<SpamapS> imbrandon: you want CoW for the charm on local? Thats a bit far reaching. ;)
<SpamapS> imbrandon: some day.. not today
<imbrandon> SpamapS: yea , i know its not fixable ( well not in the current context, other things need to fall into place first )
<imbrandon> just had no real strong feeling about more data in the charm than config templates until just now tho
<imbrandon> and i realized that
<SpamapS> 400Mb would be a bit weird
<imbrandon> i mean i didn't like it but was like meh, now it seems like a very very bad idea
<imbrandon> and yea 2.4
<imbrandon> count it with me , maybe i'm wrong ...
<SpamapS> 400 in the charm.. 400 into the file storage.. 400 on the disk layed down.. thats 1.2G not 2.4
<imbrandon> ok so you have /var/lib/charm/400.mb   then you juju bootstrap and it copies it to ~/.juju/400.mb and then again to the lxc zk/400.mb then you "juju deploy charm" and zk copies it to new node at /var/lib/charm/400.mb and then the hooks fire and unpack it to /mnt/400.mb
<imbrandon> whats that add up ... all on your laptop ... but even on ec2 thats a bit extreme
<imbrandon> oh, and there is also a cached copy in s3 as well ...
<SpamapS> imbrandon: I don't believe it copies it to ~/.juju if its a local:
<SpamapS> imbrandon: and it never copies the charm into zk
<SpamapS> zk is structure only
<SpamapS> imbrandon: /var/lib/charm also doesn't exist :)
<imbrandon> ahh ok so i was mixing up my zk/400.mb with s3/400.mb
<imbrandon> and thats where i deploy local: form
<imbrandon> from* so just used it in my head
 * imbrandon echo "JUJU_REPOSITORY=/var/lib/charms/precise/" >> /etc/profile.d/juju-charms 
<imbrandon> :)
<SpamapS> Hrm, bug in subordinates
<imbrandon> nice clean place outside of my ~/Projects/juju/charms/ to let "charm getall" live and not mix unintentionally with the ones in Projects i'm actively developing :)
<SpamapS> if I destroy the primary service.. I never see a depart for the unit of the subordinate
<imbrandon> hrm
<imbrandon> race ?
<SpamapS> something like that
<SpamapS> 2012-07-16 16:17:03,362 unit:nrpe/5: statemachine DEBUG: relationworkflowstate: transition complete depart (state departed) {}
<imbrandon> i never took the time to dig into pyjuju enough to be effective even trying to look since i'm just concentrating on other things waiting for go
<SpamapS> Thats the depart from the sub<->primary relationship
<imbrandon> ok ...
<SpamapS> so its possible it just commits hari-kari and never cleans anything else up
<imbrandon> so thats right ...
<imbrandon> oh well sure , well kinda
<SpamapS> it should be departing all of its non-primary relations first
<imbrandon> isnt that the point of its async callbacks, that it can do it anytime, so it is right? its just sloppy
<imbrandon> like i said not dug enough into this bit to be really informed
<imbrandon> just thought thats how it was tho
<SpamapS> Yeah unless one of your callbacks does its own little self-suicide instead of telling the reactor to exit when its all done
<imbrandon> right, so its a bug but not quite the same kind
<SpamapS> this is all speculation
<imbrandon> just a bad implementation not accounting for any order
<SpamapS> I have a reproducible problem
<imbrandon> right right
<SpamapS> which causes my nagios stuff to never clean up after itself :-/
<imbrandon> just trying to help ya play strawman :)
<SpamapS> which sucks
<imbrandon> but you destroyed it
<imbrandon> why care ?
<SpamapS> because I've been able to do it without regenerating the whole nagios config every time
<SpamapS> I destroyed it
<SpamapS> so now I need to remove it from the nagios configs
<imbrandon> oh THAT hook isnt going ?
<SpamapS> thats the hook that isn't going
<imbrandon> but i thought that they all fire just late ...
<imbrandon> or ... ok one sec let me re-read that above
<imbrandon> confused myself i think lol
<SpamapS> there's a relationship between nrpe<->nagios .. and one between  wordpress<->nagios .. and when I destroy wordpress, nrpe/X is gone.. but never departs from the nagios<->nrpe relationship
<imbrandon> ummm
<imbrandon> it shouldn't
<imbrandon> oh wait ...
<imbrandon> it /should/
<imbrandon> but didn't you tell me before that that might happen
<imbrandon> with something i did in the 1st newrelic ones
<_mup_> Bug #1025478 was filed: destroying a primary service does not cause subordinate to depart <juju:New> < https://launchpad.net/bugs/1025478 >
<imbrandon> one*
<SpamapS> yes but in this case the relation hasn't been broken.. nrpe and nagios are still related.. just a *unit* departed
<imbrandon> right, i didn't care cuz i dont work without it
<imbrandon> you could still try to be working ... if they both reported to the same nrpe ... but honestly thats the incorrect way to deploy nagios i was taught
<imbrandon> here let me run through a line of how i was shown to deploy nagios when first learning about it at GSI long ago. this is from memory but i think it will not be affected by the bug even if the bug needs fixed
<imbrandon> ok soooo
<imbrandon> service we say simple html app on apache
<SpamapS>         is_principal = not (yield self._service.is_subordinate())
<SpamapS> I just love inlineCallbacks :-P
<imbrandon> easy one check on port 80 nrpe checking it
<imbrandon> i actually do ... specifically recursive ones :(
<imbrandon> heheh
<imbrandon> anyhow so you got one nrpe on 80
<SpamapS> imbrandon: sorry wtf are you getting on about nagios?
<imbrandon> you run 3 nagios in your scenario
<imbrandon> in the right way
<imbrandon> like how jenkins is run
<imbrandon> jenkins.qa.ubuntu.com dont run any jobs, other jenkins report their results into it
<SpamapS> so, in my world, NRPE is for checking things local to the box, and everything else the nagios server does direct against the host address.
<SpamapS> and then nsca goes the other way.. pushes things back from server -> nagios
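The split described here (NRPE for box-local checks, direct checks against the host for network services) looks roughly like this in nagios object configuration; host and command names below are hypothetical:

```
# Hypothetical nagios object config; host/command names are illustrative only.
define service {
    use                  generic-service
    host_name            wordpress-0
    service_description  Disk Space              ; local check via NRPE
    check_command        check_nrpe!check_disk
}

define service {
    use                  generic-service
    host_name            wordpress-0
    service_description  HTTP                    ; network check, run directly
    check_command        check_http
}
```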
<SpamapS> imbrandon: if you're talking about scaling out nagios.. we're not there yet. Lets just *configure* nagios first.
<imbrandon> nrpe local -> to one nagios per service --> one more nagios for whole environment
<SpamapS> yeah
<SpamapS> slow your roll
<SpamapS> thats later
<imbrandon> yea but if you do that then bug dont matter
<SpamapS> yes
<imbrandon> is what i was pointing out
<SpamapS> yes it does
<SpamapS> because I'd still be monitoring a now deceased box
<imbrandon> no because both services die then
<lifeless> what do I need to do to push the charm review foeward ?
<SpamapS> You're going to run *ALL OF NAGIOS* on the box?!
<imbrandon> no , well thats how i said it but not in reality
<lifeless> I'm reminded of the perl joke here.
<imbrandon> it was for easy explaining
<SpamapS> lifeless: m_3 is supposed to pilot tomorrow, but I suspect he'll be busy prepping for OSCON demo's .. so I might take his spot
<SpamapS> imbrandon: ok, so you're going to run a nagios for every service?
<imbrandon> but you end up with nrpe plugins all over , maybe even many on one box , then one nagios wherever per service name its related to; if it gets another relation name then it fires up another daemon of nagios , and then both report to a 3rd
<bkerensa> imbrandon: hi
<imbrandon> maybe even on that same box as well but that 3rd will be the  one the customer "reads"
<bkerensa> SpamapS: he is very busy preparing ^
<imbrandon> SpamapS: not just every service but every service relation, but they can all still just be on one "nagios" machine physically
<imbrandon> heya bkerensa , hows it goin
<SpamapS> imbrandon: so if I have 40 services, I have 40 /usr/sbin/nagios running? All due respect, but that sounds bat@!$% crazy
<SpamapS> imbrandon: nagios is perfectly capable of scaling one nagios daemon up to thousands of service checks
<imbrandon> lifeless: i'll have some time tonight and tomorrow as well, i'll review it and at minimum give ya feedback if I think its too complex for me to +1 alone
<imbrandon> :)
<bkerensa> imbrandon: nothing much just hanging with all the juju folks except for SpamapS  :D
<imbrandon> SpamapS: yea and no , as in yes, 40, and no, its the norm, or at least how i've actually seen nagios deployed real-world
<SpamapS> imbrandon: also for this bug.. I have a workaround.. which is to just trash anything that belongs to the primary service even if the sub relationship still thinks it should be there.
<imbrandon> but seriously 40 services in one env ?
<SpamapS> imbrandon: even 5.. I see no reason to do that.
<imbrandon> sure, nagios is designed just exactly for that and its light
<elmo> imbrandon: I've never seen nagios deployed that way or even heard anyone doing it that way, FWIW
<SpamapS> imbrandon: and it would still suffer the same problem, because the nrpe relationship would still be left dangling
<elmo> (until now)
<imbrandon> its like built into the setup that one is normally a "collector" and gathers the others
<imbrandon> SpamapS: but would only break one instance of the daemon
<imbrandon> that dont matter at that point anyhow
<SpamapS> imbrandon: yes thats one thing nagios can do, but thats for aggregating several monitoring boxes into one.. not for "the norm"
<imbrandon> sorry i dont follow what you mean there
<imbrandon> heya elmo :)
<SpamapS> I could see an issue where nagios would be heavily single threaded and need more processes to scale up onto one box with lots of cores..
<SpamapS> but nagios is pretty well written and is almost always just waiting on slow network stuff and disk
<elmo> nagios already spawns processes to distribute check load
<SpamapS> Yeah, I've seen pretty moderate boxes keep up just fine with thousands of checks defined and running at pretty regular intervals
<imbrandon> elmo: yea thats what i'm trying to unsuccessfully explain, that its very common to break nagios up like that
<imbrandon> or other arbitrary ways
<imbrandon> SpamapS: yea i know for fact there is a dell 2650 in the basement reporting about 10k checks :) [ its my brother in laws machine heh ]
<imbrandon> and those are only like c2d 1.6ghz or something silly
<imbrandon> ( that dumb dude went nuts and has like one website checked like for real 8 ways or something , like tcp, then telnet then http 80 then text exists etc etc etc
<imbrandon> i think he was just bored or was trying to make it break )
<imbrandon> heh
 * imbrandon sets up 2 checks for php/python/rails apps ( on each node direct ) one to pull a txt file from /health-check/plain.txt and make sure its 200 with body "OK" and one to pull /health-check/dynamic.php{or .py or .erb/.rb} to get a 200 and an "OK" body thats printed by the code
<imbrandon> that should cover about anything as far as if the server is working ( not counting when something slips past lint and other checks at build and/or deploy and is an app error )
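The two checks described above both reduce to the same assertion: HTTP 200 with an "OK" body. A hypothetical helper capturing that pass condition:

```python
def passes_health_check(status_code, body):
    """True when an endpoint answers 200 with body 'OK', matching the
    /health-check/plain.txt and dynamic-page checks described above."""
    return status_code == 200 and body.strip() == "OK"
```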
<SpamapS> imbrandon: IMO the really important monitor is the one that verifies traffic is flowing
<SpamapS> imbrandon: artificial checks are great, but I want to know that requests are happening at normal levels
<imbrandon> right, thats a whole nother class of check , as well as the disk space one, dunno how many times i've seen a log fill the box and totally hose everything
<imbrandon> even if most all logs are on other partitions or something , someone fraks up and writes their own or logrotate dies or some crap
<imbrandon> never fails
 * imbrandon thinks damn near every log should be sent to /dev/null on prod machines anyhow with the exception of the authlog and dmesg for hardware failures etc
<imbrandon> but all service/daemons supporting services/and apps should not need or really imho be forcefully sent to /dev/null in prod
<SpamapS> imbrandon: I think I'll stick with the "should be sent to a central logging host asynchronously" not /dev/null
<imbrandon> better ways to clicktrack or anything else they might provide, and you should have a identical setup in staging to debug issues, not similar, identical, that way if something is up its in a hardware log/syslog still cuz its hardware ( or more likely delll openmanage has already told nagios that a hdd has been spun and you just get the report from in the NoC HUD's and call dell to add another when they show up in their 4 hrs and it gets hotswapped 
<SpamapS> imbrandon: if that host decides to devnull them thats fine, but I want to be able to see them
<imbrandon> oh i know , just ranting at this point
#juju 2012-07-17
<imbrandon> to do that it would have to be not just technical but they would need to be aware of all the repercussions and how to debug remotely but be certain its not in vain etc
<imbrandon> and well past what can / should be encapsulated in a first draft charm if at all
<SpamapS> Man, I really really really want a "pretend to re-negotiate the relations" command
<imbrandon> oh SpamapS btw, i'm at feature parity with jjc.c now on jujube
<SpamapS> So many times where I just need to tweak something, and run upgrade-charm ..
<SpamapS> imbrandon: lol.. of course you are
<imbrandon> but i did a few things that i decided are a little better another way so i'm refactoring a little bit of it
<imbrandon> but even that i should be done tonight too
<imbrandon> oh and i said that wrong too, i'm not at fp with the whole domain , i broke it up into parts in my head and the README but no one else knows that LOL ... i intended to say i'm there with all of the charm browser aspects of jjc.c
<imbrandon> including both the html output and a .json output on the same routes if /api/v1/ is prefixed
<imbrandon> like /charms/precise lists all precise charms and /charms/precise/appflower is details for appflower , then /api/v1/charms/precise and same for appflower are json output and also take an optional ?callback=anything returning jsonp so CORS will work as well
<imbrandon> with cross origin domain calls, and later other sites could use JS to just make remote api calls for the data they want ( or what i'm more excited about is that juju-jitsu could as well should it want to, maybe it will spark something cool )
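The ?callback= behaviour described above is plain JSONP: wrap the JSON body in a caller-named function. A minimal sketch (not the actual jujube code):

```python
import json

def render(data, callback=None):
    """Return a JSON body, or a JSONP-wrapped one when ?callback= is present."""
    body = json.dumps(data)
    if callback:
        # Served as text/javascript so a <script> tag can execute it cross-origin.
        return "%s(%s);" % (callback, body)
    return body
```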
<SpamapS> http://ec2-54-245-5-73.us-west-2.compute.amazonaws.com/nagios3/  (user:nagiosadmin pass:ohpuipeera)
<SpamapS> So there's my nagios magic
<SpamapS> still needs to add host/service groups
<SpamapS> and I *think* I can do some clever parent/child stuff by inspecting the relationships a bit (provides should be the parent of requires.. perhaps)
<negronjl> SpamapS: nice re: nagios
<m_3> SpamapS: nagios is compelling... can I use this as part of a "mature" stack for Thursday?
<hazmat> SpamapS, there is no clean up re subs
<hazmat> SpamapS, thats a significant feature thats needed across the board
<hazmat> orderly cleanup and instead of parent tree kill
<hazmat> imbrandon, incidentally i got the oks to push jc.c public
<hazmat> SpamapS, goju should hopefully have that from the start, it was discussed  as part of the mongo storage work
<hazmat> imbrandon, yes charms are copied on disk for each unit... pls file a bug if its of concern
<hazmat> for ec2 there's not much to do, it must be present on each machine
<hazmat> for local..
<hazmat> perhaps
<hazmat> SpamapS, how'd you get juju with ostack working on hpcloud?
<hazmat> ah.. auth-mode keypair
<imbrandon> hazmat: I had to use auth-mode: userpass ... see PM
<hazmat> imbrandon, i'm making some progress with keypair
<imbrandon> hazmat: nice, if you do get keypair to work later that would be cool, much less complex than currently i think for the env.y
<imbrandon> although thats a set and forget thing so ...
<imbrandon> heh'
<SpamapS> m_3: this is pretty solid, I'll chat w/ you tomorrow to get you up to speed on deploying it. I think it should be widely consumable.
<hitexlt> Hi, i am trying to deploy openstack on VM's. I have deployed all required charms, but for most machines in MAAS it shows agent-state: not started, though they are running. Here is output of juju status http://pastebin.com/vAb1mb3r Any ideas why juju is not registering running nodes?
<SpamapS> hitexlt: the agent state is running on machine 2
<SpamapS> hitexlt: and machine 2's rabbitmq seems to be started actually
<SpamapS> hitexlt: can you look at the console of machine 1 and see if something failed on first boot?
<jml> :(
<jamespage> jml, sad today?
<jml> oh yeah
<jml> I just had to kill off juju processes that were chewing up all my CPU
<jamespage> jml, local provider?
<jml> jamespage: yeah. I pretty much never use ec2
<jamespage> :-(
<mgz> jml: canonistack is actually pretty reasonable currently
<jml> mgz: I can run precise stuff on that, right?
<mgz> yup.
<jamespage> mgz, hows the openstack provider work coming on?
<mgz> it's in a working state, but there are a few required features left and some cleanup to do.
<hazmat> mgz, i've spent some time yesterday working it across several providers (rspace, hp, cstack)and doing some cleanups. rspace is very different behavior with v2 api
<hazmat> no floating ips
<hazmat> auto public ip
<mgz> right, hp have also switched to enabling the auto-assign of a public ip
<mgz> there's an easy enough work around (see if there's a public ip before allocating a floating ip)
<mgz> but that doesn't get me round the sleep unless I have a way of getting the config state out of a particular deployment
<hazmat> mgz it would be nice to avoid the wait entirely
<mgz> what I'd *really* *really* like is juju to move the public ip requirement to the ports handling
<hazmat> mgz, there's nothing to run that code for the bootstrap node
<mgz> it wouldn't be that hard to juju ssh to ensure the instance is accessible
<hazmat> that's a workaround not a solution
<mgz> well, it's moving the requirement of being able to talk to the machine from instantation to when it's actually needed
<mgz> any code that currently expects to access public_ip could basically do that lookup with a check that will allocate one on demand
<hazmat> mgz ah.. so you're saying automatically instead of manually. that's reasonable.
<hazmat> mgz, even distinguishing what's the public ip isn't well defined
<mgz> right.
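The workaround discussed above (look for an existing public address before allocating a floating IP) reduces to an on-demand lookup. A sketch with hypothetical names (this is not the real openstack provider API):

```python
def public_address(addresses, allocate_floating_ip):
    """Return a server's public IP, allocating a floating IP only on demand.

    addresses: mapping like {"public": [...], "private": [...]}
    allocate_floating_ip: callable invoked only when no public address exists.
    Both names are hypothetical, not the actual provider code.
    """
    existing = addresses.get("public") or []
    if existing:
        return existing[0]  # provider auto-assigned one (e.g. hpcloud, rspace)
    return allocate_floating_ip()
```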
<hazmat> mgz, also for ec2, it no longer waits for the security group shutdown/delete dance, it just reuses the groups
<hazmat> on trunk
<hazmat> for ostack this removes the delete dance at shutdown
<hazmat> all the providers seem to support just killing the machine
<mgz> that's an improvement, is there any cleanup for the groups?
<mgz> hazmat: do you have a branch for your changes up anywhere? and did you find my hp tweaks branch? there are a few other differences they have that need properly integrating into the provider
<hazmat> mgz, it looked like the hp tweaks were already mainlined
<hazmat> mgz, i'm still in progress on my changes.. but here's the crack in progress ~hazmat/juju/openstack_provider/
<hazmat> i need to run an errand, bbiab
<mgz> right, I merged Clint's, but last week an HP guy did some more testing as they want to use juju in a demo
<mgz> and turned up some more things
<hazmat> mgz, rackspace has their own auth form derived from v2 but with different fields.
<hazmat> mgz, where's that branch?
<mgz> see lp:~gz/juju/openstack_provider_hp_tweak
<hazmat> cool, i'll check in when i get back
<mgz> I'll take a look at your current state too on the rackspace auth things.
<hazmat> mgz, i still have to parameterize all the security group work there, since all of those apis are 404s on rspace
<mgz> hm, all of them? they don't support security groups at all?
<hazmat> mgz, at the moment no
<hazmat> nor floating ips
<hazmat> mgz none of those apis is in v2
<mgz> okay, those auth changes make sense, shoule be able to integrate that nicely
<hazmat> they're in quantum
<mgz> well, there are various things moving from nova to other projects
<mgz> in not very backwards compatible ways
<mgz> provided detecting can be done sensibly, doing both shouldn't be too hard, just a little annoying
<hazmat> mgz, there's some minor capability though inconsistent to detect via service catalog version info
<hazmat> rspace actually publishes multiple compute endpoints of different versions
<mgz> hm, probably need to change the endpoint picking logic again then
<surgemcgee> Hmmm, so the quotes are interpreted as a string now in the yaml. Is there a different type which will evaluate the quotes?
<mgz> you mean will include the quote marks in the string?
<surgemcgee> Yes, are we just using the pipe now?
<surgemcgee> Well, I mean, will evaluate the quotes as a special character (escape sequence)
<surgemcgee> With the default type of string, the quotes are included. I think this was changed.
<surgemcgee> Also, referring to the single quote
<mgz> hm, was about to say it works for me, but turns out I don't use any quotes in my config
<surgemcgee> I am trial/error'ing it now. I will post my results.
<smoser> can someone take a look at http://askubuntu.com/questions/162255/does-juju-really-need-two-nodes ?
<mgz> I looked upon it and despaired
<mgz> he wants to do an openstack deployment with only one box, the maas demo used 9.
<hazmat> jitsu deploy-to can lessen it
<smoser> it seems to me that the JuJu bootstrap node failed to come up.
<hazmat> smoser, agreed
<hazmat> it looks like the node is up but not running zk
<mgz> I've had similar symptoms from cloud-init steps failing
<mgz> so, sshing to the box and looking at the logs is what I'd do next
<mgz> I just don't see it's really helping his end goal
<smoser> mgz, cloud init does not fail.
<smoser> i'm done talking to you
<smoser> :)
<mgz> :D
<mgz> sorry, 'apt-get fails and cloud-init plows on regardless' :P
<smoser> mgz, yeah, that is what i would have suggested, but i dont know which logs to get.
<smoser> mgz, thats better.
<smoser> hazmat, or mgz, could you request posting of some logs that might give information?
<mgz> /var/log/cloud-init-output.log and the ones under /var/log/juju generally
<surgemcgee> Is it appropriate for me to start bug reports on juju? E.g. Boolean in config uses python interpretation with capital first letter --> "True". But bash uses lowercase --> "true".
<surgemcgee> Ahhh crap. Well, if anyone was curious about the quote thing, careful where you put your delimiter settings. IFS="$(echo -e "\n\r")" is not normal for commands.
<hazmat> surgemcgee, yes re bug reports, that particular one is addressed by some recent changes, but requires the charm to explicitly specify format 2
<hazmat> which makes hook cli api output normalize to yaml
<SpamapS> hazmat: kind of weird though... format 2 isn't supported by precise...
<SpamapS> hazmat: I suggest that people just use --format=json
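A minimal sketch of the `--format=json` workaround SpamapS suggests: parse the JSON scalar the CLI emits instead of the format-1 string, so booleans come through correctly in a hook. The helper name is mine, not juju's.

```python
import json

def parse_config_bool(raw):
    # `config-get --format=json <key>` emits a JSON scalar, so a
    # boolean arrives as true/false rather than the Python-style
    # "True"/"False" string that trips up bash hooks.
    return bool(json.loads(raw))
```

In a hook you would feed it the raw output, e.g. `parse_config_bool(subprocess.check_output(["config-get", "--format=json", "some-key"]))`.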
<hazmat> mgz it looks like the fixes re hp tweaks are to support an older form of sec groups extant there
<hazmat> it seems like the version identifiers here have no meaning re 1.1 nova compute, as they entail quite a bit of difference between extant entities
<marcoceppi> Has openstack support landed yet?
<hazmat> marcoceppi, almost, just trying to kick the tires across a few providers
<marcoceppi> Awesome, good to hear
<mgz> right, the api versioning isn't... er... robust
<hazmat> considering the size of the branch, i'm sorely tempted to have it just land, and fix per provider incrementally..
<hazmat> mgz, does the server respond back with version headers in any meaningful way?
<hazmat> mgz, the keystone svc catalog doesn't seem to mandate versions
<hazmat> which makes it hard to utilize that
<hazmat> mgz, visualizing what v2 compatibility is going to look like with a provider that supports separate network, compute, volume api.. while keeping compatibility with 1.1 feels quite different
<mgz> nope, though versions should generally be in the catalog... main issue seems to be they're largely ignored
<hazmat> the encapsulation needs to flow from abstractions inspecting the svc catalog, but i digress.. its probably better to fix up some of the window dressing
<mgz> so, stuff get broken and no one notices
<hazmat> mgz, ignored by the current code.. or by the svc provider?
<hazmat> well the former is true.. just curious about the latter
 * hazmat openstack's api innovation makes me sad
<hazmat> hmm.. s/me/
<mgz> for instance, nova-client handles 1.1 and 2 with the same code under v1_1 module
<mgz> there's pushback now it's starting to get use, but the history so far is messy
<hazmat> mgz, that's going to fail as soon as people start deploying quantum.. it's already a fail using the cli against a v2 impl
<hazmat> with v2 we're going to have to construct our own facade if we want to support both nova-network and quantum and dispatch by version id
<hazmat> i'm not as concerned about that at the moment though re getting this into trunk, since those are extant
<mgz> right, and the volumes code has the reverse problem, I have to use an old version of python-novaclient because the api was moved out of 'compute' into 'volume' and current deployments don't support the new way
<mgz> I think these problems aren't too hard from the juju side though, given the api usage is pretty light and we should always be able to pick a method that works
<hazmat> same problem fundamentally.. two versions address the same core api in different ways
<mgz> right.
<hazmat> mgz, is volume going to be exposed at its own endpoint in the svc catalog as well?
<mgz> yes.
<mgz> and in fact is, but... canonistack gives an unresolvable internal url, and hp gives an endpoint which uses the old naming :)
<hazmat> argh.. snowflakes
<mgz> :D
<SpamapS> doesn't keystone hand back a map.. "volume -> this url, network -> this url" ?
<hazmat> SpamapS, in future in theory yes
<hazmat> i believe rspace has both of those in private beta
<hazmat> rspace is on v2 of the api.. hp is effectively on a forked version of the 1.1 api from diablo time frame which is different than any other impl.. canonistack is on v1.1 but doesn't advertise versions for its svc catalog endpoints (which keystone hands back).
<SpamapS> I'm surprised HP hasn't spun up an Essex region
<hazmat> mgz, SpamapS have you guys noticed any oddity about the bootstrap node getting a stringified id in hpcloud stored in zk, and thus not useable for queries to hpcloud about it?
<SpamapS> hazmat: yes
<SpamapS> hazmat: the bootstrap node just is wonky w/ hpcloud
<SpamapS> I haven't looked into it very much
<hazmat> the provider needs to materialize it back to an int i think before passing it off to the api serialization
<mgz> hazmat: yeah, that's an issue, I hacked around it in the hp tweak branch I linked earlier
<hazmat> mgz, it looked like the branch was trying to serialize all the ids to strs
<hazmat> oh.. this the curl against the main id
<hazmat> er bootstrap id against fstorage
<mgz> there's a neater way of doing it in StatusCommand._process_machine but I was trying to avoid changes to other bits of juju
<hazmat> mgz, i'm thinking just sniff the type in the provider
<_mup_> Bug #966577 was filed: add explicit egress 'owner' rule on non-bootstrapping nodes to require root access to zookeeper <rls-p-tracking> <juju:New> <juju (Ubuntu):Triaged> <juju (Ubuntu Precise):Triaged> < https://launchpad.net/bugs/966577 >
<hazmat> yup.. that does the trick
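A sketch of the "sniff the type in the provider" fix hazmat describes for the stringified bootstrap id: a hypothetical helper, not the actual juju provider code.

```python
def normalize_machine_id(machine_id):
    # Ids that round-trip through ZooKeeper can come back as strings,
    # but the provider API lookup expects the original int form.
    # Sniff the type and coerce only digit-strings.
    if isinstance(machine_id, str) and machine_id.isdigit():
        return int(machine_id)
    return machine_id
```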
<SpamapS> hazmat: ^ the bug above is my hack for preventing non-root users from poking at ZK..
<SpamapS> hazmat: No diff yet.. just the idea. But I think its worthwhile even after ACL's land
<SpamapS> hazmat: which, btw, are we going to land ACLs ?
<SpamapS> hazmat: or is libzookeeper still an unknown weird fail-generator ?
<imbrandon> zk doesn't handle acls at all? it expects the client to do it 100%?
<imbrandon> hazmat: btw i found if you use local placement on the bootstrap node the ssh 0 is still failwhale but you can juju ssh service/0 to it fine and juju status is the same at the top but it's correct down at the server part with ip and all
<imbrandon> hazmat: personally i've just been doing juju ( the charm ) deployment or something small just for convenience but if that helps with the debug path in the code maybe heh
<imbrandon> hpcloud ^^ btw
<negronjl> 'morning all
<imbrandon> ugh, where are unity's global hotkeys stored anyone know right off to save me 15 minutes of looking , please dont say gconfeditor ...
<imbrandon> heya negronjl
<negronjl> hi imbrandon
<hazmat> imbrandon, just fixed that fwiw
<imbrandon> oh sweet, i was about to get lunch but after, mind if i snag ur branch :)
<imbrandon> heh
<hazmat> i guess clint rolled out
<imbrandon> oh cool cool, i'll catch up when i get back i've been head down in code and only half paying attn to irc today
<imbrandon> ok back in a bit, foooooood
<hazmat> the ssh fingerprint problem is more pronounced with ostack providers i'm noticing
<hazmat> i guess thats the habit of reusing the floating ip
<hazmat> to a different machine
<mgz> yup.
<hazmat> SpamapS, yes stunnel is worthwhile
<hazmat> SpamapS, i haven't dug into libzk issues
<SpamapS> stunnel would be a bigger deal
<SpamapS> just using the iptables rule we at least limit all traffic to the bootstrap node to root
<SpamapS> imbrandon: not sure what happened while I was split off. ZK has ACL's, but we don't use them (yet)
<hazmat> imbrandon, just hold down 'windows'/cmd key for a while and the shortcuts appear
<hazmat> a listing that is
<SpamapS> ooo somebody is learning unity
<SpamapS> hazmat: hey, the charm versions in import/export are pretty bogus..
<hazmat> SpamapS, ?
<hazmat> SpamapS, how so?
<hazmat> SpamapS, pastebin pls
<SpamapS> hazmat: with local charms at least.. I no longer have "version25" of mysql.. I'm on 31.. but it deployed 31 as local:mysql-25
<SpamapS> hazmat: causes no harm, but its confusing
<hazmat> SpamapS, ah.. i actually added explicit support for that
<hazmat> SpamapS, otherwise it would fail to deploy at all if the version in a local repo wasn't an exact match
<SpamapS> It works fantastically, but its a bit weird. ;)
<SpamapS> hazmat: I'd rather see the actual versions (and warnings). But.. we can do that in the go version, right? ;)
<hazmat> SpamapS, basically for local charms, the version id isn't a hard requirement, it will look for the specified version and then fallback to namematch
<marcoceppi> Interesting question, can I change the region during deployment? to have nodes across regions?
<hazmat> marcoceppi, no.. that's not supported in constraints atm.. twas a long discussion..
<hazmat> well.. you can specify az
<marcoceppi> hazmat: thanks, I figured it would be pretty involved
<hazmat> for a constraint at a service level
<hazmat> but if you want a multi region service, the result is you need a service in each that are connected
<hazmat> that way if you need to add capacity (or a node disappears) replacing it is explicit
<SpamapS> I'd love to have a service constraint of ec2-availability-zone=balanced
<SpamapS> meaning, put these in as many different zones as possible
 * hazmat nods
<marcoceppi> +1k
<hazmat> we went back and forth on that one a few times at uds orlando
<hazmat> not treating individual nodes as special was the concern for supporting it explicitly within a service
<hazmat> if  i have three nodes in one az and one node in another az, and it dies, what does add-unit do to correct properly
<hazmat> a juju policy level  identifier for zones would work though
<hazmat> but at the moment the recommendation is to have separate services in each zone and relate them
<dpb___> is it possible to reboot an instance inside an install hook (for example)?
<hazmat> dpb___, not at the moment, the address change doesn't properly propagate if there is one, if the address is stable then yes
<dpb___> hazmat: ok, you mean the ip address?
<hazmat> dpb___, yes, both public and private would have to be stable for it to work reliably
<dpb___> hazmat: ah, ok.  I guess the private one can change between boots on aws?
<hazmat> dpb___, both change on reboot in aws
<dpb___> hazmat: ok, thx, I did not know that.
<hazmat> dpb___, there's an unloved branch for it, if you're interested, that updates the address at boot, and propagates to relations.. with it doing things like suspend/restore an ec2 env becomes possible
<dpb___> hazmat: well, first let me see if I can do some magic to make the reboot unnecessary.  that would speed up the process anyway.
<hazmat> dpb___, absolutely that would be the best way
<dpb___> hazmat: thx
<SpamapS> dpb___: I would defer reboot to config-changed
<dpb___> SpamapS: How do you mean?  you mean that config_changed can handle a reboot in the middle?
<SpamapS> dpb___: config-changed is what will be run after the reboot
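The flow being suggested, sketched roughly: the install hook leaves a flag and reboots, and config-changed (which juju runs once the unit is back) finishes the job. Paths and function names are illustrative, and the real work is stubbed so the ordering is visible.

```python
import os
import tempfile

# Illustrative flag location; a real charm would use a persistent path.
FLAG = os.path.join(tempfile.gettempdir(), "needs-post-reboot-setup")

calls = []  # stands in for real work so the flow can be inspected

def install():
    calls.append("base-install")
    open(FLAG, "w").close()      # remember that setup is unfinished
    calls.append("reboot")       # a real hook would reboot the unit here

def config_changed():
    # juju runs config-changed after the unit agent comes back up,
    # so this is where the deferred setup completes.
    if os.path.exists(FLAG):
        calls.append("post-reboot-setup")
        os.remove(FLAG)
    calls.append("apply-config")
```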
<jcastro> SpamapS: heya
<jcastro> SpamapS: mims is on review this week but I didn't account for oscon
<jcastro> SpamapS: if it's possible, what do you think about reviewing the queue this week and have mims grab next week, when it was your turn to go.
<jcastro> basically, wanna swap?
<SpamapS> Yeah I'm going to take his shift
<SpamapS> jcastro: I saw that coming yesterday. :)
<jcastro> <3
<jcastro> thanks dude
<SpamapS> jcastro: which is unfortunate because I was hoping to have Mims review my massive Nagios refactor :)
<SpamapS> but he probably will anyway as he might use it in his demo :)
<jcastro> indeed
<jcastro> also we have decided we want you writing more subordinates. :p
<SpamapS> ME TOO :)
<SpamapS> well I already took over james_w's nrpe
<SpamapS> jcastro: http://ec2-54-245-5-73.us-west-2.compute.amazonaws.com/nagios3/
<SpamapS> jcastro: user:nagiosadmin pass:ohpuipeera
<SpamapS> and no I don't mind if you all pound the crap out of it ;)
<imbrandon> hazmat: yea i knew that actually, cuz its the ones i keep hitting ( cmd ) and so i want to swap en masse cmd<->ctl so my hotkeys for osx and ubuntu line up almost identically, as it is i keep bringing up the hud every time i go to save a file or copy/paste etc etc, finally had enough and gonna find it even if i've got to recompile unity :)
<imbrandon> heh
<jcastro> SpamapS: can you leave that instance up for a few hours?
<jcastro> that'd be nice to show during our talk
<SpamapS> jcastro: when are you guys speaking?
<SpamapS> jcastro: I can give you an export to recreate it. :)
<jcastro> in 1.5 hours.
<jcastro> well, we didn't want to go too deep into it, actually, I'll just screenshot it
<SpamapS> jcastro: are you guys demo'ing at all or just talking?
<jcastro> demoing a bunch of stuff
<jcastro> so we didn't want to add another demo
<SpamapS> hah yeah ok
<SpamapS> jcastro: let me add thinkup back in and then you can screenshot it
<jcastro> ok, lmk when
<SpamapS> jcastro: its there now, just PENDING
<SpamapS> jcastro: more fun if its all green :)
<jcastro> I like it with the pending though
<SpamapS> true
<jcastro> shows that it all just works, and you don't care, you set the relationship, it WILL work, etc.
<jcastro> except when it doesn't, but whatever. :p
<SpamapS> haha
<SpamapS> jcastro: got a shot yet? I want to refresh one of the relations
<jcastro> we're good
<SpamapS> jcastro: Ok re do your snap of services now. MUCH more compelling :)
<SpamapS> jcastro: and I can give you some reds if you want. :)
<jcastro> ok one sec
<jcastro> thanks dude, got it
<imbrandon> SpamapS: haha ok you know what i just realized, ok so like none of this info is new to me ( or you either i don't think ) but it JUST all clicked a few seconds ago
<SpamapS> imbrandon: you forgot one thing. You forgot, to hookup, the doll
<imbrandon> so ... Google had another announcement like 2 days ago that they just finished laying the 200th mile of FttH in Kansas City
<imbrandon> doll ?
<imbrandon> so ok the fiber rollout is almost complete, its double subsidised once by the city/county to get ppl on it and once by google, so total cost to residents at rollout is $30 per month out of pocket and thats for 100GB SYNC connection , not just to the street but all the way to the net AND 16 static IP .....
<imbrandon> on the residential connection ... ok so add that ... pluss i start rounding up some HP blades and Sun NAS boxes  heheh
<imbrandon> toss openstack on it , and start a lil mini Linode in my basement
<imbrandon> likely rivaling if not exceeding the quality of many mom and pop operations round the country
<imbrandon> hahah
<imbrandon> anyhow i likely will do all that cept the resell part, i dont want that headache :)
<imbrandon> lol
<SpamapS> imbrandon: re doll.. http://www.imdb.com/title/tt0090305/quotes .. look for the word 'doll'
<imbrandon> k
<SpamapS> I'm like the only one who remembers that quote.. but I use it in my head constantly :p
<imbrandon> AHHH
<imbrandon> heh, thats ok i had the audio playing ( very loud too ) for the samuel jackson scene at the start of pulp fiction when my brother came by and didn't recognise it
<imbrandon> i told him i disowned him
<SpamapS> imbrandon: your power setup will be crap w/o a diesel generator, but otherwise yeah you will have quite a bit of advantage. :)
<SpamapS> perhaps KC will become us-east-2 ;)
<imbrandon> hrm yea but that would not be too bad :)
<imbrandon> hahaha
<imbrandon> http://gfiber-kansas.appspot.com/fiber/kansascity/news.html
<imbrandon> can't wait for real, i'm just happy as hell that not only has it got 16 static ips standard but that they are quoted to say something very very close to "KC is the pilot city here is some cool shit now go make the next gen web"
<imbrandon> PLUS sync both ways :)
<imbrandon> you should see all the new Tech startup that are already downtown that are newly funded and starting to setup shop now in prep
<imbrandon> gonna be a wild ride the first year i bet
<imbrandon> SpamapS: ok, i'm on Time Warner now ... kc.rr.com , and check this pic out http://www.businessinsider.com/google-fiber-speed-2011-8
<hazmat> it looks like hpcloud is blocking private traffic by default now
<imbrandon> they block all by default i thought
<imbrandon> and you had to add pub and priv to the zones
<hazmat> huh?
<imbrandon> i forgot what they call their security groups
<hazmat> i couldn't deploy services because the agents couldn't connect back to the bootstrap node
<hazmat> security groups..
<imbrandon> right, juju dont do that on diablo
<imbrandon> SpamapS: had me do it by hand so i just set it to 0.0.0.0/0
<imbrandon> and never destroyed it
<SpamapS> Oh right I totally forgot about that
<imbrandon> i did too
<imbrandon> until hazmat said it
<imbrandon> since destroy never took the groups away
<imbrandon> and i used the same name
<hazmat> this is going to break most of the charms unless we netmask all internal port traffic
<imbrandon> to redeploy , never had to mess with it again
<hazmat> to the env sec group
<hazmat> i had the same issue on azure although lacking any notion of a private network there
<imbrandon> right , but you can only do that by name and not ip with the python-novaclient :(
<SpamapS> hazmat: there's no concept of private traffic being allowed between security group members?
<imbrandon> SpamapS: there is but it's off and docs say it's not in the webui
<hazmat> SpamapS, there should be but it was pretty broken on diablo
 * hazmat grumbles about snowflakes
<imbrandon> snow ? its over 90f here
<imbrandon> man
<imbrandon> no effin way
<hazmat> imbrandon, just referencing the individual snowflake implementations of openstack ;-) it's 100+ here
<SpamapS> ahh diablo seems to be just making everything ridiculous
<imbrandon> ohhh ohh ok, i've intentionally stayed clear of the OS guts for now, until it evens out a little
<imbrandon> so i don't waste a ton of time on something that will be bad before i really "get" it
<imbrandon> hehe
<hazmat> SpamapS, the differences between essexes is just as bad.. ie the delta for rackspace and canonistack is huge
<imbrandon> i figured by the time RAX and that other one rolls em public will be a good time to hunker down and really "get" all the bits that i would need
<SpamapS> hm, a wise man once told me that the browser wars were the only thing that saved the web
<imbrandon> to know about , or maybe not but i still like to understand the full stack eventually, kinda sucks lately intentionally staying out of the juju and OS code just for that
<SpamapS> perhaps we're right at the point where netscape has implemented a billion little extensions, and IE has done the same...
<imbrandon> i dunno about saved it, but they sure as hell are what drove it and still do
<imbrandon> SpamapS: we are, even software development as a whole is an insanely young discipline, let alone generations inside of that
<SpamapS> imbrandon: the theory being that while HTTP RFC's and w3c specs *helped*.. what really kept things honest was users demanding the Netscape/Firefox/etc. interoperate with sites that blindly used IE stuff..
<SpamapS> Tho I could make a compelling argument that what really put a nail in the IE-special-snowflake was the iphone
<imbrandon> who's got money on hp giving in to the OS api and joining the ranks, with goog as the wild card like IE
<imbrandon> with "almost" but "better" stuff :)
<SpamapS> Ugh
<SpamapS> subordinates should have a guarantee that their primary relations will have fired before any others
<imbrandon> nah, it was way before the iphone, thats just when everyone realized it but by that time it's too late, itunes is what sealed everything
<SpamapS> nrpe killing me here by relating to nagios before it can talk to its local service
<imbrandon> SpamapS: no, bad juju , don't force serialization, let's make the hooks smarter to deal with async
<imbrandon> yea i know, easier said than done
<SpamapS> imbrandon: I do that
<SpamapS> but.. we need a 'defer me until later' to make that easy
<SpamapS> otherwise every hook ends up calling every other hook
<imbrandon> i know, you do, it was really me just "going by the book" but in reality you're right
<imbrandon> heh , have you seen my hooks, they do call every other one ;)
<imbrandon> for realz
<SpamapS> yeah I'm doing that too and its just making me sick how crap thats going to be for some use cases
<imbrandon> so i want to charm http://c9.io ... but its got a "magic" build system i don't understand and can't make work on 12.04 but the same git checkout works on OSX ... got time / wanna help on it :) heh
<SpamapS> like nagios.. where I fully expect thousands of units related
<imbrandon> what thousands ? maybe on the top 3% but the vast majority of deployments will be 20 or less i bet, Penton only had about 100
<SpamapS> there's a really big network effect though
<SpamapS> so even 100 units will be painful to do 4 times every time 1 comes and goes
<imbrandon> yea but having not looked at it at all i know that the "nagios" way of dealing with that is not the typical setup, we ( read you , i ain't done shit but bitch ) might be trying to bend nagios into a traditional scale out form factor its not gonna work at ... this is probably going to be the most complex charm i bet , including the OS ones
<imbrandon> nah that shouldn't be
<imbrandon> those checks are cheap
<imbrandon> think about how many hit apache a second
<imbrandon> it just SEEMS like it would
<SpamapS> its the constant chatter back and forth between juju and the hooks and execing relation-get for every relation-list unit for every relation-id relation
<imbrandon> that and thats one reason they should not all report to the one
<SpamapS> its not about *nagios*
<SpamapS> its about the charm eating up all the CPU cycles just dealing with the churn
<SpamapS> I can scale out nagios really easily
<SpamapS> several ways
<imbrandon> oh well if the charm can't scale that's not nagios's fault
<SpamapS> my point exactly
<imbrandon> then we need to look at mq
<imbrandon> but i thought zk did that for us
<SpamapS> its not that either
<SpamapS> its calling all the hooks all the time
<imbrandon> sure you can slow the queue back
<SpamapS> nagios is pretty critical.. I'd argue slowing the queue is a major fail
<imbrandon> right , so how do we make the hooks cheaper ...
<SpamapS> if I have 60 minute guaranteed response time, getting the check 5 minutes late could cost me money
<SpamapS> write them in go!
<SpamapS> ;)
<imbrandon> nagios will slow its queue too, i'm talking ms not minutes
<imbrandon> just enough to give the cpu time to slice
<imbrandon> ok i do see what you're saying now tho, and actually
<imbrandon> thinking about it 30 whole seconds heheh
<imbrandon> its a design problem
<imbrandon> hooks need to be fat
<imbrandon> but in that case they can't
<SpamapS> Its much simpler to write hooks that just regenerate everything every time
<imbrandon> well its not only that
<SpamapS> BUT that will cost a lot of efficiency
<imbrandon> you would need to say ok, you only get 250ms of runtime each fire
<SpamapS> and juju has an event based model that should allow us not to do that
<imbrandon> do what you need
<imbrandon> but then
<imbrandon> you can't guarantee
<imbrandon> so yea, its like conflicting goals
<SpamapS> I don't think we can guarantee..
<marcoceppi> IIRC you can do "push" notifications to Nagios. So nagios doesn't have to poll every time
<imbrandon> but ...
<imbrandon> yea we HAVE to
<SpamapS> but a best effort would be to try and keep hook execution down
<imbrandon> ok SpamapS i got the solution
<SpamapS> marcoceppi: indeed, thats a better model I think
<imbrandon> config management, stop doing it in bash
<SpamapS> but.. there are some things you can't push.. like making sure the thing is actually reachable
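For reference, the push model marcoceppi mentions maps to Nagios passive checks, which arrive as lines written to the external command file. A small formatter sketch; the command-file path noted in the comment is the usual Ubuntu nagios3 default, assumed here.

```python
import time

def passive_result(host, service, code, output, ts=None):
    # Nagios external-command syntax for a passive service check result:
    #   [time] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<code>;<output>
    # The line would be appended to the external command file
    # (commonly /var/lib/nagios3/rw/nagios.cmd on Ubuntu).
    ts = int(ts if ts is not None else time.time())
    return "[%d] PROCESS_SERVICE_CHECK_RESULT;%s;%s;%d;%s" % (
        ts, host, service, code, output)
```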
<SpamapS> imbrandon: mine are in python :)
<imbrandon> let the bash fire async off to the nagios box and do it
<SpamapS> at least, a huge portion
<imbrandon> sure, but you see what i mean
<SpamapS> bash is for the simple stuff
<marcoceppi> SpamapS: right, we had a similar problem at InMotion hosting. Monitoring 2,500 services in Nagios
<imbrandon> right i was telling him about that at GSI was the same
<imbrandon> but it's really that the 2500 hooks fire, not
<SpamapS> marcoceppi: mod_gearman for nagios can help with scaling out the polling for things that still have to be polled
<imbrandon> the nagios
<SpamapS> its not 2500 hooks, lets be clear
<SpamapS> lets say you have 100 units total
<SpamapS> all of which are related to nagios
<imbrandon> i was using marcoceppi's example silly
<imbrandon> but the checks arent a problem
<SpamapS> and you add or remove at least one every 3 minutes...
<imbrandon> it's that python hook on the juju relations that is the bottleneck
<SpamapS> departed, joined, and changed all then *must* finish within 3 minutes, and cannot eat all of the CPU for that 3 minutes
<imbrandon> ok wait a min, we/i'm going in circles, why are the checks and the hooks coupled , i was assuming they weren't
<imbrandon> until just now
<SpamapS> because the checks run on the same box as the hooks
<imbrandon> why
<SpamapS> <sigh>
<SpamapS> where else would they run? another box proxied to?
<imbrandon> yea only 3
<imbrandon> not all of them
<imbrandon> no
<imbrandon> but ... ok let me find a way to get what i'm thinking right the first time
<SpamapS> So my point is that the joins, changes, departs, brokens, have to be moderately efficient
<imbrandon> you got nagios svr
<SpamapS> but if they loop over 100 units running relation-* 5 times for each unit..
<imbrandon> that solved nothing really tho
<SpamapS> thats not going to take 3 minutes, but it might take 30 seconds. And thats 30 seconds of CPU backlog
<imbrandon> because there is nothing stopping another charm from being inefficient
<imbrandon> and yea if we're talking the main server running 300 checks every few minutes then i think we might be optimizing before there is an issue, each nagios check should only take a millisecond or 2 so 500 to 1000 milliseconds
<imbrandon> then again on the callback
<SpamapS> forget
<SpamapS> the
<SpamapS> nagios
<imbrandon> so 2 seconds max at 100 services
<SpamapS> checks
<SpamapS> the hooks alone are the problem
<imbrandon> oh jesus, i just asked why you switched on me, and they were coupled above
<imbrandon> ...
<SpamapS> They're on the same box, but they're not the problem
<imbrandon> so the hook runs 3 times ... and the checks run 300 to 500
<SpamapS> the problem I'm facing is keeping the nagios changed/departed/broken hooks efficient, thats all.
<imbrandon> thats still very cheap
<imbrandon> SpamapS: hadoop did this
<imbrandon> if its just the hooks then its the same problem
<imbrandon> the 3000 nodes or w/e
<SpamapS> well anyway, I'm already seeing the hooks take 3 - 5 seconds with 7 units
<SpamapS> spending their time waiting on relation-*
<imbrandon> ohhh and wait, for real, can't you put the join/depart ips and stuff you're redoing the config for optionally into a mysql/pgsql
<SpamapS> or just a sqlite :)
<SpamapS> yes I do some of that
<imbrandon> SpamapS: that could be zk + python spinup time too
<imbrandon> about half of the 3 to 7
<SpamapS> agreed
<SpamapS> REST API, I can has, please
<imbrandon> maybe only ship the .pyc files in the /hooks
<imbrandon> ?
<SpamapS> imbrandon: the pyc's for juju.* are already compiled
<imbrandon> SpamapS: sure http://jujube.websitedevops.com/api/v1/
<imbrandon> and i was saying the hooks not juju
<imbrandon> your hooks are in python
<SpamapS> the hooks aren't really slow
<imbrandon> ...
<imbrandon> >.<
<SpamapS> measuring them w/ strace -c ..
<SpamapS> wait() leads the pack
<imbrandon> marcoceppi: oh btw i did get a full CharmStore API ReST interface for json and jsonp done
<marcoceppi> imbrandon: cool, saw the link - got a 404
<imbrandon> just need to push it up but i need to split out some commits, i'll email ya with the details
<marcoceppi> <3
<imbrandon> that route plus whatever the normal url is will give ya json , and add ?callback=your_callback
<imbrandon> will give ya jsonp
<imbrandon> but like i said there are some other little stuff too , i got the readme docs about 3/4, i should be able to get it out for ppl to poke sometime tonight
<imbrandon> cuz i added a php5-yaml dep as well too and some other things
<imbrandon> that would be a PITA if you just had to debug why its broke if missing :)
<imbrandon> SpamapS: what would be calling wait()? disk io?
<imbrandon> that blocks i assume
<imbrandon> unless its just a thread wait
<imbrandon> or something
<SpamapS> imbrandon: wait() is waiting on forked processes
<SpamapS> imbrandon: as in, relation-*
<imbrandon> oh ... so yea it is the scripts then waiting on the pid to finish
<imbrandon> err parent waiting on the pid of the script etc
<SpamapS> right
<SpamapS> I could, as you say, keep all the relation data cached locally
<SpamapS> and only update it when the hook fires
<SpamapS> perhaps juju should do that
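A rough sketch of that local-cache idea: only the unit whose hook fired is re-read, everything else is served from disk. Hook entry points and the cache path are hypothetical, not juju API.

```python
import json
import os
import tempfile

# Illustrative cache location; a real charm would use a persistent
# per-unit path (or sqlite, as mentioned earlier in the discussion).
CACHE = os.path.join(tempfile.gettempdir(), "relation-cache.json")

def load_cache():
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            return json.load(f)
    return {}

def save_cache(cache):
    with open(CACHE, "w") as f:
        json.dump(cache, f)

def on_relation_changed(unit, settings):
    # Refresh only the unit whose hook fired; all other units'
    # settings come from the cache instead of a storm of
    # relation-get execs against ZooKeeper.
    cache = load_cache()
    cache[unit] = settings
    save_cache(cache)

def on_relation_departed(unit):
    cache = load_cache()
    cache.pop(unit, None)
    save_cache(cache)
```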
<imbrandon> so it should be doing something in the script then during that but its child ... hrm dtrace ?
<imbrandon> SpamapS: something this complex i think that might be the most correct way without getting it into production some and looking again from 30kft
<SpamapS> imbrandon: the relation-*'s are largely stuck waiting for ZK
<SpamapS> because the way we interrogate ZK is I think a bit verbose
<imbrandon> e.g. you might come up with something that didn't occur to you the last week or whatever, after you see it live some afternoon :)
<imbrandon> SpamapS: imho zk is very slow but i thought that was just the java being slow
<imbrandon> i was hoping that we'd move to mongo :)
<SpamapS> mongo will make queries faster
<SpamapS> but I am not convinced we will be able to observe it as easily
<imbrandon> we use zk like mongo's gridfs implementation
<imbrandon> so it would be identical
<imbrandon> infact moreso because distributed mongodb is MUCH easier
<imbrandon> and this dist gridfs
<imbrandon> a part could be on all nodes replicating to all the others
<imbrandon> with like 3 hours of design
<imbrandon> :)
<SpamapS> hm, sounds somewhat like my 0mq idea
<SpamapS> (have relation data live on nodes that own it, not in the central database)
<SpamapS> I'm about to say screw it and just regenerate the whole config every time..
<imbrandon> and mongo has drivers and gridfs interfaces for everything and its fast and it has auth built in ( something zk also lacks i think )
<SpamapS> too many weird cases where I miss one thing and then there's no way to ask juju to re-fire a hook and regenerate the configs
<imbrandon> heh yea , working is better than correct , we're going to be going back and fixing a lot of these charms
<imbrandon> took me a long time to actually come to grips with that being OK and we're also still so stringent on other things
<imbrandon> heh
<SpamapS> I'm also doing something extremely weird.. maybe even wrong.. by relating the primary *and* nrpe to nagios
<SpamapS> I think I can actually simplify by just relating nrpe to nagios
<SpamapS> but ugh
<SpamapS> I've already spent so much damn time on this
 * SpamapS goes to eat and think
<imbrandon> heheh iterate man iterate, do it wrong once or 40 times
<imbrandon> no one is gonna get this stuff right on the first go and definitely not the ones doing it first .... :)
<imbrandon> but if it works but is just slow at 100 nodes, thats ok :)
<imbrandon> fo now
<imbrandon> lol
 * imbrandon says that like any of the above you did not already know
<imbrandon> heh ugh ok i need to ... there was something i needed to do before getting back to that charm ....
<imbrandon> ugh, memory of a dog and the attn span of a gnat ... fml
<imbrandon> oh hazmat btw , dug this out for ya to poke at later just cuz ... http://caucho.com/products/whitepapers/quercus.pdf
<imbrandon> but hazmat you know when i was grabbing that tho, i ran into this and about seriously fell on the floor almost out of breath ... http://morepypy.blogspot.com/ SpamapS marcoceppi  m_3 jcastro , yall check that too
<imbrandon> the second one
<ejat> hi .. sorry to interrupt .. just want to ask about this email .. is it applicable to all ? https://lists.launchpad.net/lubuntu-qa/msg00787.html
<imbrandon> ejat: it's already begun but you can get in touch with jcastro as he is the liaison for that promo iirc
<ejat> imbrandon : thanks
<imbrandon> ejat: see http://wiki.ubuntu.com/HPCloud iirc ( typed from mem, may be off )
<ejat> DEADLINE FOR THE FIRST ROUND OF APPLICATIONS IS 3 JUNE - Sorry this is so short notice, we'll be doing applications in batches. :( i just notice today :(
<dpb___> Is there a hook that is called when a service is destroyed (destroy-service)
<dpb___> I guess maybe I shouldn't be calling destroy-service, but I'm not sure.
<ejat> imbrandon : is this the latest from askubuntu http://goo.gl/a0mMX
<ejat> juju with hpcloud or now already supported ?
<imbrandon> define supported ? hehe its somewhat functioning with some known issues but there is a specific LP branch that can be used and myself , hazmat , SpamapS , mgz and i'm sure a few others have been trying it out and some fixes have landed but its nowhere near official yet
<imbrandon> let me find you the url ...
<imbrandon> ejat: here is the branch lp:~gz/juju/openstack_provider
<imbrandon> SpamapS: thats not back in trunk yet correct ? ^
<SpamapS> dpb___: no there's no cleanup currently.. you just have to terminate the machines
<SpamapS> imbrandon: correct, but hazmat is working on it
<ejat> imbrandon : thanks
<dpb___> SpamapS: thx
<SpamapS> This bit where subs never depart or break really screws me. :-/
<SpamapS> hazmat: ^^ is that bug documented somewhere?
<imbrandon> the hour of irclogs to convince me of it ? lol
<SpamapS> No this even screws me if I do it the easy way
<SpamapS> basically I'm trying to simplify
<SpamapS> I should be able to relate nagios<->nrpe<->*  and have * get monitored
<imbrandon> yea i was trying to be a smrt az
<SpamapS> but if nrpe units never depart ..
<imbrandon> no workie tho
<SpamapS> then I don't know when to tell nagios to regen the configs
<imbrandon> config-changed fires after when it should have ?
<SpamapS> So make the admin poke it?
<SpamapS> that sounds like "the suck"
<imbrandon> yea but it sounds like that or delay until a patch is pushed
<SpamapS> there's no patch for this IIRC
<SpamapS> its fundamentally broken
<SpamapS> IMO subordinates are kind of just clunky and should be rethought
<SpamapS> nice first try..
<imbrandon> ahh ok i was thinking it was still lot further out
<SpamapS> but I think they should just be regular services like anything else, just directed to spin up inside the container
<SpamapS> making them special is making them broken
 * imbrandon feels that way about all the plugins and hooks too, they work but feel tied down, we cant enhance them
<imbrandon> :(
<imbrandon> yea
<imbrandon> that sounds right
<SpamapS> I can flip it around and make it work
<imbrandon> you're targeting the container for the hooks, relating and the placements should not matter
<SpamapS> relating nrpe<->*<->nagios
<SpamapS> and just having nrpe tell * to inject stuff into the monitors relationship
<SpamapS> but that is *lame*
<SpamapS> because now I can't have generic nrpe/nagios relations
<imbrandon> so in theory you could have a subordinate that lives on its own single node but relates to many things that way
<SpamapS> I'm going to step away from nagios now
<SpamapS> Its pissing me off too much. ;)
<imbrandon> SpamapS: yup i know the feeling, thus how many times i have done the same thing on nginx and others ... maybe not that complex but i get jaded that i cant do something in a way i reason in my head is really the correct one
<imbrandon> so yea ... i feel ya. and now as you pointed out the other day i think lots of mediocre charms just need to get pushed, these are ALL frakkin wrong and bad
<imbrandon> but they are our base and we just iterate them when each thing lands or is fixed
<imbrandon> vs not having nginx for a month :)
<imbrandon> i hate that, i hate working like that but i dont see another way right now with how things are setup, too much stuff is waiting on go, well really only probably 2 things are but then it cascades and nothing can go, because to "fix" some of this its really implement the feature/provider
<imbrandon> etc
 * imbrandon preaches to the choir more
 * imbrandon is saddened by his ~/Projects directory ... 
<SpamapS> :)
<imbrandon> way tooo many things in it ( as i tend to only keep in progress projects there and once they hit a certain point they live online or another perm place on the hdd/server .... but SpamapS check this ....
<imbrandon> bholtsclaw@mini:~$ ls -lR ~/Projects|wc -l
<imbrandon> 423438
<SpamapS> uh
<SpamapS> imbrandon: randomly delete half of them? ;)
<imbrandon> well i got 2 big things that are adding to that ... a charm/getall symlink and a OMG full checkout
<imbrandon> minus those two and it would be a bit more reasonable ... but still
<imbrandon> its way way past where my OCD normally keeps it
<imbrandon> bholtsclaw@mini:~$ du -sh ~/Projects/
<imbrandon> 15G	/home/bholtsclaw/Projects/
<imbrandon> and crap i just realized that the hostname on this is wrong ... hrm surprised more things are not broken for me ... that i noticed yet at least
<imbrandon> oh yea synergy is broke ... wth , how would my hostname revert ...
<imbrandon> ahh terminal was left open from prior to it changing ... heh
<hazmat> SpamapS, generally speaking it is
<hazmat> i answered that last night.. but you might have not been here.. but basically its the lack of coordination around stop. ie. parent supervision kills children
<hazmat> in this context (subordinates) its actually a bit worse
<hazmat> SpamapS, pls file a bug if you have a specific behavior around that you'd like to see
<hazmat> data jitsu ;-) http://oreilly.com/data/radarreports/data-jujitsu.csp?cmp=tw-strata-books-data-products
<imbrandon> heh wow
<imbrandon> hazmat: u seen @http://jit.su ? the nodejs api tool
<imbrandon> err the nodejitsu api too
<hazmat> imbrandon, yeah.. i've got an account there
<hazmat> one of the few paas's to support websockets
<imbrandon> heh me too and appfog and phpfog and vmc and phpcloud and heroku and about a half dozen others i should use more
<imbrandon> lol
<imbrandon> yea i *think* nodester does
<imbrandon> too, iirc
<hazmat> SpamapS, imbrandon so to expose services on hpcloud.. did you do it by hand via the control panel?
<imbrandon> ugh i need to rethink/redo my emails again and unsub from some things ... i hate that gotta do it about once every 2 years
<imbrandon> hazmat: yea i'm lazy and just set it to 0.0.0.0/0 on bootstrap for the default one
<imbrandon> and not care about the others
<imbrandon> cuz no prod services were there yet
<imbrandon> etc
<imbrandon> e.g wide open
<imbrandon> err not default but the service name without a number
<imbrandon> cuz all of them had it
<hazmat> actually it looks like the expose functionality works with the tweak branch
<hazmat> imbrandon, i just setup for internal traffic limited to 10.2.0.0
<imbrandon> cool, yea i have it pulled but not launched a new env yet
<hazmat> ie. private network addrs
<imbrandon> yup
<imbrandon> and the pubs are always 15.185
<imbrandon> in all 3 az
<imbrandon> s
<imbrandon> just fyi if you wanna limit that for someting later
<hazmat> its unfortunate but not horrible i guess
<hazmat> its manual though which is lame and needs docs
<imbrandon> yea i've seen 15.185.100 .99 .98 and .102
<imbrandon> manual ?
<hazmat> imbrandon, the rule management for the internal traffic
<hazmat> since hpcloud doesnt' support a rule for a self referential group (ie open group traffic)
<imbrandon> they have cli tools too , and juju is doing something with the IPs too funky as they dont automatically get floating ones
<imbrandon> normally
<imbrandon> yea it does
<imbrandon> you just have to use the nova client to make it
<imbrandon> you cant make it with the webui
<imbrandon> half sec
<hazmat> imbrandon, you mean the hpcloud tool?
<imbrandon> damn internet is going slow, but its in the docs, and no i mean python-novaclient
<imbrandon> eg "nova" on the cli
<imbrandon> i'll toss ya my env vars to get it setup
<imbrandon> this crap was all trial and error
<imbrandon> its all piecemealed and frankensteined and wrong
<imbrandon> hehe
<SpamapS> hazmat: bug 1025478 btw.. filed yesterday :)
<_mup_> Bug #1025478: destroying a primary service does not cause subordinate to depart <juju:New> < https://launchpad.net/bugs/1025478 >
<hazmat> their nova is some forked bespoke version
<imbrandon> yup so is their fog
<imbrandon> aka hpfog
<hazmat> yeah.. but at least that one works pretty easily
<hazmat> their nova fork never did auth successfully for me
<SpamapS> wow
<SpamapS> so they just went off in the weeds
 * hazmat nods
<hazmat> SpamapS, it looks like they're going down the path of supporting all their own client libs
<hazmat> and some what amusingly python isn't one of those per se
<SpamapS> thats just awesome
<SpamapS> why isn't anyone shaking fists at these people?
<hazmat> https://docs.hpcloud.com/bindings
<SpamapS> probably because nobody is actually trying to use them yet
<hazmat> SpamapS, because everyone is busy congratulating them on validating openstack?
<SpamapS> I would think the libcloud guys in particular would be a champion for this
<imbrandon> export NOVA_USERNAME=me@brandonholtsclaw.com
<imbrandon> export NOVA_PASSWORD=
<imbrandon> export NOVA_URL=https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
<imbrandon> export NOVA_PROJECT_ID=me@brandonholtscaw.com-default-tenant
<imbrandon> export NOVA_VERSION=1.1
<imbrandon> export NOVA_REGION_NAME=az-3.region-a.geo-1
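Pulling imbrandon's pasted settings together as one snippet: this is roughly the environment needed to point stock python-novaclient at HP Cloud, plus a sketch of the by-hand self-referential group rule discussed earlier that the web UI could not create. Everything here is illustrative, not authoritative — the account values are imbrandon's examples, the password stays redacted as in the original, the group name is made up, and the `secgroup-add-group-rule` flag spelling varied between novaclient releases, so check `nova help secgroup-add-group-rule` on your version.

```shell
# Environment for python-novaclient ("nova" on the CLI) against HP Cloud.
# Account-specific values are examples, not working credentials.
export NOVA_USERNAME=me@brandonholtsclaw.com
export NOVA_PASSWORD=        # (redacted, as in the original paste)
export NOVA_URL=https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/
export NOVA_PROJECT_ID=me@brandonholtsclaw.com-default-tenant
export NOVA_VERSION=1.1
export NOVA_REGION_NAME=az-3.region-a.geo-1

# quick sanity check that auth works
nova secgroup-list

# Hypothetical: the self-referential rule, letting members of the group
# "juju-myenv" reach each other on any TCP port. Flag spelling differed
# between novaclient releases -- verify against `nova help` first.
nova secgroup-add-group-rule --ip_proto tcp --from_port 1 --to_port 65535 \
    juju-myenv juju-myenv
```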
<Daviey> imbrandon: what is the NOVA_PASSWORD ?
<hazmat> imbrandon, pastebin would have been nicer, but thanks
<imbrandon> hazmat: that will make their nova work + your webui pass
<hazmat> imbrandon, tried that didn't work for me though
<imbrandon> bah was lazy, ohhh crap forgot that one Daviey, its PinkPoniez23
<imbrandon> :)
<Daviey> ta
<hazmat> i tried just about every permutation of key/username/password/token auth
<hazmat> the ruby cli tool works but is limited
<hazmat> and frankly given how poorly nova worked against rspace..
<imbrandon> also with the swift cli tool you need to drop the auth to v1 as well
<imbrandon> :(
<SpamapS> I was able to use stock precise nova manage
<hazmat> i'm not expecting too much
<SpamapS> IIRC
<imbrandon> yea i can use the nova
<hazmat> SpamapS, oh.. interesting.. i'll have to try that ..
<hazmat> i tried their version of nova in a sandbox
<imbrandon> nova i installed from repo, the hpfog/hpcloud one i setup the first time, then it screwed my rvm/rails install
<imbrandon> and never reinstalled it
<hazmat> most of nova cli doesn't work against rackspace.. volume/secgroups, etc.. because that's all split out into separate services that aren't publicly available atm
<imbrandon> but did send em a patch so it would work on 1.8.7 :)
<hazmat> its really a mystery to find out what an impl of nova actually supports as far as api
<hazmat> most of these have some sort of java front end proxy doing request load balancing against the api services and rate limiting it appears (rspace and hp)
<imbrandon> hp has a knowledge base page just for nova cli that lists what does/dont
<SpamapS> java front say what?
<imbrandon> i think they had apache traffic server on the GUI console when i first looked
<imbrandon> but they're strange anyhow without much direction i dont think, they have yui and jquery and a few other double-duty things i've noticed
<imbrandon> they tend to grab whatever tool from wherever to do what they need for the moment and customize it a bit then not really fully integrate any of it
<imbrandon> cept the php-bindings
<imbrandon> that is the only exception :)
<imbrandon> anyhow none of it is a big deal until after a month or 6 you add them all up
<imbrandon> and its like wow this really sucks
<SpamapS> -relname=${JUJU_RELATION_ID##:*}
<SpamapS> +relname=${JUJU_RELATION_ID%%:*}
<SpamapS> I hate bash
<SpamapS> why am I doing things in bash?
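The one-character fix SpamapS pastes above is the classic `##` vs `%%` confusion: `${var##pattern}` strips the longest matching *prefix*, `${var%%pattern}` the longest matching *suffix*. With a relation id of the usual `name:number` shape (the value below is hypothetical), only the suffix form yields the relation name:

```shell
# Hypothetical relation id of the usual "name:number" shape
JUJU_RELATION_ID="monitors:0"

# Buggy: "##:*" strips the longest PREFIX matching ":*" -- the value
# doesn't start with ":", so nothing matches and nothing is removed.
relname=${JUJU_RELATION_ID##:*}
echo "$relname"    # -> monitors:0

# Fixed: "%%:*" strips the longest SUFFIX matching ":*", leaving the name.
relname=${JUJU_RELATION_ID%%:*}
echo "$relname"    # -> monitors
```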
<imbrandon> bwhahaha <?php require('/hooks/lib/common');
<imbrandon> you know you wanna :)
<SpamapS> What I want is for my charms to be 90% declarative
<imbrandon> ugh i need to push the new stuff for that too ....
<SpamapS> I wonder if salt's states can be easily used w/o salt's agents
<imbrandon> nah i like the flexibility , i just want first class config management and raw access to the cloud-init even if hidden normally
<imbrandon> chefsolo or capistrano or fabric ... ? /me figured you'd use fabfile.py
<imbrandon> SpamapS: you know since we require cloud-init on the machines , it bakes capistrano and puppet as dependencies already so
<imbrandon> they are there to be used :)
<imbrandon> we just cant use them in cloud-init but could still use them :P
<SpamapS> how does one break out of puppet into a real language when it is needed?
<imbrandon> no idea, i hate puppet personally its a DSL not ruby
<imbrandon> but i know others like em
<imbrandon> only really seriously used capistrano or homegrown config mgmt for any length of time
<imbrandon> but i seriously do think you CAN
<imbrandon> just not sure HOW
<SpamapS> Yes I know its a DSL
<SpamapS> and there are these tiny moments where you need to do something that the DSL can't do
<SpamapS> I want to know how do I add those capabilities
<SpamapS> I suspect "with ruby"
<imbrandon> SpamapS: you might check mozillas github repo for their config stuff they use it a lot iirc
<imbrandon> well all of them
<imbrandon> differently
<imbrandon> no thats what i'm saying you dont have to add them they are already installed you just need to like call "puppet somefile.pp"
<imbrandon> in your shell
<SpamapS> Ok here's another problem with subs..
<SpamapS> so my nrpe charm had a bug in the joined hook for the subordinate relation
<SpamapS> I fixed it
<SpamapS> upgraded charm
<SpamapS> but oh noes.. I can't actually apply the fix
<imbrandon> cloud-init depends on them , thats why i was irritated when we were restricted from them i'm like ummmmm that sux
<SpamapS> I have to now rewrite that joined charm to be "relation agnostic"
<SpamapS> err
<SpamapS> joined hook
<SpamapS> and then I have to call that hook in upgrade-charm
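The "relation agnostic" rework SpamapS describes is commonly done by pointing several hooks at one script and dispatching on the hook name, so upgrade-charm can re-run the same logic. A minimal sketch, assuming a hypothetical `monitors` relation, with the real regeneration logic stubbed out:

```shell
#!/bin/sh
# Sketch only: hooks/monitors-relation-joined and hooks/upgrade-charm
# would both be symlinks to this one script.
set -e

regen_monitors() {
    # Rebuild state from data that can always be re-read (relation-list,
    # relation-get, config-get) rather than from JUJU_REMOTE_UNIT, which
    # is only set while the original relation hook runs.
    echo "regenerating monitor config"
}

# Dispatch on whichever hook name we were invoked as.
case $(basename "$0") in
    monitors-relation-joined|upgrade-charm)
        regen_monitors
        ;;
esac
```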
<imbrandon> SpamapS: yea when that happens to me i ssh to the box and edit the files in /var/lib/juju/blahvblhblag
<SpamapS> except, the error was in the way it grabs JUJU_REMOTE_UNIT
<imbrandon> then --resolved -retry
<imbrandon> or whatever
<SpamapS> which is not available ever again
<SpamapS> so.. doh .. kill the primary service and start over
<imbrandon> well i do it local imediately but the alternative is to destroy it
<SpamapS> imbrandon: it did not error
<imbrandon> no
<SpamapS> so there's nothing to retry
<imbrandon> not debug-hooks
<imbrandon> ok lets step back
<imbrandon> i know there is nothing .. i'm telling you my hack
<imbrandon> that works BUT now will get fixed
<imbrandon> hehe
<imbrandon> ok soooooo
<SpamapS> that fix won't even work here
<SpamapS> unfortunately
<imbrandon> when it gets to that point that you only have the choice to destroy it
<imbrandon> i dont debug-hook
<imbrandon> i ssh TO it
<imbrandon> real ssh
<imbrandon> and edit the cached charm
<SpamapS> there's a value that I can't get
<SpamapS> its gone
<imbrandon> then when i run resolved --retry
<imbrandon> etc it refires
<SpamapS> there is nothing to retry
<SpamapS> state is up
<imbrandon> i know there isnt . but it still does it
<SpamapS> interesting
<imbrandon> and fixes my problem
<imbrandon> heh
<imbrandon> yea i think that unintentionally that "juju resolved --retry service/2"
<imbrandon> is our "force a hook"
<SpamapS> 2012-07-17 16:05:22,323 WARNING Matched relations are all running
<imbrandon> sure but watch the log
<SpamapS> nope
<SpamapS> nothing running
<SpamapS> which hook would it even run?
<imbrandon> it always tells me that it didnt need to do it and acts like it dont then i watch the debug log and hooks start going to town
<SpamapS> joined?
<SpamapS> changed?
<imbrandon> changed
<SpamapS> I think you been smokin something son
<SpamapS> that does nothing
<imbrandon> you could also edit the upgrade temp
<SpamapS> I could do 100 things
<SpamapS> but I know juju in and out
<SpamapS> users might not want to learn that
<imbrandon> yea they do if there was a way
<imbrandon> :)
<imbrandon> no seriously they will tho
<imbrandon> think about how well the chef users know that
<imbrandon> etc
<SpamapS> If you brought me something and said "well sometimes you just have to ssh in and edit random files that you don't understand on disk".. I'd bring you a rather large blunt object to beat me to death with
<SpamapS> we are not targetting chef users
<imbrandon> its part of their profession that like most geeks will take pride in knowing more than the boss
<imbrandon> :)
<SpamapS> chef is a big beast
<SpamapS> so is puppet
<imbrandon> heh
<SpamapS> not './configure && make && make install' for the cloud.. *apt-get* for the cloud
<imbrandon> sure if thats all we needed we have aws-tools in the repo
<SpamapS> as in, install it, configure it, and get it to the point where you can use it.
<imbrandon> you all keep saying shit like that and are contradicting yourselves
<SpamapS> oh?
<SpamapS> I know how *I* feel
<SpamapS> I think I disagree with quite a few on this
<imbrandon> dude we just had this exact convo
<imbrandon> like a week ago
<imbrandon> heh
<SpamapS> and I said we want puppet for the cloud?
<SpamapS> cause, I was drinking a lot last week...
<imbrandon> well i explained until shit lands we are puppet
<SpamapS> oh lol
<SpamapS> until shit lands we are a ship w/o a rudder
<imbrandon> yea that was the gist of it
<imbrandon> :)
<SpamapS> so lets not talk about our problems, but our intentions
<SpamapS> intentions == simpler way to get stuff into the cloud
<imbrandon> no for real tho we went through all this, and you kept telling me how we werent but we could do this and that and this yet and had to ignore dry and blah , and i'm like ummm ok so we're not ...... yet :)
<SpamapS> And a way to share the deployment "formulas"..errr.. charms.. so you don't have to sit down and open your editor and refactor half of the thing to get it to work in your infra
<imbrandon> oh yea i totally see where we're headed , or i wouldnt be here
<SpamapS> What we aren't, and what we want to be, are the same thing. :)
<SpamapS> So yes, we can ssh in and dick with files
<SpamapS> but I am kind of tired of throwing bugs on the pile. ;)
<imbrandon> but i was and am being a realist about right now, we do pretty much about 60% of things worse with plans that are solid to be 200% better
<imbrandon> the same realist in me says tho once it does all pan out its gonna be sweet tho !
<imbrandon> heh and much better than if you bolted this onto chef or salt stacks
<imbrandon> or made ansible not use the /etc/hosts file for 7000 nodes
<imbrandon> ugh
<SpamapS> haha
<SpamapS> I see you tried it out? :p
<imbrandon> you realize that btw, ok so here is the 5min explanation of ansible == juju s#zookeeper#/etc/hosts+hash in memory no matter how many nodes of ALLLLLL meta data and databags#g
<imbrandon> yea i try everything , some things more than i should some less ;), thats why i end up with accounts on all services , because unless you try or learn about every part that you might at some point need to know i dunno to me you just should know the options available as much of a pita it is, like your CentOS thing :)
<SpamapS> yeah I try to keep it limited to the biggest alternatives, not all alts
<imbrandon> but yea. try to use ansible with 700 nodes launched and another 300ish off but in the host file ... I saw the results and laughed as i unsubscribed from the ML
<imbrandon> its perfect for 5 machines, does some things a lot better , but ... yea $6 on fuckit
#juju 2012-07-18
<hazmat> SpamapS, i'm tired of watching the pile too
<hazmat> imbrandon, zk mem isn't a practical limitation
 * hazmat digs salt
<hazmat> imbrandon, can you run ansible on a single node?
<hazmat> ie something for a hook to use?
<imbrandon> hazmat: erm, if you kinda bent it very likely, i'll try it out here in just a few and see if i run into anything
<hazmat> SpamapS, i take it back, hp does support internal group access rules, just wasn't enabled in the provider, doing some last testing work, but it appears we have a flawless victory for hpcloud
<hazmat> rackspace is going to require a bit more refactoring, worth putting off till post trunk merge
<hazmat> hmm.. maybe not re rackz.. tbd
<hazmat> SpamapS, imbrandon if you want to try it out.. its at lp:~hazmat/juju/openstack_provider
<hazmat> should be an OOTB experience
<imbrandon> sweet
<imbrandon> kk yea ill fire er up in half sec was just finishing up some stuff with sis
<imbrandon> been a loooooooong day
<imbrandon> :)
<imbrandon> hazmat: do i need to modify my env.yaml for any of the changes ?
<imbrandon> ( that you can think of )
<hazmat> imbrandon, nope
<imbrandon> sweet, snagging now
<imbrandon> heya hazmat / SpamapS : while i'm thinking of it can someone please bump the ver in setup.py to the proper version too with this release ? heh
<hazmat> it would be nice to go 0.6
<imbrandon> :)
<imbrandon> hazmat: i did this a few days ago
<imbrandon> https://code.launchpad.net/~imbrandon/juju/fix-setuppy-version/+merge/114207
<hazmat> imbrandon, i saw it
<imbrandon> if you wanna use it, if not i'm fine as long as it gets folded in, i was thinking that in the debian packaging then the __version__ could be changed by the debian/rules on build
<hazmat> imbrandon, i thought i could finish up the upgrade stuff, which gets the python packaging up to snuff for reals..
<hazmat> and uses an embedded version.txt file
<imbrandon> thats cool too either way, i just did that cuz i ran across in pep-08 where it said to use module.__version__
<imbrandon> but honestly i dont care either way, the txt would work for tools like you said and the __version__ could even just feed off that and in fact the debian/rules could feed the version.txt on build too
<imbrandon> depending on how elaborate you wanted to get :)
<hazmat> yeah.. that bit is already solid for lp:~hazmat/juju/core-upgrade
<hazmat> but unless its extracted its perhaps better to go with something simple, as
<imbrandon> rockin, i'll un-req that MP then, really was just mostly poking learning etc anyhow
<hazmat> i'm spread thin
<imbrandon> i hear ya there :) heh if there are bite sized stuff ever i'm happy to help when/where i can :)
<imbrandon> looks like its all
<imbrandon> working great too
<imbrandon> gonna full teardown and redeploy ( just got junk there now anyhow , the mirrors are manual for the moment )
<imbrandon> but i want to get those charmed too sometime soon-ish
<imbrandon> then tossing up a mirror for any env , even vpc ones would work for all providers etc
<imbrandon> in one command :)
<imbrandon> ( /me still dont like that it takes 70 minutes to update the HPCloud mirrors per run and only 3 to 5 on AWS or @home )
<imbrandon> hazmat: is that https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/ 35357 number tied to my acct ? ( was gonna prep some doc additions for HPCloud in a bit )
<imbrandon> or is that just the port for everyone
<imbrandon> hazmat: sweet, so when i deployed a new env and then destroyed it , it actually cleaned up all the junk left from the broken juju too
<imbrandon> nice :)
<hazmat> imbrandon, that's the port for everyone
<imbrandon> sweet and i found my keyboard issue, was dead easy
<imbrandon> there is a eng(US) and a eng(MACINTOSH)
<imbrandon> just had to pick the right darn one :)
<imbrandon> no more pasting wrong things AND i dont lose ctl+c to stop a process like you do swapping ctl+cmd
<imbrandon> :)
<imbrandon> well you dont lose it but then its cmd+ctl, i think this is the first time in gnome/unity ive gotten it to fully work ( OSX , Windows and KDE all default to the mac way)
<hazmat> imbrandon, that sounds pretty nice re mirror charm
 * hazmat destablises the branch to ensure proper essex support
<hazmat> momentarily
<hazmat> cool all good now, should work with essex  and diablo
<imbrandon> nice
<hazmat> mgz, greetings
<mgz> hazmat: hey
<mgz> so, merging the current state sounds good, should I file bugs for the remaining bits and pieces?
<hazmat> mgz sounds good, actually its probably better to do that first
<hazmat> mgz nothing that's a show blocker that i saw?
<mgz> nope, a couple of things we really do want before release though
<hazmat> mgz most  of the todos there are pretty minor
<mgz> cert checking and some concurrency management
<mgz> the rest is all just incremental improvements
<hazmat> mgz the cert checking it would be nice to have on by default
<mgz> right, it's not hard either as the tough bit is available in txaws
<hazmat> i'm not sure what you mean by concurrency management
<mgz> atm there's nothing stopping vast numbers of parallel http api requests
<hazmat> if there's multiple provisioners, there's external management of the concurrency around any given machine.
<mgz> so, try to shutdown 200 machines, twisted will attempt to make 200 tcp connections
<hazmat> mgz, hmm.. true, although that's the only O(n) ops semantic op, that's actually rather lame in general (lack of multi node deletion in ostack), that's a juju client op, batch and iter  would be nice there.
<hazmat> but not a blocker
<mgz> anyway, will file bugs for those bits and look over your branch again
<hazmat> mgz, cool
<hazmat> mgz, for that shutdown case, it would be nice to kill the zk nodes last in a separate op, so that if the op fails, the env is still interactable.
<hazmat> bbiab, off to drop off the kids
<mgz> okay, just going through the diff
<mgz> +            # Add internal group access
<mgz> +            yield self.nova.add_security_group_rule(
<mgz> +                parent_group_id=sg['id'], group_id=sg['id'],
<mgz> ...this was in the EC2 provider, and I didn't understand the purpose there, and I still don't
<hazmat> mgz, without it in a restrictive env, units a and b can't talk to each other
<mgz> this used to flat out not work in openstack, as a bug fix they permitted it, but it... still does nothing no?
<mgz> it's permitting a rule to access itself, is tautology
<hazmat> mgz, it does what's intended
<mgz> it is itself
<hazmat> in hpcloud
<hazmat> mgz, its permitting nodes in the group to access each other
<mgz> what's the actual effect?
<hazmat> its permitting a group.. not a rule to acess itself. without it mysql and wordpress can't talk to each other over mysql port
<mgz> <https://bugs.launchpad.net/nova/+bug/965674>
<_mup_> Bug #965674: openstack api create security group rule doesn't allow self-referential group <OpenStack Compute (nova):Fix Released by vishvananda> < https://launchpad.net/bugs/965674 >
<mgz> as I understand it, security group rules come in two forms:
<mgz> 1) allows access to ports in a range for any instance with the group
<mgz> hmm...
<mgz> so 2) is allows access to... everything? for any instance in given group to the parent group?
<mgz> so the intention of the rule is to allow any machine managed by juju to access any other machine managed by juju?
<hazmat> mgz within the same env yes
<hazmat> expose/open/close-port govern external acccess
<mgz> without the mysql charm needing to say "allow access to this port on the local network", only charms that need a publicly accessible port
<mgz> okay, I get it.
<mgz> thanks.
<mgz> so, it would make some sense to add that to the per-machine group rather than the per-environment one I think
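To summarize the model mgz just worked out, in juju terms: the environment-wide self-referential group rule handles unit-to-unit traffic, so a charm only declares ports that must be publicly reachable, and even those stay closed until the admin exposes the service. A sketch using the stock wordpress/mysql example from the discussion (service names are illustrative):

```shell
# Inside the wordpress charm's hooks: declare the public-facing port.
open-port 80/tcp

# Then, as the admin, actually open it to the world:
juju expose wordpress

# mysql's port 3306 needs neither step: the self-referential group rule
# already lets wordpress units reach mysql units inside the environment.
```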
<_mup_> Bug #1026102 was filed: Openstack provider does not validate https certs <juju:New> < https://launchpad.net/bugs/1026102 >
<_mup_> Bug #1026103 was filed: Openstack provider does not use constraints <juju:New> < https://launchpad.net/bugs/1026103 >
<_mup_> Bug #1026107 was filed: Openstack provider does not limit http api connections <juju:New> < https://launchpad.net/bugs/1026107 >
<_mup_> Bug #1026108 was filed: Openstack provider client does not raise exceptions with full details <juju:New> < https://launchpad.net/bugs/1026108 >
<mgz> okay, after lunch I merge your branch hazmat, fix the test failures and push
<hazmat> mgz, sounds good
<hazmat> mgz re per machine group instead of environment group.. i don't see why
<hazmat> mgz, can you ping me when its ready, i'd like to get this merged today
<mgz> sure.
<hazmat> mgz, there are a couple of oscon demos tomorrow which people have expressed interest in using juju on hp for.
<hazmat> fwiw
<mgz> there does seem to be a fair bit of interest in exactly that
<hazmat> mgz, did you ever get an account setup with hpcloud?
<mgz> I didn't hear back about whether we got the billing thing sorted
<mgz> sent a query after you mentioned it
<mgz> a rackspace one would be good too.
<mgz> did you get trystack working? zul posted a thing about an arm zone coming on line there.
<SpamapS> hazmat: \o/
<hazmat> mgz, no re trystack.. i limited myself to 3 openstack clouds..
<hazmat> so many snowflakes
<hazmat> mgz, re hpcloud afaik its sorted, i'll double check
<SpamapS> trystack is at least pure upstream
<hazmat> SpamapS, so is rackspace
<hazmat> well mostly
<hazmat> its actually tracking trunk by about 3w is what i recall
<marcoceppi> imbrandon: is the environments.yaml file in ~/.juju/ or somewhere else?
<hazmat> mgz, your hp account should be good to go
<SpamapS> hazmat: so its only special because they hide some services?
<hazmat> SpamapS, its not special in that regard, that is the state of art for folsom as well
<hazmat> SpamapS, network and volumes are in separate services
<hazmat> SpamapS, technically the existing sec group and floating ip stuff isn't in core either per se, its implemented via api extensions.
<hazmat> mgz arm and juju still requires something intelish for the zk nodes, afaicr java on arm is still a boondoggle
<m_31> testing from subway
<Nippo> I setup subway irc. but it is not working
<Nippo> if anybody knows please help me .
<Nippo> thanks advance
<marcoceppi> m_31: eat fresh
<SpamapS> m_31: howdy subway!
<SpamapS> Nippo: m_31 can help :)
<Nippo> Thanks SpamapS, :-)
<Nippo> I will ask m_31
<m_3> Nippo: hi, there were two complications I came across... first is to see what port the app is coming up on
<m_3> Nippo: ssh to the subway/0 unit and 'netstat -lnp'
<m_3> Nippo: then make sure you're doing a 'juju expose'
<m_3> (if you're not using the local lxc provider)
<SpamapS> m_3: subway doesn't always use 80?
<Nippo> m_3 : Please give me easy steps or URL to install subway irc :-)
<Nippo> spamaps : I am using port 3000
<SpamapS> m_3: hey how did the talk go?
<m_3> SpamapS: good... Jorge's part was _awesome_!  I'd really like to make improvements to my demo part
<m_3> tweaking... ;)
<m_3> preparing for the next one tomorrow
<m_3> Nippo: see if juju status shows the ports exposed
<m_3> Nippo: then 'juju ssh subway/0' (assuming the 0... look at the status output to find out for sure)... once you're on the machine, then `sudo netstat -lnp | less` and see where "node" is listening
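m_3's checklist above, gathered into one transcript (the unit name follows the discussion's `subway/0` example, and the sample netstat line is illustrative — subway reportedly listens on port 3000 rather than 80):

```shell
# 1. confirm the service is exposed and see what ports juju knows about
juju status subway
juju expose subway          # harmless if it's already exposed

# 2. get onto the unit and see where "node" is actually listening
juju ssh subway/0
sudo netstat -lnp | grep node
# sample output line (illustrative):
# tcp   0   0 0.0.0.0:3000   0.0.0.0:*   LISTEN   1234/node
```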
<mgz> hazmat: have pushed what should be a reasonable state
<Nippo> m_3 : Yes, It's listening . i launch the subway . but multiple users are not working
<Nippo> i follow this link  https://github.com/thedjpetersen/subway
<hazmat> mgz, nice, i'll take a look
<mgz> basically just alters the tests for the behaviour changes to sec groups and shutdown and a couple of trivial things
<mgz> other things (removal of some dead code and so on) can probably wait
<hazmat> SpamapS, any dead chickens you want me to sacrifice b4 landing ostack?
<hazmat> mgz, one minor diff fwiw, i'm just applying during merge.. http://paste.ubuntu.com/1098899/
<mgz> hazmat: right, you changed that back to the old form so that test update wasn't needed any more
<mgz> I swear that branch has broken and fixed that particular test about four times
<hazmat> mgz, indeed it has ;-)
<hazmat> mgz, looks good though
<iamfuzz> is there any way to tell juju to import another ssh key?
<iamfuzz> I'm patching things to try to get it working with eucalyptus but when i bootstrap, the instance comes up but I'm unable to ssh into it
<SpamapS> iamfuzz: the ssh key is installed using cloud-init
<SpamapS> iamfuzz: so if its failing to be installed, thats because cloud-init is failing
<SpamapS> iamfuzz: check console output maybe?
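On the original ssh-key question: juju of this era injects the key listed in environments.yaml via cloud-init, so the usual place to add one is there, before bootstrapping. A hedged sketch (option names as I recall them from the python juju docs of the time; values are placeholders and worth double-checking):

```yaml
environments:
  eucalyptus:
    type: ec2
    # either paste the public key material inline...
    authorized-keys: ssh-rsa AAAAB3NzaC1yc2E... you@workstation
    # ...or point at a public key file instead (use one or the other):
    # authorized-keys-path: ~/.ssh/id_rsa.pub
```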
<iamfuzz> SpamapS, ouch, that's no fun
<iamfuzz> I'll give it a go but that rarely works
<SpamapS> iamfuzz: cloud-init, or console output?
<iamfuzz> SpamapS, console output
<SpamapS> iamfuzz: thats disappointing. :-/
<iamfuzz> looks like networking isn't even coming up on the instance, can't even ping it.  Or does juju not enable ICMP by default?
<SpamapS> iamfuzz: juju just boots an image
<iamfuzz> (note that when i launch the same emi manually, everything is kosher)
<SpamapS> I don't think it does any specific authorizing other than SSH
<lifeless> you can check the access groups it creates
<lifeless> euca-describe-access-groups or whatever
<mgz> just euca-describe-groups and compare with names listed for instance under euca-describe-instances
<mgz> ping and ssh should work for all juju created instances if they're operational
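That group check can be mocked up as follows; the output lines are invented samples of what euca-describe-groups might print for a juju-created group, illustrating the ssh-only default hazmat describes below:

```shell
# Invented sample of `euca-describe-groups` output for a juju env group
groups="GROUP       juju-sample-0   juju group for sample machine 0
PERMISSION  juju-sample-0   ALLOWS  tcp  22  22  FROM  CIDR  0.0.0.0/0"

# ssh (tcp/22) should be authorized for the world...
echo "$groups" | grep -q 'tcp[[:space:]]*22' && echo "ssh open"
# ...while icmp does not appear unless someone added it by hand
echo "$groups" | grep -q 'icmp' || echo "no icmp rule"
```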
<hazmat> mgz, tis done
<mgz> hazmat: yeay!
<hazmat> iamfuzz, it doesn't enable icmp afaicr
<hazmat> it sets up tcp/udp for internal group access, and external access for ssh only by default
<iamfuzz> hazmat, I tried again and am able to ssh in manually now but juju status and juju ssh hang
<iamfuzz> investigating
<mgz> hazmat: I swear icmp used to be enabled, was that removed from the ec2 provider?
<mgz> ...maybe just misremember
<iamfuzz> hazmat, http://pastebin.com/PakTGnwj
<iamfuzz> hazmat, machine 0 should be the instance I'm on, no?  Any idea what could cause this error?
<lifeless> mgz: I could swear it was enabled too
<hazmat> iamfuzz, do you have console output for that instance?
<iamfuzz> hazmat, figured it out - juju-admin call failed
<hazmat> iamfuzz, it looks like the initialize somehow failed
<iamfuzz> hazmat, ran it manually and now status works
<hazmat> iamfuzz, cool
<iamfuzz> baby steps!
<hazmat> iamfuzz, cloud-init logs in /var/log should have the original error from the admin initialize cmd
<SpamapS> hah.. I love when I hit a public IP from a server I had yesterday, and its now something totally different
<SpamapS> http://ec2-54-245-6-168.us-west-2.compute.amazonaws.com/
<hazmat> hmm.. looks like the stack tests depend on a local ssh key
<SpamapS> stack tess?
<SpamapS> tests even
<hazmat> SpamapS, i trigger the ppa build
<hazmat> but it failed as the ostack provider tests need a key setup.. working on a fix
<SpamapS> heh
<SpamapS> the ec2 tests had that problem for a long time too
<_mup_> juju/trunk r558 committed by kapil@canonical.com
<_mup_> [trivial] fix openstack provider tests to not require an ssh key
<hazmat> fixed and committed
<hazmat> rebuilding ppa
<SpamapS> ok who broke charm promulgate?
<SpamapS> $ charm promulgate
<SpamapS> W: metadata name (gunicorn) must match directory name (.) exactly for local deployment.
<SpamapS> lifeless: earlier this week, did you ask who was on review duty?
<lifeless> nope
<lifeless> I asked how to move my charm forward :)
<SpamapS> lifeless: I don't see your charm on the review queue.. Need to subscribe charmers to the bug and make sure it is New or Fix Committed for us to see it in there.
<lifeless> it was
<lifeless> will look in a sec
<SpamapS> Ring the bell twice everybody.. python-django and gunicorn promulgated!
<SpamapS> avoine: ^^ \o/
<avoine> yay!
<SpamapS> negronjl: now just Riak, then.. you.
<negronjl> SpamapS: 'cause you still hate me :)
<SpamapS> just you now, Riak is Incomplete
<SpamapS> oh look at the time...
<SpamapS> ;)
<iamfuzz> SpamapS, when would be the last safe date to submit patches for juju to get it working against walrus again?
<SpamapS> iamfuzz: there's no "freeze" scheduled at this point
<iamfuzz> SpamapS, would beta freeze be a safe target?
<SpamapS> iamfuzz: oh for quantal? I'd aim a bit sooner than that
<iamfuzz> SpamapS, week before feature freeze? (16th of august)
<SpamapS> iamfuzz: actually no thats fine, it won't be in main, so we can upload it right up to the end
<iamfuzz> likely won't take that long, just handing off a bug here in-house and he wants a date
<iamfuzz> good deal, I'll tell him feature freeze, with enough time for review, so August 16thish
<SpamapS> I'd love to add Eucalyptus to our charm tester targets
<iamfuzz> we would as well
<iamfuzz> also asking in-house if we can setup a "public" cloud for you guys to run tests against
<SpamapS> like, if you could give us a URL to hit and capacity to run 3 or 4 at once.. we'd put it up on our jenkins.
<iamfuzz> shouldn't be a prob
<iamfuzz> much appreciated
<iamfuzz> we're just in the midst of a datacenter move so it may be a bit
<SpamapS> right now I think we only run against ec2 itself and the local provider
<SpamapS> iamfuzz: sure. Just ping me or m_3 when you want to chat about it.
<iamfuzz> SpamapS, will do, thanks man
<lifeless> SpamapS:  https://bugs.launchpad.net/charms/+bug/1017796 was the charm
<_mup_> Bug #1017796: Charm needed: opentsdb <Juju Charms Collection:New> < https://launchpad.net/bugs/1017796 >
<SpamapS> hazmat: any idea on the latest build fail for the PPA?
<SpamapS> lifeless: i re-subscribed charmers
<lifeless> SpamapS: ah
<lifeless> also, relatedly, is there anything I can do to get my uses-ip patch rolling again? It seemed to just get rejected and sidelined into a bug somewhere...
<SpamapS> lifeless: the fallback method i proposed seems preferred
<SpamapS> lifeless: in case u missed it tho, openstack native support landed...
<lifeless> in precise ?
<SpamapS> hah no
<lifeless> :)
<SpamapS> its worth putting in backports tho...
<hazmat> SpamapS, sigh no
<SpamapS> problem being there is no way to tell spun up nodes to use backports
<hazmat> SpamapS, they're all different
<hazmat> SpamapS, perhaps a missing inline callbacks..
<hazmat> i'm able to run trunk tests locally without issue
<SpamapS> hazmat: it works in my local sbuild
<SpamapS> i *hate* buildd specific problems :(
<hazmat> SpamapS, even worse when every different rel build is reporting something different
<SpamapS> hazmat: i am retrying just precise
<hazmat> SpamapS, k
<hazmat> SpamapS, one of the oneiric failures is odd
<SpamapS> hazmat: have seen io probs lead to weird fails in the past
<SpamapS> Failure: twisted.internet.defer.TimeoutError: <juju.control.tests.test_status.StatusTest testMethod=test_subordinate_status_output> (test_subordinate_status_output) still running at 10.0 secs
<Daviey> hazmat: considering "juju-origin: ppa" is already supported, i'd imagine "juju-origin: backports" is a trivial change.. and a reasonable SRU.. <-- SpamapS, comment?
<Daviey> (although, adding additional providers as SRU also seems reasonable IMO)
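For context, the knob under discussion is juju-origin in environments.yaml; "ppa" and "distro" exist today, while "backports" is only a proposal in this conversation:

```yaml
environments:
  sample:
    type: ec2
    # valid today (as I recall): distro, ppa, or a bzr branch
    juju-origin: ppa
    # the hypothetical SRU-friendly variant being proposed:
    # juju-origin: backports
```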
<SpamapS> Daviey: yes its something we could do
<SpamapS> Daviey: and the additional provider, I agree with that too
<SpamapS> Daviey: but I think it would require TC approval, since it does not fit with SRU guidelines.
<SpamapS> hazmat: so I think it was some kind of weirdness on the buildds
<SpamapS> negronjl: so I've been thinking more about the haproxy service_name thing
<SpamapS> mthaddon`: ^^
<SpamapS> I'm still not sold on a generic thing like that.
<SpamapS> negronjl: can you perhaps show me what exactly is intended?
<SpamapS> Like, a fully filled out config and the web app that is being balanced?
<Daviey> SpamapS: Well, do what you feel.. but the TC seemed to be pretty clear that the SRU team can handle one-off's quite nicely themselves
<SpamapS> Daviey: one-off's of applying the actual SRU policy... this would not even come close.
<SpamapS> Daviey: actually I take that back
<SpamapS> Daviey: we can SRU stuff that enables access to remote services that did not exist when the release was made.
<SpamapS> Like in days past empathy would add an IM network and that was ok to SRU
<Daviey> SpamapS: Generally, mdz made it quite clear that the SRU team can handle this stuff and "only escalates to the tech board in exceptional circumstances".
#juju 2012-07-19
<imbrandon> SpamapS: yea and there can be blanket exceptions for apps/teams too like clamav defs etc
<imbrandon> re: sru's
<imbrandon> Daviey: ^
<SpamapS> imbrandon: indeed, but there is no such blanket exception for juju
<imbrandon> SpamapS: right, but i just mean it might be good to see if that is our best option to ask for
<imbrandon> seems like it, but i've not deeply considered everything
<SpamapS> that was exactly my point.. we would need the tech board's approval to put such a radically new thing into precise-updates
<SpamapS> the SRU team's discretion is only on interpretation of the policy
<imbrandon> SpamapS: btw after the convo from a day or three ago re juju et al, i've spent the last hour or so re-evaluating puppet and chef-solo and trying to see how best to maybe use them along with juju in a franken-charm heheh , looks like it may be promising but not gotten far yet
<SpamapS> "For Long Term Support releases we regularly want to enable new hardware. Such changes are appropriate provided that we can ensure to not affect upgrades on existing hardware. For example, modaliases of newly introduced drivers must not overlap with previously shipped drivers."
<SpamapS> One can argue that hpcloud == new hardware :)
<imbrandon> sure, i dont think thats a bad hurdle to jump tho
<imbrandon> true
<imbrandon> yea, not only that, it doesn't circumvent the spirit of the policy
<imbrandon> in that it would not overlap and break older providers aka hardware
<SpamapS> well thats where it gets tricky
<SpamapS> because the new provider landed in what is basically the '0.6' tree
<SpamapS> with a lot of new code
<imbrandon> it would be nice if we could ( talking in general here so even in the go version etc etc ) decouple the providers somehow, and provide them in a juju-providers package or even allow juju-providers-somethirdparty, via them being encapsulated and swappable at runtime
<imbrandon> i mean look at some of the other tools that do things on multi clouds , given most are out of scope , but just trying to give a sense of why i think it would be good. they all support nearly 30 cloud providers
<imbrandon> so to add/remove/change those separately would be a huge win long term
<SpamapS> yes I suggested long ago that the provider code should be isolated as plugins
<SpamapS> no code in juju.environment.config
<imbrandon> right
<SpamapS> just have the providers register their configs
 * SpamapS is too lazy to lookup the bug that was closed saying no
<imbrandon> i'll look it up later i'm too lazy to have followed the link right now anyhow
<imbrandon> i just woke up an hour ago so still groggy
<imbrandon> lol
<imbrandon> btw SpamapS i gave up trying to get that usb dvi to work, so been on 2x monitors since i went full time  ubuntu again a few weeks ago
<imbrandon> and
<SpamapS> usb dvi?
<imbrandon> so with 3rd monitor sitting here off mostly been trying to find out what to use it for ...
<imbrandon> yea my mac mini has one hdmi out and one displayport out
<imbrandon> that drives 2x monitors
<imbrandon> the 3rd was a usb->dvi adapter
<imbrandon> that i never got to work, even brought some to uds
<imbrandon> etc
<imbrandon> well, work in ubuntu; and that's wrong too, i get it to work solo if the others are off ( e.g. there is a working kernel driver ) but not in combo where all 3 are on and working at the same time :)
<SpamapS> imbrandon: 3 is too many for me
<imbrandon> but anyhow was gonna say .... its no big deal now, took a bunch of parts i had laying around and managed to toss an amd64 2.5ish dualcore with 4GB ram and a solo hdd together, attached it to the 3rd monitor and hid it under the desk
<imbrandon> now just loaded synergy on it for single keyboard / mouse and all good to go again :)
<SpamapS> imbrandon: throw synergy on it and you've got a good place to have IRC live ;)
<SpamapS> oh haha
<SpamapS> I remember when synergy came out
<SpamapS> I blew peoples minds flipping from my macbook to my Debian desktop and back
<imbrandon> SpamapS: heh i like 3, i tried more and less, but main in the middle and one on left/right angled works out well for my brain
<imbrandon> hahah yea
<imbrandon> same here
<imbrandon> used to be a little wonky on osx but not anymore works well on all 3 now
<imbrandon> ( 3 os's )
<imbrandon> but in this case i have ubuntu on my main mac mini here and i loaded kubuntu on the new frankenstein
<imbrandon> but yea its perfect to have irssi and other misc non-critical stuff off to the side going
<imbrandon> haven't set it up yet either but i'm thinking about taking the apache/nginx etc that i normally have local and setting up a workflow so in my ide et al it still seems local but runs off that machine, saving me a tiny bit of cpu
<imbrandon> but not have to change workflow or giveup the reasons to dev local and not just a remote dev server etc
<imbrandon> e.g fast iterations etc
<imbrandon> i need to find some way that i can stop caring about the data on it tho, e.g. like a vm thats been snapshoted
<imbrandon> so i can "reset" it or similar at any moment and not care
<imbrandon> not really thought about how to accomplish that yet but will be next on my personal projects todo's
<imbrandon> it is pretty slow tho, so dont wanna tax it too bad, the ram is single channel ddr2 and the amd64 is like the first gen amd64 dual core's heh
<imbrandon> and the hdd in it is actually a 80gb 2.5in laptop sata drive i had laying around so i'm sure its 5400rpm at best
<imbrandon> but just for basically a "fat-kvm" or "fat-monitor" or whatever its perfect
<SpamapS> heh.. the PPA builders seem wildly disparate in their I/O capabilities
<SpamapS> nannyberry finishes the juju build in 9 minutes.. uranium gave up after 27 minutes
<imbrandon> some are likely vm's with remote root/build fs's that seem local to the buildd
<imbrandon> so network traffic hurts em
<imbrandon> or proximity to the NAS
<imbrandon> etc
 * imbrandon is only guessing
<SpamapS> Yeah I don't know
<SpamapS> I just keep hitting retry until they get nannyberry
<imbrandon> heh
<imbrandon> way way back there was a uber secret way to select a buildd target
<SpamapS> oo iridium.. thats a 'virtual 64' box.. this should be interesting
<imbrandon> but i forget now and dunno if u even can anymore
<imbrandon> i should dig into the current ubuntuwire setup , heh , that's when i had to learn all this, when we were building that network , and wgrant tells me it's not changed much since me, him, and ajmitch set most of it all up ... definitely could use a www design refresh at the least :) heh
<imbrandon> and changed from my version of ssh-import-id to the new one that didn't exist at the time :) i'm sure there are things mine didn't consider that the new one does ... except mine supports LP teams ... so i can ssh-import-id <some lp team> and it gets the ssh keys for the whole team ... i might be able to wrangle support into ssh-import-id for that
<imbrandon> seems like others might like it too maybe
<imbrandon> ajmitch: what ya think about charming the UbuntuWire services ( the things that make sense, that is; the stuff on PPC and Sparc hardware we obviously couldn't ), maybe revive some interest in it ?
<imbrandon> SpamapS: not sure which i hate more tho, unity or kde4, heh
 * imbrandon should just run fluxbox on everything and be done with it
<SpamapS> I don't think I'll ever understand "I hate <insert gui here>"
<SpamapS> It has a pointer.. windows.. and keyboard shortcuts. I'm good.
<SpamapS> XFCE.. Gnome.. Unity.. all the same in my book
<imbrandon> SpamapS: well when i say it , its normally because of things i would rather do another way and can't without recompiling / writing my own code etc
<SpamapS> KDE just strikes me as bloatware... tried it for a couple of days back when I was frustrated with the initial natty alpha3 version of Unity
<imbrandon> nah those all have the same widgets for their windows
<imbrandon> but the desktop is not
<SpamapS> 99% of the time I'm just looking at a browser in one monitor, and a 3-paned Terminator in the other
<imbrandon> SpamapS: yea KDE has always been the desktop you need horsepower to run, but its also been IMHO the desktop most like GNU* apps in that every little bit of everything has a config option in the gui somewhere
<imbrandon> hehe
<SpamapS> So I'm not exactly a GUI power user
<SpamapS> Yeah I HATE being able to config everything like that
<SpamapS> In fact I appreciate that Unity strives to pick a way that works for everybody so we can all stop caring where our launcher icons live
<SpamapS> tweaking == wasted time
<imbrandon> see i like the middle ground, i want the simplicity of a gnome2 or gnome3 minus gnome-shell or unity minus the launcher BUT all the config options of KDE and they are only available in .conf files tho, no gui to clutter shit up
<imbrandon> no way, tweaking is not wasted time if its doing more than just visual tweaking
<imbrandon> SpamapS: thats an awesome goal, i just think they have declared victory a little too soon
<imbrandon> ever used the Sublime Text editor? if it were a desktop i would be in love. it's a simple gui, e.g. there is ONLY the tabs of open docs, nothing else, not even a config options window or anything, BUT every aspect can be changed in the settings files that are 100% hidden from the user and don't even have a gui to change them, not even common changes like font size etc that editors normally have
<imbrandon> but if i want i just edit the json settings file and reload, boom i'm good :) but the editor is just an editor and per default works for 99% of ppl, no crazy options to confuse users, and power users get what they want
<SpamapS> so suffice to say, tweakers don't like Unity
<imbrandon> a desktop like that backed by gnome widgets like the first 3 you mentioned would rock
<SpamapS> but people like me, who want to pick up a stock machine and *NEVER* change any configs, adore it.
<imbrandon> SpamapS: yup exactly
<SpamapS> I think *most* people are in the non-tweaker category
<SpamapS> but most vocal people are
<SpamapS> in the tweaker category
<imbrandon> SpamapS: but you know that 1) group #2 is a very loud/vocal minority and 2) they have influence over a significant portion of #1's buying decisions
<imbrandon> there are very few in the group that I fall in that would use one and try to improve the other still ... e.g use osx and still contribute actively to ubuntu desktop ... i've only seen a very very few, and most of the time they keep it quiet; the only exception I can think of is Ian Murdock [ eg. the Ian in Debian ] who openly runs windows and osx as well as debian and other linux distros daily
<imbrandon> so how do you get the tweakers that are essential to contribute but not include them in the target ?
<SpamapS> Thats the funny thing. I don't think Unity wants tweaker code contributors. They want advocates, and feedback, but not people to actually write the code.
<imbrandon> then they dont want what they say they want then, that cant be "for everyone"
<imbrandon> e.g ok here is my thing unity is great for those like you like you said
<imbrandon> it really is , my bitching is for me
<SpamapS> Right, and the way you want to scratch your itch is considered harmful to users. Gnome takes the same stance btw.. that having all those config options just makes it harder to teach people to use.
<imbrandon> but i really have a problem with their "goal" and stuff like that that makes it sound like the debian goal of "universal openerating system"
<SpamapS> and harder to develop apps for
<imbrandon> and then have other mandates that oppose that
<SpamapS> No the goal is to have something that reduces friction for adoption and use.
<imbrandon> SpamapS: exactly , but gnome also lets those be changed in gconf and the like
<imbrandon> and i agree with gnome
<SpamapS> Right, so Unity is taking a harder line stance, which I agree with, that this harms the ecosystem for app developers
<SpamapS> because now your app works only for people who haven't changed setting X
<imbrandon> right and i adamently hate that
<SpamapS> Its what has made ios and android so successful in attracting light weight app developers
<SpamapS> the limitations of the medium make it easier to develop for
<SpamapS> You don't have to take into account "well what if the user has focus follows mouse on?" or "what if the user has their icon bar in auto hide on the bottom of the screen"
<imbrandon> so much so that not only does it make it to where i can't use it , i would actively encourage others not to, but you know what ... that goal only comes out in conversations like this ; if you just ask out of the blue you get "its the desktop for everyone"
<imbrandon> SpamapS: thats a support problem, there are proven ways to have the ability to change things but not support them
<SpamapS> So while increasing friction for users who want to change stuff.. it has reduced friction for *growth*
<imbrandon> no
<imbrandon> i cant agree
<imbrandon> see there is a big difference to me in providing sensible defaults and a few ( very few ) config options for something
<imbrandon> i can agree with that, and actually very very much like the idea
<imbrandon> with anything else hidden in a .conf
<SpamapS> There are a very few options. Just not the ones you want. :)
<imbrandon> heh , add the last part :)
<SpamapS> imbrandon: but here's the thing. iOS developers have *flourished* because they can basically count on having things work exactly one way and one way only. Android devs have had to deal with some moving targets, but ultimately, it works a certain way..
<imbrandon> have you looked at the settings in iOS or an iOS app ? they are 30+ screens
<SpamapS> imbrandon: Windows devs have the gravity of the platform pulling them along making them spend the money necessary to make apps that can deal with all of windows bazillion configs
<SpamapS> imbrandon: but Ubuntu wants to grow a market of app devs. TO do that, they need the ease of development of iOS, but with the power of the PC platform.
<imbrandon> and they are doing it wrong, OSX has that
<SpamapS> imbrandon: those settings are nothing compared to what is still in the system config panels on precise.
<imbrandon> see i agree with Unity's goal , thats the problem i always have when i have this convo, i do agree with it 10000% seriously, i just dont agree with the execution
<imbrandon> at all
<SpamapS> hah well thats fair
<imbrandon> :)
<imbrandon> okies, need to refill the mt dew and start on something productive for the day ... /me contemplates what that will be while afk ...
<imbrandon> brb
<imbrandon> ahh yes, apt-mirror charm, will be simple and productive :)
<m_3> hi from subway
<m_3_> m_3: right back at you
<semyazz> I have an almost fresh Ubuntu 12.04 & Openstack (devstack - master branch) and I'd like to deploy hadoop to my local test-cloud environment. How should I configure environments.yaml to deploy to Openstack? What is control-bucket in openstack's cloud? Any example of how everything should be configured? Is there native configuration for Openstack, or is the only way to use the EC2 api?
<melmoth> semyazz, do you have the dashboard installed ? There's a page that creates your environments.yaml there.
<melmoth> though, last time i tried it, i had some kind of problem with it :)
<melmoth> semyazz, my environments.yaml looks like this http://pastebin.com/sb0Yr8bB
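To answer the control-bucket question directly: it is just a uniquely named object-storage bucket that juju uses to keep environment state, so any name unlikely to collide will do. Below is a hedged sketch of an environments.yaml pointing at a devstack cloud through its EC2-compatible API; every endpoint and credential is a placeholder, and the option names should be double-checked against the juju docs:

```yaml
environments:
  openstack:
    type: ec2
    # EC2/S3-compatible endpoints exposed by devstack (placeholder addresses)
    ec2-uri: http://192.168.1.1:8773/services/Cloud
    s3-uri: http://192.168.1.1:3333
    access-key: <EC2 access key from the dashboard>
    secret-key: <EC2 secret key>
    # any globally unique name; juju stores environment state here
    control-bucket: juju-something-unique
    admin-secret: <any random string>
    default-series: precise
```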
<semyazz> thx :) indeed there is export to EC2 in Openstack's dashboard.
<SpamapS> FYI, http://www.oscon.com/oscon2012/public/content/video sabdfl keynoting about juju
 * SpamapS gets giddy as mark does jitsu export -e ec2 | jitsu import -e hp
<hazmat> SpamapS, yummy
<SpamapS> hazmat: so I think the buildd problem is just that the buildds are too slow
<SpamapS> hazmat: retrying enough times works
<hazmat> SpamapS, thanks for keeping at that yesterday
<hazmat> SpamapS, i've been under the weather and ended up crashing for like 18hrs
<SpamapS> sleep dep is a bitch ;)
<marcoceppi> wee keynote
<hazmat> m_3_, i see your gmail ;-)
<SpamapS> hahahaha
<SpamapS> m_3_: ^5
<SpamapS> whoa I just noticed, the premier diamond sponsor for OSCON is *MSFT*
<negronjl> SpamapS: Are you approving https://code.launchpad.net/~mthaddon/charms/precise/haproxy/mini-sprint-sf/+merge/114951 ?
<negronjl> 'morning all btw
<SpamapS> negronjl: in a bit, yes. I want to pontificate a bit more before I approve of your implementation. :)
<negronjl> SpamapS: pontificate away :)
<jcastro> negronjl: heya
<jcastro> I ran into someone at cloudfoundry
<jcastro> I need to link you up with him
<negronjl> jcastro: hey man
<negronjl> jcastro: anybody I know ?
<jcastro> he seemed excited about your work, and he's like the head dev experience guy
<jcastro> Andy Piper
<negronjl> jcastro:  I've heard the name
<negronjl> jcastro: cool
<negronjl> jcastro:  now I just need them to support precise and we're rolling
<hazmat> nice
<negronjl> jcastro:  I got it working on precise ( somewhat ) but, it would be a lot better if they can fix some of their dependencies
<jcastro> yep, he actually came to the session to find someone to talk about that
<jcastro> I was like "hey cloudfoundry, we need to talk!" and he was like "yes, we do we're trying to sort 12.04"
<jcastro> and I was like "I've got your man."
<jcastro> cool, expect an email in a few
<negronjl> jcastro:  nice ... there is some interest ... perfect ...
<_mup_> Bug #1026714 was filed: PPA builds fail because of low twisted timeouts <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/1026714 >
<imbrandon> SpamapS: whoa sabdfl used jitsu in the demo ?
<jcastro> yeah, crazy
<jcastro> hazmat: I must say, that import export pipe thing you made.
<jcastro> hazmat: is total brilliance.
<james_w> yeah
<james_w> shame it doesn't move the data too :-)
<imbrandon> oh hell yea. frakin cake 100%.
<jcastro> baby steps
<imbrandon> james_w: should be able to be added
<imbrandon> but doesn't the fact that sabdfl demo'd with jitsu show that our current dev cycle-ish blah blah blah could be a lil broken :)
<hazmat> james_w, give it time ;-)
<SpamapS> I dunno if I want it to move the data
<hazmat> james_w, just need some time to do some x env rels and it could do that
<imbrandon> btw ^5 hazmat :)
<hazmat> SpamapS, not move the data, but allow for x-env sync would do it
<SpamapS> Right what I really want is a backup/restore service that I can relate cross-env
<hazmat> block data movement  is provider specific, app protocols can do it
<imbrandon> SpamapS: would be a worthy option , export to me kinda implies it can, like phpmyadmin can export schema and schema+data
<jcastro> 90 charms folks!
<jcastro> 10 more!
<imbrandon> jcastro: i got 2 in the works
<imbrandon> and 2 in the review q
<imbrandon> so that is almost half :)
<jcastro> heh
<imbrandon> actually only one of the 2 in the q will pass i think, but i want other eyes on it anyhow to get feedback
<imbrandon> "Using 5.6 GB" I think it may be time for me to archive some things off imap
<imbrandon> heh
<jcastro> imbrandon: hey so I was thinking
<jcastro> we can hit 100 by august 23rd (12.04.01)
<jcastro> marcoceppi: also ^^^
<marcoceppi> I've got about 3 hours tonight to work on Charms. I know there are a few I was working on still limbo. Gluster and it's subordinate, Gitolite, WordPress, Papertrailapp subordinate
<imbrandon> marcoceppi: if i can help in any way lemme know , i'd be happy to do a lil grunt work for ya if needed man
<marcoceppi> imbrandon: thanks, I'll take a look at each in a min and ping you
<imbrandon> kk
<SpamapS> hazmat: would you count this as trivial? https://code.launchpad.net/~clint-fewbar/juju/fix-buildd-fail/+merge/115794
<hazmat> SpamapS, yes, but the  intended use is?
<hazmat> to raise the timeout for the buildd?
<SpamapS> yes
<SpamapS> I think its reasonable to keep it at 5 for developer use
<SpamapS> like, none of them should even take 1s on a developer box
<hazmat> sadly a handful do
<SpamapS> right, but we can attack that with the testr stuff
<hazmat> but they typically have explicit timeouts
<hazmat> yup
<SpamapS> I'm going to set package builds to 30s timeouts for tests
<SpamapS> thats 16 hours if they all take 29s
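The arithmetic behind that worst case checks out if you assume a suite of roughly 2000 tests (my estimate, not a measured count):

```python
# Back-of-the-envelope check of "16 hours if they all take 29s"
tests = 2000          # assumed approximate size of the juju test suite
seconds_each = 29     # just under the proposed 30 s package-build timeout
hours = tests * seconds_each / 3600
print(round(hours, 1))  # → 16.1
```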
<SpamapS> hazmat: ok, so should I just go ahead and merge?
<SpamapS> then I can move forward w/ updating the packaging
<hazmat> SpamapS, +1 re merge
<SpamapS> done
<SpamapS> thanks! :)
<SpamapS> negronjl: oh hey, we don't currently allow "lists" in maintainer for charms.
<SpamapS> negronjl: you're the second to want to do it though.. so maybe we should?
<negronjl> SpamapS: we should
<SpamapS> It would need to be a yaml list tho
<negronjl> SpamapS: I'll figure something out
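For reference, metadata.yaml currently takes a single maintainer string; the change being floated would accept a YAML list as well (the list form below is hypothetical, not yet supported, and the names are placeholders):

```yaml
# supported today:
maintainer: Jane Doe <jane@example.com>

# the proposed list form (hypothetical):
# maintainer:
#   - Jane Doe <jane@example.com>
#   - John Roe <john@example.com>
```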
<Daviey> SpamapS: When it's more than one, shouldn't it become a team?
<SpamapS> Daviey: yes
<SpamapS> but there's no email address for a team
<m_3_> mark's demo rocked!
<m_3_> ok, reading through backchannel I see y'all were watching
<negronjl> SpamapS: We can make charmers the maintainers ... for the haproxy one
<SpamapS> negronjl: actually, thats probably a good idea
<SpamapS> I tend to want charmers to be largely responsible for key bits of infrastructure.
<negronjl> SpamapS: if/when you promulgate it I can change it
<SpamapS> negronjl: well really charm proof should complain
<Daviey> SpamapS: it's trivial to apply for a team mailing list.
<negronjl> SpamapS: maybe we need to update charm proof so it does
<SpamapS> yeah, perhaps charmers should have one actually
<negronjl> SpamapS, Daviey: Don't we have a mailing list for charmers already ?
<SpamapS> Though I tend to think right now juju@lists.ubuntu.com is a better home for charmers
<negronjl> SpamapS: That may be the case right now but, in the future the charmers traffic will end up polluting the juju mailing lists
<negronjl> It's easier ( better ? ) to have a separate one
<SpamapS> yeah
<SpamapS> perhaps now is the time
<negronjl>  aawwwww ... we're growing up ... we're gonna have our own mailing list :)
<SpamapS> See I dunno.. I kind of want those discussions to happen in the view of the whole community until messages get more frequent than once every 2 weeks
<negronjl> Can we have our own and have the email address for the charmer's mailing list subscribe to the juju ML ?
<hazmat> m_3_, nice
<hazmat> SpamapS, it seems a bit early for that
<negronjl> hmmm .. that wouldnt work would it ?
<negronjl> nm
<hazmat> SpamapS, like how many charm specific questions on the list have there been
<SpamapS> yeah I think charmers stays on the main list for now
<imbrandon> negronjl: heh yea i already added and subsequently removed an array of maintainers :)
<negronjl> Maybe file a request so we can add multiple email addresses in maintainer field ?
<hazmat> negronjl, just email the list
<imbrandon> SpamapS: well also some things may not go to the list because of that, i tend to use email as a last resort anyhow, but juju@ doesn't seem like the correct place to ask a charm question to me and likely others
<negronjl> I see this issue growing as people modify others' charms and add their names to the existing maintainer field
<hazmat> its trivial to add support for it in the only place that info is used atm
<hazmat> negronjl, one time commit, is not maintainer
<hazmat> enduring activity
<SpamapS> imbrandon: Its a place to ask a question about fundamental charms tho.
<imbrandon> sure, but since charms are more like a software app than packaging that app, it might be nice to have a defined way to list past contributors universally now that it's mentioned; i don't think maintainers is right, although i do like the idea of it being a list for other reasons
<SpamapS> maintainer is a statement.. "I will keep this charm in good order and respond to questions about it"
<imbrandon> SpamapS: sure, but it looks more like a place that juju development happens
<SpamapS> You can contribute like crazy to the charm w/o saying that.
<SpamapS> imbrandon: no, thats on the juju-dev mailing list
<imbrandon> SpamapS: right , i know this, i said looks like :)
<SpamapS> imbrandon: juju is for discussions about using juju, which includes charm dev
<SpamapS> Seriously traffic is next to nothing
<SpamapS> when it gets to a level where charmers and charm user questions are dominating the list, we'll open a new list
<imbrandon> until this conversation in irc tho i would not have thought that :) and the amount of serious traffic does not matter; people getting their heads bitten off on old-school lists
<imbrandon> is enough to stop them at the smell of "important"
<SpamapS> but I would do that reluctantly, as I think its vital that interested users see charm dev discussions
<marcoceppi> We should just go use Google Wave instead of the lists.
<imbrandon> see, moving resources, esp on the internet, in any fashion stops momentum and is confusing; it's better to do what you want from the beginning
<imbrandon> marcoceppi: haha +10
<SpamapS> imbrandon: right, ideally we'd never move it, because the traffic would be interesting to all juju users.
<imbrandon> imho forums are better used for what lists do, and wikis are better at what forums are used as, and ask! is good to supply a healthy community on both the others :)
<imbrandon> and email should die
<imbrandon> :)
<imbrandon> but there are whole religious cults on both sides of the forum<->email thing
<imbrandon> thats why something like google groups are perfect, would be nice for ubuntu lists to do that
<imbrandon> forum and email interface combine
<SpamapS> forums are death for busy user interactions
<imbrandon> email is not a good archive to refer/search
<SpamapS> yeah, thats what askubuntu is for
<imbrandon> then why would i send an email to the list ?
<imbrandon> heh
<marcoceppi> +1
<imbrandon> SpamapS: i'm just being a PITA, i know the answers, but seriously i think email lists are only around out of habit; interaction in so many other ways is better for every use case i've ever seen a list used for
<imbrandon> announcements was about the last use case i thought was OK for them, but even that i like the RSS model better as it stops the chance of spam pretty much
<SpamapS> imbrandon: ETOOMUCHPITA ...you have reached your PITA quota for this millennium. Please try again in the year 3000.
<imbrandon> lol kk
<hazmat> its like the everyman version of the editor wars
<hazmat> s/everyman/universal
<hazmat> ml vs forum
<hazmat> push vs pull ;-)
<imbrandon> heh
<jcastro> hazmat: I'd like to write up the OpenStack provider, when do you think would be a good time? Should I wait for it to hit distro/PPA you think?
<hazmat> jcastro, now
<hazmat> jcastro, its in the ppa
<jcastro> ok, ON IT.
<jcastro> do we have instructions for setting it up on HP Cloud somewhere?
<SpamapS> jcastro: I just uploaded it to debian experimental, and it will hit quantal as soon as launchpad sees it so I can sync
<SpamapS> jcastro: no the docs still need a lot of work
<jcastro> ok so I'll whine to mgz about it
<SpamapS> or perhaps they just didn't make it from lp:juju to lp:juju/docs ?
<SpamapS> ahh no docs dir
<SpamapS> yeah definitely need some docs
<jcastro> I don't mind waiting a day for the docs to catch up
<jcastro> we'd be lost in the webapps madness today anyway. :)
<SpamapS> true
<SpamapS> that was pretty subtle, but pretty damn cool
<imbrandon> does lp:juju's /docs ever get updated with whats in lp:juju/docs at all ? its like _WAY_ outdated
<imbrandon> SpamapS: ^
<SpamapS> there is no more confusion
<SpamapS> lp:juju/docs is the only one
<SpamapS> I don't have a docs dir in my lp:juju
<imbrandon> i have some more docs css fixes to land soon too; ubuntu-accomplishments and pkgme are both using my sphinx theme now :)
<imbrandon> SpamapS: ahhh i hadn't looked in a week or two
<SpamapS> imbrandon: At some point I did some mods to the old one to add Version metadata for docs that had listed it... mainly for the charm store policy document. I don't see that there anymore.
<imbrandon> and it has a home of its own at lp:ubuntu-website-community-webthemes/light-sphinx-theme too ( but thats old and i need to push there as well )
<imbrandon> SpamapS: its there
<imbrandon> it just never worked
<SpamapS> it worked for me. ;)
<imbrandon> see the two blank lines with a "."
<imbrandon> at the bottom
<imbrandon> thats what gets output for me and on the www
<imbrandon> eg all that [meta] is there but empty
<SpamapS> oh right its because of sphinx 0.6.4
<imbrandon> how is that filled in ?
<imbrandon> from the conf.py ?
<SpamapS> meta is automatic in later sphinx's
<imbrandon> well i dont think the build of the docs has access to juju either, look at the bottom link in the TOC, the modules are empty
<SpamapS> its not getting it from juju
<imbrandon> i wish it was a little more transparent on what gets done on that build ... so i could at least work around things
<imbrandon> right but the modules it does
<imbrandon> just was adding things in since we was on topic
<imbrandon> :)
<imbrandon> after i finish this apt-mirror charm i may charm the juju site and see how far we can get with m_3_'s dogfood'in idea, i like it
<imbrandon> but we'll see , plus i can use the charm for other docs/websites if it dont fly :)
<imbrandon> should only take an hour or two really, as it should be very simple charm
 * imbrandon thinks on it while he gets foood
<imbrandon> brb
<jml> when juju starts chewing 100% of both of my CPUs, what's the correct way to make it stop?
<SpamapS> jml: juju destroy-environment -e [name of local env]
<SpamapS> jml: you *might* have to kill -9 it, as I've seen it at least once stuck in some kind of signal ignoring loop
<jml> 23325 ?        Ssl    0:00 /usr/bin/python -m juju.agents.machine --nodaemon --logfile /tmp/tmpDvjyn3/jml-djpkgme.bigtests.test_integration.TestJuju.test_bootstrapped-1/machine-agent.log --session-file /var/run/juju/jml-djpkgme.bigtests.test_integration.TestJuju.test_bootstrapped-1-machine-agent.zksession
<jml> what about that sucker?
<SpamapS> jml: though upstart should be taking care of that
<jml> every time I kill it, it comes back stronger
<jml> (not actually stronger, that just sounded cooler)
<SpamapS> jml: so the destroy-env should have removed the upstart jobs
<SpamapS> jml: but if it didn't, try 'sudo initctl list|grep juju' followed by some stops.
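The teardown SpamapS sketches above would look roughly like this; the environment name and the "juju" prefix on the upstart jobs are assumptions, and the destructive commands are left as comments, with only the job-name filter actually exercised:

```shell
# Sketch, assuming a local environment named "local" and upstart jobs
# whose names start with "juju". The real cleanup would be:
#   juju destroy-environment -e local
#   sudo initctl list | juju_jobs | xargs -rn1 sudo initctl stop
juju_jobs() {
  # keep only the job-name column for lines starting with "juju"
  grep '^juju' | awk '{print $1}'
}
printf 'juju-machine-agent start/running, process 23325\ncron start/running, process 800\n' | juju_jobs
```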
 * SpamapS wonders how the branch to fix that is faring
<jml> SpamapS: thanks.
<hazmat> SpamapS, the fix is there, but ben went on holiday
<hazmat> the actual fix needs extraction from the branch
<hazmat> to be standalone and some minor cleanup besides
<hazmat> SpamapS, are you going to do another release of juju into 12.04 .. i could take a break from websockets and have a go at if if need be
<hazmat> i think ben said he was going to work on it during his plane trip
<SpamapS> hazmat: yes I'd like to do one more SRU to fix the few bugs that are most important.
<SpamapS> jcastro: hey, are those HP micro servers quiet?
<SpamapS> As much as I've fought it for a while..
<jcastro> yeah
<SpamapS> I think I need to stop building software on my 5500rpm laptop disk
<jcastro> with no extra drives it's basically 0 noise
<SpamapS> I was thinking of putting SSD's in it
<jcastro> for building stuff? It's got like an AMD netbook processor, I wouldn't recommend that either
<SpamapS> Oh
<SpamapS> I want a big fat 4 core CPU :)
<SpamapS> with no fan
<SpamapS> ;)
<jcastro> always the impossible you seek
<SpamapS> https://www.system76.com/desktops/model/leox3
<SpamapS> liquid cooling should be quieter right? ;)
<SpamapS> I wonder if I can call them and get them to send it to me w/o a video card so I can rid myself of nvidia
<SpamapS> 5 Free GB of Ubuntu One Online Storage and Sync
<SpamapS> doesn't.. everybody get that? ;)
<imbrandon> heh
<imbrandon> SpamapS: server to build thats better than your daily machine != quiet :)
<SpamapS> my laptop, at this point, is not quiet
<imbrandon> conflicting goals .... but maybe you can toss it in the little ones closet to help them sleep the night from the vibrations ? hehe
<SpamapS> thats my main gripe with it actually
<SpamapS> fans just spin and spin
<SpamapS> helps a bit to shutdown one of the cores
<imbrandon> heh yea, doing the same here , as nice as my mini is its great to offload things, so my server / pc reorg at home is perfect time to do all that
<imbrandon> but i tend to only care about the noise on mine and the others are rooms/buildings away
<SpamapS> actually if there were a command like 'sbuild', but it spun up an EC2 c1.medium instead.. I would probably not care
<imbrandon> like the new server with all those nice TB of hdd's i put in a few days ago sounds like a damn plane is taking off when you walk in the room
<imbrandon> heh i'm sure it could be done
<SpamapS> but by the time I ssh to a box w/ juju and scp a .dsc and.. oh look shiny
<SpamapS> imbrandon: I think we should run a service for ubuntu devs to share a pool of build boxes in EC2.. ;)
<imbrandon> automate that shit, if you have to type a command in 2 times, its worth scripting
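The "cloud sbuild" wrapper SpamapS wishes for could start as small as this; the builder host and the scp/ssh flow are hypothetical, and only the command construction is exercised:

```shell
# Hypothetical sketch: given a .dsc path, build the command to run on a
# remote builder. Host name "builder" and the flow below are assumptions:
#   scp foo_1.0-1.dsc foo_1.0-1.tar.gz builder:
#   ssh builder "$(remote_build_cmd foo_1.0-1.dsc)"
remote_build_cmd() {
  echo "sbuild $(basename "$1")"
}
remote_build_cmd /tmp/foo_1.0-1.dsc
```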
<imbrandon> SpamapS: welcome to ubuntuwire ( e.g. that was the main goal behind it pre ec2 )
<imbrandon> it used donated dell servers with xen instances :)
<SpamapS> ah never heard of it
<imbrandon> www.ubuntuwire.com , service i started way way way back in the day, been thinking about bringing it into the 2000's
<imbrandon> heh
<imbrandon> then wrangled a bunch of sponsors like dell and other peeps to help me SA like wgrant and ajmitch :)
<imbrandon> and its basically been dormant for 2 years, but is still used for stuff like qa.uw.com and revu.ubuntuwire.com
<ajmitch> not dormant, just infrequently updated since things just work
<imbrandon> might be kinda cool to add what you're talking about, because it's basically there; that's the service i was talking about that i wrote ssh-import-id for before it existed
<imbrandon> ajmitch: right :)
<imbrandon> ajmitch: it is nice how low maint it is when shits setup right :)
<ajmitch> though I did have to install a postgres backport recently
<imbrandon> but by dormant i more meant "new development" or adding shineys
<ajmitch> there are still things being added, but mostly in the realm of new ubuntu qa tools
<imbrandon> btw is revu still running on that single ol ultra sparc ?
<imbrandon> heh
<SpamapS> I believe REVU has been phased out
<ajmitch> like various things that people write that use the local UDD mirror
<imbrandon> SpamapS: it has for the most part, only there for those that choose to use it
<SpamapS> Last I heard it was heavily discouraged
<SpamapS> --> debian is over there
<ajmitch> SpamapS: right
<imbrandon> heh basicly
<SpamapS> and otherwise, submit to the ARB
<ajmitch> or otherwise stick stuff in a branch & get it reviewed that way
<imbrandon> yea the queue used to get large
<imbrandon> and undermanned
<imbrandon> yup, and iirc there is a huge banner at top of it that says so too
<imbrandon> last i touched it tho it was on a single 400mhz ultra sparc iirc heh
<imbrandon> hrm /me checks out the bzr branches and contemplates a webtheme update
<imbrandon> bah
<imbrandon> do that this weekend , got charms for now :)
<imbrandon> but yea would be nice to add something like that SpamapS :)
<imbrandon> ajmitch: and even update all those scripts we wrote to use the LP api now that there is such a thing instead of screen scraping
<imbrandon> :)
<ajmitch> imbrandon: I don't think there would be many of those left
<imbrandon> yea , the ssh one if any, and its likely been replaced
<imbrandon> or not even used
<imbrandon> since people.ubuntuwire.com is offline
<ajmitch> not used
<imbrandon> and thats where it was mostly
<ajmitch> ssh access is just given on request as needed
<imbrandon> ahh if ya get time mind refreshing me on the www hostname ( if its still one www with vhosts ) and all that jazz, i might do some cosmetic updates this weekend and get familiar with it all again
<ajmitch> just, but not right now
<imbrandon> sure thing just whenever
<imbrandon> :)
<ajmitch> & in #ubuntuwire, it's off-topic enough here
<imbrandon> bleh :)
<SpamapS> would be pretty awesome to get all of it charmed and moved to hpcloud or RAX or something
<imbrandon> SpamapS: that would be except the $$ part, atm its all sponsored
<imbrandon> other than that i'm 100% down for it
<SpamapS> right, let HP or RAX sponsor it
<SpamapS> didn't HP give all ubuntu devs access?
<imbrandon> yup
<imbrandon> but only for 3 months, i'll see what i can manage
<imbrandon> :)
<imbrandon> one sec phone
<SpamapS> hazmat: first try at building r559 on quantal failed.. but.. like usual..its a random "other" failure. :-P
<SpamapS> ok, they're still performance sensitive
<SpamapS> roseapple finished them in 5 minutes.. but when they take 13 min.. no good
#juju 2012-07-20
<imbrandon> woot
<hazmat> SpamapS, can we kill build recipes for oneiric & natty?
<imbrandon> umm SpamapS / hazmat : ideas on whats wrong ?
<imbrandon> http://paste.ubuntu.com/1101192/
<imbrandon> 12.04 pretty clean install, nothing strange, juju from ppa
<hazmat> imbrandon, thats jitsu not juju
<hazmat> do you have some sort of alias in there
<hazmat> jimbaker, ^
<imbrandon> hazmat: yea i ran juju-wrap
<imbrandon> it should have ( and has until now ) passed through commands
<imbrandon> that were not jitsu right to juju
<imbrandon> something is broken with it
<imbrandon> maybe i'll take that as my cue to propose the code that would do it a different way that i've been contemplating for a month
<imbrandon> heh
<imbrandon> hazmat: but to get what i have in the pastebin just "jitsu wrap-juju" then go
 * imbrandon likes the idea of "alias juju=jitsu" better , thats the other way i mentioned 
<hazmat> imbrandon, jimbaker recently added built in help commands, possibly that interferes with wrap
<imbrandon> ahh yea likely, but this does give me said opportunity so i'll dig a little
<imbrandon> fwiw i never used jitsu another way hehe, kinda surprised you hadn't noticed the wrapper yet
<imbrandon> :)
<imbrandon> but yea a check to see if $0 == juju | jitsu should be less problematic anyhow imho
<imbrandon> and thats basically what i'd want to change it to, then maybe add the wrap-juju command to emulate it for backwards compat
<imbrandon> so it can be added as a symlink like ln -s /usr/bin/jitsu /usr/local/bin/juju; or alias juju=jitsu;
<imbrandon> makes it quite nice ( spoiled by a few other utils that do it that way )
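The "$0 == juju | jitsu" dispatch imbrandon proposes is a few lines of shell; the echo bodies below stand in for the real entry points, so this is a sketch of the pattern, not jitsu's actual code:

```shell
# Dispatch on the invocation name, so a symlink (or alias juju=jitsu)
# selects behavior. The branches here are placeholders for the real tools.
dispatch() {
  case "$(basename "$1")" in
    juju)  echo "pass through to real juju" ;;
    jitsu) echo "handle as jitsu subcommand" ;;
    *)     echo "unknown invocation name" ;;
  esac
}
dispatch /usr/local/bin/juju
dispatch /usr/bin/jitsu
```

In real use the function would inspect `$0` directly, which is what makes `ln -s /usr/bin/jitsu /usr/local/bin/juju` work.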
<hazmat> imbrandon, i've tried to avoid the wrapper as a bad habit ;-)
<imbrandon> :)
<imbrandon> i can hardly use git without the hub wrapper
<imbrandon> git+hub = github integration for git, like bzr+lp integration
<imbrandon> and other cli niceties like colors etc
<imbrandon> so stuff like "git clone brandonholtsclaw.com" runs "git clone https://bholtsclaw@github.com/bholtsclaw/brandonholtsclaw.com.git"
<imbrandon> and git push the same , plus added stuff like "git create" and "git pull-request" ( merge proposal )
<imbrandon> etc etc
<asachs_> Hello !
<asachs_> Noob question - got MaaS setup and working, now after juju bootstrap completes (and i wait for the machine to come up) i can't get a connection from juju status ?
<asachs_> Any ideas ?
<asachs_> so zookeeper is missing on both my maas server and the juju bootstrap node
<melmoth> asachs_, can you ssh to the zookeeper node with your pre-defined ssh key ?
<melmoth> (not that i have an idea how the whole stuff works, but this is what i would check first)
<asachs_> melmoth: i can ssh about so cloud-init did its job
<melmoth> juju --verbose status ? may be this will give you more info ?
<asachs_> melmoth: sounds like the juju bootstrap node runs the zookeeper instance ?
<melmoth> as far as i have understood, yes. When you bootstrap it starts a node and runs zookeeper on it.
<asachs_> melmoth: had to install zookeeper myself by hand - juju did not install it
<asachs_> juju -v status gets me : DEBUG Environment still initializing. Will wait.
<melmoth> hmmm. may be some hint as to why the install failed is available in the /var/log/cloud* log files ?
<asachs_> let me have a look
<asachs_> melmoth: no error and also no indication of installing zookeeper, last line is :  handling final-message with freq=None and args=[]
 * asachs_ is stumped
<melmoth> i have no idea what could be wrong.
<asachs_> i wonder if i should go bug the guys on #juju-dev
<imbrandon> asachs_: how long has it been
<imbrandon> since you told it to bootstrap
<asachs_> 2 hours
<asachs_> it looks like the bootstrap command is not installing and starting a zookeeper instance
<imbrandon> there is no zookeeper running on the node it installed ?
<imbrandon> is zk installed ?
<asachs_> its not installed at all
<asachs_> its a maas setup
<asachs_> bootstrap returns success pretty quick - while the hardware boots, not sure what is responsible for injecting software into the newly booted host
<asachs_> for juju
<melmoth> https://juju.ubuntu.com/docs/getting-started.html mention "It's also required that the environment provides a permanent storage facility such as Amazon S3."
<melmoth> i was able ,to bootstrap juju on an openstack where there was no swift.
<melmoth> so, what is this s3 storage needed for ?
<melmoth> (i would not mind about it, but horizon complains about a missing s3 catalog entry when i ask it to generate my environment.yaml)
<fwereade_> melmoth, hi
<melmoth> hola
<fwereade_> melmoth, the storage is used for 2 things
<fwereade_> melmoth, 1, storage of charms you publish to the environment
<fwereade_> melmoth, 2, a tiny snippet of data in a known location so that the client knows where to find the zookeeper node
<melmoth> does this mean i will not be able to deploy charms without installing swift and an s3 compatibility layer ? and how can it work with a simple local lxc install then ?
 * melmoth is confused
<fwereade_> melmoth, the LXC install just runs a local webdav server for charm storage, IIRC, and it just runs the zookeeper on localhost (so there's no need to look somewhere else to find where it is)
<fwereade_> melmoth, it is I agree something of a shame that ec2-style providers expect s3-style storage, it would be cool if we were to make storage independent from the provider
<melmoth> so it does mean that if one want to use juju on his private openstack cloud, he needs to install swift as well ?
<melmoth> (i am assuming swift can act as a s3 storage solution)
<fwereade_> melmoth, yes, that's right, I'm afraid
<melmoth> Oh. oh. ok.
<fwereade_> melmoth, (although I'm not sure if there are other things that can adequately mimic s3; there probably are)
<fwereade_> melmoth, the important thing is that there's *something* that looks like S3 accessible to both the client and the various nodes
<melmoth> hmm http://askubuntu.com/questions/132411/how-can-i-configure-juju-for-deployment-on-openstack
<melmoth> they mention the ec2 provider, whatever that means
<fwereade_> melmoth, that answer is kinda specific to deploying openstack with juju so you can then deploy *onto* that openstack with juju
<fwereade_> melmoth, the ec2 provider is the backend that is designed to work with ec2 but which can also be prodded into a configuration that works with openstack
<fwereade_> melmoth, I know that there has been some work on a native openstack backend provider but I haven't been following it closely
<melmoth> i have a nova-objectstore running on my manager node, i am assuming this is what this ec2 stuff is about ?
<melmoth> because, one thing is sure, i was able to juju bootstrap, and try to deploy a charm (which fails because of http proxy being mandatory in my setting)
<fwereade_> melmoth, hmm, this is interesting: https://code.launchpad.net/~gz/juju/openstack_provider/+merge/110860
<melmoth> yep, this match --s3_port and -s3_host i have in my nova.conf
<fwereade_> melmoth, this is merged, and mentions an included fudge that lets you use nova
<melmoth> so, looks like i have an s3 daemon running after all (which could explain why i was able to bootstrap)
<fwereade_> melmoth, hmm, so you have an explicit s3-uri in your environments.yaml?
<melmoth> bingo, the same:     s3-uri: http://192.168.122.4:3333
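Putting fwereade_'s two storage roles together with melmoth's working s3-uri, an ec2-provider entry for a private OpenStack looks roughly like this. The key names follow juju's ec2 provider of that era but should be treated as assumptions, and every value is a placeholder:

```shell
# Hypothetical environments.yaml pointing juju's ec2 provider at a private
# OpenStack, with nova-objectstore standing in for S3 (all values fake):
cat > /tmp/environments.yaml <<'EOF'
environments:
  mycloud:
    type: ec2
    control-bucket: juju-CHANGEME
    admin-secret: CHANGEME
    access-key: CHANGEME
    secret-key: CHANGEME
    ec2-uri: http://192.168.122.4:8773/services/Cloud
    s3-uri: http://192.168.122.4:3333
EOF
grep -c 'uri:' /tmp/environments.yaml
```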
<melmoth> well, at least i learnt what this stuff was for :)
<fwereade_> melmoth, a pleasure :)
<melmoth> Now, i wonder why is horizon complaining about a lack of s3 catalog entry...
<melmoth> i guess there s some service to define in keystone
<fwereade_> melmoth, most likely, I'm afraid I am not the man to ask about openstack, but you may have some luck in #ubuntu-server
<melmoth> oh, yet another chan i was not aware of :)
<fwereade_> melmoth, yeah, the list is a bit overwhelming :)
<hazmat> melmoth, there is native openstack support in juju, and yes nova-objectstore counts as s3
<hazmat> melmoth, you don't need to define nova-objectstore in keystone
<melmoth> cool. My horizon problem was that i had no s3 type service defined
<melmoth> now that i created one and associated the endpoit to the s3 url, horizon gives me my environment.yaml
<hazmat> ah ic.. nova-objectstore doesn't do keystone afaicr
<hazmat> cool
<melmoth> yep. Feels like i have deserved a sandwich :)
<hazmat> you can also using ec2 s3 with your private cloud openstack if that floats your boat
<hazmat> er.. amazon s3
<mgz> or the s3 interface to nova-objectstore
<mgz> (if enabled)
<mgz> that is, without using the ec2 interface to the rest of nova.
<hazmat> mgz, that's what he's doing it sounds like
<mgz> hazmat: what is placement: local|unassigned about when working with actual clouds?
<mgz> ec2 has it as an acceptable config item, but I'm not sure what you'd use it for
<hazmat> mgz, irrelevant
<hazmat> mgz, it was a bad path, to support density, that got left in. but there's jitsu deploy-to which does much better
<mgz> okay, as I seem to have it in openstack_s3 but not openstack. should I just delete it from both?
<hazmat> mgz, yes please
<mgz> mp upcoming
<hazmat> mark's oscon keynote http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml
<imbrandon> whats the default s3 port, 443 isnt it ? (ssl)
<imbrandon> hazmat: ^^
<imbrandon> btw morn all
<hazmat> imbrandon, depends if the endpoint is https or not
<hazmat> mgz, hmm. given that the schema is at best advisory (ie that has no enforcement effect) it seems like something to better not document  in the docs
<imbrandon> way off topic but i know how it can be getting into work/something on the PC, but thought i'd mention for those that have not noticed yet, there has been a bad shooting in Colorado this morning
<imbrandon> 24 estimated dead etc, i dont really know more but check the news, i'm sure its on everywhere
<marcoceppi> Yeah, happened at a movie theater during midnight showing of Batman
<imbrandon> yea, i just noticed it on the tv by accident walking by, i never watch, not even news really
<imbrandon> that sucks tho, crazy how ppl get to do extreme things
<marcoceppi> Radio during morning commute
<imbrandon> ahh yea :)
<melmoth> so, i cannot use juju on my cloud because i'm behind an http proxy, and if i try to use juju on a single vm with lxc, i end up with Failure: zookeeper.OperationTimeoutException: operation timeout
<melmoth> i can bootstrap, then i deploy a mysql charm. A new vm boots, i can ssh in it, and this timeout error appears in /var/log/juju/unit-mysql-0.log
<melmoth> any idea what could be the problem ?
<SpamapS> melmoth: sounds like 2 problems
<SpamapS> melmoth: the http proxy one is known and may be fixed sometime in the near future.
<SpamapS> melmoth: the second one, zookeeper, is a bit confusing
<hazmat> melmoth, do you see that on the command line or in a log?
<melmoth> to be honest, i never managed to have juju deploy anything with lxc
<SpamapS> melmoth: but most likely that problem is caused by a local firewall blocking access from the containers to the zookeeper which was started when you bootstraped the local provider
<melmoth> in the service machine /var/log/juju/unit-mysql-0.log file
<melmoth> juju status just state the service is pending
<SpamapS> hazmat: hey, I think natty+oneiric are failing because of an older twisted version
<hazmat> SpamapS, saw that
<hazmat> SpamapS, the api used in the openstack provider is different than what's avail in older twisted vers
<SpamapS> hazmat: we should either fix that, or just skip the tests on << twisted 12
<SpamapS> and have it error out gracefully on old twisted
<hazmat> SpamapS, i've heard from on high that we can just stop feature dev support for older versions
<SpamapS> I'm not really interested in coddling oneiric and natty :)
<SpamapS> now that we have a shiny LTS to play with
<melmoth> SpamapS, from the mysql service machine, i can telnet to my host machine on the port the java zookeeper thingy is listening to
<SpamapS> melmoth: weird.. can you verify the address its trying to connect to?
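One way to answer SpamapS's question: the local-provider agents are told where zookeeper is via a JUJU_ZOOKEEPER environment variable in their upstart job config. That variable name is my recollection of juju 0.x and should be verified; the extractor itself is just text parsing:

```shell
# Pull the zookeeper host:port out of an upstart job line of the form
#   env JUJU_ZOOKEEPER="192.168.122.1:40733"
# (JUJU_ZOOKEEPER is assumed; check /etc/init/juju-*.conf on the host)
zk_addr() {
  sed -n 's/.*JUJU_ZOOKEEPER="\{0,1\}\([^" ]*\).*/\1/p'
}
printf 'env JUJU_ZOOKEEPER="192.168.122.1:40733"\n' | zk_addr
```

Comparing that address against what the container can actually reach would confirm or rule out the firewall theory.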
<jcastro> mgz: hey so I was thinking, in the meantime, do you have a sanitized environments.yaml we can look at to try the OS backend? for say hp cloud?
<melmoth> SpamapS, not sure... all i can think about is tcpdump on any interface from the host... any better idea ?
<mgz> jcastro: yup, so, basically I keep all the bits that matter in envvars
<jcastro> mgz: if you can pastebin something for me I can at least get something temporary up
<jcastro> We have a bunch of people in the HP cloud beta basically aching to get their juju on
<imbrandon> jcastro: what ya need ? i have a working setup too i can get ya
<imbrandon> i've been playing with export/import of omg :)
<imbrandon> lol
<jcastro> just a sanitized hp cloud thing
<melmoth> hmm, if i rebootstrap, zookeeper listens on another port. i wonder how the new machine knows where to try to connect
<imbrandon> sure, one sec
<jcastro> with links to where on the hp page we can get the account #'s etc.
<jcastro> basically, just what we do for amazon in the docs
<jcastro> imbrandon: actually yo, branch the docs and just put it in there
<jcastro> it's RST time!
<imbrandon> jcastro: ok i'll pastebin it, then i'll do the docs while you pass on the pastebin for those chomping at the bit
<jcastro> I don't suppose you are in Rackspace Beta for their Openstack?
<imbrandon> i do
<imbrandon> actually 2
<imbrandon> one is ohso's
<imbrandon> :)
<imbrandon> but its the same env.y
<jcastro> oh sweet
<imbrandon> just get the info from diff spots
<jcastro> so you can make one page and then just if you are using HP do this, Rackspace do that.
<jcastro> sweet
<imbrandon> OpenStack
<imbrandon> :)
<imbrandon> but i know what ya mean
<imbrandon>  yup yup
<imbrandon> ok sec
<mgz> jcastro: so, my configs for canonistack and hp respectively are basically just: <http://pastebin.ubuntu.com/1101923/>
<mgz> then for canonistack I source ~/.canonistack/novarc
<jcastro> juju-origin: lp:~gz/juju/openstack_provider
<mgz> and for hp I made a similar file with the envvars in
<jcastro> Man, we can do that?
<mgz> I prefer it to writing passwords in multiple plain text files
<jcastro> I meant the juju-origin to an lp branch, but yeah, I understand what your setup means
<mgz> ah, right, that's not needed any more, as it's landed
<mgz> there's a slight gotcha with HP in that they provide both a name and an id for the tenant (read, project)
<mgz> provided the name is given rather than the id all is well.
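The novarc-style setup mgz describes would look roughly like this. The OS_* names follow the usual nova client conventions and every value is a placeholder, so treat this as a sketch of the pattern, not the contents of the pastebins above:

```shell
# Hedged sketch of a novarc-style credentials file sourced before running
# juju with the openstack provider. Note the tenant *name*, not the id,
# per mgz's gotcha. All values are fake:
cat > /tmp/novarc <<'EOF'
export OS_AUTH_URL=https://identity.example.com:35357/v2.0/
export OS_USERNAME=you@example.com
export OS_PASSWORD=CHANGEME
export OS_TENANT_NAME=you@example.com-default-tenant
EOF
. /tmp/novarc
echo "$OS_TENANT_NAME"
```

Keeping the secrets in one sourced file, as mgz notes, avoids scattering passwords across plain-text configs.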
<imbrandon> jcastro: http://paste.ubuntu.com/1101928/
<imbrandon> jcastro: origin can be ppa, thats what i'm using
<jcastro> yeah I knew about PPA, I just didn't know we could add a branch in there
<jcastro> basically, I could have easily tried the provider this whole time and didn't realize it
<imbrandon> that pastebin they just need to change username: <email> pass: <console pass>
<imbrandon> and project name
<imbrandon> to their tenant id
<imbrandon> jcastro: nah it only landed yesterday late
<imbrandon> in the ppa
<jcastro> imbrandon: ah excellent
<imbrandon> but you could have run the branch :)
<melmoth> and of course, when i run the tcpdump, then... it works.
<melmoth> hmmmm.
<jcastro> so really we just need an example like this, and then reuse the page we used for the contest to show people where they can find their tenet ID
<melmoth> may be the nic needs to be in promiscuous mode....
<imbrandon> jcastro: yup, gonna do the docs up real proper right now
<imbrandon> and the tenant-id is labeled that and at the top of the api page, its most of the time their email plus -default-tenant
<imbrandon> but like SpamapS is not, some are diff, but most are like mine
<imbrandon> there are ways to make it work with the keys VS user/pass too but i'll put that in the docs
<imbrandon> pasted is what i know for sure works as it is right out of mine
<imbrandon> ok /me starts that doc page
<imbrandon> jcastro: intended to add the OSX page to the official docs anyhow, gives me the nudge to do both
<imbrandon> heh
<jcastro> indeed
<jcastro> it's your turn to doc. :)
<imbrandon> btw the mirrors are all up and running fast as hell ( and swift storage backed ) and in sync within ~5 min
<imbrandon> AND i think i got snapshoting working for it , like snapshot.debian.org does
<imbrandon> but gotta test that more later
<imbrandon> :)
<imbrandon> any docs merges in the queue ? might as well do a whole docs afternoon /me looks
<mgz> imbrandon: I have some to write for the openstack bits.
<imbrandon> mgz: ok, if you want just pass me the branch when yer ready in here or PM and i'll snag it right away
<hazmat> mgz, awesome re docs
<hazmat> mgz, why the split between test mixin and mock provider.. feels like a bit of duplication
<hazmat> i'm finishing up constraint support and due to usage i have to add effectively the same to both
<hazmat> which makes the distinction quite unclear
<hazmat> they're both remote interaction proxies
<mgz> hazmat: it is duplication currently
<mgz> I'm trying to find a less lame way to do testing
<mgz> the problem with the mixin is it makes it really hard to focus the tests, test_bootstrap ends up caring about the implementation details of ports and launch when it really shouldn't
<mgz> but a little bit of down to the http level testing is good
<mgz> so, the mock provider was a stab at seeing how to write launch tests while only caring about launch stuff, not the rest of how provider works
<mgz> suggestions welcome.
<mgz> having provider being a monolithic gateway for everything makes splitting stuff back out again slightly annoying
<hazmat> mgz noted, i'll think on it, just trying to get this branch into the queue, for now i just made a common base class for the shared constraint support
<mgz> so, ideally the mock provider would be ignorant of constraints, except in a subclass just for testing constraint support
<mgz> but the mixin needs to know all the details of everything currently
<hazmat> mgz, debatable constraints permeate everywhere
<hazmat> anything that launches a machine needs to be at least minimally constraints aware
<mgz> tests for port management and file storage shouldn't care about constraints
<hazmat> and making that support variable needs to wire through to an instance level variable
<hazmat> i'll think on it, almost done with this
<mgz> yeah, get a working state for now and I'll have a look as well.
<hazmat> argh.. all the image ids just changed throughout hp cloud
<hazmat> well some regions
<hazmat> nm.. user error
<imbrandon> no they do
<imbrandon> its 110 in az-1 and something else in the others
<hazmat> imbrandon, vary by region is different than changing within a region, i thought it was the latter.. but my error, i had accidentally changed my region
<hazmat> mgz, constraint mp at https://code.launchpad.net/~hazmat/juju/ostack-constraints/+merge/116027
<mgz> ace, looking.
<hazmat> mgz, pls excuse some of the line noise in there cause i was switching to std project style imports.
<mgz> hm, would be e... right
<mgz> otherwise seems to make sense
<hazmat> mgz, one oddity i noticed that i haven't tracked down is that it seems to make more calls to the flavor details endpoint than is imo strictly necessary.
<mgz> I don't really like that both launch and the provider needing to query the flavours
<hazmat> i think it might be related to constraint set instantiation, in which case either fixing the caller, or using a time-limited cache would have value
<mgz> would prefer one lookup that made a usable object
<hazmat> mgz, well both are used to launch machines
<hazmat> mgz, bootstrap uses the mixin, the launch uses the provider
<hazmat> both launch machines
<hazmat> oh.. you mean the api
<mgz> but if the provider created something via a helper, the launcher could access that (as it's passed in the provider)
<mgz> right.
<mgz> just impl. details.
<hazmat> mgz, one is used to define valid values and that instance-type is available, the other to actually use the value given
<hazmat> and map it back to the provider notion
<hazmat> mgz, true.. but we don't store these persistently atm. the provider is a long-lived object in the daemon
<mgz> and in nitpick mode, s/list_flavors_details/list_flavor_detail/g
<mgz> I tyop flavor as flavour too much already, and that's another confusing one, but may as well stick with what's in the url
<hazmat> mgz, sure, pls put comments in the review, i'll try to hit them up tomorrow, i've got to switch tracks to something else for now.
<imbrandon> SpamapS: hahah gotta run afk a few min, but just got an email ... remember i kept saying Sparrow client was a lot like gmail etc etc
<imbrandon>  Hello,
<imbrandon> :)
<imbrandon> We're excited to let you know that Sparrow has been acquired by Google!
<mgz> will do, I'm nearly done for the day too
<imbrandon> You can view our public announcement here, but I wanted to reach out directly to make sure you were aware of the news.
<hazmat> mgz, yeaah.. they picked a name with different spellings depending on the style of english.. why couldn't they just use instance type ;-)
<imbrandon> heya mgz, pass me the bzr branch url for your doc merge if you didn't do a MP already and i'll get it merged in with my openstack stuff and up here in a few min
<imbrandon> before ya bolt :)
<hazmat> imbrandon, hah.. cause their email clients suck'd ;-)
<imbrandon> heheh
<imbrandon> hazmat: sparrow really is nice tho, sad part is its $$, and now google bought em it will likely be free
<imbrandon> but i paid :(
<imbrandon> lol
<hazmat> imbrandon, me too
<hazmat> though i hardly use it anymore
<imbrandon> :)
<imbrandon> yea i really wish there was a gnome port of it
<imbrandon> like if i had the time * hahahahahahah * i might try to emulate it , hahahahahahahahah time
<imbrandon> but i seriously would use it hands down
<imbrandon> maybe the webui will be closer/almost with this new unity stuff
<imbrandon> anyhow afk brb
<hazmat> imbrandon, i dug into the src of the web app stuff; for some things it will be nice, but the indicator-datetime is still fubar for getting calendar events showing up. i'm still using evolution-webcal to get my google calendar events to show up there correctly, it's a hack but it works.
<hazmat> bcsaller, how'd the travel go?
<bcsaller> hazmat: usual badness, no vegan food on the flight, missed connection, that sort of thing, close to 24hrs total travel time the way it played out
<bcsaller> feeling much better now though
<hazmat> bcsaller, ouch.. that sounds unfortunately eventful
<hazmat> bcsaller, were you able to hit up the local provider thing on the plane?
<hazmat> or just too busy recovering from serial disasters
<bcsaller> hazmat: I wasn't but I still have some time to finish it today. On the plane I had the dentist styled seating where the person in front leans back far enough you can see their molars
<hazmat> bcsaller, i'd like to get that in, and if you're in vacation mode, i'd like to either hand it out to jimbaker or myself to get it done
<hazmat> but if you want to finish it that'd be great
<bcsaller> no no, I should be able to split that out today
<hazmat> great
<hazmat> bcsaller, times like those one should carry dental floss to hand out ;-)
<jimbaker> :)
<SpamapS> and some nitrous
<jimbaker> i'm closing up this conf today, but i will be on vacation myself end of day
<hazmat> jimbaker, nice, enjoy.. how was the conf?
<jimbaker> i should have some time this afternoon, but i'm going to try to first get in a fix for jitsu watch for m_3
<hazmat> from the tweet-o-sphere it seems like we got some love
<jimbaker> so he can get more reporting out
<jimbaker> hazmat, great conf, great talk by sabdfl
<jimbaker> btw, there's a link here to the keynote, http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml
<jimbaker> hazmat, people were clapping when they saw jitsu export | jitsu import -
<SpamapS> yeah that was pretty cool :)
<hazmat> jimbaker, yeah.. i watched it live.. nice indeed
<jimbaker> and of course jitsu deploy-to was pretty essential to robbie's demo on openstack, so good work on the jitsu front
 * hazmat takes a bow
<jimbaker> hazmat, well deserved indeed!!!
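For context, the crowd-pleaser jimbaker mentions is roughly the following round-trip (the `-e` environment flags are an assumption about jitsu's CLI here; the core trick is the pipe itself):

```shell
# Sketch of the export/import demo: snapshot one environment's deployed
# topology and replay it into another. Flags are assumptions, not verified.
jitsu export -e source-env > stack.json    # dump services/relations
jitsu import -e target-env - < stack.json  # recreate them elsewhere

# or, as shown on stage, one pipeline:
jitsu export -e source-env | jitsu import -e target-env -
```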
<hazmat> speaking of pushing code out.. its time to release charmworld / browser
<jimbaker> that should be quite exciting, seems like some pent up demand for that
<SpamapS> hazmat: oh?
<SpamapS> hazmat: btw, if I didn't make it clear before, +1 from me. :)
<hazmat> SpamapS, yup, code will be avail in next 20m or so, just need to do some audit and bit fiddles
<SpamapS> hazmat: werd. License?
<hazmat> AGPL
<SpamapS> but of course. cool. Does it have a "download the code" feature or is it just AGPL in spirit?
<hazmat> SpamapS, hmm.. it doesn't have that feature, does a link suffice or does it really need to be able to tarball itself up?
<SpamapS> well thats the only thing AGPL guarantees over GPLv3
<SpamapS> that if the service has a way to DL the code, you can't disable it
<hazmat> SpamapS, is that the only mechanism for its delta? http://en.wikipedia.org/wiki/Affero_General_Public_License
<hazmat> the download source feature...
<SpamapS> hazmat: its the one described by GNU's website
<SpamapS> I admit I have not actually diffed the licenses
<hazmat> i'd have to delay releasing it for a few weeks to implement that, due to other priorities
<hazmat> i'm fine with just putting it out there
<SpamapS> its not a requirement
<SpamapS> I was just curious
<hazmat> SpamapS, looking over some of the other AGPL web apps i don't see many with that feature readily available from browsing the site, most just link to their src repos
<SpamapS> hazmat: I'd say release w/o it and open a bug
<hazmat> ie. https://gitorious.org
<SpamapS> hazmat: yeah, "they're doing it wrong" is probably the answer
<hazmat> its interesting to note that such a feature is most likely implemented via download tarball
<hazmat> vs. actually zipping up the runtime code
<hazmat> ah.. ic
<hazmat> so it just needs a link
<hazmat> for the download source feature that's self hosted, and then derivatives have to comply
<hazmat> well not necessarily self hosted, but that gives the clearest intent
<SpamapS> hazmat: Yeah a link is ok, but you have to make sure that link is accurate then
<SpamapS> hazmat: but IMO this is not strong enough to ensure user freedom
<hazmat> SpamapS, agreed a runtime source extraction and zip would be best
<SpamapS> hazmat: since the site can of course just serve up a link to code w/o all their optimizations. :)
<hazmat> SpamapS, that would be in violation of the license then
<SpamapS> hazmat: "prove it"
<hazmat> SpamapS, you're doing a great job of convincing me not to release ;-)
<hazmat> jk
<SpamapS> hazmat: no you're good don't worry about it
<SpamapS> hazmat: though you should have a link to the code hosting page :)
<imbrandon> nah, there isn't a requirement to have it in the footer or a tarball ( no same-medium clause like the gpl, e.g. you can't make a download binary and a cdrom-only source and comply ) but "anyone that has access to the service must also have a clear way to get the full source required to run it completely" or some cruft very close to that
<hazmat> hmm. we have 28 charmers extant
<hazmat> new group
<imbrandon> jitsu ninjas ?
<hazmat> yeah.. jitsu hackers is a bit more manageable
<imbrandon> so hazmat along these lines, i been thinking the last few min ... what if ( in spirit, not gonna force php code on ya or even the exact conventions or anything really hahaha ) we take the general idea of what i was doing and lay a public rest api down on top of the charmdb mongo to provide any of the information the gui may need, then we get the interface free for other apps later should some mashup seem cool
<hazmat> imbrandon, definitely, that should be pretty trivial
<hazmat> imbrandon, i need to json search anyways. its not quite the same db though
<imbrandon> right thought so but just wanted to be vocal
<imbrandon> json search ?
<hazmat> imbrandon, i'm thinking just add /json to any of the urls to get a json representation
<imbrandon> as in learn it or ?
<imbrandon> ohhh yea
<hazmat> imbrandon, as in i need it ;-).. charm browser has a xapian backed full text search index
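The "/json suffix" idea hazmat describes could look something like this (the route handling and charm data below are a made-up illustration, not charmworld's actual code):

```python
import json

# Hypothetical sketch of the "/json on any url" idea: the same resource
# handler serves HTML by default and JSON when the path ends in /json.
# The charm data here is invented for illustration.

CHARMS = {"precise/mysql": {"name": "mysql", "series": "precise"}}

def handle(path):
    """Return (content_type, body) for a charm page or its JSON form."""
    want_json = path.endswith("/json")
    if want_json:
        path = path[: -len("/json")]
    key = path.strip("/").split("/", 1)[1]  # drop the "charms" prefix
    charm = CHARMS[key]
    if want_json:
        return "application/json", json.dumps(charm)
    return "text/html", "<h1>%s</h1>" % charm["name"]
```

A full-text search endpoint (xapian-backed, as hazmat says) would hang off the same dispatcher in the same way.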
<SpamapS> wow I think I must have read some crackheaded interpretation of the AGPL at one point
 * SpamapS has learned this lesson a few times.. :P
<imbrandon> cool ok, yea i like implementing api's, i'd be happy to, and i've learned to version them from the get go no matter what ya think
<imbrandon> lol
<imbrandon> if you dont mind i'd like to crack at the json api some with ya if not do it all etc ( i know your swamped )
<imbrandon> will give me a reason to tighten up my python a bit
<hazmat> imbrandon, sounds great
<imbrandon> but i know its shit so feel free to reject me many times :) i got thick skin
<imbrandon> my python :)
<imbrandon> but it can only get better right ?
<imbrandon> ok i need to run, sis things for a few hours
<imbrandon> back in actually probably one
<hazmat> imbrandon, lp:charmworld
 * imbrandon is still keeping jujube going tho i don't care if its redundant /me likes php as well plus i can prototype things in node/php ( current code ) that i can later translate into py :)
<imbrandon> rockin, ok back in like one hour
<imbrandon> ^5's hazmat
 * hazmat looks for food
<mgz> okay, I've proposed a doc branch with the basics in it.
<mgz> and that's me for the week.
<jcastro> can someone review the doc branch?
<jcastro> I'd love to blog about this branch today before EOD!
<jcastro> oh, nm, it's docs, I can review it myself!
<marcoceppi> *bugs imbrandon *
<marcoceppi> hey, hp cloud, is that in the ppa yet?
<SpamapS> yes
<SpamapS> marcoceppi: well, for precise+
<SpamapS> marcoceppi: the code uses some twisted 12 API stuff, so it likely won't work on oneiric and natty
<marcoceppi> So if I have precise, I should be good to go?
<SpamapS> yes
<SpamapS> 0.5.1+bzr559
<marcoceppi> Cool, must be missing an update
<marcoceppi> Yikes, I'm on a precise machine with an oneiric package of Juju
<jcastro> marcoceppi: hah man, that must have sucked
<marcoceppi> never really noticed
<marcoceppi> I use the ultrabook for everything now
<marcoceppi> so trying to deploy from a desktop
<marcoceppi> I was like "why is the screen so big"
<jcastro> ok so this needs the PPA and all that jazz right
<marcoceppi> hum
<marcoceppi> jcastro: need to make a few clarifications in the post
<marcoceppi> otherwise it bootstrapped
<marcoceppi> Going to do a few bootstraps
<jcastro> cool, I'm working on an example generic openstack one
<marcoceppi> s/bootstraps/deploys
<marcoceppi> Now I can blow this cloud up
 * imbrandon returns
<imbrandon> sorry man, things been a little nuts on the homefront last week or two
<imbrandon> wasup?
<jcastro> nothing just trying to sort out the different openstack providers
<imbrandon> ahh kk, yea i had to bolt to run with sis for a bit but i'm back now, gonna finish up those docs i started a bit ago
<jcastro> there's an incoming merge request
<jcastro> if you wanna merge it up
<imbrandon> kk
<imbrandon> sure thing
<imbrandon> SpamapS / hazmat : ok this kinda sucks, the juju agents are not started with a true init script ( that i can find ) and to add insult to injury they don't produce a .pid file anywhere ( some domain sockets in /var/run/juju/ but no pids ) to cat, kill, hup et al on the units; after some units running ohhh 55 days without a reboot and 6 charms ( pretty much every subordinate ) each with its own daemon ( really? wth ) this tends to be a significant
<jcastro> SpamapS: CLINT. G+ me for a sec?
<jcastro> I have a jitsu question that's easier to just ask you
<hazmat> jcastro, i'm also around if that works
<hazmat> imbrandon, they're upstart'ified
<jcastro> for sure hazmat, inviting
<jcastro> marcoceppi: imbrandon: hey, do any of you have anything deployed on hp cloud to generate an svg?
<imbrandon> nothing generated to svg but i got about 8 juju boxes on hp cloud atm
<imbrandon> or so
<jcastro> yeah
<imbrandon> the mirrors are juju
<imbrandon> :)
<jcastro> I just want a deployment visualized
<jcastro> to prove I am not lying. :)
<imbrandon> heh ok, mine are all mostly single testing crap, let me finish the rax edit and i'll deploy a cool layout
<imbrandon> unless marcoceppi gets it first
<imbrandon> ~10 min tops
<jcastro> imbrandon: hey so hazmat tells me the rackspace cloud stuff isn't quite ready, so I'll cut it for now and we'll talk about rackspace when we get  there
<hazmat> imbrandon, juju status --output=status.svg --format=svg
<imbrandon> yup, just need to deploy something intresting instead of a one off mirror charm :)
<imbrandon> heheh
<jcastro> omg?
<imbrandon> jcastro: it "works" heh
<hazmat> imbrandon, mediawiki x 5 + haproxy + mysql + memcached
<hazmat> ;-)
<jcastro> YEAH!
<hazmat> nagios and mysql replication for bonus
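Spelled out as juju commands, the stack hazmat proposes would be roughly the following (relation endpoint names are best-effort guesses at the charms' metadata, not verified against the store):

```shell
# Sketch of the demo stack: mediawiki x 5 + haproxy + mysql + memcached,
# with nagios for bonus points. Relation names are best-effort guesses.
juju deploy mediawiki
juju add-unit mediawiki        # repeat until 5 units are up
juju deploy mysql
juju deploy memcached
juju deploy haproxy
juju add-relation mediawiki:db mysql
juju add-relation mediawiki:cache memcached
juju add-relation mediawiki haproxy
# bonus round:
juju deploy nagios
juju expose haproxy
```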
<imbrandon> no nginx-proxy ? hahah jk, kk coming right up
<imbrandon> ok , then jcastro go save the rackspace images from the post
<imbrandon> on ask
<imbrandon> actually
<imbrandon> one sec
<imbrandon> jcastro: there are the all-prettied-up screenshots you'll want for later then
<imbrandon> [1]: http://f.cl.ly/items/0k0e20291e3y2C3T0S3p/Selection_001.png [2]: http://f.cl.ly/items/0k0e20291e3y2C3T0S3p/Selection_002.png
 * jcastro nods
 * imbrandon deploys da world , back ina few
<imbrandon> jcastro: btw i was gonna use those number annotations to say what bit that was to get
<imbrandon> etc
<imbrandon> hopefully thats obvious
<jcastro> yeah
<jcastro> I mean instructions will be fine
<imbrandon> no i mean look at the images
<imbrandon> already done,
<imbrandon> #1 #2 etc
<jcastro> right
<SpamapS> jcastro: back from lunch
<imbrandon> jcastro: ohh and i'll add in the announcement of the GUI OSX installer too for ya to chalk up
<jcastro> SpamapS: I was just prepping to blog about the openstack branch
<SpamapS> s/branch/feature/
<SpamapS> it aint no branch baby
<SpamapS> its in the PPA
<jcastro> PPA I meant
<jcastro> http://www.jorgecastro.org/2012/07/20/democratizing-the-cloud-here-comes-native-openstack-support-for-juju/
<jcastro> SpamapS: woo!
<jcastro> SpamapS: man dude, so basically, I think import and export are cool
<jcastro> and wanted to tell the world, in conjunction with this provider landing
<xnox> jcastro: can you change the default-image-id?
<imbrandon> heya xnox , finally see ya on irc :)
<xnox> for me in region 1 of hpcloud it's: default-image-id: "8419"
<xnox> not an array
<imbrandon> and yes on HPCloud you can, in fact you must; my default only works in az-3
<xnox> imbrandon: playing with hpcloud juju, seems to have created instances =)
<imbrandon> xnox: see note at bottom "things in [] need changed"
<imbrandon> its not an array :)
<xnox> true =)
<xnox> but a sensible default - latest ubuntu seems appropriate, unless all image id's are different across regions
<imbrandon> and yea 8419 is for one az, 120 is another, not sure on the third; was gonna look up the proper API for juju to just pick like aws later
<imbrandon> xnox: yup, known issue
<imbrandon> thus one of the fields marked to fill in
<imbrandon> :)
<SpamapS> jcastro: there's a link to sabdfl's demo somewhere.. you should link to it from your blog post
<imbrandon> its only for us early birds tho
<jcastro> I found one but it didn't look legit
<jcastro> it's like the guy recorded it streaming from oreilly
<jcastro> and I didn't want to do that wrt. licensing of content and stuff
<imbrandon> softpedia.com's is the legit one
<SpamapS> http://news.softpedia.com/news/Mark-Shuttleworth-Talks-Juju-at-OSCON-2012-282209.shtml
<imbrandon> and its published on youtube
<SpamapS> http://www.youtube.com/watch?v=0UHXW10t38I
<imbrandon> xnox: btw i had to stop the mips stuff half way yesterday due to some rl stuff, but i made a metric ton of improvements to the speed and reliability of what's there now
<imbrandon> btw
<imbrandon> and i'll restart the arm ( not mips ) tonight
<xnox> ?
<xnox> imbrandon: what are you on about =)
<imbrandon> did you not request arm mirrors for sbuilds ?
<imbrandon> must be mixing u up
<SpamapS> xnox: the image id problem is pretty big actually. We need a resource map type service for all the known public clouds.. and for any private clouds a user might have access to
<SpamapS> IMO we should just start with a local file for that
<xnox> SpamapS: yeah, I guessed from the console dropdown / existing instances. But there was no euca-describe-images
<SpamapS> we can maintain our own somewhere public
<imbrandon> SpamapS: local file could be "nova list-images >> newlist.txt" :)
<xnox> for the openstack / hpcloud
<xnox> ah =)
<xnox> thanks.
<SpamapS> xnox: nova manage can do it
<xnox> SpamapS: as a user or as an admin? it was not clear to me
<jcastro> SpamapS: yeah that's the one, you can even see the dude hitting record.
<SpamapS> nova image-list I think
<imbrandon> SpamapS: in the interim i can add a cron to the mirrors i already have on HPC to run the nova client and export the list, then do some text massaging on it to get the info we want as a temporary solution, then check it into bzr on LP or something automatic
<SpamapS> imbrandon: that sounds ... painful
<SpamapS> imbrandon: why not just keep a list of cloud->region->release->imageid
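The cloud->region->release->imageid file SpamapS describes could be as simple as a nested map plus a lookup helper; a sketch (the map below uses the ids mentioned in this conversation, which go stale exactly as imbrandon warns):

```python
# Sketch of a local image-id map, nested cloud -> region -> release -> id.
# The ids come from this discussion (8419, 120) and will rot as HP Cloud
# deprecates images -- which is the whole maintenance problem being debated.
IMAGE_MAP = {
    "hpcloud": {
        "az-1": {"precise": "8419"},
        "az-2": {"precise": "120"},
    },
}

def lookup_image(cloud, region, release, image_map=IMAGE_MAP):
    """Resolve an image id, failing loudly when no entry is recorded."""
    try:
        return image_map[cloud][region][release]
    except KeyError:
        raise KeyError(
            "no image id recorded for %s/%s/%s" % (cloud, region, release))
```

Keeping the file in a public branch, as suggested below, would let a cron job or the image-publishing process rewrite it whenever the ids change.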
<imbrandon> SpamapS: btw i provided a little rationale to one of the points on your review ( before you groan , i agreed 99% ) just wanted to point it out to ya so you could tell me if i'm off my rocker before i get back to the charm tomorrow
<SpamapS> imbrandon: good I was a bit confused by the whole thing
<imbrandon> SpamapS: that's what i'm talking about doing, but it needs to be automatic, you seen how often hpcloud deprecates amis ?
<imbrandon> heh
<imbrandon> basically it came down to 99% of "i knew those bits was not done ... got burnout needed feedback ... but this other bit is intentional and here is why ..."
<imbrandon> and thats it
<imbrandon> :)
<imbrandon> SpamapS: btw how do we get that info now from ec2 ? from the json provided by cloud-images.ubuntu.com ?
<SpamapS> imbrandon: https://cloud-images.ubuntu.com/query2
<SpamapS> imbrandon: and I'm certain we'll be adding it for other clouds as well
<imbrandon> "< SpamapS> imbrandon: why not just keep a list of cloud->region->release->imageid" <-- thats exactly what i was getting at too btw, just automated with the datasets i knew to exist because i know HPClouds list changes very very frequently
<SpamapS> imbrandon: well ideally the Ubuntu project would maintain the images and thus would rewrite the known good version of the file whenever the ids change.
<imbrandon> ohhh 100% i was just talking for the next week/month in the mean time considering it would be only about 30 minutes of dev work tops to kick out , easy to justify tossing out later
<hazmat> SpamapS, getting query2 indexes against 3rd party providers is part of the master plan
<hazmat> its necessary imo for a good ootb experience
<hazmat> imbrandon, the ec2 stuff is generated as part of the image build process that canonical does for ec2
<hazmat> we don't have that for third party provided images in other clouds
<imbrandon> yea i meant more of the current retrieval process
<hazmat> imbrandon, there is no retrieval process, its part of the publishing process
<imbrandon> not creation, e.g. so i had a result to decompile if ya will
<imbrandon> juju has to retrieve it from somewhere
<imbrandon> to know what ami to use
<hazmat> imbrandon, it queries it from cloud-images.ubuntu.com
<hazmat> not query2 format though
<hazmat> not yet anyways
<hazmat> i'm getting random desktop crashes all of a sudden, lame
<imbrandon> rockin , kk yea i was gonna see if i could spend less than 30 min to hack something in the same format from hpcloud , just cuz ... i can try :)
<imbrandon> i have a lot today, but i thought it was me
<imbrandon> i hate that damn popup
<hazmat> imbrandon, yeah.. me too, but i'm convinced something went sideways.. yeah.. i get the popup too
<imbrandon> btw deploys almost ready
<imbrandon> btw why on earth does paste.ubuntu.com always want me to login with openid, makes me a lil ill
<imbrandon> SpamapS / jcastro / hazmat : http://paste.ubuntu.com/1102549/
<imbrandon> just waiting on booting etc
<imbrandon> and to make sure nothing went wong ....
<imbrandon> in addition to all the ones you listed hazmat i made memcache X 2 and added newrelic subordinate to mysql and newrelic-php to the mediawiki's
<imbrandon> :)
<imbrandon> SpamapS: btw fwiw i had/have full intentions on making the php helpers generic ( trying to do it from the get-go mostly ) and ultimately packaging them up as "upstream" ( myself maybe unless others join in some ) into a .phar and add as a php lib/cli to charm-helpers as like the charm-helpers-php binary
<imbrandon> pkg
<imbrandon> just see little use in it yet, in fact a pretty big hindrance considering the number of changes needed still and there is only 30ish sloc now
<SpamapS> imbrandon: but thats 30 new lines of code that you invented
<imbrandon> SpamapS: just like every sed and 98% of all the other charms ( bash )
<imbrandon> not seen one charm minus the django one using puppet that has a real template engine, and besides your gripe was it was in the charm not that i was doing it
<imbrandon> so whats the diff ?
<imbrandon> i mean i'm not saying you're 100% wrong, but to play the 10yr old, but but but everyone else is and far worse than me , i thought it was pretty elegant for the simplicity of it still
<imbrandon> and likely far more reliable than trying to use a html centric template engine , and far lighter than using a config one and far more robust than sed
<imbrandon> :)
<imbrandon> but again the point was in/out of the charm is what i digested anyhow tho
<imbrandon> and solely on that its a bit of a stretch
<imbrandon> *cough* http://jujucharms.com/charms/precise/ceph/hooks/ceph-common.sh i'm thinking does much the same thing as my /hooks/lib/common :)
<SpamapS> imbrandon: read that again.. you'll only see heredocs and appending. ;)
<SpamapS> imbrandon: my point isn't that its bad to do a little abstraction around templating. My point is that the problem space did not warrant that.
<imbrandon> it does imho, it's like 3 very clear functions to those that don't even code php OR a much more convoluted sed script
<SpamapS> sed is actually a horrible way to do these things IMO
<SpamapS> templates are for spaces where you need to be able to see and edit the presentation outside the logic.
<imbrandon> my point is it's clear even to non-php people, and compare it to the complexity of chef-common.sh
<SpamapS> Otherwise, just put stuff in .d files or append with variables
<imbrandon> ceph-common.sh*
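The ".d files or append with variables" style SpamapS advocates amounts to something like this (paths here are scratch locations for illustration, not any charm's real layout):

```shell
#!/bin/sh
# Sketch of the conf.d drop-in + heredoc style SpamapS prefers over a
# templating layer for small configs. Paths are scratch locations only.
CONF_D=$(mktemp -d)
WORKERS=4

# Drop a self-contained fragment into a conf.d directory; shell variable
# expansion inside the heredoc does the "templating".
cat > "$CONF_D/50-workers.conf" <<EOF
worker_processes $WORKERS;
EOF

cat "$CONF_D/50-workers.conf"
```

No template engine, no sed; the presentation never leaves the hook logic, which is SpamapS's point about when templates are actually warranted.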
<SpamapS> imbrandon: did you read the ceph charm?
<SpamapS> its doing *quite* a bit more than the nginx charm :)
<imbrandon> sure, NOW but you think the one variable that is changed will be it ?
<imbrandon> rember OMG
<imbrandon> and that was without forethought mostly
<SpamapS> was the templating the problem with OMG?
<imbrandon> and to top it off i planed on using it as a general helper lib
<imbrandon> yea
<imbrandon> err no
<SpamapS> really? It wasn't apache.. and the plugins.. and the lack of caching?
<imbrandon> thats totaly not the point, and yea i said wrong word
<imbrandon> i was trying to agree :)
<imbrandon> i wasn't saying it was the problem, i was saying it was complex
<imbrandon> keeping the variables of my defence here in line :)
<SpamapS> I don't want you to be on the defensive.
<SpamapS> I actually want you to be showing me the benefits of your choices.
<imbrandon> i mean honestly i adapted that php pretty much line for line what i did in shell for omg
<SpamapS> Because my point wasn't "that sucks" but "why do that?"
<imbrandon> well its really hard not to be when i can look at ANY other charm and its far more complex, i explained the why's in the reply
<SpamapS> Also its entirely possible that all the other charms did it wrong too
<SpamapS> so don't use that as a reason to repeat their mistakes
<imbrandon> oh i'm not saying they dont infact i think they do
<imbrandon> and after a review of 3rd party tools and what they did thats what i had come up with
<imbrandon> in bash and php identicly
<SpamapS> alright, that makes sense. So what, again, is the charm supposed to do?
<imbrandon> 3rd party tools  + what the charms did
<SpamapS> you didn't respond to that. :)
<imbrandon> that kinda ran into one
<imbrandon> well thats kinda loaded, depends on the point in the lifecycle you're asking about
<imbrandon> heh
<imbrandon> but tl;dr is ...
<imbrandon> deploy to a known state, and always be in that known state as config options are swapped in and out by the devops or other charms
<imbrandon> use that info of the state desired and do it the best way i know how
<imbrandon> as a charmer
<imbrandon> tl;dr ^^
<imbrandon> in this instance, i made the choice of creating a new tool for the job where i felt others were overkill <--> not on target enough
<imbrandon> but said tool likely does belong in its own lib sure, but would we not have this same convo if it did ?
<SpamapS> imbrandon: still don't understand what the charm is supposed to do
<SpamapS> imbrandon: walk me through how a user can make use of it
<imbrandon> and i think other charms are doing it wrong by not making the same decision, instead are using sed or other goto bash staples
<imbrandon> SpamapS: well that one is very much like you said, only apt-get , but it was created on the days that we talked about doing that kind of thing now and later expanding them to be ideal so inheritance or similar options emerge
<imbrandon> but if you look at nginx-proxy it builds on it
<imbrandon> and very much in a way that's repeating
<imbrandon> but only because of the aforementioned problem
<imbrandon> eventually it as well as others will build on this
<imbrandon> thus i started planning for things like nfs / sharedfs
<imbrandon> etc
 * xnox spinning up juju on HPCloud 41/100 instances done =)
<xnox> ProviderInteractionError: Unexpected 413: '{"overLimit": {"message": "RAMLimitExceeded: You can only allocate 4096 RAM (in MB)", "code": 413, "retryAfter": 0}}'
<xnox> =(
<imbrandon> limited to 20GB per az combined iirc unless you request an increase
<SpamapS> xnox: NICE
<imbrandon> mmm mine should be done tooo /me goes to make the svg
<imbrandon> SpamapS: i mean am i wrong in the thinking ( over all )
<SpamapS> imbrandon: re your point about inheritance.. if nothing else, it shows that we need inheritance. :)
<imbrandon> right
<SpamapS> imbrandon: I had an idea of how we could do that in the store w/o juju's help
<SpamapS> imbrandon: what if we use my charm splicer to build charms that inherit others?
<imbrandon> ahh i bet you have the same idea me and marcoceppi came up with at uds :)
<imbrandon> hahahahah yup
<SpamapS> So just have a launchpad project.. like   base-charms .. and anything in there will be scanned for a splice.yaml and spliced up and pushed into lp:charms
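For concreteness, the splice.yaml being imagined might look something like this. The entire schema is hypothetical, invented here to illustrate the idea; the splicer is still just a prototype, not a charm-tools format:

```yaml
# Entirely hypothetical splice.yaml -- a sketch of what a base-charms
# project entry could declare, not an existing tool's schema.
base: lp:charms/precise/mysql      # charm to inherit from
overlay:                           # paths where this charm's files win
  - hooks/
  - templates/
metadata:
  name: mysql-tuned                # published name after splicing
```

A scanner over the project would splice each such charm against its base and push the result into lp:charms, giving store-side inheritance without juju's help.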
<imbrandon> like 30 minutes after me and him met ( on way from airport to hotel on train ) we talked about JUST that idea, that was gonna be our laptop winner :)
<SpamapS> but.. really.. I can't see why we don't just do this in juju
<SpamapS> it would be so easy
<imbrandon> we was gonna hack up hooks
<SpamapS> I tried splicing every charm that I could into one charm
<imbrandon> to call other charms and deploy em
<SpamapS> it was fun :)
<imbrandon> as deps
<SpamapS> I should merge my splice command into charm tools
<imbrandon> e.g. see if service is running, if not, deploy default ourself
<imbrandon> different method, same thing tho, well would enable the same thing
<SpamapS> oh
<SpamapS> thats definitely something entirely different
<SpamapS> lazy loading services.. sounds awesome
<imbrandon> i think there was bets on how fast the charm would simultaneously win the laptop and be banished from existence :)
<SpamapS> hahaha
<SpamapS> I should have submitted a Frankencharm
<SpamapS> like, splice  * into one evil charm to rule the world
<imbrandon> i actually had fully intended to , as the nginx stack* at the time, but other priorities like full as hell charm rooms emerged
<imbrandon> heh
<SpamapS> interesting.. LAX<->San Diego commuter flights are about $245. Driving + parking at the Sheraton would cost about the same....
<imbrandon> basically the idea was to "juju bootstrap && juju deploy mywordpress-subordinate-theme-and-plugins"
<imbrandon> and thats it ....
<imbrandon> just wait for it all to come up
<SpamapS> and taking Amtrak will be about $100.
<SpamapS> if I didn't know that Amtrak's wifi was TOTAL CRAP I would say its an easy win. But Amtrak also comes with thugs.. hm.
<imbrandon> hahah
<imbrandon> amtrak thugs >
<imbrandon> never equated the 2
<imbrandon> SFO BART + Punks , yea
<imbrandon> heh
<imbrandon> never fails every time i'm in SFO ( like 7 or 8 times now ) i see mohawks and leather studded clothing , no matter what decade it is
<SpamapS> The only people who ride Amtrak are people too poor to have cars and hippies.
<imbrandon> heh, i hate driving
<SpamapS> But I'm kind of thinking for this particular conference its worth it since I don't really need a car.. I have tons of friends in San Diego who can drive me anywhere I want to go.
<SpamapS> I wonder if there is a cheaper daily lot somewhere tho.. $22/day is ridiculous
<imbrandon> i've only owned a motorcycle for the last 8 or 9 years, and even that i don't drive like a daily driver; it's a for-fun bike and i use mass-transit/cabs everywhere
<imbrandon> even out here in hickvill midwest :)
 * imbrandon hugs his almost to be considered classic 1996 Neon Green Ninja ZX-7R he bought new
<SpamapS> Ninja :)
<imbrandon> dude, i love em, had a honda, and 2 ninja's now
<imbrandon> laid the 1st one down on the highway ... at speed
<imbrandon> stopped riding for a year, then got a new one ( in 96 ) and kept it :)
<imbrandon> honda was always too big for my frame
<imbrandon> 1100cc is about all the horse i need :)
<xnox> apparently my quota request must go through special approval.... so archive rebuilds until monday....
<xnox> well I can work within 100GB RAM limits....
<xnox> I've asked for 1600GB RAM limit...
<imbrandon> SpamapS: u noticed the fairly new ec2audit tool ? would be nice to make use of that + cross cloud etc
<imbrandon> somehow
<imbrandon> xnox: i just closed my 2nd acct or i'd loan it to ya
<xnox> imbrandon: .... hmm... I have put two requests in for AZ1 and AZ3 regions. And I'd like to see when they will come back to me about it ;-)
<imbrandon> and on HP i'm nearing my limit too between one az of mirrors and long running dev instances + one az of trying to rebuild ubuntuwire.com et al + one az of random juju deploys and destroys
<xnox> perfect request at 4:30pm Friday LA time.... =) and night time in EU
<imbrandon> xnox: the few times i used support it was about 24hrs
<xnox> weekend coming up ;-)
<imbrandon> yea but iirc their noc and support is manned 7 days
<xnox> imbrandon: can you start mirroring AZ1 mirror into AZ3?
<imbrandon> xnox: i got one better for ya, about to unleash the swift mirrors
<xnox> ???? =P what are those?
<imbrandon> ( in *.region-a.geo-1 )
<xnox> the CDN backed thingy?
<imbrandon> yea but if used local its faster than the ones i have up now
<imbrandon> and pulling from my house in KC i saturated my home cable as well
<imbrandon> :)
<imbrandon> will be another 6 hours or so before they are done then i need to run an initial checksum on them
<imbrandon> so tomorrow sometime, BUT in the meantime if you drop the internal. off the hostname
<imbrandon> there is public dns for those too and its much faster in az 2 and 3 than us.archive.ubuntu.com
<xnox> ;-)
<imbrandon> for the meantime
<xnox> that was my plan, I did warn the HP guys about high traffic between the regions if they grant me quota, and the mirrors are not done yet ;-)
<imbrandon> but yea i started to make the 6 xlarge instances required for full mirrors in all 3 az's then said screw it and put effort into doing the swift object store one that only requires a single xsmall in one az to kick off jobs and cleanup etc
<imbrandon> and ended up being hella faster anyhow
<xnox> =))))
<imbrandon> i haven't put a cname on it yet, and it's about 6 to 12 hours behind depending on the package, compared to the others that are ~5 min
<imbrandon> but http://hdf00356c5f10fb3768dd5e9db5fcf855.cdn.hpcloudsvc.com
<imbrandon> you should be able to hit it and test it ... would not use it just yet til final checksum and cname in place tho
<imbrandon> btw one drawback is there is no browsing , same as on s3
<imbrandon> gotta know what you're after
<imbrandon> heh
<imbrandon> http://hdf00356c5f10fb3768dd5e9db5fcf855.cdn.hpcloudsvc.com/dists/quantal/Release
<xnox> lol
<xnox> good night ;-)
 * xnox is in EU
<imbrandon> gnight :)
<imbrandon> SpamapS / hazmat : http://paste.ubuntu.com/1102672/
<imbrandon> normal status
<imbrandon> outputs all good, but thats from svg
<imbrandon> if you want i'll pass one of yall my env.y credentials and all if you wanna toy with it to get the pic for jcastro , as the env is pretty sweet atm
<imbrandon> wikimedia x 5, mysql, memcache x 2, haproxy, newrelic(mysql), newrelic-php(wikimedia) x 5
<imbrandon> all related to the corresponding services etc
 * imbrandon will leave it up indefinitely over the weekend and debug himself as well on and off ...
<imbrandon> ahhh svg no likey hyphen in the env name ...
<imbrandon> that could be a problem.
<imbrandon> ( for completeness when someone reads this: normal and json status are fine, only svg errors; looks to be svg not agreeing with my env name of websitedevops-com-hpc [hyphens] )
#juju 2012-07-21
<hazmat> indeed
#juju 2013-07-15
<bloodearnest> heya folks - am using lxc environment, trying to add a private ppa in my install hook, but apt-cacher-ng is getting in the way. Is there anyway to run  a local lxc env without it?
<stokachu> jcastro: http://www.astokes.org/post/2013-07-14-juju-gunicorn-apache-django
<stokachu> not sure if you saw that yet, was hoping to get some contributors added to the github project
<stokachu> https://github.com/battlemidget/juju-apache-gunicorn-django
<stokachu> going to add rabbitmq support soon as well and maybe haproxy
<pavelpachkovskij> is there a way I can change config attribute from a hook?
<pavelpachkovskij> something like "config-set attr=value"
<marcoceppi> pavelpachkovskij: No, you can only manipulate config from the juju cli
<marcoceppi> It's not designed to be manipulated from within hooks
<pavelpachkovskij> marcoceppi, and if I try to run juju set ... from hook?
<marcoceppi> You can try, but you'd have to also include a copy of your ssh private key and ~/.juju/environments.yaml file in order to even start using the juju cli from the charm.
<pavelpachkovskij> I see
<marcoceppi> pavelpachkovskij: let me just say, while you can do this, it likely won't be accepted into the charm store. This is definitely not good practice
<pavelpachkovskij> I understand it
<marcoceppi> The impression config has for users is *they* set it; having the charm do it will break that story and perception
<pavelpachkovskij> marcoceppi, my problem is that I want to do something like this:
<pavelpachkovskij> juju set sample-rails command='rake db:migrate'
<pavelpachkovskij> but I can't track for changes if I can't modify this variable from the instance
<pavelpachkovskij> though this is a huge workaround, there is no other way to run a command within the application
<marcoceppi> pavelpachkovskij: So what you're trying to do is send ad-hoc commands to the charm?
<pavelpachkovskij> yes
<pavelpachkovskij> It would be nice to have support of ad-hoc commands built into juju itself
<pavelpachkovskij> marcoceppi, do you have something like this on the roadmap?
<marcoceppi> pavelpachkovskij: There's a few problems with this particular example. When you run juju set it will run config-changed on all units. Imagine having a deployment at scale (2+ units) they'd all start executing a rake db:migrate, which I can see as being particularly harmful
<pavelpachkovskij> I already thought about this
<pavelpachkovskij> marcoceppi, and work out a solution with different roles
<pavelpachkovskij> marcoceppi, exactly the same way how it is in capistrano
<marcoceppi> pavelpachkovskij: There's a mechanism for this kind of one-off command: `juju ssh <charm>/<unit-num> rake db:migrate`
<pavelpachkovskij> marcoceppi, this will not work for my purposes
<pavelpachkovskij> marcoceppi, issue with 2+ units solves with rack-master and rack-slave nodes in a cluster
<pavelpachkovskij> marcoceppi, your solution requires the user to know at least the root folder
<pavelpachkovskij> marcoceppi, maybe it's not the best example, but also I need to update application source code
<marcoceppi> pavelpachkovskij: The other thing you can possibly do is trap the previous value of the config, something like `config-get command > .previous-cmd`, check if it's actually changed. The downside to this is, if they set command to be the same exact string again it won't get caught
<pavelpachkovskij> marcoceppi, yes, this is the problem I'm trying to solve
<pavelpachkovskij> marcoceppi, tracking the change from 'rake db:migrate' to 'rake db:migrate'
<pavelpachkovskij> marcoceppi, it just doesn't trigger the hook
<marcoceppi> pavelpachkovskij: there's no way currently, unfortunately. Could you describe the entire workflow? Is the command just a one-off or is it part of a bigger process?
<pavelpachkovskij> marcoceppi, did you use heroku?
<marcoceppi> pavelpachkovskij: I know of it, for the most part
<pavelpachkovskij> marcoceppi, I want rack-charm with heroku like behavior
<pavelpachkovskij> with support of custom commands and source code update mechanism
<pavelpachkovskij> marcoceppi, actually I've made great progress with it, but I don't know how to solve this very problem
<pavelpachkovskij> give me a second, I'll push my code to github
<marcoceppi> pavelpachkovskij: what you can do instead is have two config value sections, one that tracks the source (source-url, source-branch, source-tag, whatever), then one that tracks what commands to run when doing an update (update-cmds). That way the user has described what they want done during updates and you can now trap when there's an update
<pavelpachkovskij> https://github.com/Altoros/rack/blob/master/hooks/chef/cookbooks/rack/recipes/config-changed.rb
<marcoceppi> ah, yes juju ssh really wouldn't help much here
<pavelpachkovskij> marcoceppi, another problem with this approach is that if the user changes another attribute, the command attribute will stay the same and cause another run of the command
<pavelpachkovskij> marcoceppi, I didn't get your idea with update-cmds
<pavelpachkovskij> marcoceppi, can you expand on it?
<marcoceppi> pavelpachkovskij: let me write a quick bash example
<pavelpachkovskij> marcoceppi, waiting, thanks
<marcoceppi> pavelpachkovskij: I'm not sure how well a bash example will translate to chef, but this was my idea: http://paste.ubuntu.com/5877478/
<pavelpachkovskij> marcoceppi, yeah... but... there are two issues: 1. Forcing users to write huge configs. 2. It's really hard to fully automate deployment.
<pavelpachkovskij> marcoceppi, it's really common case when you want to run custom rake task
<marcoceppi> pavelpachkovskij: I don't see how it's anymore difficult than having to run juju set several times to complete tasks
<pavelpachkovskij> marcoceppi, if a user makes a typo, they have to re-run the whole deployment
<marcoceppi> pavelpachkovskij: Could you explain how a typo would force a re-run of deployment?
<marcoceppi> I'm not grasping the use case very well
<pavelpachkovskij> marcoceppi, forget, this issue is not very important
<pavelpachkovskij> marcoceppi, I just don't like this idea
<marcoceppi> pavelpachkovskij: We've not had a real strong use case for ad-hoc command running where `juju ssh` wasn't a valid solution. In this case, if you feel strongly about being able to do so, you might want to open a bug with juju-core to start a dialog on how this can be resolved within juju core. Either by allowing the resetting of config values in a hook context, or by adding some new mechanism in juju
<pavelpachkovskij> marcoceppi, yep
<marcoceppi> Also, juju ssh drops you in as the `ubuntu` user, not directly as root, if that was a previous concern of yours
<pavelpachkovskij> marcoceppi, I know that it's the ubuntu user; ssh makes it difficult to run simple commands
<pavelpachkovskij> you have to ssh to it, go to the root folder, su as the deploy user, and only then run your command
<pavelpachkovskij> marcoceppi, becomes not simple at all
<marcoceppi> pavelpachkovskij: it does, I mean there are other ways around that, like writing convenience scripts (imagine just doing `juju ssh 2 run "rake db:migrate"` where run would cd to the proper folder, su to the user, wrap the command or pass it to chef solo, etc)
<pavelpachkovskij> marcoceppi, I understand that it's easy for devops, but not for casual developers
<pavelpachkovskij> marcoceppi, last example makes sense
<marcoceppi> pavelpachkovskij: This might be a better question for juju@lists.ubuntu.com to open a deeper dialog (and the juju-core devs typically read it) for more feedback if you're still looking for alternatives
<pavelpachkovskij> marcoceppi, I have to think about this
<jcastro> stokachu: I can syndicate your blog post on juju.ubuntu.com
<jcastro> stokachu: but feel free to also post it to the list!
<stokachu> jcastro: cool, do i need to just tag it with 'juju'?
<stokachu> and i can post to the mailing list as well
<jcastro> no I have a plugin that sucks it in and I can choose it
<stokachu> ah ok
<stokachu> jcastro: posted :D
<jcastro> negronjl: you're on queue this week!
<negronjl> jcastro: Aye :)
<negronjl> jcastro:  I should be able to work it early ... I'm in London for two weeks
<jcastro> https://bugs.launchpad.net/charms/+bug/1006064
<jcastro> this is the only one I care about since we'd like to demo it for oscon
<ev> is it possible to specify an instance type for canonistack using gojuju? default-instance-type is now ignored, but bootstrapping with --constraints="mem=6" has no effect. I still get an m1.tiny node.
<pavelpachkovskij> marcoceppi, I think using juju ssh 1 run should work for my goals
<pavelpachkovskij> marcoceppi, thanks for idea
<marcoceppi> pavelpachkovskij: awesome, glad I could help out!
<marcoceppi> pavelpachkovskij: just make sure to document it in the readme, etc
<pavelpachkovskij> marcoceppi, sure
<jcastro> hey jamespage
<jamespage> hey jcastro
<jcastro> I need you to rate the ceph charm before OSCON
<jcastro> but don't worry, it's easy, it's like 5 min of work
<jcastro> http://manage.jujucharms.com/charms/ceph/precise/qa/edit
<jcastro> you need to login
<jcastro> adam_g: you need to rate glance and cinder too
<jamespage> jcastro,  Access was denied to this resource.
<jcastro> are you logged in?
<jcastro> jamespage: ok let's do this
<jcastro> https://juju.ubuntu.com/docs/authors-charm-quality.html
<jcastro> copy and paste that into an email, the +1 parts
<jcastro> and then change things you don't do in the charm to +0
<jcastro> and I'll go ahead and enter it in the webform.
<jamespage> jcastro, how do I tell which providers it works on?
<jcastro> skip that part
<jcastro> that's automatic
<jcastro> skip all of reliable actually
<jcastro> jamespage: got it, thank you sir!
<jcastro> adam_g: we just need cinder and glance and then we're all set for OSCON!
<jcastro> negronjl: mira, when you do liferay can you rate it?
<jcastro> negronjl: also if you can't do it today lmk, I'm on an OSCON schedule so I can whine to someone else to finish it off
<jcastro> https://bugs.launchpad.net/juju-core/+bug/1201559
<_mup_> Bug #1201559: Prompt the user to generate a key instead of erroring out <juju-core:New> <https://launchpad.net/bugs/1201559>
<jcastro> this one's from Maarten
<sidnei> hum, is the package in the devel ppa potentially broken? it's not showing up as alternatives
<sidnei> $ update-alternatives --config juju
<sidnei> There is only one alternative in link group juju: /usr/lib/juju-0.7/bin/juju
<sidnei> Nothing to configure.
<sidnei> $ apt-cache policy juju-core
<sidnei> juju-core:
<sidnei>   Installed: 1.11.2-3~1414~precise1
<sidnei> $ apt-cache policy juju
<sidnei> juju:
<sidnei>   Installed: 0.7+bzr628+bzr631~precise1
<sidnei> mgz: ^
<sidnei> jamespage: maybe you know? ^
<sidnei> hazmat: https://code.launchpad.net/~ubuntuone-pqm-team/juju-deployer/refactor/+merge/174877
<marcoceppi> hazmat: I've got a question about deployer
<sidnei> marcoceppi: ask, maybe i can answer
<marcoceppi> If I have two ambiguous relations/interfaces, how would I map that? Can I do "mysql:db": { 'consumes': ['wordpress:db'] } in the relations object?
<marcoceppi> The examples show "mysql": { 'consumes': ['wordpress:db'] } but none show if you can map the relation at the top key level
<sidnei> marcoceppi: both "mysql:db": { 'consumes': ['wordpress:db']} and "wordpress:db": { 'consumes': ['mysql:db']} would work
<marcoceppi> sidnei: awesome, that makes life a little easier
<hazmat> sidnei, marcoceppi the consumes/weights syntax is quite ugly.. .. i'd try..  - ["mysql:db", "wordpress:db"]
<hazmat> imo
<marcoceppi> hazmat: wait, so I won't have to use weights?
<sidnei> yes, there's that too
<marcoceppi> I can just have a list of relations?
<hazmat> marcoceppi, yeah.. that's legacy format.. i added some  simpler ones..
<sidnei> marcoceppi: weights are optional (or should be)
<hazmat> sidnei, they're not optional in the dict format..
<marcoceppi> hazmat: can you expand just ever so slightly on your example?
 * hazmat digs up a link
<sidnei> hazmat: maybe it's in my branch only? i don't have weights in any of my configs
<hazmat> marcoceppi, http://bazaar.launchpad.net/~hazmat/juju-deployer/refactor/view/head:/doc/config.rst#L61
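For reference, a minimal deployer stack using the endpoint-pair relations syntax hazmat points to might look like the following; the stack and service layout are illustrative, not taken from the linked doc.

```yaml
# juju-deployer config sketch: relations as plain endpoint pairs,
# no consumes/weights keys. Names here are illustrative.
wordpress-stack:
  series: precise
  services:
    wordpress:
      charm: cs:precise/wordpress
      num_units: 2
    mysql:
      charm: cs:precise/mysql
  relations:
    - ["mysql:db", "wordpress:db"]
```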
<marcoceppi> hazmat: so I should start using yaml instead of json for the deployer cfg?
<hazmat> marcoceppi, i do
<hazmat> much less typing
<hazmat> sidnei, dunno.. perhaps they were optional keys on a dsu sort, but for legacy format, the relations struct was a dict, and it used weight as a sort key
<marcoceppi> fantastic. Has this landed in "darwin" branch yet?
<marcoceppi> jk, this is the darwin branch
<marcoceppi> Okay, moving on. thanks for the clarification!
<sidnei> hazmat: im pretty sure i sent an mp to make weight optional, default to None for the sort.
<hazmat> sidnei, cool
<sidnei> to the legacy deployer that is
<hazmat> marcoceppi, hopefully that series distinction goes away this week, going to chat with adam_g post vacation about merging the two
<marcoceppi> hazmat: that'd make my life easier
<sidnei> hazmat: did you fix that issue with things ending up in non-started state not causing deployer to exit non-zero?
<hazmat> sidnei, is there a bug for it?  i did fix a minor that deployer wasn't checking state after it did a relation wait
<hazmat> so maybe
<sidnei> i don't think i filed a bug no, maybe i emailed you
<jose> hey guys, when are charms getting reviewed again? mine has been on the queue and wasn't
<marcoceppi> jose: the queue is constantly being reviewed, we just rotate who is "on deck" to do reviews
<jose> well, I'll wait then
<marcoceppi> jose: your postfix charm will definitely get eyes on this week, there's only one charm ahead of you in the queue
<marcoceppi> jujucharms.com/review-queue
 * jose checks
<jose> well, I'll wait then, thanks!
#juju 2013-07-16
<AskUbuntu> openstack infrastructure help | http://askubuntu.com/q/320538
<negronjl> jcastro: I may not be able to finish it ... in London sprinting :/
<melmoth> hmmm, my charm.log says opened 123/udp, but i do not see any change in my vm security rules
<melmoth> hmm, probably because it s a subordinate charm.
<jcastro> negronjl: we'll handle it here.
<marcoceppi> melmoth: what provider?
<melmoth> marcoceppi, openstack.
<melmoth> turned out it was related to the fact the charm is a subordinate
<melmoth> once i exposed the container charm, i saw the security rules change
<marcoceppi> melmoth: interesting
<melmoth> not sure if it s a bug or a feature :)
<marcoceppi> melmoth: probably a bug
<jcastro> marcoceppi: liferay if you have time today
<jcastro> marcoceppi: also if you do promulgate, rate it.
<jcastro> *cough*
<marcoceppi> jcastro: I've carved out some time around lunch
<jcastro> EXCELLENT.
<jamespage> marcoceppi, do we have an agreed location for charm unit test?
<jamespage> I think I'm correct in saying that the tests top-level directory is more charm integration testing type stuff right?
<marcoceppi> jamespage: Yes, so test(s)? directory in CHARM_DIR is functional testing, unit tests should live near the code of the file it's testing
<marcoceppi> jamespage: we've tossed a few ideas, but haven't agreed on anything yet, the idea was lib/<LIB_NAME>/tests
<jamespage> aack
<pavel> how can I specify an array type in config.yaml?
<marcoceppi> pavel: There are no array types in juju config
<pavel> marcoceppi, thanks
<marcoceppi> Typically it's just a string and you specify a delimiter (, ; |) etc
<pavel> yep
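Since config.yaml has no list type, a hook typically splits the delimited string itself. A minimal bash sketch, with an invented option name:

```shell
#!/bin/bash
# Split a comma-separated config value into a shell array; juju's
# config.yaml has no list type, so charms encode lists as strings.
# "allowed-hosts" is a made-up option name; inside a real hook the
# value would come from: value="$(config-get allowed-hosts)"
value="web1.example.com,web2.example.com,web3.example.com"

IFS=',' read -r -a hosts <<< "$value"

for h in "${hosts[@]}"; do
    echo "allowing $h"
done
```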
<stokachu> is it possible to have multiple subordinates related to one service
<stokachu> for example i would like to expose my django app over gunicorn and have another webserver like monkey to serve the static files
<stokachu> and all be on the same machine
<marcoceppi> stokachu: There is no limit to subordinates on a single machine, however be careful not to have them stomp on each other
<stokachu> marcoceppi: yea i just ran into that and shutdown the whole unit
<stokachu> does it make sense to have a django application a subordinate to apache? reason I ask is im trying to figure out the best way to serve static files
<stokachu> currently apache is one instance, and django+gunicorn is another which is also where the static content is
<marcoceppi> stokachu: There's really no right or wrong answer, what works best for you and your charm/application is the right answer. Charms are designed to be opinionated reflections of how you/the service's community does workloads
<stokachu> i gotcha
<marcoceppi> Personally, I'm not a fan of apache as a subordinate. It's too difficult to guess every possible workflow. When I need to be opinionated about which web service to use in a charm I add it as a config option (many of my charms have an "engine" config option with apache2 and nginx being valid values)
<stokachu> marcoceppi: with your setup do you install apache/nginx on the same system as the charm you are deploying?
<marcoceppi> stokachu: I will install either depending on the option provided by the user on the unit itself. That way I know what configuration files to place where, etc
<stokachu> marcoceppi: ok cool that makes sense to me
<stokachu> that'll reduce my instance count too
<marcoceppi> stokachu: however, you don't even have to provide the option, if apache2 is the answer for you then use that. If someone else wants another web service they can add the configuration option and submit a patch ;)
<marcoceppi> If you have a desire to use different engines, then I'd look for configuration options over subordinates
<stokachu> marcoceppi: ok cool, i think ill look into that
<stokachu> makes more sense to me than the subordinate does
<marcoceppi> Again, personally, I like to look at subordinates as a way to extend a service into other unrelated services without the use of a relation. Like monitoring, shared file systems, etc
<jcastro> yeah to me subordinates is "I need to tack on another service like backup"
<stokachu> ah so totally separate from the intention of the charm itself
<stokachu> that clears up a lot now
<marcoceppi> stokachu: gunicorn is of course the exception to my opinion, as it works quite well the way it's designed
<stokachu> marcoceppi: agreed, ill just make note of it as being a 'special case'
<stokachu> thanks guys you have just cleared up the rest of my confusion with the subordinates and relations
<marcoceppi> juju's flexible to do most whatever you want, you just need to note what makes sense for you and then shape that into charms. We really don't have many conventions of what each type of charm should look like. It's very very loose and opinionated
<marcoceppi> cheers o/
<stokachu> marcoceppi: yea i love the flexibility :D
<_mup_> Bug #1201879 was filed: Juju LXC deployment fails with abort error <juju:New> <https://launchpad.net/bugs/1201879>
<jcastro> hey adam_g
<jcastro> did you get my messages wrt. rating those 2 openstack charms?
<adam_g> jcastro, dont think so. sorry, been out since thursday. still going thru inbox
<adam_g> jcastro, not seeing it, when did you send
<jcastro> IRC 2 days ago
<jcastro> TLDR I need you to rate glance and cinder
<adam_g> jcastro, oh, ok
<jcastro> you can either do that through manage.jujucharms.com
<jcastro> or mail me the results for each one based on: https://juju.ubuntu.com/docs/authors-charm-quality.html
<jcastro> and I'll rate them for you
<jcastro> you can basically just copy and paste the criteria into an email and then change the ones the charm doesn't do to +0
<jcastro> and I'll do the rest
<jcastro> but I need it by Friday EOD
<adam_g> jcastro, email you the quality assessment?
<jcastro> yeah
<adam_g> ok
<jcastro> I only need cinder and glance by Friday
<adam_g> note glance + cinder (and the other bash openstack charms) will have python rewrites proposed to charm store sooooon
<jcastro> will they be rewritten by OSCON?
<jcastro> if no then I need the ratings before then. It should only take you like ~5min per charm to rate
<adam_g> jcastro, we are targeting to have *most* of them at least done and ready for testing + review by this week EOW
<jcastro> adam_g: ok then if you want to rate the new ones instead that's fine
<jcastro> As long as we have ratings for them for OSCON
<adam_g> jcastro, i'll rate them as-is. the rewrites are functionally the same. they're just much cleaner and an improvement in terms of maintainability
<adam_g> oh, and not bash
<jcastro> if you want to rate what they will support that's fine too
<jcastro> a few day discrepancy isn't a big deal
<marcoceppi> hazmat: is python-jujuclient packaged?
<marcoceppi> "somewhere"
<adam_g> marcoceppi, https://launchpad.net/~ahasenack/+archive/python-jujuclient i think kapil mentioned this as a source last week
<marcoceppi> adam_g: cool, thanks
 * rick_h still watches over here...
<arosales> jcastro, I don't see @ https://juju.ubuntu.com/docs/authors-charm-policy.html
<arosales> where it states a charm will be ack'ed if upstream is still not fully GA
<arosales> if the charm itself looks good why block it from the charm store?
<marcoceppi> arosales: Well, with tests I don't see why we wouldn't let alpha services in
<arosales> sorry I meant to say "where it states a charm will be _nacked_ if upstream is still not fully GA."
<marcoceppi> it doesn't, but I don't think it's a policy, more an ethical thing
<jcastro> that's just a choice we decided
<arosales> marcoceppi, if the charm itself looks good and follows the charm store guildlines then . .  .
<marcoceppi> I don't want to say "DEPLOY DISCOURSE WITH THIS CHARM" then because of a change upstream (and there are a lot of changes) it doesn't deploy
<jcastro> the charm pulls from github, there's no guarantee it'll even deploy at any given point
<marcoceppi> it makes juju look bad
<jcastro> so we left it in ~marcoceppi
<marcoceppi> Since the store is supposed to be golden
<jcastro> when they hit 1.0 I think we should totally Feature it though
<marcoceppi> So, when testing gets better, I'd feel more confident saying "put it in the store"
<marcoceppi> because I'll know exactly when it starts failing to deploy
<marcoceppi> Of if I can get sam & co to take over the charm upstream
<arosales> jcastro, marcoceppi what about the charm maintainer keeping a known good working version
<arosales> and the config option to pull from trunk
<marcoceppi> arosales: it pulls "latest-stable" but that changes every week
<arosales> any charms that pulls directly from tip is at risk for a non-deploy
<marcoceppi> so if it's pinned to a version that becomes out of date in a matter of days
<arosales> marcoceppi, sure, but it deploys and I config change to get the latest
<marcoceppi> they're seriously averaging 30+ commits to tip a day
 * arosales thinking about scenarios aside from discourse
<arosales> ie when should the charm store not accept charms when upstream is moving too fast
<marcoceppi> it's up to the maintainer at that point
<marcoceppi> I wouldn't halt a review because someone did that, I just don't have the time atm to keep up as a maintainer and truly offer a compelling charm experience with discourse + juju
<arosales> and thats fair
<arosales> it just sounded like jcastro was saying it was policy
<jcastro> no, that was charm specific
<jcastro> sorry, I didn't mean to imply that
<jcastro> I mean, we could do the known-good-deploy thing, as soon as upstream does a release
<jcastro> they don't even really roll tarballs yet
<arosales> jcastro, gotcha. Thanks for clarifying.
<jcastro> we could do known-good github snapshots but I am pretty sure upstream wouldn't like that
<marcoceppi> about to roll out 0.4.0 of amulet with initial deployer support. Should have the rest of deployer integration ready by Thursday in time for next week
<jcastro> they want you on tip right now, so that's why we kept it out of the store for now
<arosales> marcoceppi made a good point of it being up to the maintainer wanting a good juju story. Specifically, marcoceppi could push discourse into the charm store but would have to heavily maintain it atm.
<jcastro> marcoceppi: hey I think I found a problem in your ninja CSS thing for the docs
<jcastro> go to http://juju.ubuntu.com/docs
<jcastro> and check out the installation instructions
<marcoceppi> I am checking them out
<jcastro> then click on "getting started" to get to the actual page then the proper instructions show up
<marcoceppi> wat.
<marcoceppi> jcastro: OH
<jcastro> yeah weird
<marcoceppi> that's because there is two pages right now, index.html and getting-started.html
<marcoceppi> I thought I committed a fix for that
 * marcoceppi checks
<jcastro> also did you see how I butchered the instructions
<jcastro> 2 lines.
<marcoceppi> jcastro: </3
<arosales> jcastro, +1
<marcoceppi> I hope you moved the update-alternatives somewhere else
<jcastro> to where?
<marcoceppi> people will ask, maybe a caveats section
<jcastro> IS and m_3 know how to do it
<jcastro> that covers everyone right?
<marcoceppi> ROFL
<jcastro> at this point if someone is like "I want local" we can say "wait a week"
<marcoceppi> jcastro: pushed index.html fix, will be out next doc build
<marcoceppi> "I was using 0.7 and installed juju-core NOW EVERYTHING IS BROKEN"
<marcoceppi> maybe it'll be a better question to link in the install page from Ask Ubuntu
<jcastro> I can add a link but to what?
<marcoceppi> jcastro: put a link to AU in the docs
<jcastro> If you used Juju .7 and install 1.x it doesn't change the update-alternatives anyway
<jcastro> yeah to which question?
<marcoceppi> jcastro: pretty sure it does?
<marcoceppi> jcastro: make one?
<marcoceppi> Oh, wait
<marcoceppi> it's the other way around
<marcoceppi> installing juju-core then installing juju pushes alternatives to 0.7 I think
<marcoceppi> nvm
<marcoceppi> w/e mims knows
<jcastro> yep
 * marcoceppi goes back to work
<jcastro> and alternatives doesn't really work anyway
<jcastro> either way you need different environments.yaml
<jcastro> so I don't think people are like just flipping it back and forth
<marcoceppi> jcastro: I've been flipping, but I have a different JUJU_HOME for juju-core, but yeah it's a much more advanced thing
<marcoceppi> that will hopefully just go away soon
<jcastro> yeah
<jcastro> ok so how about I point to this
<jcastro> http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<marcoceppi> jcastro: naw, I just wouldn't worry about it
<marcoceppi> when local lands we'll just document it as a provider
<jcastro> fixed it, a link won't hurt anybody
#juju 2013-07-17
<herophuong_> Hi
<herophuong_> I need some help on using juju on local machine
<herophuong_> I've followed the get started page on juju.ubuntu.com
<herophuong_> but the agent-state of mysql/wordpress keeps on 'pending'
<herophuong_> when I manually start the container
<herophuong_> it says 'Permission denied - Failed to make / rslave'
<herophuong_> what should I do now?
<bloodearnest> heya, using pyjuju and lxc, is there anyway to disable using apt-cacher-ng?
<bloodearnest> is the incoming lxc support for gojuju using it too?
<bloodearnest> FWIW, adding 'Acquire::HTTP::Proxy::private-ppa.launchpad.net "DIRECT";' into /etc/apt/apt.conf.d/97disable-proxy-ppa did what I needed
<marcoceppi> bloodearnest: I don't think it will, the LXC support is being re-written (and re-architected) but I can't say for sure either way
<bloodearnest> marcoceppi, ok thanks. Looking forward to trying it out
<marcoceppi> bloodearnest: it's been a long wait, but I hear it's landing really soon
<marcoceppi> Is there anything I need to do in order to get websockets to work in juju-core?
<stokachu> using the apache2 charm how would i install files to that instance after deployment?
<marcoceppi> stokachu: if there isn't a configuration option for something like a files repo, then juju scp would be your next best bet
<stokachu> marcoceppi: ok thanks ill look at that
<noodles775> Any charmhelpers reviewers able to look at these two MPs?
<noodles775> https://code.launchpad.net/~michael.nelson/charm-helpers/ensure_etc_salt_exists/+merge/170546
<noodles775> https://code.launchpad.net/~michael.nelson/charm-helpers/namespace-relation-data/+merge/170575
<jcastro> evilnickveitch: both local and azure are landing in 1.11.3
<jcastro> we should probably sync with thumper or mramm as far as updating the docs, etc.
<mramm> well azure is sort of not really there yet
<mramm> I think
<evilnickveitch> jcastro, yeah, I saw that. I will have to go back and put in all those references to LXC i took out :)
<jcastro> oh, I saw it in the release notes draft
<evilnickveitch> it is in the release notes
<mramm> it will bootstrap
<jcastro> oh, we should be explicit about that then
<mramm> but it will not deploy services or something like that
<evilnickveitch> okay, we should mention it as experimental
<mramm> I will check in and make sure the notes get updated
<jcastro> I mean we could just mark it tech preview or something
<jcastro> evilnickveitch: can you mail cheney and try to coordinate landing the docs at about the same day he releases? We should do a nice announcement, etc.
<evilnickveitch> jcastro, ack
<marcoceppi> mramm: what's the version schema, shouldn't a new provider bump to 1.12.0 ?
<mramm> x.odd are development releases
<mramm> x.even are stable releases
<mramm> major version upgrades have some some sort of backwards incompatibility in API
<marcoceppi> mramm: awesome, thanks!
<ehg> hi guys, what should happen when i assign an elastic IP to an ec2 instance? should goju pick it up?
<marcoceppi> ehg: unit-get public-address will work, but it might not properly appear in the status output, I believe the public address is cached in that output
<marcoceppi> basically, everything will work. Only juju status might not reflect the change
<ehg> marcoceppi: ah ok. i'm asking because we rebooted an instance and it lost its elastic ip, just wondering if that's juju doing it, or something we're doing wrong
<marcoceppi> ehg: no, that's actually something that EC2 does
<marcoceppi> IIRC
<ehg> ah - thanks :)
 * ehg <-- ec2 n00b
<ehg> is there any way to clear the juju cache?
<marcoceppi> ehg: what do you mean by juju cache?
<marcoceppi> For the juju status output?
<ehg> and juju ssh
<marcoceppi> Ah, I believe if you just use juju ssh <machine-number> instead of the service/unit notation it'll use the correct address
<marcoceppi> Otherwise, not that I know of
<ehg> i seem to remember juju picking up the elastic public address, i can't remember what i did to get it to do that, maybe restarting some jujud
<marcoceppi> ehg: it's possible, I've not really bothered too much with that though. I know the machines output will have the correct address, just the unit's public-address isn't updated (this was also several versions of juju ago that I came across it, haven't dug much into it since)
<ehg> marcoceppi: fair enough. i'll search for/open a bug - thanks :)
<jcastro> hey evilnickveitch
<jcastro> any thoughts on the developer.u.c resyndication?
<jcastro> for the docs etc?
<evilnickveitch> jcastro, tbh i haven't looked at it yet, busy rewriting the author docs
<jcastro> ok
<jcastro> charmers sync up on G+ in 10 minutes!
<jcastro> I'll be updating ubuntuonair.com too for people who want to follow along
<pavel> jcastro, hello, can you give me a link?
<jcastro> I am firing it up now
<jcastro> I'll paste it in here in a minute
<pavel> jcastro, ok thanks
<arosales> jcastro, any way developer.u.c can just pull from the docs bzr branch?
<jcastro> yeah probably
<jcastro> I think I might just selectively pull
<arosales> that we it is just happens with out us having to maintain another location.
<arosales> s/we/way/
<jcastro> don't worry, I'll make sure we don't do that
<jcastro> anything I figure out will be based on one canonical location of the docs
<arosales> jcastro, much appreciated
<arosales> evilnickveitch, fyi for Azure it probably won't need any changes to the docs yet. I'll confirm with the red squad, but I don't think it is ready for the docs just yet.
<jcastro> https://plus.google.com/hangouts/_/4e37f79d5ad469b475340d826e93cf3df4d64227?authuser=0&hl=en
<jcastro> for anyone wanting to participate today ^^^^
<jcastro> in about 5 minutes
<jcastro> everyone else if you just want to listen  in on the charm meeting you'll just go to ubuntuonair.com
<marcoceppi> http://pad.ubuntu.com/7mf2jvKXNa
<kirkland> fyi, 404: https://juju.ubuntu.com/Interfaces/mount
<kirkland> I'm trying to use nfs and the mount interface
<kirkland> there's a number of pages that link to that address
<bbcmicrocomputer> marcoceppi: thanks for reviewing the liferay charm :)
<marcoceppi> bbcmicrocomputer: no problem!
<jcastro> evilnickveitch: where do people file bugs on the maas docs?
<evilnickveitch> jcastro, the MAAS docs are included in the source, so they just file them on the project page
<evilnickveitch> jcastro, why? do you have a bug?
<jcastro> someone pinged me
<jcastro> He's filing a bug
<evilnickveitch> cool
<jcastro> https://bugs.launchpad.net/maas/+bug/1202314
<_mup_> Bug #1202314: discrepancy between docs and behavior <MAAS:New> <https://launchpad.net/bugs/1202314>
<jcastro> evilnickveitch: ^
<jcastro> http://blog.scraperwiki.com/2013/07/17/weve-migrated-to-ec2/
<jcastro> \o/ and they're using Juju!
<marcoceppi> jcastro: Awesome!
<arosales> go ScraperWiki !
<arosales> jcastro, thanks for the link
 * marcoceppi compiles latest juju, looks forward to local!
<stokachu> marcoceppi: local w/lxc?
<marcoceppi> stokachu: local lxc and juju-core!
<stokachu> marcoceppi: nice! let us know how it goes i may try it out
<jcastro> marcoceppi: LXC report please!
<jcastro> also, liferay
<marcoceppi> meeting
<jcastro> hey marcoceppi
<marcoceppi> jcastro: yo
<jcastro> kirkland wants to do shared NFS storage like we do for wordpress for his charm
<kirkland> marcoceppi: howdy
<marcoceppi> kirkland: o/
<kirkland> marcoceppi: yeah, I need a basic charm, that uses NFS, for read/write shared filesystem storage across all units
<marcoceppi> kirkland: WordPress implements the "mount" interface http://jujucharms.com/interfaces/mount which is provided by the nfs charm (and a few others not in the store yet)
<marcoceppi> kirkland: did you just need an example, or did you have a question in particular?
<kirkland> marcoceppi: I'd like a charm to clone, as a template
<kirkland> marcoceppi: and then modify to my needs
<kirkland> marcoceppi: how do I branch the wordpress charm?
<marcoceppi> kirkland: if you have charm-tools installed you can use `charm get wordpress` otherwise you can use bazaar with `bzr branch lp:charms/wordpress`
<marcoceppi> The WordPress charm is mildly confusing in its setup. It's the result of pushing bash to its limits as a language
<kirkland> marcoceppi: is there a simpler charm that uses nfs for me to start from, then?
<marcoceppi> OwnCloud might be a better example of a more holistic view on how to handle the mount interface http://jujucharms.com/charms/precise/owncloud though it doesn't do a very good job of data preservation (i.e., if the mount relation is broken there's really no data recovery and the mount point isn't removed)
<marcoceppi> kirkland: owncloud is the only other charm that's actually in the store that uses the mount interface, so it's your next bet
<jcastro> nfs shared storage seems like something we would do as a "cookbook snippet" kind of thing for charm authors
<melmoth> actually, there are others http://jujucharms.com/interfaces/mount
<marcoceppi> melmoth: those aren't actually reviewed though, so not "officially" in the charm store
<marcoceppi> Definitely exist as examples, but take them with a grain of salt
<kirkland> jcastro: yeah, I would think nfs and mysql should be the two brain dead simple integration points
<jcastro> there's ceph in there too if you want to totally use a naval cannon on a nail.
<AskUbuntu> Is it possible to use Juju without having sudo access on local machine? | http://askubuntu.com/q/321252
<jcastro> heh, timely question
<marcoceppi> jcastro: got juju-core compiled, about to give local access a go
<jcastro> marcoceppi: save your history so we can use that as the docs for local
<marcoceppi> jcastro: ack
<jcastro> marcoceppi: toss it in an etherpad when you're done, I'd like to try it too
<marcoceppi> jcastro et all, here's the compile instructions: http://pad.ubuntu.com/rzudQRvsmw
<jcastro> on it now!
<marcoceppi> jcastro: haven't gotten it to work for me yet
<marcoceppi> but those are the compile instructions
<jcastro> I am still downloading
<kirkland> marcoceppi: okay, I've written my charm;  it's in a directory locally;  what's the current syntax for deploying from my local directory?
<kirkland> error: Environments configuration error: /home/kirkland/.juju/environments.yaml: environments.openstack.default-image-id: required value not found
<kirkland> meanwhile, juju also told me that that was deprecated
<marcoceppi> kirkland: what version of juju are you using, what provider are you using?
<marcoceppi> `juju version` (or `juju --version`)
<kirkland> marcoceppi: ii  juju-core                                 1.11.2-3~1414~raring1                amd64        Juju is devops distilled
<kirkland> wait
<kirkland> kirkland@x230:~/src/charms$ which juju
<kirkland> /usr/bin/juju
<kirkland> kirkland@x230:~/src/charms$ what-provides juju
<kirkland> juju-core: /usr/bin/juju
<kirkland> kirkland@x230:~/src/charms$ juju --version
<kirkland> juju 0.7
<kirkland> ???
<marcoceppi> kirkland: that explains that
<marcoceppi> and jcastro says update-alternatives aren't important
<kirkland> marcoceppi: shouldn't those two conflict?
<roaksoax> i think jamespage is handling the packaging
<roaksoax> now
<kirkland> jamespage: !
<kirkland> :-)
<kirkland> marcoceppi: okay, I'm purging juju and juju-0.7
<marcoceppi> kirkland: no, you'll want to do this: `sudo update-alternatives --config juju`
<marcoceppi> just select the proper juju version, 0.7 and 1.X can live side by side
<kirkland> marcoceppi: shouldn't I be able to just purge it?
<marcoceppi> kirkland: technically, maybe. I'm not sure how the packaging works entirely
<kirkland> kirkland@x230:~/src/charms$ sudo update-alternatives --config juju
<kirkland> update-alternatives: error: no alternatives for juju
<marcoceppi> kirkland: heh
<kirkland> marcoceppi: I'm reinstalling
<roaksoax> purging it should work
<marcoceppi> kirkland: ack
<kirkland> okay, I'm going now
<marcoceppi> update-alternatives worked at one point
<kirkland> marcoceppi: okay -- now, I'm ready to deploy my charm from my local directory
<kirkland> marcoceppi: and i'm on: kirkland@x230:~/src/charms$ juju version
<kirkland> 1.11.2-raring-amd64
<marcoceppi> kirkland: can you just give me the output of pwd from inside your charm's source directory?
<kirkland> marcoceppi: /home/kirkland/src/charms/john
<marcoceppi> kirkland: so, if your charm is named "john", you'll need to add a series directory between charms and john, /home/kirkland/src/charms/precise/john
<marcoceppi> as this'll be a local repository now
<kirkland> okay
<marcoceppi> after a successful bootstrap, you'll need to run the following:
<kirkland> marcoceppi: k
<marcoceppi> `juju deploy --repository /home/kirkland/src/charms local:john` which will default to the LTS series (precise) and look for the john charm in the "local" repository defined as /home/kirkland/src/charms
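The repository layout marcoceppi describes can be sketched as a small, runnable session; the juju invocation itself is left as a comment since it needs a bootstrapped environment, and a temp directory stands in for /home/kirkland/src/charms:

```shell
# A local charm repository is a plain directory with one sub-directory per
# series; the charm directory ("john", from the conversation) sits inside it.
REPO=$(mktemp -d)/charms          # stand-in for /home/kirkland/src/charms
mkdir -p "$REPO/precise/john"
ls "$REPO/precise"                # -> john

# "local:" tells juju to search --repository instead of the charm store;
# without a series prefix it defaults to the LTS (precise at the time):
#   juju deploy --repository "$REPO" local:john
```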
<marcoceppi> jcastro: there's a patch needed to get bootstrap working
 * marcoceppi ducks
<kirkland> marcoceppi: okay, good, deployed
<kirkland> marcoceppi: and I deployed nfs
<kirkland> marcoceppi: now I need to add the relation?
<marcoceppi> kirkland: now you should just need to `juju add-relation nfs john`
<marcoceppi> You can inspect the deployments by shelling directly in to them with juju ssh john/0 or juju ssh nfs/0 (<service>/<unit num>)
<marcoceppi> there's a log in /var/log/juju/unit*.log that shows you the hook output if you need to debug things
<kirkland> marcoceppi: okay, nice, I have john and nfs up, related, and john has mounted the nfs volume
<marcoceppi> kirkland: nice!
<marcoceppi> kirkland: what you can do now is `juju add-unit john` you'll get another john unit (john/1) and when it spins up it'll be mounted to the same NFS point
<kirkland> marcoceppi: right
<marcoceppi> each subsequent unit will get the same relation stuff, etc
<kirkland> marcoceppi: that's the idea ;-)
<marcoceppi> kirkland: sweet, let us know if you have any other questions!
<marcoceppi> jcastro: http://i.imgur.com/RfhvxBv.png
<thumper> jcastro: morning
<thumper> jcastro: there are a few rough edges with the local provider
<jcastro> thumper: wanna hop on G+ and sync up?
<thumper> jcastro: in particular addressing, but also the containers don't restart with the machine yet
<thumper> sure
<thumper> jcastro: you start one?
<jcastro> yeah one sec
<jcastro> https://plus.google.com/hangouts/_/b99ee2ffdc84a9d7b0fb2eb8e45c88e1c65c771c?hl=en
<jcastro> marcoceppi: ^^^
#juju 2013-07-18
<AskUbuntu> how to deploy charm to a machine which is already installed with some other charm? | http://askubuntu.com/q/321479
<jcastro> marcoceppi: can you link me to that pad with the build instructions again?
<jcastro> last time I promise
<melmoth> jcastro not sure this is what you are looking for http://pad.ubuntu.com/rzudQRvsmw
<melmoth> but it's in my client history :)
<marcoceppi> http://pad.ubuntu.com/rzudQRvsmw ^ that's it
<marcoceppi> just added update-alternatives instructions too
<jcastro> melmoth: yep, thanks
<jcastro> marcoceppi: so like these instructions remind me
<jcastro> that we should have dailies
 * jcastro knows what to ask mramm for next
<kirkland> marcoceppi: jcastro: Okay, I got my charm deploying successfully last night, now I'm just trying to get it scaling properly
<kirkland> marcoceppi: jcastro: how can I tell, inside of a unit, how many total similar units are in the cluster?
<kirkland> marcoceppi: jcastro: I need to update a configuration file, with the total number of units, and which number unit each of them are
<jcastro> from inside an existing unity?
<kirkland> jcastro: :-)  "unity"
<marcoceppi> kirkland: You'll want a peer relation
<jcastro> yeah sorry
<kirkland> marcoceppi: oh, interesting
 * marcoceppi digs up docs
<kirkland> juju just became a swingers party
<jcastro> marcoceppi: I need mongodb for local to work right?
<jcastro> is it "mongodb" or "mongo-server"?
<marcoceppi> jcastro: yeah, install mongodb then run sudo service mongodb stop
<marcoceppi> jcastro: err, mongo-server
<marcoceppi> I must have already had that installed
<melmoth> kirkland line 60 http://bazaar.launchpad.net/~pierre-amadio/charms/precise/ntpmaster/trunk/view/head:/hooks/ntpmaster_hooks.py
<melmoth> thats how i list peer in relation
<melmoth> may be there s a better way
<marcoceppi> So, we're missing that in the docs, evilnickveitch; we don't mention peer relations at all. Are you still working on the charm-author doc?
<jcastro> marcoceppi: and mongo needs to be stopped?
<jcastro> I am getting
<jcastro> juju open.go:89 state: connection failed, will retry: dial tcp 127.0.0.1:37017: connection refused
<marcoceppi> jcastro: do you have the latest juju-core?
<jcastro> over an over on bootstrap
<jcastro> yeah I did the update and it fetched stuff
<marcoceppi> jcastro: oh, it hasn't failed yet?
<marcoceppi> wait for it to fail or succeed
<jcastro> oh ok
<marcoceppi> juju bootstrap -v is crazy verbose
<jcastro> I probably cancelled too early
<marcoceppi> it's running juju-*-local upstart which starts a mongodb instance
<jcastro> error: no reachable servers
<marcoceppi> jcastro: yeah...
<marcoceppi> I thought you had an ssd?
<jcastro> I do
<marcoceppi> destroy environment then try again
<marcoceppi> kirkland: the peer relation is defined like provides and requires, but under a peers section, and all units automatically get the relation
<marcoceppi> So if you just need a list of all the peers, you can run `relation-list`, which will print each of the units in a service deployment
<marcoceppi> its structure is just like any other relation, so if you needed to get the address of each unit in the peer relation you could write a relation-joined hook that looked like this
<marcoceppi> http://paste.ubuntu.com/5887679/
<marcoceppi> but with actually syntactically correct bash: http://paste.ubuntu.com/5887680/
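The paste contents aren't reproduced in the log, so here is a minimal sketch of the kind of peer relation-joined hook being described; the relation name "cluster", the file name, and the log messages are assumptions, not taken from the pastes. Since relation-list and relation-get only exist inside a hook environment, the hook is written out and syntax-checked rather than executed:

```shell
# Write a peers relation-joined hook to disk (hypothetical relation name).
mkdir -p hooks
cat > hooks/cluster-relation-joined <<'EOF'
#!/bin/bash
set -e
# relation-list prints every *other* unit in this peer relation.
for unit in $(relation-list); do
    addr=$(relation-get private-address "$unit")
    juju-log "peer $unit has address $addr"
done
# Total cluster size: peers plus this unit itself.
juju-log "cluster size: $(( $(relation-list | wc -l) + 1 ))"
EOF
chmod +x hooks/cluster-relation-joined
bash -n hooks/cluster-relation-joined && echo "syntax OK"
```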
<jcastro> what version do you have of goju checked out?
<jcastro> 1.11.3-saucy-amd64
<marcoceppi> that's the version I have compiled, what does `bzr revno $GOPATH/src/launchpad.net/juju-core/` say?
<jcastro> 1481
<marcoceppi> jcastro: what does the line `timeout :=` in $GOPATH/src/launchpad.net/juju-core/environs/local/environ.go (~L539) say?
<jcastro> marcoceppi: 10
<jcastro> maybe I should blow it away and reDL
<jcastro> you changed that to 60 no?
<marcoceppi> jcastro: I'm looking at the rev history
<marcoceppi> I don't think my timeout fix landed
<marcoceppi> even though it says it was merged
<marcoceppi> jcastro: blow away and try again just to make sure
<kirkland> marcoceppi: hmm, okay;  is there an example metadata.yaml?
<kirkland> marcoceppi: so that I get the syntax right?
<marcoceppi> kirkland: https://bazaar.launchpad.net/~charmers/charms/precise/wordpress/trunk/view/head:/metadata.yaml
<jcastro> marcoceppi: yeah that did it
<marcoceppi> jcastro: cool, I can't even get it to compile anymore :(
<kirkland> marcoceppi: okay, thanks;  I guess I just want the interface to be ssh
<marcoceppi> kirkland: well, the interface can be anything you want
<kirkland> marcoceppi: http://paste.ubuntu.com/5887726/
<kirkland> marcoceppi: does that look right?
<marcoceppi> kirkland: that looks right to me, I'd be wary of interface naming. It doesn't exactly matter, per se, with peer, but each interface is like an agreement for what data will be sent and received
<marcoceppi> So I typically look to see if an interface has been created already (there is an SSH interface) and see what the charms expect.
<kirkland> marcoceppi: can you suggest something better?
<marcoceppi> Since any charm matching that ssh interface can *technically* be connected to each other
<marcoceppi> kirkland: I really can't, I'm not up to snuff with john the ripper, it looks like it's correct. I was more just saying "so you know"
<marcoceppi> finally, all relations are optional, so explicitly stating that does nothing
<kirkland> marcoceppi: I need all units to be able to ssh to one another
<marcoceppi> and setting optional to false, probably does nothing
<kirkland> marcoceppi: would "john-ssh" be better?
<kirkland> thanks
<marcoceppi> kirkland: So you're probably going to be sending host, port, and a shared private key between units
<marcoceppi> interface name for peer really doesn't matter, was my point, you can name it whatever you want because no other units except for those inside of the service, can connect
<kirkland> marcoceppi: does this make more sense?  http://paste.ubuntu.com/5887750/
<marcoceppi> kirkland: yeah
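kirkland's pastes aren't visible in the log, but a metadata.yaml with a peer relation along the lines discussed (the name, summary, and description fields here are illustrative guesses; only the "john-ssh" interface name comes from the conversation) would look something like:

```yaml
name: john
summary: John the Ripper, distributed across units
description: |
  Runs John the Ripper workers that share work across peer units.
peers:
  cluster:
    interface: john-ssh
```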
<kirkland> marcoceppi: okay, now, back to my original question...   how does a unit count how many peers he has?
<marcoceppi> kirkland: relation-list will list all units attached to that unit in that relation
<marcoceppi> kirkland: http://paste.ubuntu.com/5887680/
<kirkland> marcoceppi: okay, and does *every* unit get triggered each time a unit (relation) is added?
<marcoceppi> Just needs to be run within one of the relation-* hooks, can't just be run arbitrarily
<marcoceppi> kirkland: every time a peer is brought up, it will fire a john-relation-joined on every john unit deployed
<kirkland> marcoceppi: okay, cool, let me hack on this a bit then
<jamespage> wedgwood, adam_g: apologies for my completely fudged landing of roaksoax change for one of the openstack contrib helpers earlier today
<evilnickveitch> marcoceppi, yes, there is a lot to rewrite. if someone wants to make some notes on peer relations, be my guest
<marcoceppi> jcastro: having weird build errors over here, blew away my copy of 1.11.3.1 too soon
<jcastro> marcoceppi: how long was local on "Pending" for you?
<marcoceppi> jcastro: do a ps and look for wget
<jcastro> hmm nope, must be something else
<jcastro> I'm going to do some juju website maintenance, will get back to this shortly.
<marcoceppi> likewise
<kirkland> marcoceppi: okay, next question...  I have juju-ssh'd to a unit, and I want to run relation-list on a command line, so that I can debug the output
<kirkland> marcoceppi: I suspect I need to source some environment or something?
<marcoceppi> kirkland: you kind of can't
<kirkland> marcoceppi: ?  that seems unkind
<marcoceppi> well, you can, but it's going to require quite a few things. You're better off putting juju-log `relation-list` into your hook so you can just tail the log instead
<marcoceppi> kirkland: there used to be a pyjuju command juju debug-hooks that would allow you to, but that's not been ported to juju-core yet
<kirkland> marcoceppi: does that mean re-deploying?
<kirkland> marcoceppi: or can I manually hack it into /var/lib/juju/somewhere
<kirkland> marcoceppi: yeah, I've used juju debug-hooks before
<marcoceppi> kirkland: well, you could edit the charm live and then add a unit and watch the output (what I sometimes do during development)
<marcoceppi> one sec
<marcoceppi> /var/lib/juju/agents/unit-*/charm
<marcoceppi> jcastro: http://imgur.com/a/L6QJk
<ahasenack> charmers! :)
<ahasenack> hi, can someone review https://code.launchpad.net/~stub/charms/precise/postgresql/bug-1187508-allowed-hosts/+merge/174771 please?
<ahasenack> 52 lines of diff
<ahasenack> just 13 new lines
<hazmat> jcastro, re pending local, about 3-5m on ssd and fios
<thumper> marcoceppi, jcastro: how is your local provider playing going?
<marcoceppi> thumper: got juju-gui running on it, have a bug to report if I can reproduce it though (destroy-environment does weird things now)
<thumper> weird things in what way?
<marcoceppi> thumper: it complains that it can't delete the /var/lxc/auto files
<marcoceppi> because it seems to delete it before it checks for it
<marcoceppi> I had to touch them all before it let me actually destroy environment
 * marcoceppi bootstraps again to verify it wasn't a one time thing
<thumper> marcoceppi: let me try here...
 * thumper twiddles thumbs waiting for things to spin up
<thumper> marcoceppi: I'm wondering if you have a larger test deployment that I could try
<thumper> wondering how much stress it can handle
<marcoceppi> thumper: I had 6 machines running earlier without much issue
<marcoceppi> but it's an i7 and 32G of ram
<thumper> destroy machine seemed to work without issue
 * thumper waits for the wordpress agent to start
<thumper> before killing it
<marcoceppi> thumper: here we go
<marcoceppi> http://paste.ubuntu.com/5888895/
<thumper> yep I get an error too
<thumper> I wonder...
<marcoceppi> thumper: http://paste.ubuntu.com/5888896/
<marcoceppi> It's like it deletes it, then checks if it exists
<marcoceppi> pretty minor, keep running destroy-environment and it clears out eventually
<thumper> dude, that isn't minor
<thumper> that is freaking annoying
<thumper> I'll fix-it right now
<marcoceppi> well, relatively speaking
<thumper> I think it was an assumption on my part
<thumper> it appears that lxc-destroy removes the auto restart symlink for us
<thumper> and we are trying to remove it too
<marcoceppi> I haven't tested reboot survivability
<marcoceppi> but I'll probably give that a go later tonight
 * thumper tests manually
<marcoceppi> Otherwise I haven't found anything else with local provider just yet
<marcoceppi> outside what we talked about last night
 * thumper nods
<thumper> that is it
<thumper> lxc-destroy removes the link
 * thumper submits fix
 * marcoceppi trembles at the thought of having to recompile gojuju
<thumper> marcoceppi: what is wrong with "go install ./..."
<thumper> ?
<marcoceppi> thumper: I've just not had a lot of luck with it since I started using it yesterday
<thumper> marcoceppi: now to get someone to review it...
<marcoceppi> thumper: So I've been using `go get -v launchpad.net/juju-core/...`; should I be doing something different?
<thumper> marcoceppi: hmm...
<thumper> that works the first time
#juju 2013-07-19
<kirkland> marcoceppi: howdy!
<kirkland> marcoceppi: okay, so my charm is working well now with the autogenerated input
<marcoceppi> kirkland: awesome!
<kirkland> marcoceppi: now, I'd like to pass in data via the config
<kirkland> marcoceppi: how do I do that?
<marcoceppi> kirkland: in the config-changed hook you'll want to `config-get <config-key>` to get the value of that key
<kirkland> marcoceppi: right
<kirkland> marcoceppi: how do I pass a new configuration to a running service?
<marcoceppi> Oh, `juju set john <config-key>="value"`
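The two halves of this exchange can be sketched together; the option name "wordlist-url" is made up for illustration, and since config-get only works inside a hook environment, the hook is written out and syntax-checked rather than run:

```shell
# Hook side: config-changed runs on every unit after `juju set`.
mkdir -p hooks
cat > hooks/config-changed <<'EOF'
#!/bin/bash
set -e
url=$(config-get wordlist-url)   # prints the current value of one option
juju-log "wordlist-url is now $url"
EOF
chmod +x hooks/config-changed
bash -n hooks/config-changed && echo "syntax OK"

# Operator side: push a new value, which queues config-changed everywhere:
#   juju set john wordlist-url="http://example.com/wordlist.txt"
```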
<jcastro> marcoceppi: relation-failed hook is failing on discourse
<jcastro> I'm just messing with it for my demo
<adam_g> jcastro, how do i set charm rating thru manage.jujucharms.com ?
<jcastro> adam_g: on the top right you need to login
<jcastro> and then go to the charm page
<adam_g> jcastro, oh. login. duh.
<jcastro> there is a link that is real small in the Links section
<jcastro> Quality Assessment
<jcastro> click on that and just submit the webform
<jcastro> adam_g: the form can be flakey with SSO so like, when you get to the form try to complete it quickly
<adam_g> jcastro, hmm not seeing that link. only Repository   Bugs   Ask a question
<adam_g> jcastro, you mind if i email to you?
<jcastro> yeah just email me
<jcastro> I'll handle it
<jcastro> adam_g: this is for ... glance?
<adam_g> jcastro, cinder + glance
<adam_g> oh, i see the link on glance
<marcoceppi> jcastro: I thought you weren't going to use it for your demo?
<marcoceppi> the version in the "charm store" is super old
<marcoceppi> I'll sync with github in a few
<jcastro> marcoceppi: I was just rehearsing and decided to try it
<jcastro> marcoceppi: liferay not working at all is weird, but I wonder if it just needs more than a small instance
<marcoceppi> jcastro: It worked fine during my tests on the local provider
<marcoceppi> so that's odd
<adam_g> jcastro, http://paste.ubuntu.com/5891692/ here is rating for cinder if you can input it. i apparently cannot, but i was able to for glance
<jcastro> I'll handle it
<jcastro> thanks dude!
<Frank81> ya, here is a channel. ok, i have a basic problem and maybe a real bug ^^ i am new to juju; i installed it, made an example yaml config, edited the amazon details, deleted all other envs out of it, and even exported the values as env vars
<Frank81> but i always get error: environment has no access-key or secret-key
<Frank81> what can i do ?
#juju 2013-07-20
<adam_g> jamespage, swift-storage and cinder redux are about done. i factored out some more common stuff to charm-helpers that is sharable among them, as well as some other merges https://code.launchpad.net/~charm-helpers/charm-helpers/devel/+activereviews
<jcastro> Frank81: what version of juju?
<adam_g> jamespage, nova-compute is off to a good start, too: lp:~openstack-charmers/charms/precise/nova-compute/python-redux
<Frank81> hi again i repaste since i wasn't around last 7 hours
<Frank81> 00:51:32 - Frank81: ya, here is a channel. ok, i have a basic problem and maybe a real bug ^^ i am new to juju; i installed it, made an example yaml config, edited the amazon details, deleted all other envs out of it, and even exported the values as env vars
<Frank81> 00:51:46 - Frank81: but i always get error: environment has no access-key or secret-key
<Frank81> 00:51:53 - Frank81: what can i do ?
<AskUbuntu> Ceph Openstack HA installation with maas and juju but partially on physical servers and few on virtual machines | http://askubuntu.com/q/322235
#juju 2014-07-14
<stub> If my charm only runs under precise with the cloud:icehouse archive, should I automatically add this repository? Or should I require the operator to explicitly specify it, in case they need to pull dependencies from elsewhere?
<blackboxsw> hey gents
<jcastro> heya lazyPower
<jcastro> Do you have the URL to your bundle?
<StoneTable> wrt charm-tools on lp, what determines when a build is made available for download? i.e., 1.2.10 is the latest in that branch, but only 1.2.9 is packaged up?
<StoneTable> I ask because homebrew (osx) pulls 1.2.9, which doesn't support templates w/charm-create
<marcoceppi> StoneTable: 1.2.10 should be packaged up
 * marcoceppi checks
<marcoceppi> StoneTable: rather, 1.3.2
<marcoceppi> I need to update homebrew
<StoneTable> Maybe I'm looking in the wrong place? https://launchpad.net/charm-tools/+download
<marcoceppi> StoneTable: refresh, I just updated the 1.3.2 release
<marcoceppi> I forgot to do so for the last two releases
<StoneTable> Excellent, thanks! I'll update the homebrew formula and test/send pull request
<marcoceppi> StoneTable: awesome, thank you!
<marcoceppi> StoneTable: it should be noted, I think we added some dependencies in the 1.3 series. The way the formula works, this won't be a problem, but I don't have a Mac to test
<StoneTable> I noticed that. I'll test and update as needed. Thanks!
<jcastro> asanjar, got it, bundle deploying!
<asanjar> great
<jcastro> mysql:
<jcastro>       charm: "cs:trusty/mysql-1"
<jcastro>       num_units: 1
<jcastro>       dataset-size: 512M
<jcastro> is that the correct formatting for dataset size?
<marcoceppi> yes
<jcastro> not sure if I should have " or not around the 512M
<marcoceppi> jcastro: it will work either way, there's no special characters there
<jcastro> http://paste.ubuntu.com/7794328/
<jcastro> marcoceppi, I'm still getting that despite setting that config value
<jcastro> any ideas?
<marcoceppi> "Initializing buffer pool, size = 12.3G"
<marcoceppi> that is really wrong
<jcastro> yeah, I set the config value though
<marcoceppi> jcastro: idk what to say
<marcoceppi> I don't have time to debug atm
<jcastro> I mean, I resolved it by hand
<jcastro> I am just wondering why it didn't take with the bundle config
<jcastro> marcoceppi, I will  buy you a bottle of something when you land your new rev of this charm
 * marcoceppi now has incentive
<jcastro> asanjar, hey so
<jcastro> juju run --service hadoop-master "/usr/local/hadoop/terrasort.sh"
<jcastro> that is both misspelled and doesn't exist on disk
<jcastro> juju run --service hadoop-master "/var/lib/juju/agents/unit-hadoop-master-0/charm/scripts/terasort.sh"
<jcastro> I found that on the hadoop-master node
<jcastro> but running it tells me hadoop command not found
<jcastro> asanjar, ^
<jrwren> jcastro: its likely I broke that charm.
<jrwren> jcastro: I'll post a fix
<jrwren> jcastro: never mind, I'm totally wrong. That wasn't me.
<asanjar> jcastro: lets look at it .. https://plus.google.com/hangouts/_/gvngjcf6moezlu53i4noau5x4ia
<jcastro> This party is over...
<jcastro> asanjar, go to the team hangout
<asanjar> jcastro: I am there
<asanjar> can u hear me
<jcastro> lazyPower, ping
<lazyPower> pong
<lazyPower> https://code.launchpad.net/~lazypower/charms/bundles/oscondemo/bundle
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: marcoceppi & arosales] || News and stuff: http://reddit.com/r/juju
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: marcoceppi & arosales || News and stuff: http://reddit.com/r/juju
<lazyPower> JoshStrobl: ping
<schegi> jamespage, online??
<jamespage> schegi, hey!
<jcastro> lazyPower, http://pythonhosted.org/juju-deployer/config.html
<jcastro> that has example constraints for deployer
<lazyPower> jcastro: thanks, looking now
<lazyPower> jcastro: circling back - i can make another bundle with constraints as like oscon-recommended - would that be acceptable?
<jcastro> sure
<lazyPower> ok, i'll sneak that in the next update
<mbruzek> Hello Jose
<mbruzek> Are you available?
<jose> hey mbruzek
<jose> yeah
<mbruzek> The chamilo charm failed to install, I saw this error in the log files. Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
<mbruzek> is that a known bug?
<marcoceppi> mbruzek: this sounds like apt-get update not being run in the local provider template
<mbruzek> yes
<mbruzek> likely it is.
<mbruzek> not getting run
<mbruzek> You will have to bug thumper for that fix.
<marcoceppi> mbruzek: that's isolated to local provider templates
<marcoceppi> mbruzek: just rebuild your template or start the template, run apt-get update/ apt-get dist-upgrade, then stop it
<mbruzek> I deleted the template today and got a new one today.
<marcoceppi> mbruzek: did you also delete the cloud image cache?
<mbruzek> marcoceppi, I did juju clean local
<mbruzek> I believe that deletes the image cache now right?
<marcoceppi> mbruzek: for s&g, can you check the timestamp on /var/cache/lxc/cloud-*/*
<jose> mbruzek, marcoceppi: is this a known/reported bug? it's been affecting me and some other people on the local provider
<mbruzek> Jun 30 05:29
<marcoceppi> mbruzek: last cloud images was released the 12th and the 14th
<marcoceppi> you have "stale" cache
<mbruzek>  for power?
<marcoceppi> I'll check the juju-clean plugin
<marcoceppi> I don't know about that
<mbruzek> sudo rm -rf /var/cache/lxc/cloud-* || true
<mbruzek> I see that line in juju-clean
<marcoceppi> is that ppc63el ?
<marcoceppi> is that ppc64el ?
<mbruzek> -rw-r--r-- 1 root root 188205427 Jun 30 05:29 /var/cache/lxc/cloud-trusty/trusty-server-cloudimg-ppc64el-root.tar.gz
<marcoceppi> yeah, it's out of date
<marcoceppi> if you run that command does it actually delete the image?
<mbruzek> no...
<marcoceppi> well, there's the problem!
<marcoceppi> can you remove the || true and see if it errors?
<mbruzek> http://pastebin.ubuntu.com/7795340/
<marcoceppi> mbruzek: what does `sudo rm -rf /var/cache/lxc/cloud-*` say?
<mbruzek> marcoceppi, http://pastebin.ubuntu.com/7795354/
<marcoceppi> wtf, why isn't sudo rm -rf wildcard working
<mbruzek> I can not do sudo ls -l /var/cache/lxc/cloud-*
<mbruzek> I have to always use cloud-trusty
<marcoceppi> mbruzek: oh, that's because of permissions
<marcoceppi> you don't have o+x on /var/cache/lxc
<marcoceppi> so you can't list, so rm fails
<mbruzek> tricky.
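An aside on the failure mode marcoceppi pins down here: glob expansion happens in the calling user's shell *before* sudo elevates, so if that user cannot read /var/cache/lxc the pattern matches nothing, the literal string is passed to rm, and `rm -rf` exits 0 without deleting anything. A minimal sketch (paths are examples, not the real cache):

```shell
# The calling shell expands the glob before sudo ever runs. When the
# pattern matches nothing (e.g. an unreadable or missing directory),
# POSIX sh passes it through literally, so rm -rf silently removes
# nothing. The same ordering is visible without sudo:
pattern='/no-such-dir/cloud-*'
expanded=$(echo $pattern)   # no match, so the literal pattern survives

# Workaround: let root's own shell do the expansion, e.g.:
#   sudo sh -c 'rm -rf /var/cache/lxc/cloud-*'
```

The `sudo sh -c '…'` form works because the inner shell runs as root and can read the directory when it expands the glob.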
<marcoceppi> luuhhammeeee
<marcoceppi> I'll put a proper patch up tonight
<marcoceppi> for now, just delete the cloud-trusty and cloud-precise
<marcoceppi> rebootstrap, etc
<mbruzek> well power does not support precise so I will just add cloud-trusty for now.
<lazyPower> marcoceppi: should we be implicit with that pathing in the rm command then?
<marcoceppi> lazyPower: no, I'm just going to loop over all the known working series for now
<lazyPower> ok.
<marcoceppi> I mean, it's yes to your question
<marcoceppi> but I'm just going to loop it
<lazyPower> i got what you were saying :)
<marcoceppi> I think there is a way to get all the code names on Ubuntu somehow
<marcoceppi> ah, found it
<mbruzek> marcoceppi, I would like to know more about that
<mbruzek> jose I did open a bug about this here: https://bugs.launchpad.net/juju-core/+bug/1336353
<_mup_> Bug #1336353: juju should run apt-get update <apt-get> <proxy> <stale> <update> <juju-core:Triaged> <https://launchpad.net/bugs/1336353>
<jose> thanks mbruzek
<mbruzek> jose the developers have not fixed it yet; it is unassigned. I would appreciate some actual user input on this bug.
<jose> mbruzek: I'm writing a comment right now :)
<mbruzek> jose, I knew you would!
<jose> :P
<marcoceppi> mbruzek: egrep -Eio "^Suite: ([a-z]+)$" /usr/share/python-apt/templates/Ubuntu.info | awk '{print $2}' | sort | uniq
<marcoceppi> there's probably a more condensed way to do that, but that will work
<mbruzek> Works on Power!
<mbruzek> One of the few things that does.
<lazyPower> gnarly
<mbruzek> Actually marcoceppi, it has "devel" in there.  That was not one of the releases was it?
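The loop marcoceppi describes for juju-clean might look like the sketch below (assumptions: the series list is hard-coded rather than parsed from Ubuntu.info, "devel" is skipped since it is a pseudo-series and not a release, and the cache root is a parameter so the snippet isn't tied to /var/cache/lxc; the real plugin would run this under sudo):

```shell
# clean_series_caches CACHE_ROOT
# Remove the per-series cloud-image caches by explicit name, so the
# path expansion happens inside this script rather than in a calling
# shell that may lack read access to the parent directory. The "devel"
# pseudo-series is filtered out.
clean_series_caches() {
    cache_root="$1"
    for series in precise trusty devel; do
        [ "$series" = devel ] && continue
        rm -rf "${cache_root}/cloud-${series}"
    done
}
# e.g. (as root): clean_series_caches /var/cache/lxc
```
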
<dpb1> Hi all -- is calling config-changed on node reboot expected?
<ahasenack> marcoceppi: hi, do you know if juju is really supposed to call config-changed when a unit is rebooted?
<schegi> someone experience with the hacluster charm?
<mbruzek> dpb1, ahasenack, I suspect more information would be needed, but I am going to say, no the state of the VM should not cause config-changed to be called.
<dpb1> mbruzek: here is the log from unit start to where our config-change logging starts: https://pastebin.canonical.com/113471/
<mbruzek> ahasenack, Can you give us more information?
<dpb1> mbruzek: I can give you whatever you would like.  I'm on the node here
<ahasenack> mbruzek: it just happens. I'll reboot a "ubuntu" node and see the logs where it tries to call config-changed
<mbruzek> ahasenack, This is news to me, I have no idea how juju would keep track of the state of the charm like that.
<mbruzek> ahasenack, Is this behavior causing a problem?
<ahasenack> oh yeah
<mbruzek> ahasenack, Because if config-changed is truly idempotent it should be OK to run that script over and over again.
<ahasenack> dpb1 is debugging it, so far seems it's a race between the "service foo start" from config-changed and another "service foo start" from the normal boot process
 * mbruzek is testing this on his machine
<dpb1> yes, at least with start-stop-daemon, there is no locking, so there are races around creation of the pidfile..
<ahasenack> mbruzek: from the docs, it might even be it's on purpose:
<ahasenack> config-changed runs in several different situations.
<ahasenack>     immediately after "install"
<ahasenack>     immediately after "upgrade-charm"
<ahasenack>     at least once when the unit agent is restarted (but, if the unit is in an error state, it won't be run until after the error state is cleared).
<ahasenack> see the "at least" bit
<mbruzek> ahasenack, well there you have it.  I was unaware of that.
<mbruzek> ahasenack, The unit agent must signal to juju that it was restarted
<mbruzek> ls
<schegi> anyone experienced with the hacluster charm?
<sebas5384> i'm starting the scaling part of the charm I'm doing
<sebas5384> what should be the better path here?
<sebas5384> ceph, nfs, glusterFS ... ?
<sebas5384> there's a storage charm too
<sebas5384> i'm really tempted by ceph
<sebas5384> someone? :P
<sebas5384> ceph seems to be really big for a drupal charm hehe
<jose> well, I've seen mediawiki use nfs
<jose> as well as owncloud
<sebas5384> yeah jose, and the wordpress charm too
<jose> give nfs a shot, maybe?
<sebas5384> yeah but something about distributed file systems caught my attention
<sebas5384> ceph and glusterfs distribution is awesome
<sebas5384> but yeah at first nfs is a good option
<sebas5384> i don't know man, scaling with nfs doesn't seem right to me hehe
<sebas5384> but yeah, I should give it a try first
#juju 2014-07-15
<jose> sebas5384: well, you don't lose anything by trying, if it works then awesome! :)
<sebas5384> thks jose
<jose> np, and let me know if you need a hand with testing!
<sebas5384> if there's some pt-br out there take a look http://blog.taller.net.br/taller-lanca-serie-de-infograficos/ :)
<sebas5384> the next one is going to be about ubuntu juju :)
<schegi> jamespage, online?
<jamespage> schegi, I am
<schegi> hey, i got some issues when redeploying a destroyed ceph charm. it seems like something is going wrong with the zapping of the disks. here is the log http://pastebin.com/mbtXtfKA. calling ceph-prepare-disk --zap-disk /dev/sdX manually solved the problem for me. just to let you know
<schegi> jamespage, don't know if this is true for the ceph charm in the store, i used your network-split version.
<schegi> btw anyone out there experienced with the hacluster charm??
<jamespage> schegi, that would be me as well
<jamespage> schegi, hmm - you might need to use the osd-reformat configuration option
<schegi> yeah, thats the problem it is set to yes.
<jamespage> schegi, hmm - I've not exercised that option with btrfs - we test with xfs as default
 * jamespage pokes at the code
<schegi> here is my config, http://pastebin.com/mXBTyuLs
<schegi> jamespage, might be a btrfs problem. but as i said, when i log in to the machine and perform the zapping manually the error state resolves and the odds populate.
<schegi> osds
<jamespage> schegi, the charm should perform a pre-zap prior to calling ceph-prepare-disk - looks like that is not enough....
<jamespage> hmm
<schegi> jamespage, the error is a bit strange, currently deploying ceph to 3 nodes and ceph-osd to 10 more nodes and the error occurs from time to time but not reliably on all of the nodes. But if it occurs, on a node then for all of the disks. Btw the partition table in use it GPT.
<schegi> And as described, just logging in to the node and performing ceph-prepare-disk --zap-disk /dev/sdX manually from the console resolves the error and the deployment succeeds.
<schegi> But you have to do this for all of the disks on the node seperately
<jamespage> schegi, how ubiquitous is the --zap-disk option in ceph-prepare-disk? does that go back to older versions? if so I can tweak how that's called
 * jamespage checks
<schegi> jamespage, to be honest, i have no idea, i am just 4 weeks old with ceph/juju and all this. I am a scientist and just got hands on a bunch of hardware and am trying to make use of it.
<schegi> :)
<jamespage> schegi, its sufficiently supported in older versions back to grizzly/raring so I'll poke that in
<jamespage> schegi, I just nudged that into my network-splits branches
<jamespage> if you want to give it a spin
<jamespage> schegi, what question did you have about the hacluster charm?
<schegi> sure, if i have to redeploy my ceph i'll try. currently i hang a bit on percona/corosync/pacemaker but i am not familiar enough with the whole thing to figure out why. could be caused by my network configuration. seems like all nodes in the corosync cluster are running separately and not connecting to each other.
<g0d_51gm4> hi guys, I've a question for you: I made a test after installing MaaS and Juju, deployed a juju-gui, and it works perfectly... why, each time I re-boot the host machine and re-launch the VM, Region and Cluster Controller with the nodes, is juju's status always "WARNING discarding API open error: unable to connect to "wss://1.1.2.21:17070/environment/814bd22b-4438-4ea6-8f22-92b39b866a42/api"
<g0d_51gm4> ERROR Unable to connect to environment "maas". Please check your credentials or use 'juju bootstrap' to create a new environment". to resolve that I always have to remove the directories .juju/environments and .juju/ssh, re-sync juju and make the bootstrap again???? thanks
<g0d_51gm4> the ubuntu installer re-begins the installation of the OS? thanks a lot
<g0d_51gm4> can anyone answer me?
<avoine> g0d_51gm4: I guess the juju team are still sleeping at that time
<avoine> most of them are in pst time I think
<g0d_51gm4> avoine: maybe you're right!!! last week they gave support at this time!!!!
<g0d_51gm4> it's strange but it's ok!!!
<lazyPower> g0d_51gm4: in the MAAS gui are your nodes still marked as assigned to whichever user's credentials are plugged into juju?
<lazyPower> if the machine is going through the fastpath installer again, that typically means the node is not registered with maas.
<g0d_51gm4>  lazyPower: hi, how are you? anyway the answer to your q is yes
<lazyPower> and the bootstrap node is online, with the juju processes running?
<lazyPower> and the IP address of the node has not changed?
<g0d_51gm4> lazyPower: all nodes are registered on MaaS
<lazyPower> registered != assigned.
<lazyPower> i'm talking the node is assigned to the user.
<lazyPower> http://i.imgur.com/vGhLGZA.png
<lazyPower> as you can see in this screenshot - i have several nodes registered. Only node 12, 5, 4, and 0 are actually assigned to my juju environment
<g0d_51gm4> the ip address is not changed is always the same used to allocate the node to MaaS
<g0d_51gm4> after to create a juju environment I've run the following commands: "juju sync-tools -e maas" and "juju bootstrap -e maas --debug". with last one I see the node starts and the ubuntu's installer runs. at the end of this process the prompt of that one is "Bootstrapping Juju machine agent
<g0d_51gm4> Starting Juju machine agent (jujud-machine-0)
<g0d_51gm4> 2014-07-15 13:32:12 INFO juju.cmd supercommand.go:329 command finished"
<g0d_51gm4> I've seen your image and I've the same situation with my node.
<g0d_51gm4> after that I've also deployed juju-gui and it works well.
<marcoceppi> ahasenack: I think it is, at least I think it's coded that way
<marcoceppi> ahasenack: is that causing a problem?
<marcoceppi> ahasenack: re: calling config-changed
<g0d_51gm4> the problem is present when I shut down the host machine (my PC). why, after I re-start the whole virtual environment and, from MaaS, start the node where I've deployed juju-gui, does juju give me that error!!!!
<lazyPower> i don't exactly know, i've got a vmaas setup that comes back up when i restart the host.
<lazyPower> i'm thinking, but not really coming up with anything that would be an inherent blocker
<lazyPower> are you using KVM based virtual machines or bare metal?
<g0d_51gm4> and another q is: adding a second node to MaaS how can I register the new node to the same juju's environment used for the first one?
<g0d_51gm4> lazyPower:I use KVM
<lazyPower> g0d_51gm4: are you wanting to manually add machines? otherwise juju deploy will communicate with maas and maas will return a machine randomly to add to the environment.
<ahasenack> marcoceppi: yes, it caused a problem with a reboot
<ahasenack> marcoceppi: a race between juju running config-changed, which restarts a service
<ahasenack> marcoceppi: and the normal service start that happens during boot
<ahasenack> the same initscripts called twice, at the same time
<g0d_51gm4> no, I've added the vm node via PXE
<g0d_51gm4> http://imgur.com/4SylrD6
<marcoceppi> ahasenack: I know config-changed is run at certain times during certain events, may want to chat with a dev about it, but it sounds like the config-changed hook isn't 100% idempotent?
<g0d_51gm4> http://imgur.com/1xJbvyk for the virsh
<ahasenack> marcoceppi: it is, you can run it as many times as you want
<marcoceppi> ahasenack: then why is there a race condition forming?
<ahasenack> marcoceppi: because of the boot!
<marcoceppi> ahasenack: which charm - out of curiosity
<ahasenack> marcoceppi: during boot, ubuntu starts services
<ahasenack> marcoceppi: landscape-server
<ahasenack> marcoceppi: so you have two things calling "service sometstuff start"
<ahasenack> marcoceppi: config-changed, via juju
<ahasenack> marcoceppi: and the boot process of the machine
<marcoceppi> Okay, so I follow why there's a race condition. Upstart is bringing up a service while config-changed is doing the same
<ahasenack> right
<marcoceppi> ahasenack: why not just do `restart somestuff || start somestuff`
<marcoceppi> instead of assuming that you will always need to start it
<g0d_51gm4> from the first image: if I add a second node and leave it in ready status, what is the way to tell juju that a new node is present, and how do I add it to the same environment?
<ahasenack> marcoceppi: we assume that if config-changed was called, something changed
<marcoceppi> ahasenack: that's not a valid assumption
<ahasenack> it was, from reading the doc
<marcoceppi> any hook can be called at any time for any reason
<ahasenack> which also states that config-changed is called everytime the agent is started
<marcoceppi> configuration change is just an example where config-changed is invoked
<ahasenack> so doc-wise, it's correct
<ahasenack> marcoceppi: I wonder how many charms are tested regarding a reboot, though
<ahasenack> we all seem so concerned with deployments
<marcoceppi> g0d_51gm4: you can do a deploy, or allocate the node with `juju add-machine`
<ahasenack> marcoceppi: we are fixing it in our charm, sure. But heads up, a reboot can ruin your day
<marcoceppi> ahasenack: good question, not sure, I'll make sure we add it to our testing infrastructure
<g0d_51gm4> ok
<marcoceppi> tvansteenburgh: ^^
<g0d_51gm4> thanks marcoceppi
<ahasenack> marcoceppi: we probably hit this because we start several services, like 8 or more
<marcoceppi> tvansteenburgh: tl;dr during testing, when we stand up a deployment, we should stop/start the agents on all the machines and see how the charm reacts
<marcoceppi> ahasenack: it's a good point to test for, this wasn't always the behaviour in juju
<ahasenack> marcoceppi: still, I would question this decision, to have config-changed run at agent start
<marcoceppi> ahasenack: if the hooks is coded with the notion that it's called at anytime, and the right idempotency guards are in place, it /shouldn't/ be a problem
<g0d_51gm4> any suggest to resolve the problem after the reboot of the host machine?
<ahasenack> marcoceppi: I'm not convinced
<ahasenack> marcoceppi: the hook can be called as many times as you want
<ahasenack> marcoceppi: but sure, an admin could be calling "service foo restart" at the same time config-changed is being run by some other reason
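The `restart || start` guard marcoceppi suggests can be wrapped in a tiny helper for a config-changed hook. A sketch only, assuming upstart-era `service` semantics; `safe_restart` and the service name are illustrative, not the landscape-server charm's actual code:

```shell
# Restart the service if it is already running, otherwise start it.
# Avoids blindly calling "start" from config-changed and racing the
# boot-time start of the same job.
safe_restart() {
    name="$1"
    if service "$name" status >/dev/null 2>&1; then
        service "$name" restart
    else
        service "$name" start
    fi
}
# e.g. in config-changed: safe_restart some-service
```

This doesn't remove the race entirely (upstart could still be mid-start when the status check runs), but it narrows the window compared to an unconditional `service foo start`.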
<marcoceppi> g0d_51gm4: I've not experienced that issue before, sorry
<g0d_51gm4> I've recreated a new virtual environment with MaaS and Juju, I've made the bootstrap of the environment, deployed the juju-gui and until here everything works. I've rebooted the host machine, re-run the whole vMaaS environment and checked juju's status, and I've the same problem with juju "ERROR Unable to connect to environment "maas". Please check your credentials or use 'juju bootstrap' to create a new environment". to resolve that I've always to remove the directories .juju/environments and .juju/ssh and re-sync juju and make the bootstrap again"
<lazyPower> g0d_51gm4: do you have your machines configured to only boot over pxe?
<lazyPower> you may want to add hdd booting as well to your boot options, so it doesn't re-provision the machine on boot if its assigned. it *shouldnt* be doing that anyway - but its worth investigating.
<g0d_51gm4> I've also made that.
<g0d_51gm4> I've booted the node with its HD via MaaS, the node status is allocated, the ubuntu prompt works: but if I run that command the error is present!!!
<lazyPower> can you juju ssh 0?
<lazyPower> or ssh into the node and validate that the juju processes are running?
<lazyPower> show me the output of -  initctl list | grep juju  on your bootstrap node
<g0d_51gm4> the ssh connection using its hostname works
<g0d_51gm4> the output is "jujud-unit-juju-gui-0 start/running, process 3760
<g0d_51gm4> juju-db start/running, process 3667
<g0d_51gm4> jujud-machine-0 start/running, process 3645"
<g0d_51gm4> I've just stop the node via MaaS and re-start it
<lazyPower> if you stop the node via maas, its going to do a fresh cloud boot
<lazyPower> meaning full pxe install, no juju environment will be left on the virtual machine. its pristine, just like you chose to spin up a new VM on a public cloud.
<g0d_51gm4> just a second, at the same time i've added also the second node and used the command marcoceppi suggested me before
<g0d_51gm4> and the ubuntu's installer is started
<g0d_51gm4> in case i've 2 juju environments (maas0 and maas1), to add the new node to maas2, in the command juju add-machine do I have to specify some other parameter?
<schegi> does anyone know how to force juju to use a certain network interface? i got a setup with 3 different networks: 2 bonds over 2 1GBit networks and 2 10GBit networks. I'd like juju to use one of the 1 Gig bonds but it always uses the 10GBit interfaces, when using e.g. juju ssh X. This also leads to charms identifying themselves with this particular interface.
<rbasak> sinzui: around? On SRU review, arges found 1) a binary in the upstream tarball for 1.18.4, and 2) that it changed from 1.18.1.
<g0d_51gm4> lazyPower: i'm rebooting the host machine see y after few seconds
<rbasak> sinzui: pkg/linux_amd64/github.com/errgo/errgo.a
<sinzui> rbasak, The cause is fixed, but was not fixed until a few weeks ago. The devs of golang changed something to cause that crack.
<sinzui> rbasak, The fix was to rm pkg/* when the tarball is assembled. It is safe to do so
<sinzui> rbasak, I also think it is safe to stab the golang person for doing that nonsense
<rbasak> sinzui: thanks. Coordinated in #ubuntu-release, the conclusion is that I'll repack the tarball with the file missing for the Trusty SRU, as a one-off.
<sinzui> rbasak, thank you +1
<rbasak> sinzui: I can delete the pkg/ directory itself too, I take it
<rbasak> ?
<g0d_51gm4> lazyPower: the same error http://paste.ubuntu.com/7799040/
<lazyPower> g0d_51gm4: i'm not sure what to recommend at this point. I would reach out via the juju mailing list and see if another member has run into this scenario before.
<g0d_51gm4> if i remove the directories .juju/environments and .juju/ssh and run the commands "juju sync-tools -e maas" and "juju bootstrap -e maas --debug", it restarts the node and installs ubuntu
<g0d_51gm4> lazypower: it's incredible but also for  the third time i've received the same error!!!i don't know where is the problem with that!!!
<lazyPower> g0d_51gm4: it really sounds like something is going on with your vm's thats wiping out the juju env, or replacing the ssh key
<lazyPower> thats the only thing i can think of, but i dont know what it would be
<rharper> is there any control over when juju bridges eth0 for containers ?  like a value in environments.yaml ?
<marcoceppi> rharper: I believe so, but I think it's undocumented, let me take a look at the source code
<rharper> marcoceppi: do you know it wouldn't by default?  I've got an openstack environment I'm deploying to; and it's not bridging eth0 by default, so containers come up on lxcbr0 (10.0.3.x) -- but I really want it to bridge eth0 so the public ip comes from the same network as the host
<rharper> when I use a maas provider, it does this "automatically"
<alexisb> rharper, I know they are not online right now but that is a good question for dimiter and jam
<alexisb> fwereade, may be around and could potentially have a quick answer for you
<rharper> alexisb: cool, thanks.
<alexisb> rharper, feel free to send dimiter mail
<rharper> ok
<data> hey, I have googled high and low, but is it possible to do mixed deployments in juju? Because I have so far set up a dozen services "locally" in VMs (maas provisioning), but would like to add other, preexisting servers to it (manual provisioning).
<sebas5384> juju-local can be installed into a debian machine?
<sebas5384> its been quiet around here lately hehe
<sebas5384> jcastro: ping
<jcastro> sebas5384, hi!
<sebas5384> hey jcastro o/
<lazyPower> data: manual provider will give you that level of mixed deployments you're looking for
<sebas5384> today we are going to install juju-local in more than 10 pcs
<sebas5384> :D
<sebas5384> but!! there are some with debian
<sebas5384> some tip about that?
<sebas5384> jcastro: :)
<sebas5384> hey lazyPower o/
<schegi> jamespage?
<lazyPower> hey sebas5384
<schegi> jamespage, will redeploy ceph this evening, have bzr updated your ceph/ceph-osd network-split, but nothing new. can you commit pls
<data> lazyPower: So if I add that environment, will the two be shared?
<lazyPower> right, in the manual provider there are no real limitations. You can have machines in different datacenters entirely
<lazyPower> lag will be a factor
<lazyPower> but its perfectly reasonable to deploy into say, digital ocean and aws and your in-house maas cluster (assuming it has the proper networking to reach all these instances)
<data> k. It's more about ease of deployment for now. It's just for development
<lazyPower> be careful about adding existing nodes that are dirty to juju though
<data> What do you mean by dirty?
<lazyPower> charms always assume a clean cloud image, so if you go adding your corporate ERP server to juju, and deploy something on it, it may do something unintended
<jcastro> sebas5384, I don't think anyone's tried it before
<lazyPower> jcastro: back in 2012 there were some articles on it, i went looking
<data> None of the servers are production servers. Not exactly virgin some of them, but not too bad
<lazyPower> i didn't find anything recent post the go-port on getting juju running.
<lazyPower> (on debian)
<jcastro> lazyPower, was it me posting on the debian-cloud list asking for someone to help on it? :)
<lazyPower> uhm, nope, it was a couple blog posts, some comments on MS's blog, and debian bugs against a package that no longer exists
<sebas5384> shouldn't be like adding some repositories and that kind of stuff?
<jcastro> oh, that was the pyju client in debian
<jcastro> not supporting debian as a deployable OS, which is what he wants
<sebas5384> jcastro: by deployable OS you are saying like, installing juju-local and running charms into lxc containers, right?
<jcastro> right
<sebas5384> jcastro: ahh ok :)
<sebas5384> then its exactly what i want
<sebas5384> htht
<sebas5384> hehe
<lazyPower> i think the components are there, but its largely untested.
<lazyPower> i would give it a go in a manual provider environment
<sebas5384> yeah thats going to be a problem
<sebas5384> troubleshooting is going to be a pain in the a**
<data> while I am here: I had this problem yesterday that juju didn't continue with anything because it was waiting for a server to finish dying. problem was that the server didn't exist anymore
<lazyPower> did you destroy the machine with a --force
<lazyPower> and then destroy the service data?
<lazyPower> often times, i find that hooks get trapped in a failed relationship cycle
<lazyPower> and resolving the relationship bits, after having destroyed the machine with extreme prejudice, works pretty well
<data> it was something like that, I believe. Are you talking about some kind of "service data" or was it another implication of my unfortunate nick :)
<data> I could not figure it out and completely destroyed the environment
<lazyPower> unfortunate implication of the nickname :)
<jrwren> what is a good way to test a relation-broken hook?
<jose> jrwren: I'd say deploying, relating and destroying the relation?
<jrwren> that does a relation-departed, different from broken, AFAIK
<jose> jrwren: I think it's just an alias, though I'm not entirely sure
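For the record, jrwren's distinction holds: `relation-departed` fires once per departing remote unit, while `relation-broken` fires once more after the relation itself is gone, so destroying the relation (not just a unit) is what reaches the -broken hook. A sketch of how to exercise both from the CLI (requires a bootstrapped environment; the mysql/wordpress charms are stand-ins):

```shell
# Sketch (commands shown as comments; needs a live juju environment):
#
#   juju deploy mysql
#   juju deploy wordpress
#   juju add-relation wordpress mysql
#   juju destroy-relation wordpress mysql   # -departed per unit, then -broken
#   juju debug-log                          # watch for *-relation-broken
```
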
<ahasenack> hi, which package has this command:
<ahasenack> andreas@nsn7:~/charms/trusty/ntp$ make sync
<ahasenack> make: charm-helper-sync: Command not found
<schegi__> hey if i bootstrap a maas environment and later deploy a service in an lxc container and i'd like the container to use a predefined bridge (br0) on the server, how do i define the bridge device
<dergrunepunkt> Hi, using maas I run "juju bootstrap" and when de node boots starts the ubuntu installation
<dergrunepunkt> ideas?
<thumper> dergrunepunkt: that is what maas does
<dergrunepunkt> I know
<thumper> I'm not sure what you are asking
<dergrunepunkt> but it shouldn't do it over and over and over again
<thumper> yeah, that sounds wrong :)
<dergrunepunkt> maas should do it once
<dergrunepunkt> once the node is installed it shouldn't install the operating system again
<dergrunepunkt> thumper: I have 3 nodes in the "Ready" state, and when run "juju bootstrap" bloody maas wants to install the operating system again
 * thumper doesn't know much about maas
<schegi__> dergrunepunkt, before bootstrapping no system is installed, it just boots to a pxe image and during commissioning only information about the server is collected and an ipmi user is created.
<schegi__> ready state only means that the node is ready for deployment, if you mean ready in maas.
#juju 2014-07-16
<jam> rharper: marcoceppi: we don't support containers anywhere except for MaaS ATM, we're working on that code (since if you want to make a routable container you need to bridge the network but then also ask for and assign an IP address for the container, etc.)
<jam> rharper: marcoceppi: there is the "network-bridge" environment config, but I'm pretty sure that is only used in MaaS (in 1.20) and local
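A sketch of where the `network-bridge` key jam mentions would live (values are examples; per jam it is only honoured by the maas and local providers in juju 1.20):

```shell
# ~/.juju/environments.yaml fragment (shown as comments; names and
# values are examples, not a complete provider config):
#
#   maas:
#     type: maas
#     network-bridge: br0
```
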
<schegi> hey, if i deploy a service to an lxc container, how do i define the bridge interface the container should use?
<schegi> maas environment bootstraped not local
<schegi> is it enough to make the changes in /etc/lxc/default.conf /etc/defaults/lxc-net?
<g0d_51gm4> hi guys, this morning I've re-made all steps from the begin to build a vMaaS with Juju and I saw this warning in the vm node "http://imgur.com/rdRC9hm", the error on the status of juju environment is the same......
<jamespage> schegi, the maas provider should automatically setup a br0 -> eth0
<jamespage> schegi, but its very basic right now in the context of multiple networks
<schegi> jamespage, is it possible to somehow force it to use an existing bridge or create one over another interface?
<jamespage> schegi, trying to figure that out
<schegi> jamespage, btw how to destroy containers from juju?
<jamespage> schegi, 'terminate-machine'
<jamespage> use the machine-id of the container you want to destroy
<schegi> just destroying the service doesnt destroy the related container
<schegi> at least in juju status it still exists
<jamespage> schegi, indeed it does not
<jamespage> schegi, you can do a "juju terminate-machine --force 0/lxc/2" for example
<schegi> ah ok nice
<jamespage> which will just rip the machine out of the service and terminate it, but leave the service deployed
<jamespage> but without units
<schegi> jamespage, played around a little bit with lxc. If i log in to a machine i'd like to deploy a service in an lxc container to, and change the lxc settings in /etc/lxc/defaults.conf and /etc/defaults/lxc-net before deployment, and restart all necessary services
<schegi> jamespage, it seems to work but i still got some connectivity issues i think. the container is stuck in pending state and if i look into the container log on the node it was deployed to i'll see lots of these http://pastebin.com/agr77dnK
<schegi> and if i try to ssh into container using juju ssh it states ERROR machine "15/lxc/4" has no internal address
<lazyPower> schegi: what did you change?
<lazyPower> schegi: i've modified lxc container bridge devices and network settings successfully for the local provider
<lazyPower> schegi: relevant post: http://blog.dasroot.net/making-juju-visible-on-your-lan/
<schegi> ok, thats what i tried. only difference my bridge interface is not statically defined but gets its ip via dhcp
<lazyPower> shouldn't make a difference so long as thats being passed off to the bridged device
<lazyPower> and LXC knows about it when it gets fired up
<lazyPower> i did however have some issues with the interfaces racing
<lazyPower> i forget how i solved it, i think it was the order in /etc/network/interfaces
<lazyPower> sometimes the bridge device would collide with the physical adapter and collide
<lazyPower> er, yeah. i need more coffee... cant speak english...
<schegi> btw is this your blog?
<schegi> just to mention /etc/init.d/networking restart does not work anymore in trusty
<lazyPower> it is, the article was written prior to 14.04
<lazyPower> so it was tested on 12.04
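The bridge setup from that article boils down to two config fragments. A hedged sketch for a trusty-era ifupdown/LXC host; device names are examples, and as discussed above a DHCP bridge should behave the same as a static one:

```shell
# /etc/network/interfaces (bridge the physical NIC):
#
#   auto br0
#   iface br0 inet dhcp
#       bridge_ports eth0
#
# /etc/lxc/default.conf (point new containers at the bridge):
#
#   lxc.network.type = veth
#   lxc.network.link = br0
#
# On trusty, apply with `sudo ifdown eth0 && sudo ifup br0` rather than
# the old /etc/init.d/networking restart.
```
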
<rharper> jam: marcoceppi:  would be interested in testing something; in my case, I wouldn't need to assign an IP if the default network already runs DHCP;  AFAICT, this is how it works with MaaS: bridge the host device, lxc runs a container on it, MaaS sees a new dhcp request with a new mac, gets an IP. This should work fine for networks that run DHCP.   I suppose "expose" would require floating ip allocation, but I expect that the code is the same since that's needed for the hosts as well.
<schegi> lazyPower, still no connectivity ERROR machine "15/lxc/6" has no internal address on juju ssh 15/lxc/6
<g0d_51gm4> lazyPower: hi. have a look here http://imgur.com/rdRC9hm. it's the situation after rebooting the host machine!
<lazyPower> hmm
<lazyPower> schegi: this is when you're deploying to an lxc container in a host right?
<lazyPower> eg: deploy mediawiki --to lxc:15
<schegi> right
<lazyPower> that article was for the local provider - i was unaware you were working on an agent-based installation
<lazyPower> that changes a bit - the network bridging there still is pioneer territory for me
<lazyPower> There was a talk on the mailing list about this a while back, about some WIP for a subordinate to handle the networking via relations
<lazyPower> g0d_51gm4: hang on, i doubt i'm going to know the answer to this though, as i stated yesterday, it really appears to be a configuration issue with how your maas/juju environment is setup
<lazyPower> g0d_51gm4: so, there's only so many things that can be wrong here. 1) IP Address changes of the bootstrap node. 2) SSH Keys have changed. 3) the juju services aren't running
<lazyPower> if you cannot juju ssh 0 - and it says permission denied, pubkey - you know its the ssh keys.
<schegi> lazyPower can you point me to the mailinglist or some archives?
<lazyPower> https://lists.ubuntu.com/mailman/listinfo/juju
<lazyPower> archive link is at the top, i'd suggest a signup since there's always activity on the list about the future of juju - its a great place to keep informed about whats coming at you in the next revision
<g0d_51gm4> lazyPower: i was wondering if the problem is the firewall? because every time i start the vMaaS environment i have to flush the table rules and clean them on the Region Cluster to get juju working.
<lazyPower> g0d_51gm4: could be
<g0d_51gm4> lazyPower: the command juju ssh 0 gives me the prompt of the node.
<lazyPower> thats good! that means juju is there and responding
<lazyPower> i had not considered UFW to be honest
<lazyPower> and it stands to reason that it would be the blocker if its reloading your FW ruleset on reboot
<g0d_51gm4> let me try to disable the ufw and
<g0d_51gm4> reboot everything
<schegi> ok its getting really strange. following situation: did a juju deploy --to lxc:15 mysql, juju status shows the container constantly in pending state. the /var/log/juju/machine-15-lxc-X.log shows plenty of these and juju ssh 15/lxc/X returns ERROR machine "15/lxc/6" has no internal address, but using ssh ubuntu@192.168.25.158 succeeds and the machine is pingable
<lazyPower> is it reachable by the bootstrap node though?
<schegi> http://pastebin.com/agr77dnK -->/var/log/juju/machine-15-lxc-X.log
<schegi> pingable and ssh-able from the maas master and all other nodes in the cluster. but not via juju only plain ssh to the ip of the container works
<lazyPower> interesting
<schegi> the whole cluster is also pingable from within the container
<data> lazyPower: I asked yesterday about the mixed deployment with manual mode. I have the problem now when following https://juju.ubuntu.com/docs/config-manual.html, that I get a "su: authentication failure" directly in the beginning
<data> from what I can tell from the debugging output, it thinks the user it logs in with is root
<lazyPower> the user you add manually will need to be a passwordless sudo user
<lazyPower> eg: the ubuntu user on most clouds
<data> it is
<data> http://pastebin.com/NRMH6D6N
<data> it is just not using sudo
<lazyPower> data: what am i looking at here? verbose output from what juju is doing to add the unit?
<data>  juju bootstrap --debug
<data> sorry, normally, I'd have pasted everything, but too many machine names etc. in there, that I don't want in logfiles
<lazyPower> its doing sudo
<lazyPower> if you look at the tail end of that command
<lazyPower> sudo "/bin/bash -c '
<data> I am blind, thanks
<lazyPower> so make sure it is indeed a passwordless sudo user
<lazyPower> i've had that bite me before
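The passwordless-sudo requirement mentioned above can be met with a sudoers drop-in; a minimal sketch, assuming the login user is `ubuntu` and that the drop-in filename is arbitrary:

```shell
# Grant the juju login user passwordless sudo (user name is an assumption).
echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/90-juju-ubuntu
sudo chmod 440 /etc/sudoers.d/90-juju-ubuntu
sudo visudo -c    # sanity-check the sudoers configuration before logging out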
<data> it is
<data> better yet, "someone" created /home/ubuntu on the machine, but it is owned by root:root
<data> is it possible to change the name of the user and the directory it is using? Because we have an ldap on that machine for users, and I'd hate to mess with the pam config
<g0d_51gm4> lazyPower: i disabled the ufw on boot and started the whole vMaaS environment. the node is now working and in juju status i see the node without error
<lazyPower> g0d_51gm4: awesome news!
<g0d_51gm4> lazyPower: a question now is...how do i have to set the ufw to use it
<g0d_51gm4> without disabling it!!!!
<lazyPower> jcastro: http://askubuntu.com/questions/174171/what-are-the-ports-used-by-juju-for-its-orchestration-services - candidate for updating since we no longer use zookeeper
<g0d_51gm4> a firewall appliance in front of the host machine is already present, but to also implement some rules on the host, which ports do i have to open for juju to work?
<g0d_51gm4> i only just saw your answer, sorry!
<g0d_51gm4> do i just have to permit ssh connections from that vnetwork?
<lazyPower> g0d_51gm4: you'll need to expose port 22, and 17017
<g0d_51gm4> juju only uses an ssh tunnel to connect to MaaS, doesn't it?
<g0d_51gm4> 17017 for which service?
<lazyPower> 17017 is the API port for juju
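If ufw is to stay enabled rather than disabled, the rules would look roughly like this. A sketch only: the API port differs between juju versions (17070 is the common 1.x default; verify yours, e.g. with `juju api-endpoints`), so the number below is an assumption.

```shell
# Allow ssh and the juju API through ufw instead of disabling the firewall.
sudo ufw allow 22/tcp      # ssh, used by juju for provisioning and juju ssh
sudo ufw allow 17070/tcp   # juju API port (assumed default; confirm for your version)
sudo ufw enable
```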
<jcastro> lazyPower, yeah, edit away
<lazyPower> jcastro: edits are in queue
<g0d_51gm4> lazyPower: ok thanks, i'll make some further tests.
<schegi> is it impossible to deploy the mysql charm to an lxc container and then relate it to a ceph cluster? i got some issues: during ceph relation changed the charm tries to load the rbd module, which is not possible from within a container. adding the module to the kernel outside does not help, it still tries to load the module
<jamespage> schegi, no
<schegi> how to get it working. in my setting it always fails because it tries to load the rbd module from within the container
<data> just to give a bit of feedback: it is working now. ssh auth failed supposedly due to wrong rights on .ssh (which they weren't), but it had cached the wrong user id: the ubuntu user is local, so the nfs server didn't know it; i created it there, of course the first time around with the wrong user id... But all that mess is now harmonized
<schegi> http://pastebin.com/FJkSRnkY
<schegi> jamespage, this is from the /var/log/juju/unit-mysql-0.log on the node which actually runs the container
<dimitern> schegi, you can modprobe the rbd module on the host first, and it will be in the container as well
<dimitern> schegi, and you can also do lsmod | grep rbd first to check if it's there before modprobbing it
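dimitern's suggestion as a two-liner; a sketch, assuming root access on the LXC host:

```shell
# Containers share the host kernel, so loading rbd on the host
# makes it available inside the LXC containers too.
lsmod | grep -q '^rbd' || sudo modprobe rbd
lsmod | grep rbd    # confirm the module is now loaded
```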
<jamespage> schegi, one sec
<dimitern> schegi, for an example, look in https://github.com/juju/juju/blob/master/provider/maas/environ.go#L535
<jamespage> negronjl, lazyPower: hrmm - mongodb?
<lazyPower> jamespage: what about it? :)
<jamespage> schegi, HA for MySQL and RabbitMQ backed by ceph is not supported in LXC containers
<jamespage> lazyPower, http://paste.ubuntu.com/7803902/
<jamespage> lazyPower, the last commit was quite a large delta - and breaks the relation_set calls
<lazyPower> service('stop', 'mongod')
<lazyPower> thats line 900 on my local copy - how would stopping the service cause failure?
<jamespage> lazyPower, that's not L900 on the charm in the charm store
<lazyPower> ah wait i see, i misread the stack trace
<lazyPower> its down in the relation_set block trying to send the replsets
<jamespage> lazyPower, database_relation_joined
<lazyPower> jamespage: i see this, i've got a fix
<lazyPower> my branch is pretty dirty atm, let me try to cherrypick out the fixes
<lazyPower> its missing a None before the dict its sending
<jamespage> lazyPower, there are a whole heap of incorrect relation_set calls
<jamespage> lazyPower, suggestion - back out the last commit and test this better first....
<jamespage> lazyPower, being explicit is better relation_settings={....}
<lazyPower> what do you mean being explicit? setting the relationship name vs using none?
<jamespage> schegi, sorry - there is a way todo this - the percona-cluster charm provides an active/active mysql option which does not rely on ceph - its all userspace
<jamespage> lazyPower, just saying don't None out relation_id, just be explicit as to which parameter you are intending to pass - in this case its relation_settings...
<lazyPower> this is missing quite a bit of what i've done, what i just fetched from the store
<lazyPower> its still got the gnarly retval block at the bottom
 * lazyPower sighs
<lazyPower> what the hell
<jamespage> lazyPower, I stand by my recommendation to revert and try again
<lazyPower> jamespage: if reverted, your mongos instances will fail though when you go to deploy this cluster.
<lazyPower> when you relate mongos => configsvr
<jamespage> lazyPower, well right now I can't relate any clients to mongodb
<lazyPower> jamespage: what client are you using? i'll add one to the amulet test before i resub a MP
<jamespage> lazyPower, ceilometer
<lazyPower> thanks
<lazyPower> jamespage: and you've got mongodb deployed in standalone correct?
<jamespage> lazyPower, yes
<lazyPower> perfect. easy enough
<ctlaugh> Hi - I'm looking for some help with manual provisioning.  I have a working MaaS / Juju environment that I have deployed Openstack on using all the charms.  I have, however, a single node that I can not add into MaaS and want to manually provision using Juju.  I'm having some trouble understanding what I need to do from here: https://juju.ubuntu.com/docs/config-manual.html
<ctlaugh> ^^ Is bootstrap-host the same host or a different host from what I actually want to provision?
<lazyPower> ctlaugh: the bootstrap node is what warehouses the juju api server. its responsible for orchestrating the environment
<lazyPower> ctlaugh: sounds like you want to add this additional host as a unit into your environment
<lazyPower> jcastro: did we ever get adding units manually to an environment docs published?
<ctlaugh> lazyPower: Does it need to be a different host from the one already running Juju?
<lazyPower> ctlaugh: i dont understand the question.
<ctlaugh> lazyPower: I used juju bootstrap (using the MaaS provider) so that node has all the Juju bits running on it.  (I also deployed the juju-gui on another node but that's probably not important)
<ctlaugh> lazyPower: So, do I need to add another bootstrap node?
<lazyPower> nah, you can manually add hosts into an existing environment aiui
<lazyPower> thats why i'm pinging jcastro to find out if we ever published docs, that functionality landed a few revisions ago
<ctlaugh> Do I need to configure anything for the manual provider in environments.yaml, or just do an add-machine ssh:xxxx?
<ctlaugh> ok - thank you for your help -- sorry, mid-typing before I saw your last msg.
<lazyPower> np ctlaugh
<lazyPower> should just be add-machine ssh:xxx
<ctlaugh> lazyPower: I'll try that and see if it works
<ctlaugh> lazyPower: That seems to have worked.  New machine added and in the process of deploying a charm to it.  Thank you
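For reference, the manual-enlistment step discussed above, sketched with a placeholder address; the target user must have passwordless sudo and the host must be reachable from the bootstrap node:

```shell
# Add an existing, reachable host to a bootstrapped juju environment.
juju add-machine ssh:ubuntu@192.0.2.10   # placeholder user@address
juju status                              # the host appears under a new machine number
```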
<lazyPower> No problem :) glad its sorted
<ctlaugh> lazyPower: Well, the charm just failed to install, but I was expecting something like that to go wrong.  But, at least Juju can see it.
<lazyPower> ctlaugh: if you need help debugging, dont hesitate to reach out
<ctlaugh> I'll go ahead and ask one question before I start digging into it myself... I just ran juju debug-log and got this:
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.rsyslog worker.go:75 starting rsyslog worker mode 1 for "unit-nova-compute-1" ""
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "golxc=TRACE;unit=DEBUG"
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install Traceback (most recent call last):
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install   File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/install", line 5, in <module>
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install     from charmhelpers.core.hookenv import (
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install   File "/var/lib/juju/agents/unit-nova-compute-1/charm/hooks/charmhelpers/core/hookenv.py", line 9, in <module>
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install     import yaml
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 INFO install ImportError: No module named yaml
<lazyPower> ctlaugh: > 3 lines pastebin it please
<ctlaugh> unit-nova-compute-1: 2014-07-16 16:32:24 ERROR juju.worker.uniter uniter.go:486 hook failed: exit status 1
<ctlaugh> Sorry
<lazyPower> which charm is this?
<lazyPower> nova-compute?
<ctlaugh> nova-compute.  It looks like it's just missing a dependency.  I didn't have this issue when deploying using MaaS nodes.
<lazyPower> series precise?
<lazyPower> i recall on precise you had to install python-yaml
<lazyPower> thats not the case on trusty
<ctlaugh> trusty-icehouse.  Does the MaaS deployment process install dependencies automatically that I might need to install manually here?
<lazyPower> that shouldn't have anything to do with it
<lazyPower> maas is booting cloud-images
<lazyPower> s/booting/serving up
<ctlaugh> lazyPower: I wasn't sure if the images (and the packages installed by default) might have dependencies already present that I wouldn't necessarily have on my manually-provisioned node.
<lazyPower> shouldn't be the case - is the node you manually added to your env a precise host?
<lazyPower> you should be able to do juju run --unit # "sudo apt-get install python-yaml" && juju resolved -r nova-compute/#      -- and it'll at bare minimum get further along in the install process.
<ctlaugh> It's running trusty
<ctlaugh> After installing python-yaml, it's getting a lot further now.
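The recovery sequence used here, with placeholder unit numbers (substitute the failed unit from `juju status`):

```shell
# Install the missing dependency on the failed unit's machine, then retry the hook.
juju run --unit nova-compute/1 "sudo apt-get -y install python-yaml"
juju resolved -r nova-compute/1   # -r re-runs the failed install hook
```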
<ctlaugh> lazyPower: Thank you for your help.  I'll reach out if I run into anything else I can't work through.
<lazyPower> anytime ctlaugh
<kirkland> marcoceppi: howdy
<kirkland> marcoceppi: I'm trying to submit my transcode-cluster bundle to the charm store
<kirkland> marcoceppi: according to: https://juju.ubuntu.com/docs/charms-bundles.html#sharing-your-bundle-with-the-community
<kirkland> marcoceppi: which says: juju bundle proof ../bundle-directory/ #default current working directory
<kirkland> marcoceppi: however, whenever I run: kirkland@x230:~/src/transcode/transcode/precise$ juju bundle proof transcode-cluster
<kirkland> ERROR unrecognized command: juju bundle
<marcoceppi> kirkland: do you have the latest charm tools?
<marcoceppi> Juju charm version
<kirkland> marcoceppi: sure, I'm on 14.04
<kirkland> ii  charm-tools                              1.0.0-0ubuntu2            all                       Tools for maintaining Juju charms
<marcoceppi> kirkland: archives are severely behind
<kirkland> marcoceppi: well, that should be fixed ;-)
<kirkland> marcoceppi: you're telling me I'm going to need to slop up my laptop with a ppa?  :-)
<marcoceppi> Well, we tried but missed the cut off
<marcoceppi> It's the best ppa around, ppa:juju/stable
<marcoceppi> kirkland: latest version requires software not yet in trusty archives. If you know how to get around that I'm all ears
<kirkland> marcoceppi: dfdt?
<ziliu2020_> I'm having some problem with juju authorized-keys add command... It keeps saying my public key is invalid and can't add... any reason?  I'm sure my key is ok because it has been used in other places with no problem,
<ctlaugh> jamespage: I am using the cinder charm and am trying to work through a problem installing on a system with only a single disk.  I see your name all in the charm code and hoped I could ask you a quick question about it:  what's the right way to specify using a loopback file on trusty/icehouse?  I am putting    block-device: "/srv/cinder.data|750G"  in a config file, and I can see that the file gets created, but the loopback device and volume group don't get created.
<cory_fu> I'm having an issue with a service I removed not going away entirely and causing issues when trying to re-deploy it: http://pastebin.ubuntu.com/7805234/
<cory_fu> Any suggestions?
<cory_fu> On how to get rid of it
<lazyPower> cory_fu: is the machine its attached to destroyed?
<cory_fu> Yes, it's gone
<lazyPower> cory_fu: also, what about the relations? are any related services in error?
<lazyPower> 9/10 its a related service thats trapped in error keeping it from going away
<cory_fu> It has no relations at the moment
<lazyPower> weird
<cory_fu> I'm full of lies
<cory_fu> Yes, related services are in error
<lazyPower> haha
<lazyPower> thats why
<lazyPower> if you resolve the services its related to, it'll go away
<cory_fu> Thanks
<cory_fu> Yep, that worked
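The pattern that unblocked this: a service stuck in a dying state usually has a related unit in an error state, and resolving that unit lets the removal complete. Unit names below are placeholders:

```shell
# Find errored units, resolve them, and the dying service disappears.
juju status             # look for units stuck in an error state
juju resolved mysql/0   # placeholder unit; repeat for each errored related unit
juju status             # the removed service should now be gone
```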
 * lazyPower thumbs up
<lazyPower> glad we got it sorted
#juju 2014-07-17
<rbasak> wallyworld: around? I just replied to the bug, and I'm still up if IRC is easier to resolve everything.
<rbasak> wallyworld: thank you for your help, BTW.
<wallyworld> rbasak: hey, let me read the bug real quick
<rbasak> Oh, it hasn't even appeared in Launchpad yet (I replied by email)
<wallyworld> ah
<rbasak> I'll forward you a copy
<wallyworld> kk
<rbasak> Done
<wallyworld> rbasak: i added a LICENSE file to gomaaspi as requeted in the original bug, but as you say, you can't see that easily
<rbasak> wallyworld: ah, OK. Sorry.
<wallyworld> rbasak: so on that basis, think we're ok. i can get a tarball to you
<rbasak> wallyworld: thanks!
<wallyworld> but that will be tomorrow as i have to update the dependencies file and let the CI server do its thing
<rbasak> I wonder what the easiest route is to get an upload sorted with licensing fixed, but I can resolve that with sinzui.
<wallyworld> rbasak: np, thank you for being patient with me :-) i know next to nothing about licensing
<wallyworld> rbasak: 1.20.2 will be released real soon (next week) with the correct source with the licensing fixes etc
<wallyworld> we need to get some other development done first though
<rbasak> wallyworld: ah, that'll be the easiest thing then. I'll just hold on - no harm in a few days wait I think.
<wallyworld> rbasak: i can give you a tarball earlier though
<wallyworld> just in case we need to make any other changes
<rbasak> wallyworld: no problem - I appreciate you jumping straight on it.
<wallyworld> welcome. i'm really keen for this release not to be blocked
<rbasak> wallyworld: yeah - that's a good idea. I can work with the tarball - thanks.
<wallyworld> sure, will keep you in the loop
<rbasak> wallyworld: getting closer to keeping the archive up-to-date much quicker with new releases. It'll be great to push this back to Trusty, too.
<wallyworld> oh yes, given the juju/mongo issues that will be fixed in this release
<rbasak> wallyworld: I'd like to also get some process changes in place so that copyright/licensing can be verified earlier in the process, so by the time there's a release, me or James can upload without any further review at that stage.
<rbasak> Then there's less to hold an update up.
<rbasak> We can worry about that later, though.
<wallyworld> rbasak: agreed, we (juju core leadership team) is onto that and will be putting processes in place to properly introduce new 3rd party dependencies
<wallyworld> so all new dependencies are properly vetted and licensed up front
<rbasak> wallyworld: that's great - thanks.
<rbasak> wallyworld: technically the uploader (to the Ubuntu archive) is responsible for checking upstream licensing when uploading a new release.
<rbasak> wallyworld: that's more geared around random upstreams though - not when they're working closely together like this.
<rbasak> wallyworld: and we have to update the debian/copyright file which maps every single file to a list of copyright holders and licenses.
<wallyworld> rbasak: yeah, at the level i'm talking about, it's where a dev will simply add a 3rd party bit of code to the juju core code base
<rbasak> wallyworld: what I'm thinking is that maybe this file can be updated much earlier in the process - basically by you guys at the time you update dependencies.tsv.
<wallyworld> we will ensure at that point there's proper copyright assignment via the CLA etc and license file etc in place
<wallyworld> oh ok
<rbasak> wallyworld: right, but for third party deps also.
<rbasak> (where we can't rely on CLA)
<wallyworld> rbasak: can you email alexis with the details of what yu want and we can follow up from there?
<rbasak> wallyworld: sure
<wallyworld> thanks, that will allow us to properly collaborate and work out how to move forward in the best way
<rbasak> To be clear, these are really just my musings on what we might be able to do to make everything go smoothly.
<rbasak> They aren't requirements or anything.
<wallyworld> sure, understood. but good conversations to have, and if we can expend a little effort now to save pain down the line, that's good imo :-)
<wallyworld> we can collectively agree on how to proceed
<sinzui> rbasak, Each time CI blesses a 1.20.x tarball we get to ask "why not release now" When wallyworld indicates all the fixed packages are imported we can start the release.
<wallyworld> sinzui: there are other bug fixes to come first though
<wallyworld> but i can update the 1.20 dependencies file so intermediate tarballs get generated with the right source
<sinzui> wallyworld, sure, but I haven't done a release this week, and devel still has regression, so I think 1.20.2 will be released first
<wallyworld> sinzui: i don't think we should release 1.20.2 until the current milestone bugs are all fixed, agree?
<sinzui> wallyworld, I am not strongly inclined to delay goodness. I would rather release often. Since devel gets a new regression every day, I am happy to release a 1.20.x each week
<wallyworld> sinzui: but won't that just cause churn for the packaging guys?
<wallyworld> getting the backport into trusty
<wallyworld> regardless, 1 bug we shouldn't release without fixing is bug 1307434 talking to mongo can fail with "TCP i/o timeout"
<_mup_> Bug #1307434: talking to mongo can fail with "TCP i/o timeout" <cloud-installer> <landscape> <performance> <reliability> <juju-core:In Progress by mfoord> <juju-core 1.20:Triaged by mfoord> <https://launchpad.net/bugs/1307434>
<wallyworld> that is the primary focus of the 1.20.2 release
<sinzui> wallyworld, good point. we need a good pace, every two weeks was fine for james last year.
<wallyworld> that bug should be fixed friday or more likely early next week
<thumper> wallyworld: back now
<thumper> kids weren't that fussed on tinkerbell :-)
<wallyworld> thumper: lol, ok, give me a minute
<thumper> menn0: https://github.com/juju/juju/pull/321
<wallyworld> thumper: in call now
<menn0> thumper: looking
<html> hi
<g0d_51gm4> lazyPower: i confirm that the error in the juju status after the reboot of the Host Machine was linked to the firewall configuration on it. thanks a lot for your patience and support. see you soon, bye g.
<raywang> hi, anyone know how to change the distro/series of each node shown in "juju status"?
<raywang> for nodes which have already been added to juju
<Egoist> hi
<Egoist> is the -relation-departed hook executed after removing a unit from a service?
<william_home> Hi all
<william_home> i'm trying to deploy a wordpress sample using juju maas and lxc containers
<william_home> i'm not connected to internet so I'm facing some challenges
<william_home> when trying to deploy an lxc container it cannot download the cloud-img rootfs from cloud-images.ubuntu.com
<william_home> i have to make this available offline somehow, any pointers?
<Sh3rl0ck> Hello..We have deployed OpenStack Icehouse using the Juju and Maas on Ubuntu 14.04 (Trusty). I am having issues with installing Ceph on it. Are there any reference documents for installing Ceph as a backend to Cinder
<pmatulis> william_home: internal mirror?
<ctlaugh>  I am using the cinder charm and am trying to work through a problem installing on a system with only a single disk.  What's the right way to specify using a loopback file on trusty/icehouse?  I am putting block-device: "/srv/cinder.data|750G"  in a config file, and, once I SSH in, I can see that the file gets created, but the loopback device and volume group don't get created.
<rbasak> niemeyer: around? About src/launchpad.net/goyaml licensing.
<rbasak> niemeyer: which files are covered by which licenses?
<niemeyer> rbasak: I will add a note to the LICENSE.libyaml file
<rbasak> niemeyer: thanks!
<niemeyer> rbasak: The *c.go files were ported from the C files from libyaml
<niemeyer> rbasak: and thus are still covered by its license
<rbasak> Ah - I see.
<rbasak> That makes sense
<niemeyer> rbasak: Please note that lp.net/goyaml is stale
<niemeyer> rbasak: The project currently lives at github.com/go-yaml/yaml
<niemeyer> rbasak: and that's where the update will be made
<rbasak> niemeyer: OK. I guess I'll see it switch in the next 1.20 release tarball then?
<niemeyer> rbasak: I don't really know, sorry.. I'm not involved in packaging juju
<rbasak> Or if not I can deal with that I guess. I don't think this change needs to be in the source tree, as long as I'm clear on how to update debian/copyright.
<rbasak> OK, no problem.
<niemeyer> rbasak: I can tell you that right now: the following files are covered by LICENSE.libyaml:
<rbasak> *c.go? I can match against that. Even directly in debian/copyright :)
<niemeyer> rbasak: emitterc.go, parserc.go, readerc.go, scannerc.go, writerc.go, yamlh.go, yamlprivateh.go
<rbasak> OK
<niemeyer> rbasak: Oh, and apic.go
<rbasak> Right
<rbasak> I can run with that - thank you!
<niemeyer> rbasak: No problem, let me know if I can help further
<Sh3rl0ck> Hello..I have deployed OpenStack Icehouse using the Juju and Maas on Ubuntu 14.04 (Trusty). I am having issues with installing Ceph on it. Are there any reference documents for installing Ceph as a backend to Cinder
<Sh3rl0ck> A new Juju charm is introduced for 14.04 called Cinder-Ceph but I did not find any reference documents (besides release notes)
<niemeyer> rbasak: LICENSE.libyaml was updated with those notes
<ziliu2020_> I asked this question yesterday but no one answered me..  so I'm raising it again.  I'm looking for a way to distribute my public key to all juju nodes including containers.  I tried to use the juju authorized-keys add command but no luck.  it says the key cannot be added / invalid key when I issue the command "juju authorized-keys add key-file.pub".  Am I doing anything wrong here?
<william_home> pmatulis: yes, local mirror
<rbasak> niemeyer: thank you!
<john5223> anyone here try  juju + salt?
<ziliu2020_> i just figured it out.. it turned out you don't add the key file, instead you add the copy/pasted key
<Sh3rl0ck> ziliu2020_: Where to paste the key? environment.yaml?
<ziliu2020_> no use this command
<ziliu2020_> juju authorized-keys add 'ssh-rsa AAAAA.......'
<Sh3rl0ck> ziliu2020_: Ok great! Thanks.
<ziliu2020_> it will update environment and then populate the keys to all juju nodes
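What worked here, in command form: the key material is passed literally rather than as a filename. The path and key text below are illustrative:

```shell
# Pass the key text itself; reading it from the file keeps the paste intact.
juju authorized-keys add "$(cat ~/.ssh/id_rsa.pub)"
# equivalent to: juju authorized-keys add 'ssh-rsa AAAA... user@host'
```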
<pmatulis> william_home: so you shouldn't have a problem, no internet required
<william_home> pmatulis: well, define local mirror then :), i have a mirror of all precise/trusty packages
<william_home> pmatulis: but how do i mirror the cloud-images and how do i define those?
<william_home> my setup is running from trusty maas and juju install
<Sh3rl0ck> william_home: Any idea about ceph deployment using Juju for Trusty/Icehouse?
<MrkiMile> Hello
<MrkiMile> when I try to deploy charm from local dir, that dir has to be named same as the charm. Is there a way to circumvent that? I'm doing: juju deploy local:precise/apache2-test1, and I get an error that there is no charm inside apache2-test1. But if I rename apache2-test1 to apache2, then I can deploy.
<pmatulis> MrkiMile: i'm prolly missing something but how else would juju know what charm you want to use if you don't tell it?
<marcoceppi> MrkiMile: You have to rename the "name" key in the metadata.yaml to match the directory
<marcoceppi> MrkiMile: it's best to simply create a new tree, ~/charms/test1/precise/apache2
<marcoceppi> and separate them by JUJU_REPOSITORY rather than renaming metadata.yaml
<MrkiMile> marcoceppi: So, there is no way to tell juju to use charm from the directory that's not named as the charm ?
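marcoceppi's layout suggestion as a sketch, with illustrative paths. The local provider resolves `local:<series>/<name>` against $JUJU_REPOSITORY, and the charm directory name must match the `name:` field in metadata.yaml, so keeping one repository per variant avoids renaming anything:

```shell
# One repository tree per charm variant; directory name stays "apache2".
mkdir -p ~/charms/test1/precise/apache2   # put the variant's working copy here
export JUJU_REPOSITORY=~/charms/test1
juju deploy local:precise/apache2         # requires a bootstrapped environment
```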
<elarson> I'm playing around with writing my own charm for an app we have
<elarson> basically I'm just trying to install the package and start a process provided by the package
<elarson> does juju deploy myapp run the start hook?
 * elarson just realized where he might find the answer in the manual...
<elarson> how are most folks using juju? do you create your own charm for your application?
<sebas5384> elarson: you can use a charm suited to your application's nature, like I do with drupal, and then a subordinate charm
<sebas5384> yesterday having a juju workshop :) https://www.facebook.com/photo.php?fbid=826602450708048&set=a.590513300983632.1073741832.354420671259564&type=1&theater
<elarson> sebas5384: that sounds like you deploy drupal as a charm and then apply your changes as a subordinate, which means it ends up in the same container?
<sebas5384> elarson: yep
<sebas5384> but thats not a complete solution
<elarson> at this point i'm just looking for a good place to start ;)
<sebas5384> thats a good place then
<sebas5384> hehe
<sebas5384> sshuttle is really a bottleneck in the vagrant workflow
<sebas5384> :(
<sebas5384> just use iptables and you will notice a big difference
<jcastro> sebas5384, hey are you guys devving locally on your laptops and then pushing to a cloud?
<sebas5384> yes!
<jcastro> hey so there's a guy in core leading a team to make local dev to cloud suck less
<jcastro> mind if I link you guys up over email? I'm sure you guys have a bunch of suggestions
<sebas5384> yeahhh sure!!! we already have a lot of troubleshooting and feedback :)
<jcastro> also, if you think we should use iptables instead of sshuttle
<jcastro> write it up and we can put that in the docs instead?
<sebas5384> yes or other proxy things like hipache for example
<sebas5384> i'm doing an experiment with a proxy in the vbox
<sebas5384> so you wouldn't have to do iptables things anymore
<sebas5384> other things, like using a plugin for vagrant:
<sebas5384> vagrant plugin install vagrant-nfs_guest
<sebas5384> to mount the directory of the deployed project into the container
<jcastro> yeah
<pmatulis> heh, devving
<ctlaugh> Anyone here have knowledge about the cinder charm?
<sebas5384> ctlaugh: not yet :P
<Sh3rl0ck> Anyone with information about correct ceph charm for OpenStack on 14.04?
<Sh3rl0ck> I am confused between ceph and cinder-ceph charm and how they are different?
#juju 2014-07-18
 * hazmat looks wwitzel3's state services branch
<schegi> jamespage?
<jamespage> schegi, hello!
<schegi> hi, got two questions. you mentioned that you fixed something related to the missing zap with ceph. i updated your network-split charms but they seem unchanged. have you committed?
<schegi> second, i have problems deploying the checked-out network-split version of cinder and cinder-ceph. juju does not recognize them as charms when i want to deploy them from a local repo. is there something missing in the charm??
<schegi> And i have some strange behaviour related to the hacluster charms. when i deploy it like it is described in https://wiki.ubuntu.com/ServerTeam/OpenStackHA, or change to the percona-cluster charm (doesn't matter), the deployment works fine, every service is up and running, no hook fails, BUT if i log in to one of the machines and check the corosync/pacemaker cluster with crm_mon or crm status, it seems to me that the single services are running but it looks like they are not connected. i am no corosync/pacemaker pro, but i know from a manual deployment that all nodes in the cluster should appear in the output of crm status, and they didn't.
<jamespage> schegi, rev 84 in ceph contains the proposed fix
<jamespage> schegi, you have multicast enabled on your network right?
<jamespage> (re the hacluster issue)
<jamespage> schegi, cinder and cinder-ceph should be OK - do you get a specific error message?
<schegi> currently rebootstrapping, but if i do something like juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder i get a charm not found message
<schegi> doing the same with ceph works fine
<schegi> ok, the first problem was my fault, i use bzr too rarely. tried update but had to pull. :) (still using too much svn)
<schegi> jamespage, here is the error message i get: ERROR charm not found in "/usr/share/custom-charms/": local:trusty/cinder
<schegi> trying to do juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:trusty/cinder or juju deploy --repository /usr/share/custom-charms/ --to 0 --config ~/juju-configs/cinder.yaml local:cinder
<jamespage> schegi, dumb question but it is under a trusty subfolder right?
<schegi> all the charms are there and for all the others it works perfectly
<schegi> could do the same line just replacing cinder with ceph and it deploys ceph.
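A quick way to see what juju 1.x's local repository checks when it rejects a charm: the layout must be `<repo>/<series>/<dir>` with a metadata.yaml whose `name:` matches the deploy target. The scratch-repo paths below are illustrative, and the deploy line needs a live environment:

```shell
# Reproduce the expected layout in a scratch repo to see what juju verifies.
repo=$(mktemp -d)
mkdir -p "$repo/trusty/cinder"
printf 'name: cinder\nsummary: s\ndescription: d\n' > "$repo/trusty/cinder/metadata.yaml"
grep '^name:' "$repo/trusty/cinder/metadata.yaml"   # prints: name: cinder
# then: juju deploy --repository "$repo" local:trusty/cinder
```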
<schegi> away for 20 mins brb
<g0d_51gm4> hello guys, my situation is the following: i've a vMaaS environment with 3 vm nodes and a juju environment. I've deployed juju-gui on vm node 1 and openstack on the other 2 nodes. now I'd like to add 3 more vm nodes and use them to deploy a hadoop master plus a slave and cassandra, but i have a doubt: is it necessary to create another juju environment dedicated to that? thanks..
<g0d_51gm4> the error is the following: http://paste.ubuntu.com/7813669/ while if i specify the node i obtain this error: http://paste.ubuntu.com/7813674/
<g0d_51gm4> can anyone support me in resolving that? thanks
<schegi> jamespage, back
<jamespage> schegi, great
<schegi> jamespage, ok checked it twice. path is correct but still not able to deploy cinder from local repo
<jamespage> schegi, testing myself now
<g0d_51gm4> anyone can help me?
<g0d_51gm4> please, is there someone who can help me resolve it?
<Sh3rl0ck> Hello..Has anyone observed ceph.conf being configured wrongly after deploying juju ceph charm?
<Sh3rl0ck> The keyring values in the ceph.conf file seem to be incorrectly populated, as the file contains "$cluster.$name.keyring" rather than actual values
<ctlaugh> jamespage - I'm having an issue with the cinder charm on a single-drive system.  Would you be able to help?
<jcastro> Hey everyone, so for a demo I want to launch a bundle into multiple environments
<jcastro> I know I can do
<jcastro> juju deploy blah
<jcastro> juju switch
<jcastro> juju deploy blah again
<jcastro> Is there a way I can do `juju deploy this bundle into these environments` all at once? Not serially.
<rick_h__> jcastro: I've not tried it but quickstart takes a -e flag
<rick_h__> jcastro: so you could in theory run a bunch of quickstart commands backgrounded each with a diff -e?
<jcastro> yeah, but then I am concerned that they will step on each other
<rick_h__> jcastro: yea, that's the untested part
<jcastro> hmm, actually, as long as I don't switch while they are launching ....
<jcastro> that _should_ work
<rick_h__> but I think it would work since once the script runs it's got all the config loaded.
<jcastro> ok I will try that
<Sh3rl0ck> Hello..has anyone used the ceph charm for ceph backend to cinder? Do we need to add a relation between Nova compute and Ceph?
<jcastro> rick_h__, so that totally worked!
<rick_h__> jcastro: woot!
<Sh3rl0ck> Hello..has anyone used the ceph charm for ceph backend to cinder? Do we need to add a relation between Nova compute and Ceph?
<jcastro> hey sinzui
<jcastro> I am having some issues with juju on joyent
<jcastro> it bootstraps, and deploys the gui
<sinzui> yeah you see that too
<jcastro> but subsequent deploys are waiting on the machine
<sinzui> jcastro, Exactly what CI sees
<jcastro> but oddly enough, juju just returns "pending" and continues on with life
<jcastro> oh whew! so not crazy!
<sinzui> jcastro, I manually deleted all the machines in our joyent account this morning to fix a test that was failing
<sarnold> Sh3rl0ck: note also http://manage.jujucharms.com/charms/precise/cinder-ceph
<jcastro> sinzui, I have one more issue with hp cloud
<jcastro> do you have a network config to put into their horizon console to make juju work?
<sinzui> jcastro, This is an old problem that happens, but Juju might be partially to blame. I often see stopped machines instead of deleted machines. My api calls also fail to delete, so maybe joyent is at fault
<sinzui> jcastro, creating one network is enough to make juju work in new Hp. New accounts get a default network, but migrated accounts may need to add one, just accept the recommended settings
<jcastro> yeah I tried that but no joy, I'll give it another shot
<jcastro> good to know I wasn't going crazy wrt joyent though
<sinzui> jcastro, I wrote this a few weeks ago. HP has been lovely since then. http://curtis.hovey.name/2014/06/12/migrating-juju-to-hp-clouds-horizon/
<Sh3rl0ck> sarnold: Thanks for link. I actually tried out cinder-ceph charm as well but seems like its required only if you want multi-backend support for Cinder
 * sinzui reads hp network config
<jcastro> thanks, I'll try that
<jcastro> <-- lunch, bbl
<Sh3rl0ck> But not sure ceph charm needs to be connected to nova-compute
<Sh3rl0ck> Also, does anyone know if there is a way to add netapp backend to Cinder? Is there a separate charm for this?
<sinzui> jcastro, CI has "default  default 10.0.0.0/24  No	ACTIVE	UP"
<sinzui> jcastro, charm-testing is similar, the only difference is "default" is a string that tells me Antonio named it
<sebas5384> if an agent of a machine is down
<sebas5384> what can I do?
<sebas5384> agent-state: down
<sebas5384> agent-state-info: (started)
<sinzui> sebas5384, There are a few things to do after you ssh into the machine with the down agent
<sinzui> sebas5384, get a listing of the procs; this will show both the machine and the unit agents: ps ax | grep jujud
<sinzui> sebas5384, sudo kill -HUP <pid> will restart a stalled agent
<sebas5384> sinzui: there's only the 0 machine running
<sinzui> okay, that leads to starting the unit. jujud agents are under upstart, but uniquely named. We need to learn the name: ls /etc/init/jujud-*
<sinzui> sebas5384, you can run something like this: sudo start jujud-unit-arm64-slave-0
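sinzui's recovery steps can be collected into one sketch. The unit name below is just the example from his history, and the destructive commands are left commented so nothing gets killed by accident:

```shell
# Run on the machine with the down agent (reach it via: juju ssh <machine>).
# Count jujud processes; the [j] trick stops grep from matching itself.
N=$(ps ax | grep -c '[j]ujud' || true)
echo "jujud processes: $N"
# Upstart jobs are uniquely named per unit; list them to learn the names:
ls /etc/init/jujud-* 2>/dev/null || echo "no jujud upstart jobs found"
# Then either nudge a stalled agent or start a stopped unit agent:
#   sudo kill -HUP <pid>
#   sudo start jujud-unit-arm64-slave-0   # example unit name
```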
<sebas5384> hmmm sinzui yeah! I did a restart of the agent
<sebas5384> because the jujud init isn't there
<sinzui> oh
<sinzui> sebas5384, that implies the service didn't complete installation
<sebas5384> ls /etc/init | grep juju
<sebas5384> juju-agent-devop-local.conf
<sebas5384> juju-db-devop-local.conf
<sinzui> but the status you reported says it was there
<sebas5384> hmmm so it should be there
<sebas5384> this one is really tricky
<sinzui> sebas5384, right, those are the machine and db procs that comprise the state server, commonly on machine 0. Did you deploy a service?
<sebas5384> sinzui: i'm using the cloud-installer
<sebas5384> that tries to deploy a bunch of openstack services into nested containers inside a kvm
<sinzui> sebas5384, is this the first time you've used it on that machine? lxc needs to build a template first, and that is slow. After the first time, it gets fast
<sinzui> ah kvm, not template
<sebas5384> yeah
<sebas5384> hehe
<sinzui> sebas5384, I don't have much experience with cloud installer. the agent you want to start/restart will be in the container, kvm or lxc. your local host only gets the state-server in this case
<sebas5384> sinzui: right
<sinzui> sebas5384, in my example, I sshed to the machine the agent was down on, then restarted the proc (2 weeks ago actually, that example was from my history)
<sebas5384> but because i can't ssh into the machine
<sebas5384> so i'm a bit into a limbo
<sebas5384> hehe
<sinzui> sebas5384, "juju ssh" will work regardless of the state of the agent, and that is a wrapper for real ssh
<sebas5384> ok, but is like the machine isn't started
<sebas5384> but when i do juju status, its saying that it is
<sebas5384> other thing i went to the virsh console to see the kvm machines
<sinzui> sebas5384, well that is a different matter. kvm provisioning is slow or something else has gone amiss.
<sebas5384> yeah, I think something else happen here
<sinzui> stokachu, do you have any cloud-installer experience that might help sebas5384
<stokachu> sinzui, yea im working with him to figure out whats happening
<sebas5384> thanks sinzui and stokachu !! :)
<sebas5384> so i restarted the machine by the virsh console
<sebas5384> and it seems to have worked
<sebas5384> now im into the machine
<sebas5384> holly molly!!
<sebas5384> its all installing now
<shiv> trying to deploy openstack HA on physical servers...anyone see issues while deploying on physical servers....Mysql hook failing with cinder...this is the error: shared-db-relation-changed 2014-07-18 20:22:28.637 28128 CRITICAL cinder [-] OperationalError: (OperationalError) (1130, "Host '4-4-1-95.cisco.com' is not allowed to connect to this MySQL server") None None
<shiv> here is the stack trace
<shiv> http://pastebin.com/DRKfi4d1
<shiv> 4.4.1.95 is the VIP for the for the cinder
#juju 2014-07-19
<Caelifer> Hi everyone
<Caelifer> I'm trying maas/juju for the first time and i have some problems when i try to "juju bootstrap"
<Caelifer> ERROR cannot start bootstrap instance: gomaasapi: got error back from server: 403 FORBIDDEN (You are not allowed to start up this node.)
<Caelifer> it seems pretty simple
<Caelifer> but nobody on Internet seem to had a proper solution
<Caelifer> I think there is something about my rsa keys
<Caelifer> I don't know if I use them correctly
<Caelifer> i don't even know if i really need them
<Caelifer> juju created one key pair in ~/.juju/ssh
<Caelifer> what are these for ?
<Caelifer> is there somebody ?
<hazmat> Caelifer, so the error sounds like you haven't added your api key from maas settings to juju
<hazmat> config
<hazmat> maas has ssh keys for standalone usage; juju will create them for ops like juju ssh and for bootstrap (which synchronously executes via ssh on the bootstrap machine).
<Caelifer> you mean this credential in environments.yaml "maas-oauth: "BMpXsbZDqP4xUvyYJu:8ctDLuRsAc7UNj7T2M:YwBHUDXUEvhzuXZpHBky9p8wmsyFdMCL" ?
<Caelifer> I get this from my maas web ui and i put it on my environments.yaml
<Caelifer> sounds right to you ?
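A minimal maas section for environments.yaml, matching what hazmat describes: the oauth string is the three-part key copied from the MAAS web UI. Every value below is a placeholder, not a real host or credential:

```shell
# Print a skeleton environments.yaml maas entry (placeholder values only):
cfg=$(cat <<'EOF'
environments:
  maas:
    type: maas
    maas-server: 'http://<maas-host>/MAAS/'
    maas-oauth: '<consumer-key>:<token-key>:<token-secret>'
EOF
)
echo "$cfg"
```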
#juju 2014-07-20
<thehe> hey guys - is there any option to set machine 0 (in local deployment) to host units? i want (HAVE TO) run juju-gui on the host machine!
#juju 2015-07-13
<Hue> heyy
<Hue> i want to be a ubuntu user, teach me to setup!
<Odd_Bloke> marcoceppi: I'm catching up on email from Friday; I have submitted a MP for the charm-helpers changes in https://code.launchpad.net/~daniel-thewatkins/charms/trusty/ubuntu-repository-cache/handle_mounted_ephemeral_disk/+merge/261356
<Odd_Bloke> marcoceppi: You can find that MP at https://code.launchpad.net/~daniel-thewatkins/charm-helpers/lp1370053/+merge/260864
<Odd_Bloke> marcoceppi: But I wasn't seeing any movement on that, so I was carrying the patch locally until it landed.
<marcoceppi> Odd_Bloke: awesome, thanks, I'll talk a look today!
<Odd_Bloke> marcoceppi: Thanks!
<Odd_Bloke> marcoceppi: (Up early, or in a different TZ?)
<marcoceppi> Odd_Bloke: up early, I've got a flight to catch
<Odd_Bloke> Early flights. D:
<bloodearnest> Does anyone know how to configure storage/block-storage-broker to work with local provider?
<bloodearnest> I can set provider to be local on storage, block-storage-broker just doesn't like deploying on local at all afaics
<lazyPower> bloodearnest: its only confirmed working on AWS and OpenStack
<bloodearnest> lazyPower, right, I don't want to actually use it - I just want to have my services/relations work unchanged on local
<bloodearnest> sounds like I need to conditionally add broker + relation if I detect we're using openstack
<lazyPower> that or file a bug so BSB can determine if its running locally and noop
<lazyPower> with the storage support juju has grown, i wonder how much shelf life BSB will retain.
<bloodearnest> lazyPower, indeed, but it will be a good while before we can use 1.24 in prod, and I need it *now*, so... :(
<lazyPower> ah
<lazyPower> fair counter point
<bloodearnest> hence why I am not that motivated to fix the charm, too
<bloodearnest> as its on life support
<bloodearnest> lazyPower, so, about these docker juju images
<lazyPower> that, i know something about ;)
<lazyPower> whats up?
<bloodearnest> I think this might be useful for the devs on our team, who have had bad experiences trying to get mojo/juju setup to run reliably and fast on local provider
<lazyPower> charmbox does work with local provider, but it requires dancing of the jig to get it to work
<lazyPower> you have to bootstrap the local provider, then fire up the docker image. its a bit of a strategic process, and can sometimes yield odd behavior
<lazyPower> a lot of that should go away if we ever get a LXD based local provider.
<bloodearnest> lazyPower, so does it do nested lxc's? Or deploy to local provider on the host?
<lazyPower> you're in an isolated sandbox for dependencies, and leveraging juju-client effectively.  The local provider exists on the host
<lazyPower> its not as native of an experience as the vagrant image provides, but its faster and lighter weight
<bloodearnest> lazyPower, thumper said he was working on lxc provider as friday project, dunno if he's made progress
<bloodearnest> lazyPower, so you still need juju on the host?
<lazyPower> to leverage local provider, yes
<bloodearnest> right
<lazyPower> the AppArmor/CGROUP shenanigans in docker are wonky to say the least.
<bloodearnest> indeed
<lazyPower> i have yet to find the right brew to get a local provider running in the docker image
<lazyPower> cory_fu is the one that actually pioneered that front and found success
<lazyPower> bloodearnest: the instructions for running local provider w/ the docker image are outlined in the charmbox readme
<bloodearnest> my attempts to find a usage that works with both dev and prod have been blocked by that issue. lxc's apparmor profiles are much simpler
<bloodearnest> lazyPower, thanks, I will try it out
<lazyPower> well, we're using it in Jenkins
<lazyPower> any of the juju-ci results you see have been run through these images. That was our primary testing grounds for the images before pushing them out into the wild, getting them stable enough to run our CI Env
<bloodearnest> lazyPower, to deploy production services?
<lazyPower> http://juju-ci.vapour.ws/view/Juju%20Ecosystem/job/charm-bundle-test-aws/181/console
<lazyPower> for example
<lazyPower> as well as my Drone setup that's achieving the same results: http://drone.dasroot.net/github.com/chuckbutler/docker-charm/drone-juju-integration/4ce159d936f4a42ac910aa3ec7f4d498d209dcdb
<bloodearnest> right
<bloodearnest> so, I'm talking about using docker to deploy app payloads in a charm
<lazyPower> That's completely do-able too, whats the application stack you're trying to deploy?
<bloodearnest> many, but lets pick ubuntu sso, a django app
<lazyPower> bloodearnest: actually, this may be of some interest to you. We built a docker/juju based ad-hoc PAAS for dockercon.
<lazyPower> in the interest of saving time, i wrote a single compose charm that clones a git repo, and runs docker-compose pull && docker-compose up
<bloodearnest> the thing is, I want to dev on the code base locally, using the charm deployed on local provider as the dev env
<lazyPower> ah, that's going to be tricky
<bloodearnest> but docker build doesn't work in an lxc
<lazyPower> docker in lxc is notoriously painful
<bloodearnest> right
<lazyPower> I have a MAAS box sitting behind me i use for that
<lazyPower> or i shell out the clams for cloud time
 * bloodearnest think lxd is likely gonna work better for us than docker
<lazyPower> thats entirely possible
<lazyPower> if only the rest of the community felt that way, we wouldn't be investing as much effort in bridging the gap :)
<g3naro> whats difference of lxd vs docker ?
<g3naro> or got a good link to article on this
<lazyPower> g3naro: http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-replacement-a-docker-enhancement/
<lazyPower> g3naro: to put it in my own words - LX[D|C] is focused on full OS containers, a very flexible solution that still gives you the full surroundings of your os, like an init system and multiple processes in the container. Its a lighter weight alternative to KVM without hardware layer isolation. Docker is intended to be immutable process containers, where you deliver a single application thread per container. Such as strictly a web server, or
<lazyPower> middleware, or a worker process, while LXC can handle the full stack in a single container. There are some key differences, such as the backend technology - docker moved to libcontainer in 2014, while LXC is still based on the LXC/cgroups code being cranked out by stgraber's team.
<g3naro> ahhh
<g3naro> ok, yeah i have been using lxc and seems like a better solution to running a kvm machine
<g3naro> i guess could you just build a cluster of boxes with MAAS and then lxc containers,, vs openstack+kvm ?
<lazyPower> we actually have some openstack deployments that leverage LXC for density on a small number of machines
<lazyPower> it co-locates services using LXC isolation to condense the requirement down for a devstack to ~ 2 machines.
<g3naro> but what would you need to have lxc ontop of openstack then ?
<lazyPower> basically run everything on one machine, then fire up nova-compute on a secondary machine dedicated to providing the vm images.
<g3naro> interesting
<lazyPower> There's a nova-lxd driver charm, which will allow you to consume LXD as your hypervisor.
<g3naro> ahh
<g3naro> so you're juju'ing it on there anyways
<lazyPower> we're all over that stack with containers :)
<lazyPower> hattip @ jamespage and company for exploring that
<g3naro> interestng concepts
<g3naro> so lxd is the hypervisor
<g3naro> https://linuxcontainers.org/lxd/introduction/
<coreycb> gnuoy, jamespage: hello, can I get a review of this from one of you?  https://code.launchpad.net/~corey.bryant/charm-helpers/install-warning/+merge/264340
<jamey-uk> I'm trying to deploy my Rails apps using the Rails charm but it fails when it comes to building the json Gem: https://gist.github.com/anonymous/8271efd25a30732e12c4. This application has been deployed locally and to production Ubuntu servers with no issue. Does anyone know what could be causing this?
<coreycb> niedbalski, would you be able to review this by any chance?  https://code.launchpad.net/~corey.bryant/charm-helpers/install-warning/+merge/264340
<beisner> hi gnuoy, coreycb - this lil race is becoming more noticeable.  it's always been a bit racey, but it's pretty consistent with a few of the charms.  input on getting away from an arbitrary wait on this one?   bug 1474030
<mup> Bug #1474030: amulet _get_proc_start_time has a race which causes service restart checks to fail <amulet> <openstack> <uosci> <Charm Helpers:New> <neutron-api (Juju
<mup> Charms Collection):New> <neutron-gateway (Juju Charms Collection):New> <openstack-dashboard (Juju Charms Collection):New> <https://launchpad.net/bugs/1474030>
<coreycb> beisner, basically it just expects the pid to change since the service is restarted so maybe the code could get the pid ahead of time then make the config change, then watch the pid until it changes
<beisner> coreycb, yeah i think that would simplify things too.  check pid before.  change something.  watch with a timeout, to see if pid changes.
<coreycb> beisner, sounds good
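The approach coreycb and beisner settle on (record the pid, make the change, then poll with a timeout until the pid changes, instead of an arbitrary wait) can be sketched in shell. The "service" here is simulated with background sleep processes and a hypothetical /tmp pid file so the sketch is self-contained:

```shell
# Race-free restart check: watch for a pid change instead of sleeping blindly.
get_pid() { cat /tmp/demo-svc.pid 2>/dev/null || true; }
sleep 60 & echo $! > /tmp/demo-svc.pid      # service running, pid recorded
old_pid=$(get_pid)                          # 1. capture pid before the change
kill "$old_pid" 2>/dev/null                 # 2. the config change triggers a
sleep 60 & echo $! > /tmp/demo-svc.pid      #    restart (simulated here)
deadline=$(( $(date +%s) + 10 ))            # 3. poll until pid changes or timeout
until [ "$(get_pid)" != "$old_pid" ] || [ "$(date +%s)" -ge "$deadline" ]; do
  sleep 1
done
restarted=$([ "$(get_pid)" != "$old_pid" ] && echo yes || echo no)
echo "restarted: $restarted"
kill "$(get_pid)" 2>/dev/null || true       # clean up the stand-in process
```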
<beisner> coreycb, gnuoy -  on a different race topic  :-/   the mojo-os approach of using  juju run on all units to determine if hooks and relation data have settled ... appears to no longer be reliable.
<gnuoy> beisner, I think one of the charms has a fix
<beisner> coreycb, it's baaaack - even with a double juju run check.  unexpected relation data in cinder cinder-ceph storage-backend - key 'broker_rsp' does not exist
<beisner> gnuoy, coreycb - ^  juju-deployer says a-ok, ready.   the juju run x 2 against all units says a-ok.   yet a bit of relation data isn't always present.   if i run it manually, then wait who-knows-how-long, that relation data eventually arrives.  cannot for the life of me figure out how to know when.
<beisner> gnuoy, re: pid race, do you know which?  i see a few variants on the pid check in c-h.
<gnuoy> beisner, sorry, otp
<beisner> np gnuoy
<beisner> gnuoy, coreycb - i'm dealing with 2 separate races.  2 bugs to track:
<beisner> bug 1474036
<mup> Bug #1474036: amulet openstack tests have race - some tests start before relations/hooks have settled <amulet> <openstack> <uosci> <Charm Helpers:New> <cinder-ceph (Juju Charms Collection):New> <https://launchpad.net/bugs/1474036>
<beisner> bug 1474030
<mup> Bug #1474030: amulet _get_proc_start_time has a race which causes service restart checks to fail <amulet> <openstack> <uosci> <Charm Helpers:New> <neutron-api (Juju
<mup> Charms Collection):New> <neutron-gateway (Juju Charms Collection):New> <openstack-dashboard (Juju Charms Collection):New> <https://launchpad.net/bugs/1474030>
<mbruzek> marcoceppi: I need to run a grep in a set -e bash script that might fail, but I need the result of the grep 0 or 1.  I forget how to do that without exiting the script.  Can you enlighten me?
<lazyPower> mbruzek: set +e
<lazyPower> then check $?
<mbruzek> lazyPower: Yeah I guess I can do that, but this is a charm script so the best practice is to use set -e
<lazyPower> mbruzek: temporarily disable error checking then re-enable
<lazyPower> thats acceptable in a charm
<thedac> grep $SEARCH || true  also works IIRC
<mbruzek> lazyPower: I know I can change it just for that command. ..
<mbruzek> thanks to you both!
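The two workarounds just mentioned (an if-statement, or thedac's `|| true`) in one runnable sketch; the file and search string are arbitrary examples:

```shell
set -e   # script exits on any failing command...
printf 'hello world\n' > /tmp/haystack.txt
# ...but a command used as an if-condition does not trigger the exit:
if grep -q "needle" /tmp/haystack.txt; then
  found=1
else
  found=0
fi
echo "found=$found"
# Alternative: swallow the non-zero exit status with || true:
grep -q "needle" /tmp/haystack.txt || true
echo "still running"
```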
<pmatulis> does a configuration change to environments.yaml always require a bootstrap, and thus the current env needs to be destroyed first?
<thumper> pmatulis: changing something in environments.yaml does not impact any running environments
<thumper> pmatulis: if you want to change a setting on a running environment, use 'juju set-env'
<pmatulis> thumper: so i need to do everything run-time (juju set ...) right?
<thumper> pmatulis: bootstrap uses the values in environments.yaml, but if you have a running environment that you are trying to change, then yes,
<thumper> set-env
<thumper> set is for service config
<pmatulis> thumper: so easy to lose track of configuration changes i suppose?
<pmatulis> ok re 'set-env vs set'
<pmatulis> thumper: any idea -> http://paste.ubuntu.com/11874729/
<thumper> pmatulis: yeah, some environment attributes are immutable after an environment has started
<pmatulis> thumper: ok, time to restart. thanks
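The three configuration scopes thumper distinguishes, shown as hypothetical commands; the `demo` wrapper just echoes them so nothing touches a real environment, and `mysql` is a made-up service:

```shell
demo() { echo "would run: $*"; }   # stand-in so no real command is executed
# environments.yaml  -> read only at bootstrap time (some attrs then immutable)
# juju set-env       -> mutable settings of a RUNNING environment
# juju set           -> per-service charm configuration
demo juju set-env 'logging-config=<root>=DEBUG'
demo juju set mysql max-connections=500
```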
#juju 2015-07-14
<wolverineav> hi, this is related to openstack neutron-api charm.
<wolverineav> after deploying openstack with neutron as the network manager, I see the following error in the log: http://paste.ubuntu.com/11875141/
<wolverineav> I checked neutron.conf and the relevant section containing keystone_authtoken looked like this: http://paste.ubuntu.com/11875147/
<wolverineav> also important to note is that I deployed neutron-api in a lxc
<wolverineav> I've mostly followed the relations and everything as given here: https://jujucharms.com/openstack-base/34
<wolverineav> except that I don't use ceph and rely completely on cinder. so minus those deploy and relation operations. also I try to keep everything in lxc, except rabbitmq-server and quantum-gateway
<wolverineav> I wouldn't expect debugging the issue here, but quick pointers like which services cannot be deployed in lxc (rabbitmq-server was one, which I figured while debugging earlier :) )
<suchvenu> Hi Team
<suchvenu> I have pushed my code to Launchpad and trying to run amulet test. Getting the following error when i run python tests/10-bundles-test.py
<suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E ====================================================================== ERROR: setUpClass (__main__.BundleTest) ---------------------------------------------------------------------- Traceback (most recent call last):   File "tests/10-bundles-test.py", line 27, in setUpClass     url = config.get('db2').get('db2_url') AttributeError: 'NoneType' object has no at
<suchvenu> This code was working before using Amulet
<lazyPower> suchvenu: can you link me to your test?
<suchvenu> in Launchpad ?
<lazyPower> sure
<suchvenu> Its in my personal branch
<suchvenu> https://code.launchpad.net/~suchvenu/charms/trusty/db2/ibmdb2
<lazyPower> suchvenu: can you also pastebin the full traceback?
<suchvenu> I am getting only this much when i run the test
<suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E ====================================================================== ERROR: setUpClass (__main__.BundleTest) ---------------------------------------------------------------------- Traceback (most recent call last):   File "tests/10-bundles-test.py", line 27, in setUpClass     url = config.get('db2').get('db2_url') AttributeError: 'NoneType' object has no at
<lazyPower> ok, give me a moment to wrap up my current test run. I'll branch the code and give it a run
<suchvenu> sure
<lazyPower> this test looks a bit funky to me at first glance, i'll clean it up and submit a MP to your branch shortly
<suchvenu> Its working when I deploy it manually
<lazyPower> for example, the config.get() stanzas during the standup
<lazyPower> typically we stand up the charm, and isolate the config.get() bits under a scoped test
<lazyPower> have you looked at any other amulet based tests?
<suchvenu> This test was working for me yesterday... Don't know what suddenly happened
<suchvenu> Without pushing the code to Launchpad, can't we run the amulet test?
<lazyPower> Certainly
<lazyPower> are you using bundletester to kick off your tests?
<suchvenu> yes
<lazyPower> ok, so you're familiar with the JUJU_REPOSITORY variable?
<suchvenu> i am using these two
<suchvenu> 00-setup  10-bundles-test.py
<suchvenu> no
<lazyPower> if you export JUJU_REPOSITORY to your charm repo, eg: export JUJU_REPOSITORY=$HOME/charms - then if you specify local:<series>/<service> it will deploy a local copy 100% of the time instead of reaching out to the charm store API to find the charm.
<lazyPower> by default, if you only describe the service name under test, in this instance db2, the resulting amulet code looks like: d.add('db2') - it will deploy the local charm by default.
<lazyPower> you can also override the behavior by setting the JUJU_TEST_CHARM=db2 environment variable as well, in the off chance that bundletester is still looking in the wrong place for the charm.
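lazyPower's three pointers combined into one sketch; the paths and the db2 name come from the conversation, but the directory contents are fabricated for illustration:

```shell
# 1. Point JUJU_REPOSITORY at the repo root that holds <series>/<charm>:
export JUJU_REPOSITORY="$HOME/charms"
mkdir -p "$JUJU_REPOSITORY/trusty/db2"
# 2. In amulet, d.add('db2') then resolves to this local copy by default.
# 3. Belt-and-braces: force the charm under test if lookup still misses:
export JUJU_TEST_CHARM=db2
echo "repo=$JUJU_REPOSITORY charm=$JUJU_TEST_CHARM"
```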
<suchvenu> okk
<suchvenu> the 2nd one i tried
<suchvenu> but it didn't work
<suchvenu> After export JUJU_REPOSITORY=$HOME/charms also, it's failing for me
<lazyPower> is that where the charm is located? $HOME/charms/trusty/db2 ?
<suchvenu> yes
<lazyPower> ok, i'm rounding the last legs of this test run
<lazyPower> i should be able to switch context in a moment, and get hands on
<suchvenu> sure
<lazyPower> suchvenu: please ask your questions here so that others may benefit from the support :)
<lazyPower> suchvenu: ok, i also see what you're doing here. The IBM charm payloads are behind a paywall, so you're attempting to validate the bundle has been configured with a payload url correct?
<suchvenu> yes, but it's not even reaching that point...
<suchvenu> From our local directory how can we find out to which stream in Launchpad its connecting to ?
<lazyPower> I'm not sure what you mean 'which stream in launchpad'
<suchvenu> I mean branch
<lazyPower> suchvenu: ah, if routed through any of the juju tooling, then unless explicitly defined via the branch: key in the bundle, it will always use /trunk
<lazyPower> as /trunk is the only branch in launchpad that will be ingested
<lazyPower> i'm retooling this test a bit, but i'm confused. You're validating local.yaml, then testing with bundles.yaml?
<lazyPower> it seems counterintuitive to validate a bundle we're not using.
<lazyPower> ok, so you're setting values in one bundle, then loading a deployment bundle, and passing the values into that bundle.
<lazyPower> May I make a suggestion?
<suchvenu> I am using both
<suchvenu> sure
<lazyPower> Move this validation logic out of the class setup constructor, and make the local.yaml optional. The charm should have sane defaults and do something meaningful if config is not provided.
<lazyPower> if its present, invoke a method that loads, validates, and returns a dict of these config options that you can then just pass to d.configure('db2', options_dict)
<lazyPower> if that options_dict isn't present, it should deploy the charm as is, and the charm should respond in kind: either noop, or set a status saying it's pending data from the user.
<lazyPower> i'll mock this up and submit a MP
<suchvenu> ok. Did you try to run the test in your env? Are you getting the error which I got ?
<marcoceppi> mbruzek: re yesterday, instead of set +e, just wrap it in an if statement and have grep exit based on match
<marcoceppi> mbruzek: the if block will keep it from exiting, and you'll get your result
<mbruzek> The if is what I was looking for.
<mbruzek> marcoceppi: thanks for the follow up
<marcoceppi> mbruzek: `if ! grep "lolo-don'texist" ~/.juju/environments.yaml; then echo "That does not exist in environment file"; fi`
<marcoceppi> mbruzek: as an example
<lazyPower> suchvenu: i did get the same error. I think this is a byproduct of loading a yaml for the deployment
<lazyPower> marcoceppi: when loading a bundle in amulet, just defining the service name without JUJU_TEST_CHARM exported defaults to looking at the store w/ the default series does it not?
<suchvenu> so what is the output you are getting ?
<lazyPower> suchvenu: however, when implicitly passing local:trusty/db2 in the bundle as the charm, it behaves as expected.
<lazyPower> and i've got this working, let me pastebin my work for your review
<suchvenu> At least did it reach the install hook?
<lazyPower> https://gist.github.com/chuckbutler/4b0391675e77ae8ea62d
<lazyPower> its deployed the charm and is pending a machine allocation from AWS
<lazyPower> s/allocation/enlistment
<marcoceppi> lazyPower: any service provided without a fully qualified URL assumes it's a charmstore charm unless the charm is specifically provided during the d.add() method or if it's the JUJU_TEST_CHARM
<lazyPower> marcoceppi: thats what i thought, thanks for confirmation.
<marcoceppi> lazyPower: if a bundle is loaded, the charm: key is passed to d.add
<lazyPower> suchvenu: according to marcoceppi's reply above - this is side-effecty behavior of how you're constructing this test. if you did an implicit d.add('db2') it would do the right thing by default, vs this yaml load which is causing amulet to poll the store for db2
<marcoceppi> lazyPower: hum
<marcoceppi> suchvenu: lazyPower hum
<lazyPower> marcoceppi: i dont know that i buy that ;) if you change the charm: key in this yaml to 'db2' - it does the wrong thing inherently.
<marcoceppi> I see the problem
<marcoceppi> yeah, because it bypases the logic in the add method
 * lazyPower nods
<marcoceppi> let me check the load method, we may need to add a case where it checks JUJU_TEST_CHARM there
<marcoceppi> we have a 1.10.2 release on deck for another bug reported on friday
<marcoceppi> so we may be able to patch this in that release as well
<lazyPower> nice
<lazyPower> suchvenu: another thing to be aware of - when i first pulled this i got a ton of import errors, you're working in the context of python3 and any modules you import will need to be added to 00-setup
<mbruzek> lazyPower: do you have a link to the aufs docker issue you pointed me at yesterday?  Something about the extra libraries needed to be loaded?
<lazyPower> suchvenu: such as apt-get install -y python3-yaml
<lazyPower> mbruzek: i have one better
<marcoceppi> lazyPower:  suchvenu can one of you report this as a bug real quick against lp:amulet?
<lazyPower> mbruzek: i have a branch that implements it!
<lazyPower> marcoceppi: on it
<suchvenu> oh ok
<lazyPower> suchvenu: i'll take the bug report, let me know if you have any questions about the test modifications i posted in a Gist for you
<suchvenu> I couldn't open it
<suchvenu> if you did an implicit d.add('db2') it would do the right thing by default, vs this yaml load which is causing amulet to poll the store for db2 -> you mean I need to add d.add('db2') to the amulet code?
<lazyPower> mbruzek: https://github.com/chuckbutler/docker-charm/tree/aufs-impl
<lazyPower> suchvenu: you cannot load gist.github.com urls?
<lazyPower> ok, i can push this to launchpad and send you a MP so you get a proper diff, 1 moment
<marcoceppi> lazyPower suchvenu I found the logic fallacy
<marcoceppi> if charm is supplied it never gets to check if the service/charm being deployed is in fact the cwd
<suchvenu> yes, i could open the link now
<marcoceppi> lazyPower suchvenu have you tried just setting charm to "db2"?
<marcoceppi> in the bundle?
<marcoceppi> that should work
<marcoceppi> and avoid a charm store lookup
<lazyPower> marcoceppi: i have
<lazyPower> and it broke
<lazyPower> suchvenu: https://code.launchpad.net/~lazypower/charms/trusty/db2/test-fixup/+merge/264709
<lazyPower> same code, but in a merge proposal format for your diff viewing pleasure ^
<lazyPower> marcoceppi: sorry broke is the wrong word here - it misbehaved. it polled the charm store.
<marcoceppi> lazyPower: according to that diff you've changed it from db2 to local:trusty/db2
<lazyPower> marcoceppi: because it was polling the charm store otherwise.
<marcoceppi> what version of amulet is being used, the code says it should be doing otherwise
<marcoceppi> unless the directory the charm is in is not named "db2"
<marcoceppi> which it is, to begin with; and we'll make sure it uses metadata.yaml to determine the name, but that's what's up
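A minimal sketch of the resolution order marcoceppi is describing for amulet's deploy logic: a bare service name should resolve to the charm in the current working directory (eventually via metadata.yaml) before falling back to a charm-store lookup. Function and parameter names here are illustrative, not amulet's actual API:

```python
import os

def resolve_charm(service, charm=None, cwd=None, metadata_name=None):
    """Return a deploy target: a local path or a charm-store URL."""
    cwd = cwd or os.getcwd()
    # An explicit charm URL (local:..., cs:...) is honoured as-is.
    if charm and ":" in charm:
        return charm
    name = charm or service
    # The fix: check metadata.yaml / the cwd *before* polling the store,
    # even when `charm` was supplied as a bare name.
    if metadata_name == name or os.path.basename(cwd) == name:
        return cwd
    return "cs:trusty/%s" % name
```

With this ordering, a bundle entry of `charm: "db2"` run from `~/charms/trusty/db2` deploys the local tree instead of hitting the store.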
<lazyPower> http://paste.ubuntu.com/11877650/
<lazyPower> dir name is db2, JUJU_TEST_CHARM is exported as db2
<marcoceppi> lazyPower: https://github.com/juju/amulet/blob/863b3bbc0488eaadb8b7fcf4286b31026b1c8c68/amulet/charm.py#L44
<lazyPower> schenanigans
<lazyPower> i dont know why this misbehaved then :|
<marcoceppi> lazyPower: me neither, is that MP the best to use to replicate?
<lazyPower> unwind the changes to the bundles.yaml and it will be
<marcoceppi> lazyPower: should I just use suchvenu's branch instead? or just undo the bundle?
<lazyPower> i would undo the bundle changes
<lazyPower> suchvenu: another suggestion for improvement in user experience: normalize your use of dash and underscore in the config options
<lazyPower> minor nitpick, but small things like this count :)
<suchvenu> hi.. lot of messages when i was away...
<suchvenu> So should I try with the changes you sent
<suchvenu> or just try charm: "local:trusty/db2" in bundles.yaml file ?
<marcoceppi> hey stub are you around? Is the new cassandra charm able to run as a single node? I don't see the configuration option anymore
<lazyPower> suchvenu: marco was asking with regard to testing amulets behavior
<lazyPower> suchvenu: i highly recommend you adopt that test pattern i sent in a MP, and test cases where you have empty config, and provided config.
<lazyPower> suchvenu: if the charm has behavior that's expected when you provide none of those configuration options, it should be clear to the user there is expected input from them :) this will help you achieve that requirement.
<suchvenu> so in order to deploy the charm from my local dir and not look at charm store, i need to give "local:trusty/db2" in bundles.yaml file, right ?
<lazyPower> correct, until marcoceppi has a chance to look into the side-effecting behavior in amulet and issue a patch.
<suchvenu> I still get the same error
<suchvenu> charm@islrpbeixv665:~/charms/trusty/db2$ python tests/10-bundles-test.py E ====================================================================== ERROR: setUpClass (__main__.BundleTest) ---------------------------------------------------------------------- Traceback (most recent call last):   File "tests/10-bundles-test.py", line 27, in setUpClass     url = config.get('db2').get('db2_url') AttributeError: 'NoneType' object has no at
<suchvenu> when i did only the change the bundles.yaml file
<lazyPower> That error, is an error in your python
<lazyPower> the test refactoring i submitted resolves that issue.
<stub> marcoceppi: yes. It can be run with a single node or multiple - no configuration needed.
<stub> marcoceppi: Just don't create any keyspaces with a replication factor higher than your node count.
<marcoceppi> stub: cool, won't do
<marcoceppi> stub: just helping Adam port our cassandra-stress action to the new charm
<marcoceppi> so I just need one node to make sure parsing is working and the parameters are there
<marcoceppi> cassandra 2.0 and 2.1 ship with different stress tools
<stub> marcoceppi: Ta. I would have done that, but back then it was 2.0 by default and we needed 2.1 because 2.0 doesn't do authentication
<sto> Is there a good document explaining how to configure neutron when deployed with juju and quantum-gateway?
<marcoceppi> stub: no worries, we've got most of the ground work there, and we'll look to tap you for help with a postgresql benchmark (when you have time)
<stub> marcoceppi: Extra points for an external load generator that can be scaled out :) I'd love to be able to show the difference between 5 quad core boxes and 20 single core boxes with 25% of the RAM.
<marcoceppi> sto: afaiu quantum-gateway is no longer supported for deployments with openstack, instead neutron-gateway should be used. coreycb and the openstack charmers have better insight into that
<marcoceppi> stub: right now stress is part of the charm, but we're working on creating a "nosql" load generation tool that cassandra can hook up to
<marcoceppi> stub: we could also create cassandra-stress as a standalone charm, but I'm not sure how that would work, right now it's just run on one of the nodes in the cluster
<sto> marcoceppi: any doc pointers? what charms should I use now?
<stub> marcoceppi: The cassandra-stress standalone charm would just need to install the packages and make use of the client relationship to the real cluster. Not sure if it is easier to start from scratch, or fork the cassandra charm and give it a lobotomy.
<coreycb> sto, so you have a deployment and want to configure neutron?
<marcoceppi> sto neutron-gateway with the neutron-api charm AIUI
<sto> coreycb: I have a deployment but I can redeploy it, I'm testing
<marcoceppi> stub: we'd probably just start from scratch to install the stress tool and then move our actions over and update them to use the relation data. Would you say that's the best way to proceed rather than having it as an action on cassandra?
<coreycb> sto, ok just curious if you're asking about configuring the neutron charms vs configuring the neutron service (subnets, routers, etc)
<stub> marcoceppi: It is the only way to get a genuine benchmark, even for non-clustered services. Otherwise you are measuring how efficient your load generator is.
<sto> coreycb: I'm interested on both, if you have some documentation pointers that will be great ...
<marcoceppi> stub: does cassandra-stress scale?
<coreycb> sto, sure
<stub> marcoceppi: Cool demo - run benchmark, add unit, run benchmark again, demonstrate double load
<sto> marcoceppi: I only see a neutron-gateway-next charm at https://jujucharms.com/u/landscape/neutron-gateway-next/trusty/5 am I missing something?
<stub> marcoceppi: I don't know about scaling the stress tool horizontally. But you *could* just run it on each unit at the same time, and divide the results by the number of units (?)
<marcoceppi> stub: I guess, but if the point is to have an external load-gen, and cassandra-stress is that load gen, I guess we'd just scale that machine vertically give it a giant machine to churn data through
<marcoceppi> then scale the cluster to see the effects of throughput?
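The aggregation stub and marcoceppi are discussing, run the stress tool on every unit simultaneously and combine the per-unit numbers into one cluster figure, can be sketched in a few lines (the ops/sec values are invented for illustration; real cassandra-stress output would need parsing first):

```python
def combined_throughput(per_unit_ops):
    """Sum simultaneous per-unit ops/sec into a single cluster figure,
    and also report the per-unit average for spotting stragglers."""
    total = sum(per_unit_ops)
    return total, total / len(per_unit_ops)

# Three stress units hammering the same cluster at once:
total, avg = combined_throughput([48000, 51000, 47500])
```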
<coreycb> sto, I think the easiest all in one thing to look at is a bundle - this is the default bundle we use to deploy openstack for testing
<coreycb> http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/sparse/default.yaml
<jose> jcastro: ping
<marcoceppi> if it's the same, then the stress tool is hitting ceilings, otherwise, note results
<jcastro> jose: yo
<jose> jcastro: quick PM?
<jcastro> yes
<marcoceppi> stub: but I agree, a great demo would be to stress one cassandra node, scale to 10, stress again
<stub> marcoceppi: yes. Anyway, that is a limitation of the tool we need to live with. And demonstrating that the stress tool flatlines before the juju deployed cluster does is cool too.
<coreycb> sto, if you want to dig into README's and config.yaml for the individual charms you can, but the bundle should get you started
<marcoceppi> stub: agreed, cool, we'll finish up our work on the stress action, then just move it to its own charm when we're confident it's working
<coreycb> sto, are you familiar with juju-deployer?
<marcoceppi> it's easier to troubleshoot when it's all on one box
<sto> coreycb: I've been there already, now I'm deploying by hand
<sto> coreycb: I need to put some services in known hosts and found it difficult with bundles and juju-deployer
<coreycb> sto, ok fair enough, it's still a good reference for you to make sure you're using the right charms (e.g. quantum-gateway is deprecated)
<sto> coreycb: not when I started... ;)
<sto>     "neutron-gateway":
<sto>       charm: "cs:trusty/quantum-gateway-16"
<sto> From the bundle
<coreycb> sto, actually, sorry quantum-gateway isn't deprecated quite yet --- it'll be deprecated end of july when the next round of stable charms are released
<sto> coreycb: I'm deploying kilo, which charms should I use, the neutron-gateway-next?
<coreycb> sto, no don't use the next branches unless you are testing something new
<coreycb> sto, use the branches in that default.yaml I pasted
<sto> lp:charms/trusty/quantum-gateway
<sto> So it is not renamed yet
<coreycb> sto, that's correct, sorry I misspoke.  it's only deprecated in our next charms which is what we use for development.  the next charms will be released to stable end of july.
<sto> ok
<sto> coreycb: and about how to configure the service? is there a document with a description about how the charm leaves things?
<coreycb> sto, just the README's for the charms
<coreycb> sto, the config.yaml files are also useful to understand config options and defaults
<coreycb> sto, e.g. https://api.jujucharms.com/charmstore/v4/trusty/quantum-gateway-16/archive/config.yaml
<stub> marcoceppi: Apple apparently has a 75,000 node cluster with 10PB, so that seems like a good target.
<coreycb> sto, the charms default to using ovs
<marcoceppi> stub: we're working on reproducing the Google Cassandra 1M writes benchmark they blogged about a few months ago, I don't think my wallet is big enough to shell out that many cloud instances ;)
<coreycb> sto, have you come across this?  https://jujucharms.com/openstack-base/34
<sto> coreycb: I know that, but I don't see right now what I have to set up on openstack to access the public network
<coreycb> sto, that link I just posted has some directions on setting up the ext-net, and initial tenant
<sto> coreycb: I'll look into neutron-ext-net
<sto> Maybe the problem is that I have a wrong network setup on the nodes
<marcoceppi> sto: fwiw, I'm having similar issues with getting ext-net to run, if you get yours figured out let me know (and I'll try to do likewise)
<sto> marcoceppi: I'll do, my guess is that my use of network interfaces is wrong, tomorrow I'll review the physical connections
<jogarret6204> hi. I'm looking for information about how parameters that are NOT shown in juju get xxxx are getting down to nodes.  example is tenant_network_types in ml2.ini
<lazyPower> jogarret6204: i'm not sure i understand what you're asking. Is this configuration you're passing services that dont appear to be getting set on a deployment?
<jogarret6204> it's not in the services, but in the config template in the charm..
<jogarret6204> charms/trusty/neutron-api/templates/icehouse/ml2_conf.ini:type_drivers = gre,vxlan,vlan,flat,
<jogarret6204> is on maas/juju server
<lazyPower> ddellav: beisner ^ do you guys have a moment to eyeball this a bit? i'm out of my depth here as i'm not familiar with neutron
<jogarret6204> I added tenant_network_types = gre,vxlan,vlan,flat,local
<lazyPower> jogarret6204: sorry, i'll try to get some eyes on this from people that are more familiar with neutron.
<jogarret6204>   <----local
<jogarret6204> It's more of a juju state question.
<lazyPower> s/neutron/the neutron charm/
<jogarret6204> I changed the ml2.ini inside a container, and I also changed it on the maas/juju server
<jogarret6204> I added that local option at the end..
<jogarret6204> today, it is back to the old one
<jogarret6204> ml2_conf.ini:tenant_network_types = gre,vxlan,vlan,flat
<jogarret6204> so I'm assuming there must be something in the middle (perhaps on the juju VM used for deploying?) that needs to be set
<jogarret6204> thanks for looking, btw..  I know you are all busy
<lazyPower> jogarret6204: did you deploy from the local charm or did you juju upgrade-charm neutron --switch local:trusty/neutron ?
<lazyPower> if you updated on the server itself, those charm changes were not stored on the state server, and any number of things could have happened that caused the reversion, such as a charm upgrade which is an atomic update of the charm code
<lazyPower> the way juju delivers charms is it ships a tarball and nukes/repaves the charmdir for the most part. so any localized modifications you had in prod/staging/etc will not persist when a charm upgrade is issued.
<lazyPower> jogarret6204: the only way to ensure those charm modifications are persisted is by switching the source of that charm from cs: to local: or a namespace,  or to have deployed from one of those resources initially
<jogarret6204> this was local deployment
<jogarret6204> mattrae is involved.  :-)
<lazyPower> ugh, so local deployment and something overwrote the config. there is probably something in the charm that's generating that config template then that stripped the option
 * lazyPower makes with magic hand waving
<jogarret6204> can I trigger this "refresh" at all to see where it's happening?
<lazyPower> i would think so, you can attach over debug-hooks/dhx, and then re-trigger the action required
<lazyPower> either by upgrading the charm to run "normal hook contexts" or by removing/re-adding the relations to invoke those hook contexts
<beisner> hi jogarret6204, generally speaking, charm config options are addressable via juju get / juju set.  anything not represented in juju get / juju set is either formulated by, or hard-coded by the charm and/or its included templates.
 * beisner tries to assimilate the objective via backscroll
<jogarret6204> ok - that's what I was seeing. no get/set option for this setting.  so I changed the central file
<beisner> jogarret6204, are you trying to add "local" to the tenant_network_types in the deployed unit?
<jogarret6204> and the remote file
<jogarret6204> yes
<beisner> jogarret6204, i see.  i've not flipped that particular switch personally, and i know we don't test that scenario.  if there is a use case for it, we are happy to take suggestions/proposals in the form of a new bug against the charm in question.
<beisner> jogarret6204, also be aware that any time you manually modify a conf that is also parsed/managed by juju, you run the risk of losing that setting if/when things like config-changed happen, maybe even on a relation change and/or charm upgrade (?).
<jogarret6204> I don't know how prevalent it would be, but in our case, an outside API call to OpenStack is requesting the option to do a local network.
<jogarret6204> so we need a way to add it..
<beisner> jogarret6204, i'd say if it is a real world use-case, it's a feature worth considering.
<jogarret6204> now we will know to have it in the initial design tho...
<jogarret6204> so is there something between the charms local directory, and the deployed option out in the container?
<jogarret6204> something on the juju VM that I need to change?
<beisner> just so i make sure i follow - we're switching gears from "how do i change this conf, and keep it changed?"  to  "how should I use neutron with a local tenant_network_type?"
<jogarret6204> no
<jogarret6204> change config and keep it changed is exactly what I need
<beisner> ok i misunderstood
<jogarret6204> central server has local configured in it (using local charms)
<jogarret6204> remote containers is showing this:
<jogarret6204> tenant_network_types = gre,vxlan,vlan,flat
<beisner> jogarret6204, you could modify the charm to either add a config option to manage that item, or modify the charm to include local as a hard-coded thing in the template;   then do a juju charm upgrade.   both would be forking the charm in effect.
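beisner's first suggestion, teach the charm to render `tenant_network_types` from a config option instead of hand-editing the deployed file, can be sketched as follows. `string.Template` stands in for the charm's real templating, and the `tenant-network-types` option name is hypothetical:

```python
from string import Template

# Stand-in for templates/icehouse/ml2_conf.ini in the neutron-api charm.
ML2_TEMPLATE = Template("""[ml2]
type_drivers = gre,vxlan,vlan,flat
tenant_network_types = $tenant_network_types
""")

def render_ml2(charm_config):
    """Render ml2_conf.ini from charm config, so config-changed re-renders
    the same value instead of reverting a manual edit."""
    # Default mirrors the shipped template; a user could then
    # `juju set neutron-api tenant-network-types=gre,vxlan,vlan,flat,local`
    # to add 'local' persistently.
    types = charm_config.get("tenant-network-types", "gre,vxlan,vlan,flat")
    return ML2_TEMPLATE.substitute(tenant_network_types=types)
```

Since the hook re-renders from config on every config-changed, the setting survives the overwrites jogarret6204 is seeing.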
<jogarret6204> I changed both the central and the remote a few days ago to have the local option:
<jogarret6204> can I do "upgrade" with a local charm?  that sounds perfect
<beisner> jogarret6204, i think the-right-thing-to-do (tm) is to teach the relevant charms to manage that configuration directive via a charm config option.
<beisner> then propose that back, have it reviewed, merged, and never have to worry about it again ;-)
<beisner> i've got to run to a meeting, bbl
<jogarret6204> k - thanks for the help
<beisner> yw  jogarret6204 -  as far as doing the local charm upgrade, i think the docs are pretty good around that, or lazyPower ;-)  <<
<lazyPower> beisner: you never know
<lazyPower> i try to be thorough
<marcoceppi> lazyPower: beisner there aren't any docs on upgrading, I just checked :\
<marcoceppi> in the jujudocs
<beisner> marcoceppi, oh i must've been thinking of this ;-)  http://marcoceppi.com/2015/01/force-upgrade-best-juju-secret/
<beisner> also, upgrade-charm doc:  https://jujucharms.com/docs/stable/commands
<beisner> jogarret6204 fyi ^^
<jogarret6204> beisner - mine were all local.  did not work to copy down the changed parameters
<jogarret6204> remote:  tenant_network_types = gre,vxlan,vlan,flat
<jogarret6204> local: tenant_network_types = gre,vxlan,vlan,flat,local
<jogarret6204> I don't mind changing the remote files manually - but something changed them back over this past weekend.  so I'm trying to stop that part
<jogarret6204> although if it's a coworker then maybe you can't help.  :-)
<pmatulis> when i attempt to use juju-deployer i get a msg saying the environment is already bootstrapped. i then need to forcibly destroy the environment in order to proceed. strangely, destroying before using deployer says the env does not exist. any tips?
<jogarret6204> beisner:  upgrade-charm works.  just took some time
<tvansteenburgh> pmatulis: are you passing -e <envname> ?
<pmatulis> tvansteenburgh: no
<tvansteenburgh> pmatulis: try that :)
<ddellav> pmatulis: also may need to use the force flag for destroy-environment. I had that issue the other day
<tvansteenburgh> pmatulis: deployer should do the right thing re bootstrapping
<pmatulis> ddellav: that's what i meant by 'forcibly'
<pmatulis> tvansteenburgh: ok, lemme try it
<ddellav> gotcha
<pmatulis> tvansteenburgh: nope, didn't work (http://paste.ubuntu.com/11879399/)
<tvansteenburgh> pmatulis: you can't tell it to bootstrap if it already is
<pmatulis> tvansteenburgh: what i did: on a fresh vm i copied ~/.juju/environments.yaml in (default set to petermatulis env) and issued the deployer command. that's all
<tvansteenburgh> pmatulis: is there a ~/.juju/environments/petermatulis.jenv file?
<pmatulis> tvansteenburgh: right now? or before i issued any commands?
<tvansteenburgh> right now
<pmatulis> yes, there is
<tvansteenburgh> and you're sure it wasn't there prior to running deployer?
<pmatulis> yes, i'm sure it wasn't there. i even created the .juju directory manually. then scp'd environments.yaml over
<tvansteenburgh> weird, what kind of vm?
<pmatulis> openstack instance
<tvansteenburgh> pmatulis: okay, i thought maybe it was lxc, but if it's not, i'm fresh out of ideas. no idea why that's happening
<lazyPower> pmatulis: aiui - the glance bucket that gets created keeps a sentinel file in it w/ environment state/data
<lazyPower> pmatulis: check your control-bin and see if a STATE file exists in there. if the machines are completely destroyed, you can safely delete that file and retry
<lazyPower> er, swift not glance.
<pmatulis> lazyPower: sorry, where is my control-bin?
<lazyPower> pmatulis: you define it in your openstack environments.yaml config
<lazyPower> eg:         control-bucket: lazypower-8286c32025b604f3815681ac40ac8551
<pmatulis> lazyPower: yes, there is 'control-bucket: <some UUID thing>'
<lazyPower> that's your swift control-bucket
<pmatulis> cool
<lazyPower> there's a sentinel file in there
<lazyPower> so long as it exists, juju will think its bootstrapped and do its best to attempt to recreate the JENV by communicating with the API through what's in that control file.
<pmatulis> lazyPower: can i just erase that line? or is it required?
<lazyPower> natefinch: right? ^ i know this is the behavior of AWS and i'm 90% certain this is the case with the OpenStack provider.
<lazyPower> pmatulis: you can probably just generate a new control-bucket, it really depends on what your ACL's are on your openstack instance
<pmatulis> lazyPower: lemme alter that UUID thingy. i just recreated a fresh instance
<natefinch> yeah pretty sure you can just comment out that line and it'll create one from scratch
<lazyPower> ta, i thought so.
<natefinch> needing that line in there is a holdover from before we really knew what we were doing ;)
<lazyPower> natefinch: we have a TODO to remove that right? make it non-required so we can support providers w/out object storage
<natefinch> lazyPower: we already do support providers without storage.  Those changes landed like 9 months ago
<lazyPower> i thought so, but i wasn't going to quibble with evidence that states otherwise ;P
<lazyPower> those particular providers are highly jenv centric right? if the jenv goes away you're basically in trouble.
<marcoceppi> beisner: that's just the output of `juju help upgrade-charm` honestly upgrade-charm should be documented just like deploy is
<natefinch> well, sort of. I mean, obviously you need something local that can figure out where to talk to.  I'm not sure what the minimum amount of info is that you need to connect
<lazyPower> ok, i may run a chaos lab over the weekend and find out for ya ;D
<lazyPower> thanks for the clarification nate o/
<natefinch> welcome
<pmatulis> lazyPower: you nailed it; i incremented the value/ID by one and that did the trick. muchos!
<lazyPower> anytime pmatulis :) happy to help
<lazyPower> pmatulis: anytime that fails to work, a juju destroy-environment --force , should however clean up that sentinel file
<lazyPower> if it doesn't, it's bug-worthy and should probably be filed.
<lazyPower> as --force is taking a chainsaw to a bread and butter party.
<pmatulis> yeah, since i expected the error i thought i could do that before deployer but evidently not
<lazyPower> pmatulis: what were you trying to do if i can be nosey? we may have tooling already to help
<pmatulis> lazyPower: set up openstack :)
<lazyPower> ah ok
<jogarret6204> experts:  not sure if I need to open a bug for an issue, after I fixed it.  looking for advice.  openstack dashboard x 3 with horizon ha-cluster.  2 of 3 nodes had the VIP in the corosync.conf file somehow.  So no quorum would come up
<jogarret6204> chmod the file, replace the VIP address with the LXC address, quorum comes up
<jogarret6204> second times I've seen this
<lazyPower> jogarret6204: certainly bugworthy
<lazyPower> jogarret6204: include steps to reproduce
<jogarret6204> lazyPower: that's the hard part.  dashboard down.  scratch head.  start restarting stuff.
<jogarret6204> don't know how it gets that way is my point
<jogarret6204> but I'll go open something to track
<lazyPower> jogarret6204: did you use a bundle to deploy, manually deploying, etc.
<lazyPower> if you have that info that should be a stellar start. it may be a path we dont have in CI
<jogarret6204> ok.  btw - where to open?  I've only used juju-core before
<lazyPower> i'm thinking horizon charm, let me fetch a link
<lazyPower> 1 moment
<lazyPower> jogarret6204:  https://bugs.launchpad.net/charms/+source/openstack-dashboard
<beisner> o/ jogarret6204, please do file a bug ... that'd be stellar indeed.  it's important that we know some details about the deployment:  juju version, ubuntu release, openstack version, the charms' charmstore version, and the configuration options used for each charm.  hopefully that is all expressed in a bundle that you can sanitize and attach.  juju status output is also really helpful.
<beisner> ps thanks jogarret6204, lazyPower
<jogarret6204> sure.  bug is opened, already got info request from Billy Olsen
<rick_h_> NOTICE: having an issue with prodstack that is causing jujucharms.com to be unresponsive. Also means 1.24.X juju deploys will probably not be successful atm. Working with webops to keep an eye on it and correct
#juju 2015-07-15
<rick_h_> NOTICE: jujucharms.com and the charmstore are back up. The storage in IS is working to rebalance/sync and might time out or be slow for a bit longer.
<kirkland> lazyPower: hi
<lazyPower> kirkland: o/ i understand you want to compose some charms
<lazyPower> kirkland: what's the core of what you would like to do? I have it on good authority you were planning on dockering in your charm
<kirkland> lazyPower: just one new charm, probably a subordinate charm
<kirkland> lazyPower: I've already created and published a docker image
<kirkland> lazyPower: it's a simple little cpu scavenging utility (like protein folding or seti), that searches for huge prime numbers
<lazyPower> ah so you're looking to squeeze in some CPU time on anything that's idle. gotchya
<lazyPower> that's a perfect candidate, and you can skip all the docker logic if you inherit from our current docker charm.
<lazyPower> all you'll need to do is set your config-changed hook logic to deliver the image and spin it up, as it will be called after the docker (or in this case, the base charm) hooks are executed
<kirkland> lazyPower: hmm
<kirkland> lazyPower: I would definitely like to skip any redundant docker logic
<kirkland> lazyPower: so, should I clone an existing charm, or try this new juju-compose thingy
<lazyPower> there's one caveat to the current docker charm that i'll state now since it looks like you're delivering something supremely light weight. We're using ansible in the current charm. so you'll be getting ansible in that payload
<lazyPower> is this acceptable?
<lazyPower> You'll benefit from getting upgrades of docker, and the newest features we land every time we cut a release if you build on top of the current docker charm
<lazyPower> some of that is landing the new docker plugin support for libnetwork, as well as AUFS is scheduled to land sometime late next week
<lazyPower> once you've written your extension, you'll only need to regenerate the logic that is important to you
<lazyPower> by proxy, your service will be able to connect to everything we've already charmed in the docker ecosystem - such as logspout (log shipping) and registrator (service discovery) to name a few
<lazyPower> If you'd like a better view into what you've got exposed to you, https://github.com/juju-solutions/tupperware
<kirkland> lazyPower: I don't care much one way or another about ansible
<kirkland> lazyPower: I like the sound of all of that
<lazyPower> It's a perfect candidate then, we'll make sure the dependencies work, all you have to be concerned with is delivering your container from the hub, and running it properly :)
<kirkland> lazyPower: it would be nice if I could just point to the docker image, and voila, i'm 99% done :-)
<kirkland> lazyPower: okay, let's do it...
<lazyPower> kirkland: you're already there actually...
<lazyPower> there's an action to launch images
<lazyPower> 1 moment, it just occurred to me that you're delivering a subordinate to eat spare cpu cycles
<lazyPower> docker is a primary charm, let me see if subordinate conversion is supported by composer
<kirkland> lazyPower: right
<kirkland> lazyPower: it could run as a primary or subordinate
<kirkland> lazyPower: I'd think it's more interesting as a subordinate
 * lazyPower nods
<kirkland> lazyPower: perhaps even if only to the ubuntu charm
<lazyPower> You'd have more flexibility in where you can stuff it, say maybe that one machine on the fringe of your network that only handles log shipping
<jcastro> sub is interesting the same way a boinc sub would be
<lazyPower> attach it there and let it search for primes
<jcastro> if the unit is idle, use the CPU to do science
<jcastro> if not, work for the primary service
<lazyPower> i'd rather not pigeonhole you into a primary. I'm fairly certain we can extend it. if not, we can always publish a "fork" of the docker charm as a subordinate and you simply switch which base your logic inherits from
<lazyPower> run the generator and voilà
<kirkland> lazyPower: sure
<kirkland> lazyPower: honestly, I'm not really worried about it getting serious use, as it needs ~7 days to do anything useful
<kirkland> lazyPower: it's more about the exercise of creating a docker container, and then very easily (I hope) creating a Juju charm and a Snappy snap :-)
<kirkland> lazyPower: if it's not easy, then something is broken that we need to fix
<lazyPower> kirkland: i've got interesting work on that front underway as we speak
<lazyPower> i'm building a RPI2 cluster, mixed with 2 lxd nodes, 2 docker nodes
<kirkland> lazyPower: fun
<lazyPower> lxd by default supports distributed containers, docker requires swarm - building services that work in concert
<kirkland> lazyPower: okay -- so now what do I do?
<kirkland> lazyPower: coolio
<kirkland> lazyPower: my docker image is here: https://registry.hub.docker.com/u/kirkland/mprime/
<lazyPower> kirkland: i'm assuming you have juju-compose pulled? our prototype for "inheriting" from an existing charm?
<lazyPower> https://github.com/bcsaller/juju-compose
<lazyPower> You'll also need a copy of the docker charm, obtainable via `charm get trusty/docker`
<kirkland> lazyPower: juju-compose is not executing for me
<kirkland> lazyPower: I cloned it, pip-installed it locally, but I'm hung up chasing dependencies
<lazyPower> ok, so you just want to springboard that image - *snap* be done with it
<lazyPower> got it
<lazyPower> start with juju deploy docker, i highly recommend you set latest=true, version=1.7.0  via `juju set docker latest=true version=1.7.0` once its registered via juju deploy
<kirkland> lazyPower: are you going to be particular about what version of charm-tools you want me to use?
<lazyPower> not at all
<kirkland> lazyPower: i just apt-got it from the 15.04 archive
<lazyPower> everything we will be doing is perfectly doable from vanilla juju
<lazyPower> last question that i can think of - are you going to be trying this on the local provider?
<kirkland> kirkland@x250:~/src/mprime/charm⟫ charm get trusty/docker
<kirkland> Error: trusty/docker not found in charm store.
<kirkland> lazyPower: I was probably going to use amazon by default, but I can do whatever
<lazyPower> no worries, we can skip fetching the local charm - you're not going to be doing layers with composer.
<kirkland> lazyPower: why couldn't I get the docker charm like that?
<lazyPower> I'm not sure, but i've taken a note to investigate.
<lazyPower> juju deploy cs:trusty/docker - will achieve the desired result.
<kirkland> lazyPower: hmm, before I deploy the generic docker charm, I want to add my docker pull image, though, right?
<lazyPower> You wont need to, no. the docker charm delivers the infrastructure required to run your container.
<lazyPower> will your service have any relations to any other services in the topology?
<kirkland> lazyPower: nope
<kirkland> lazyPower: so is there a juju set config for my image?
<lazyPower> juju run --unit docker/0 "docker run kirkland/mprime"
<kirkland> lazyPower: oh, hmm, juju run?  that sounds slightly hacky
<lazyPower> Once the latest MP lands, an action will be available to run, recycle, kill - running containers.
<kirkland> lazyPower: I would have hoped something more like "juju set image=kirkland/mprime"
<lazyPower> i apologize this hasn't been pushed into the store yet, it received some negative feedback on the last review so it took some polishing and is awaiting another set of eyes.
<kirkland> lazyPower: sure no worries, bud
<lazyPower> kirkland: the problem with juju set image= is you're not altering state of docker, you're altering state of something in docker
<marcoceppi> lazyPower: seems like this is better suited for an action?
<lazyPower> and we want to model that differently.
<lazyPower> marcoceppi: correct!
<kirkland> lazyPower: hmm, run/action doesn't feel right to me
<kirkland> doesn't feel like what I'm trying to accomplish
<lazyPower> kirkland: you can deliver your image with a subordinate in a few different methods. 1) via dockerfile (charm written) 2) via docker-compose (seems overkill for one service) 3) via a quick and dirty bash charm
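Option 3 above, the quick and dirty bash charm, could look roughly like this. Everything here is a hedged sketch: the charm name, hook contents, and relation name are illustrative, and the start hook assumes docker is already present on the principal (e.g. via the docker charm):

```shell
# Minimal bash subordinate charm skeleton that runs kirkland/mprime.
mkdir -p mprime/hooks

# Subordinate charms need at least one container-scoped relation;
# juju-info lets it attach to any principal (e.g. the ubuntu charm).
cat > mprime/metadata.yaml <<'EOF'
name: mprime
summary: CPU-scavenging prime search in a docker container
description: Runs the kirkland/mprime image on spare cycles.
subordinate: true
requires:
  host:
    interface: juju-info
    scope: container
EOF

cat > mprime/hooks/start <<'EOF'
#!/bin/bash
set -e
# Assumes the principal already provides docker.
docker pull kirkland/mprime
docker run -d --name mprime kirkland/mprime
EOF
chmod +x mprime/hooks/start
```

Deployed alongside the docker charm (or the ubuntu charm), adding the relation spins the container up on every principal unit.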
<kirkland> I mean, it's great as the quickest way to run a specific docker image
<lazyPower> but, if you scale the docker service, it inherently runs across *all* docker hosts you juju add-unit to
<thumper> o/
<lazyPower> if that side-effecty behavior is troublesome for this service, it's better suited as an action in today's model.
<kirkland> lazyPower: so as I scale out the docker charm, that specified action will get run everywhere?
<lazyPower> negative, if you run the action, it stays isolated to the target unit you executed it against
<thumper> lazyPower: I suppose I should get around to submitting my celery worker branch for django since I'll be putting it in production soon
<kirkland> lazyPower: okay, definitely not what I want
<lazyPower> the "scale everywhere" is in relation to subordinate based delivery of docker images, against a cluster that comprises a service.
<lazyPower> in this case, the docker charm.
<kirkland> lazyPower: okay, right, so how can we get this over to the subordinate charm case?
<lazyPower> what's your preferred method to write a charm? Are you a prototype-in-bash kinda guy?
<lazyPower> charm create -t bash mprime
<kirkland> lazyPower: I'm a "cp -a transcode" and then sed from there :-)
<lazyPower> kirkland: ok, a few changes then in the metadata should get you started
<kirkland> lazyPower: but yes, this is a bash charm, not python or otherwise
<lazyPower> in metadata, you'll need to change
<lazyPower> subordinate: true
<kirkland> lazyPower: charm-create: error: unrecognized arguments: -t
<lazyPower> marcoceppi: we have something strange going on in 15.04 with charm-tools
<lazyPower> cannot fetch charms, generators aren't recognizing templates
<lazyPower> i'll file a bug in a moment
<kirkland> ii  charm-tools                1.0.0-0ubuntu2     all                Tools for maintaining Juju charms
<lazyPower> that explains it.
<lazyPower> 1.5.1 is latest, and available from the PPA
<kirkland> http://pad.lv/u/charm-tools
<marcoceppi> yes
<marcoceppi> get from ppa
<kirkland> dude, you guys haven't uploaded a new charm-tools in Ubuntu in 4 releases, 2+ years
<kirkland> this is one thing I continue to despise about all things juju -- everything has to be done from a ppa, doesn't play nice with the Ubuntu archive :-/ :-/ :-/
<marcoceppi> kirkland: you're the snappy guy ;)
<marcoceppi> we tried to last release
<marcoceppi> but it didn't make it in time
<marcoceppi> I'll resubmit it for wily
<kirkland> <kirkland> lazyPower: are you going to be particular about what version of charm-tools you want me to use?
<kirkland> <lazyPower> not at all
 * kirkland reminds lazyPower of that exchange ^
<kirkland> marcoceppi: goto https://launchpad.net/ubuntu/+source/charm-tools
<lazyPower> kirkland: my mistake.
<kirkland> marcoceppi: the last time charm-tools was uploaded to the ubuntu archive was 2013-10-18
<marcoceppi> kirkland: I know
<marcoceppi> as the maintainer, I'm aware
<marcoceppi> we tried to get a more recent version uploaded last cycle
<marcoceppi> it didn't make it
<kirkland> so now...
<kirkland> which ppa do I use?
<marcoceppi> ppa:juju/stable
 * marcoceppi eod
<kirkland> thx
<kirkland> lazyPower: okay, I have a new charm template
<kirkland> lazyPower: let me fill that out
<lazyPower> kirkland: first thing you'll need to do is edit the metadata
<lazyPower> remember, with subordinates you need to require a host relationship, this is special
<kirkland> lazyPower: quick docker question...do I need to do something to create the latest tag?
<kirkland> 2015/07/15 03:20:02 Tag latest not found in repository kirkland/mprime
<lazyPower> the current model we have across all the subordinates in our ecosystem to date is like this: http://paste.ubuntu.com/11880797/
<lazyPower> you gave your containers specific version tags
<lazyPower> https://registry.hub.docker.com/u/kirkland/mprime/tags/manage/
<lazyPower> if you want a latest tag, you either omit the tag (docker defaults it to latest) or give it the latest tag explicitly
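The tag-defaulting behavior described above can be sketched in a few lines (a naive illustration, not docker's actual reference parser; it ignores registry host:port prefixes):

```python
def split_image_ref(ref):
    """Split a 'repo[:tag]' image reference into (repo, tag).

    Naive sketch: when no tag is given, docker resolves the reference
    to the 'latest' tag, which is why pulling 'kirkland/mprime' fails
    unless a 'latest' tag exists in the repository.
    """
    repo, _, tag = ref.partition(":")
    return repo, tag or "latest"
```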
<kirkland> ah
<kirkland> thx
<kirkland> lazyPower: what's the requirements on icon.svg?
<lazyPower> kirkland: https://jujucharms.com/docs/stable/authors-charm-icon
<kirkland> lazyPower: hmm, tag suggestion?
<lazyPower> misc - as I don't see it applying to any of the other approved categories.
<kirkland> lazyPower: ack
<kirkland> lazyPower: maybe analytics
<kirkland> lazyPower: or (anti)social
<kirkland> or performance :-)
<kirkland> lazyPower: okay, help me with the relations...
<lazyPower> kirkland: the only relation you will need to implement starting, is the host relationship.
<lazyPower> it goes under requires
<lazyPower> and use the pastebin link above as your guide ^
<kirkland> lazyPower: so no provides/requires/peers, right?
<lazyPower> unless it needs to exchange data among each unit as it scales, no peering
<lazyPower> does it provide anything? Can it relate to anything other than the host it is attaching to?
<kirkland> lazyPower: none -- they all phone home to mersenne.org
<kirkland> lazyPower: nope
<lazyPower> then no other relations required. just the one to the host that makes it a subordinate.
<kirkland> lazyPower: help me with that pastebin, then
<kirkland> lazyPower: is it literally copy-n-paste?
<lazyPower> copy and paste it under requires.
<kirkland> lazyPower: i'm missing something here
<kirkland> lazyPower: no peers, I agree with that.
<kirkland> lazyPower: do I have a provides?
<lazyPower> You're not exposing anything to any other charms
<lazyPower> from what has been described to me, it is only a consumer of resources.
<kirkland> lazyPower: right, okay
<lazyPower> so it has a single relationship, which is the host relationship.
<kirkland> lazyPower: but, I do put:
<kirkland> requires:
<kirkland>   docker-host:
<kirkland>     interface: juju-info
<kirkland>     scope: container
<lazyPower> thats it.
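Pulled together, the subordinate's metadata.yaml would look roughly like this (name, summary, and maintainer are hypothetical placeholders; the subordinate flag and requires stanza are the pieces discussed above):

```yaml
name: mprime
summary: Run the mprime workload inside an existing docker service
maintainer: Your Name <you@example.com>
subordinate: true
requires:
  docker-host:
    interface: juju-info
    scope: container
```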
<lazyPower> kirkland: you may want to take a moment to review the subordinate docs - https://jujucharms.com/docs/stable/authors-subordinate-services
<lazyPower> this will help you understand the scope of that relationship you just implemented.
<lazyPower> kirkland: its nearing midnight local time. I'm going to be checking out in about 5 minutes.
<lazyPower> do you have any last minute questions i can help with before i check out?
<kirkland> lazyPower: ack, thanks
<kirkland> lazyPower: nah, I'll figure out the rest
<kirkland> lazyPower:  you can review tomorrow
<kirkland> lazyPower: thanks.
<lazyPower> kirkland: just to be clear, charms are reviewed on a first-come, first-served basis - you can find the current working queue at http://review.juju.solutions. My review time is slotted on fridays unless you have an emergent fix/patch release.
<kirkland> lazyPower: sure, that's fine
<lazyPower> best of luck on your adventure o/ if you run into any problems feel free to ping tomorrow or hit up the mailing list juju@lists.ubuntu.com
<caribou> Hi, (on Wily) do I need to do anything special to have "$ juju charm create" add the charm-helpers ?
<telegrapher> Hello everyone!! I have an easy question that I can't find in the docs :)
<telegrapher> I'm trying to use juju with a local Openstack cluster without swift or nova-objectstore
<telegrapher> I found this answer https://askubuntu.com/questions/173342/is-swift-object-storage-a-requirement , but since juju 1.21 it seems that those requirements are not needed anymore
<telegrapher> I'm trying with 1.24, but it doesn't seem to work, maybe the openstack provider still needs the object store?
<caribou> Is config.yaml mandatory in a charm ?
<marcoceppi> caribou: what version of charm-helpers do you have
<marcoceppi> caribou: also config.yaml is not mandatory. The only mandatory file is the metadata.yaml file
<caribou> marcoceppi: lemme check
<caribou> marcoceppi: well, the charm-tools I have is 1.5.1
<caribou> marcoceppi: I ended up fixing it in my charm's makefile
<caribou> marcoceppi: but "charm create" doesn't add the ./hooks/charmhelpers directory
<marcoceppi> caribou: cool, that's the latest version
<caribou> marcoceppi: regarding the config.yaml, I may have uncovered a bug
<marcoceppi> caribou: did you specify the python-basic template? charm create -t python-basic
<marcoceppi> caribou: oh?
<caribou> marcoceppi: doing a final verification, but if the config.yaml file is missing, my unit_tests never return unless I do a <Ctrl>-C
<caribou> marcoceppi: I didn't specify any template
<marcoceppi> caribou: so by default you get the services framework template, which should just attempt to install charmhelpers with pip iirc
<marcoceppi> instead of embedding charm helpers
<caribou> marcoceppi: that's what I get after $juju charm create : http://paste.ubuntu.com/11882195/
<caribou> marcoceppi: trusty with ppa:juju/stable
<marcoceppi> caribou: right, so if you look at the install hook, it calls a method in setup.py which will pip install charmhelpers when you deploy the charm
<caribou> marcoceppi: ah ok, didn't know that
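The deploy-time install described above can be sketched in a few lines (helper name hypothetical; the actual template drives this from a bootstrap function in setup.py):

```python
import subprocess
import sys

def pip_install_cmd(pkg):
    """Build the pip invocation run at deploy time (hypothetical helper
    mirroring what the default template does instead of embedding
    charm-helpers in the charm tree)."""
    return [sys.executable, "-m", "pip", "install", pkg]

# At deploy time the install hook would run something like:
# subprocess.check_call(pip_install_cmd("charmhelpers"))
```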
<marcoceppi> caribou: unless you have a restrictive network, we've been trying to move charms away from embedding charm helpers. I'm about to kick off a discussion this week which will highlight how we hope to achieve this so that people with restrictive networks and those without that requirement can still have working charms
<caribou> marcoceppi: but that's only good for a deployed charm.
<marcoceppi> caribou: well, that's what charms typically do
<caribou> marcoceppi: when writing unit_tests, you would expect to have them around
<marcoceppi> caribou: when writing the unit tests, you should mock those methods
<caribou> marcoceppi: yes, that's what I did
<marcoceppi> charm-tools has its own unit tests. It's a python library; you typically don't have those installed in your project's tree either
<caribou> marcoceppi: I also have a sync: anchor in my Makefile to do that if needed
<marcoceppi> s/charm-tools/charm-helpers
<caribou> marcoceppi: well, I made the mistake of starting my charm off an existing one, carrying over old helpers
<caribou> marcoceppi: that's all fixed now.
<marcoceppi> caribou: ah, I see
<caribou> marcoceppi: btw, I'll open the bug about the missing config.yaml. I'm able to reproduce it at will
<marcoceppi> caribou: yeah, I think that's just an issue with that unit test template
<caribou> marcoceppi: https://bugs.launchpad.net/charm-helpers/+bug/1474824
<mup> Bug #1474824: Charm's unit_test never returns if config.yaml is missing <Charm Helpers:New> <https://launchpad.net/bugs/1474824>
<caribou> marcoceppi: not sure, as I did not use the template
<marcoceppi> caribou: interesting - well, make test isn't directly tied to charm-tools, it's whatever the unit_tests are for that charm
<marcoceppi> i'll take a look though
<caribou> marcoceppi: for my charm, test = amulet, unit_test = python unittest
<marcoceppi> caribou: I'm guessing I have to install nosetest and coverage?
<caribou> marcoceppi: coverage is not needed
<caribou> marcoceppi: hence the error in the report
<marcoceppi> caribou: this isn't a bug with charm-tools, it's just something in the test utilities in your unit_tests directory
<marcoceppi> check line 13 of unit_tests/test_utils.py
<marcoceppi> CharmTestCase instantiates it on line 54
<marcoceppi> if you comment that out, it should work without a config.yaml file
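Rather than commenting the line out, the test helper could be made defensive: guard the config read so a missing config.yaml yields empty options instead of a hang. A stdlib-only sketch (function name hypothetical; a real helper would yaml.safe_load the file's options section):

```python
import os

def default_config(path="config.yaml"):
    """Return an empty option map when config.yaml is absent, instead
    of letting a unit-test helper block on it."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        raw = f.read()
    # A real helper would parse the YAML here; returning the raw text
    # keeps this sketch free of third-party imports.
    return {"_raw": raw}
```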
<caribou> marcoceppi: that's what happens when one blindly yanks code out of other people's projects :-/
<caribou> marcoceppi: sorry for the noise
<marcoceppi> caribou: no worries! Almost all charms have a config.yaml, which is why that test_util (not sure who wrote it, but I've seen it from time to time) just assumes it'll be there
<marcoceppi> from a charm perspective, everything is optional except for the metadata.yaml file
<caribou> marcoceppi: well, my charm will eventually need it, so having a placeholder for the time being is fine
<caribou> marcoceppi: I'll document & close the bug
<caribou> marcoceppi: thanks for your help!
<marcoceppi> caribou: cheers, no problem at all!
<tvansteenburgh> cory_fu: you around?
<jamespage> gnuoy, hey - could you take a look at https://code.launchpad.net/~james-page/charm-helpers/drop-ensure-packages
<jamespage> right now the legacy mode flag on nova-compute does not work that well because of that
<gnuoy> sure, I'll take a look
<gnuoy> jamespage, isn't this going to break cases where nova-compute has been deployed without the neutron-ovs subordinate?
<jamespage> gnuoy, there is an associated change to nova-compute
<gnuoy> jamespage, kk, approved
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/nova-compute/neutron-plugin-sub-config/+merge/264854
<jamespage> thats the mp for that bit
<coreycb> jamespage, gnuoy: could one of you review this?  https://code.launchpad.net/~corey.bryant/charm-helpers/install-warning/+merge/264340
<jamespage> coreycb, got it
<gnuoy> coreycb, I'm sorry that hasn't been done, it looks good to me. let me land
<gnuoy> oh, ok
<coreycb> jamespage, gnuoy: appreciate it thanks!
<beisner> hi jamespage, gnuoy - we had talked a while back about moving to pxc instead of mysql in the next/default test bundles, as it's generally what is actually consumed.  shall i make that switch in o-c-t and mojo specs?
<jamespage> beisner, +1
<gnuoy> I think so, +1
<gnuoy> (and thanks)
<beisner> ok thanks & yw!
<thedac> jamespage: when you have time, it would be good to get some more direction on the workload status work:
<thedac> https://code.launchpad.net/~thedac/charms/trusty/neutron-api/status/+merge/264315
<thedac> https://code.launchpad.net/~thedac/charm-helpers/openstack-workload-status/+merge/264353
<thedac> I have another branch I am working on now that is a bit more destructive to get the missing context data available to set_context_status() but that may be getting too far into the weeds for our first go round.
<thedac> Also do we like 'blocked' or 'waiting' as a status for incomplete contexts?
<marcoceppi> thedac: blocked is when the user needs to take action; waiting is the charm waiting for something to happen which may be queued, but no additional interaction from the user is required
<thedac> marcoceppi: excellent description. in this case waiting then
 * marcoceppi will draft a status page for the docs
<thedac> that would be great
<pmatulis> marcoceppi: not already here? https://jujucharms.com/docs/stable/reference-status
<thedac> pmatulis: that is handy
<marcoceppi> pmatulis thedac: that's just the output of `juju help-tool status-set`
<marcoceppi> which doesn't quite describe the states
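The blocked/waiting distinction above maps directly onto the status a charm should set; a tiny sketch of the decision (function and argument names hypothetical, not the charm-helpers API):

```python
def choose_status(contexts_complete, user_action_needed):
    """'blocked': the operator must intervene (e.g. missing config).
    'waiting': the charm is waiting on something queued elsewhere and
    needs no user interaction. 'active' once contexts are complete."""
    if contexts_complete:
        return "active"
    return "blocked" if user_action_needed else "waiting"
```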
<marcoceppi> hey stub let me know when you're up and working, I've got a few cassandra charm questions
<stub> marcoceppi: Last chance before bed time
<marcoceppi> stub: it's going to be a long, long, long time
<marcoceppi> but tldr, cassandra's java startup is core dumping on GCE
<marcoceppi> and we don't know why
<stub> marcoceppi: The only thing I'm aware of that we aren't doing that we should be doing is installing this JNA thing.
<stub> marcoceppi: Oracle JRE or OpenJDK? Both? But I'm no expert, and would need to turn to one to debug something like that.
<marcoceppi> stub: it's just OpenJDK atm
<marcoceppi> should we try Oracle JRE?
<marcoceppi> for whatever reason it works for local, amazon, and I'm about to test azure
<stub> marcoceppi: I would try Oracle JRE, yes. And if things still fail, try to manually install that JNA thingy (which is a suspect for a core dump if my understanding of what it is is correct)
<marcoceppi> stub: I realize it's late (early?!) your time, so we'll try those and circle back the next time you're around
<stub> marcoceppi: Yup. Good luck. Writing that charm did not improve my opinion of Java much, so I doubt I could be much real help :)
<cory_fu> stub: Can you explain http://bazaar.launchpad.net/~charm-helpers/charm-helpers/devel/revision/158.2.37 to me?  Specifically, why are we using old (precise) library versions instead of the newest from pypi that will work?
<stub> charm-helpers needs to run on precise boxes, with precise dependencies installed. We need to ensure charm-helpers works with these old dependencies.
<stub> If we didn't do this, people would write code using the newest APIs and charms on precise would start failing.
<stub> (which actually happened, as there were a heap of features missing in precise's six module that meant charm-helpers could not even be imported)
<cory_fu> Unless, of course, the Python dependencies were install from pypi.  But I guess we can't assume that.
<stub> Not that I care, since I'm only dealing with trusty
<cory_fu> Hrm.  It just seems backwards to be locked into really old library versions like that.
<stub> cory_fu: Right. egress rules and all that.
<stub> cory_fu: Yes. It is a pain in the bum.
<stub> cory_fu: But if it is a problem, it doesn't have to be all or nothing. It might mean some code paths stop working under precise unless you install more modern dependencies, but people can deal with that.
<marcoceppi> well, hopefully with the new method of deploying charm-helpers, and dropping a lot of the contrib code
<marcoceppi> we can get rid of most of the dependencies
<stub> We probably want to drop precise entirely when Wily is released. That many years of skew, we could hit the opposite problem.
<stub> It's not like people patching precise charms actually want the latest charm-helpers installed.
<beisner> keep in mind that those pesky openstack charms ;-)  have a single charm which is expected to support all currently-supported ubuntu releases.  ie.  the trusty/keystone charm is expected to work for precise through vivid at the moment, with dev focus on wily.
<beisner> and in so doing, certain precise-icehouse charm fixes trickle back through charmhelpers
<marcoceppi> beisner: yes, but hopefully the openstack charm-helpers are underway of being removed from contrib
<marcoceppi> so they can do whatever they want/need, and the new slimmer charm-helpers will have little to no external dependencies
 * marcoceppi looks longingly into the distance
<marcoceppi> one day
<stub> doctor, doctor, it hurts whenever I do this...
<beisner> marcoceppi, i know it's been discussed, i'm not sure it's underway ;-)
<beisner> lol
 * beisner completes interjection.  as you were as you were.
 * stub wanders bedwards
<kirkland> lazyPower: howdy
<kirkland> lazyPower: so I got my charm deployed and working last night, a little after midnight
<kirkland> lazyPower: I went a slightly different direction, so I hope that's still kosher
<lazyPower> kirkland: That's the power of juju: there's more than one way to do something, and as long as it fits your objectives, you've always got your namespace to warehouse any artifacts for sharing.
<lazyPower> glad to hear you found success with your mprime charm. that's awesome
<kirkland> lazyPower: curious, I finally got the code to your docker charm
<kirkland> lazyPower: and I'm trying to figure out when/where/how you install docker itself
<kirkland> lazyPower: are you pulling it from the archive?  a ppa?  or, god help us, piping a shell script to sh as root?
<lazyPower> kirkland: playbooks/latest-docker  or playbooks/universe-docker
<lazyPower> it defaults to installing from the archive; when you set latest=true and specify the version, it pulls from the docker ppa
<lazyPower> that's covered in the charm docsite
<kirkland> lazyPower: oh, hmm, wow, this newfangled way of writing charms has my head spinning
<lazyPower> kirkland: it's an Ansible-based charm. Juju has a hard requirement of hook files, so we take the concept of a hook and use python glue code to call an ansible playbook - the tags in the playbook are what scope the actions to the context of the hook that's running.
<kirkland> lazyPower: okay, so point me to the docs on how to submit this charm for review now
<lazyPower> kirkland: https://jujucharms.com/docs/stable/authors-charm-store#submitting
<kirkland> lazyPower: ta
#juju 2015-07-16
<tvansteenburgh> stub are you around?
<stub> tvansteenburgh: morning
<reith> Somehow the juju master advertises the wrong IP address for us, do you have any idea how to fix it?
<reith> It worked for several months but now it sends `apiaddress` as a private ip address (10.0.*)
<jamespage> thedac, feedback on mps
<bloodearnest> hazmat, tvansteenburgh, dpb1: hello! Could one of you juju-deployers folks please merge this already-reviewed branch for deployer?
<bloodearnest> https://code.launchpad.net/~bloodearnest/juju-deployer/annotate-branches/+merge/239270
<hazmat> bloodearnest: looking
<hazmat> bloodearnest: getting an error running the tests on import annotation test
<hazmat> test_data/wiki-branch.yaml missing
<tvansteenburgh> anyone know how to work around this problem?
<tvansteenburgh> $ juju set postgresql install_keys="`cat ACCC4CF8.asc`"
<tvansteenburgh> 2015-07-16 04:19:50 INFO install yaml.scanner.ScannerError: mapping values are not allowed here
<tvansteenburgh> 2015-07-16 04:19:50 INFO install   in "<unicode string>", line 2, column 8:
<tvansteenburgh> 2015-07-16 04:19:50 INFO install     Version: GnuPG v1
<tvansteenburgh> i get the same error if i paste the key contents into juju-gui
<tvansteenburgh> full traceback: http://pastebin.ubuntu.com/11887422/
<tvansteenburgh> i'm just trying to figure out how to escape the colon
 * tvansteenburgh tries \x3A ...
<stub> tvansteenburgh: That property is yaml encoded as a string
<stub> tvansteenburgh: So "echo '[' "'"`cat ACC.asc` "'"']'" or similar bash monstrosity that actually works
 * stub wishes juju cli didn't make things difficult by trying to keep things simple
<tvansteenburgh> stub: thanks i'll try that. i'm surprised it breaks in the same way when using the gui
<stub> tvansteenburgh: The charm is just passing the string to charm-helpers, which spits out that traceback since it isn't valid yaml.
<stub> It could do with better validation and error messages, until juju gives us actual structured data here and we can stop serializing things ourselves.
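The ScannerError above happens because the armored key's "Version: GnuPG v1" line looks like a YAML mapping once pasted into the string-typed option. Since JSON is a subset of YAML, one stdlib-only workaround is to JSON-encode the value before handing it to juju set (a sketch of the idea, not the charm's documented interface; the key body is a placeholder):

```python
import json

armored_key = (
    "-----BEGIN PGP PUBLIC KEY BLOCK-----\n"
    "Version: GnuPG v1\n"
    "\n"
    "mI0EUEXAMPLEKEYBODY\n"
    "-----END PGP PUBLIC KEY BLOCK-----\n"
)

# json.dumps yields a double-quoted scalar that any YAML parser
# accepts, so the colon in "Version:" can no longer start a mapping.
encoded = json.dumps([armored_key])
# shell usage (illustrative): juju set postgresql install_keys="$encoded"
```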
<stub> Whoops, I'm late
 * stub wanders off
<tvansteenburgh> o/
<bloodearnest> hazmat, hmm, ok looking
<bloodearnest> hazmat, added and pushed
<bloodearnest> thanks
<coreycb> jamespage, gnuoy: could one of you take a look?  it fixes a master branch deploy from source failure that came about yesterday.  https://code.launchpad.net/~corey.bryant/charm-helpers/upgrade-pip/+merge/264886
<jamey-uk> Can anyone help me with the error when deploying my Rails charm? I can't recreate the problem on Heroku, Dokku or locally: https://gist.github.com/anonymous/be6132615872fde0848a. I also have tried recreating the error on the VM itself with no luck.
<beisner> gnuoy, jamespage - fyi bug 1475320
<mup> Bug #1475320: rmq next charm:  pkg install fails when reverse dns fails - Error: unable to connect to node 'rabbit@10-245-173-55': nodedown <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
<asanjar> cory_fu: kwmonroe bigdata review q
<cory_fu> asanjar: kwmonroe has a meeting.  I thought you did, as well
<cory_fu> You want to shift it 30 min?
<asanjar> cory_fu: sure
<kwmonroe> +1, thanks
<jamespage> gnuoy, beisner: https://code.launchpad.net/~james-page/charm-helpers/liberty-versioning/+merge/264998
<jamespage> beisner, I see the problem
<jamespage> my fault
<jamespage> refactoring missed something
<jamespage> beisner, can you try this branch - lp:~james-page/charms/trusty/rabbitmq-server/fixup-configure-nodename
<jamespage> beisner, gnuoy: https://code.launchpad.net/~james-page/charms/trusty/rabbitmq-server/fixup-configure-nodename/+merge/265001
<thedac> jamespage: thanks for the feedback. That all makes sense.
<thedac> gnuoy: any chance you could comment on jjo's nrpe bug? https://bugs.launchpad.net/charms/+source/nrpe/+bug/1473205
<mup> Bug #1473205: nrpe charm creates checks with _sub postfix, breaking compatibility with nrpe-external-master <canonical-bootstack> <nrpe (Juju Charms Collection):New> <https://launchpad.net/bugs/1473205>
<gnuoy> thedac, sure, otp atm
<thedac> take your time
<beisner> jamespage, thanks - sure will do
<jamespage> thedac, np
<asanjar> lazyPower: hi there
<lazyPower> asanjar: greetings
<asanjar> lazyPower: I am reviewing your AWESOME etcs charm, but bundletester fails with http://paste.ubuntu.com/11888125/. Am I missing anything?
<lazyPower> asanjar: hmm looks ilke one of the etcd services didn't start in the cluster
<asanjar> lazyPower: i am using "local"
<asanjar> lazyPower: going to try on AWS
<asanjar> lazyPower: according to this I got all services http://paste.ubuntu.com/11888149/
<asanjar> lazyPower: but idle
<lazyPower> asanjar: what I imagine happened is it hit a race condition we haven't seen in our testing/deployments yet
<marcoceppi> hey stub, I noticed that cassandra only runs on the public-address, is that by design?
<marcoceppi> it's problematic because it appears that we're hitting network latency since cassandra has to be routed out of the cloud then back in again
<beisner> jamespage, no love with that rmq branch.  fails @ the same place.  it looks like the main diff here is that rmq is in lxc, and the ptr record @ maas doesn't match the juju unit's hostname:  http://paste.ubuntu.com/11888267/
<beisner> (diff between this test and our os-on-os testing that is)
<xoritor> hello
<xoritor> does juju have support for scaleio?  i know it has support for ceph
<xoritor> ceph is fast enough, so scaleio is not an issue
<beisner> jamespage, added to bug
 * xoritor is a juju newb
<xoritor> i am looking here https://jujucharms.com/store
<xoritor> is that the right place to look for supported things?
 * xoritor is doing a lot of reading
<lathiat> xoritor: Charm store is the right place to look, though, there may be charms not in the store you can find elsewhere (github or something)
<xoritor> lathiat, thanks!
<xoritor> lathiat, or it may be named something that i am not searching for
<lathiat> xoritor: also may simply not exist
<xoritor> lol
<xoritor> true
<xoritor> ceph is what i have been using and is fast enough for me
<aisrael> lazyPower: That drone-ci stuff is really sweet. Nicely done!
<aisrael> http://blog.dasroot.net/2015-continuous-integration-with-juju-and-drone-ci.html
<lazyPower> aisrael:  thanks! :)
<xoritor> so when using juju if i want all machines to run all things i can manually place then all on all hosts?
<xoritor> s/then/them/
<xoritor> in the web ui
<lazyPower> xoritor: using machine view, you can do unit placement, correct.
<xoritor> i need the ha-proxy to make them H/A though correct
<xoritor> lazyPower, that seems too easy
<lazyPower> Juju is good at that. Making complex things deceptively simple
<ddellav> :D
<xoritor> if that REALLY is that simple you may have won me over
<marcoceppi> rick_h_: is there still a delay in the charmstore
<rick_h_> marcoceppi: not that I'm aware of.
<xoritor> one more thing... can juju+maas run on one of the openstack nodes?
<rick_h_> marcoceppi: it took a few hours for things to sync, but that was all.
<xoritor> or does it need to be outside of the stack
<marcoceppi> I pushed a charm to lp:~marcoceppi/charms/trusty/cassandra-stress/trunk a bit a go, but don't see it
<marcoceppi> rick_h_: is there anything that will reject a charm from ingestion?
<rick_h_> marcoceppi: looking
<rick_h_> marcoceppi: proof (we still have to keep charmworld/charmstore in sync so it first has to be in charmworld before the charmstore will load it)
<xoritor> ok i also see you need a minimal of 7 machines... will it work with 5?
<marcoceppi> rick_h_: will Warning stop from ingestion of personal namespace?
<rick_h_> marcoceppi: no
<xoritor> we are a really small shop
<marcoceppi> xoritor: so, maas can be put in a virtual machine on one of the nodes, it's not really recommended however
<marcoceppi> xoritor: juju however runs from your laptop, it's a client side tool
<xoritor> hmm
<xoritor> could i put it in a VM?
<marcoceppi> xoritor: could what? juju or maas?
<xoritor> marcoceppi, juju
<marcoceppi> xoritor: sure, again, it's just a command line tool for managing a deployed juju environment. It could be installed on windows, mac, or linux, in a vm or a docker container, doesn't matter
<xoritor> cool... multiple instances?
<xoritor> ie... one in a VM and one on my laptop?   without detriment?
<aisrael> xoritor: You could, but it's not necessary. You can control multiple juju deployments from your laptop
<marcoceppi> xoritor: sure, again, it's just the client side tool. Juju then uses MAAS, or Amazon, or GCE, or Azure, or OpenStack, or whatever other cloud to actually provision and deploy machines
<xoritor> AWESOME!!!
<lazyPower> xoritor: Juju is a client/server model - each environment gets a single state server to drive the deployment/orchestration/statemanagement of the environment.
<lazyPower> marcoceppi: dont forget about the state server :)
<marcoceppi> lazyPower: I'm not, but you don't install that
<marcoceppi> juju installs that
<lazyPower> well,
<lazyPower> ok you're right
<lazyPower> in that context
<xoritor> where is the "state server" located?
<rick_h_> marcoceppi: getting logs pulled from IS and looking into it. Sorry for the delay. I'm not aware of any issues and lazyPower had something ingested yesterday so not sure what it doesn't like about this one
<lazyPower> xoritor: it occupies a single node in your environment that you're managing with juju
<xoritor> lazyPower, you mean runs on one node beside other things or takes an entire node
<marcoceppi> xoritor: you, from your computer, run `juju bootstrap`, this connects to whatever machine provisioning tool you want to use (gce, aws, maas, openstack, etc) and creates a node on that service. Once the environment is bootstrapped you can start deploying and managing that environment. Juju can manage and control multiple environments at a time and each environment can have its own Juju GUI so you don't even have to use the command line
<marcoceppi> xoritor: while the bootstrap node does take an entire instance, you can put other things on it
<xoritor> ok
<xoritor> whew
<xoritor> just making sure
<marcoceppi> xoritor: I've managed to deploy openstack on to just two physical nodes, so it's definitely possible to model that level of complexity but at a very small physical footprint
<marcoceppi> well, technically three machines if you include MAAS
<xoritor> can i use just 5 physical machines if i have juju+maas
<xoritor> if they all have everything with h/a
<marcoceppi> xoritor: I'm not sure about the minimum requirements for OpenStack HA, but I think 5 nodes would be enough
<marcoceppi> though, it would be pretty tight ;)
<xoritor> they are all pretty beefy (12 cores, 128 GB DDR4, 4x1 Gbit, 2x40 Gbit, 1x256 GB SSD, 1x1 TB SSD, 1x2 TB PCIe)
<xoritor> i can get more ram if we need it
<marcoceppi> xoritor: tight in the amount of services you'd have to spread out, though those are some nice machines
<xoritor> yea the new haswell xeons are pretty nice i have to say
<xoritor> we are small but we are not afraid of buying what we need to get the job done
<xoritor> thats why i was asking if scaleio was supported
<xoritor> it is supposed to be much much faster than ceph with ssd/pcie
<xoritor> http://virtualgeek.typepad.com/virtual_geek/2015/05/emc-day-3-scaleio-unleashed-for-the-world.html
<xoritor> that says 24 times faster with SSD only
<marcoceppi> xoritor: so, scaleio can be supported, the charms are a plugin architecture and there's a pretty easy way to build a charm for cinder/storage
<marcoceppi> we have a few charms for other EMC products; while it's not scaleio, they could be modified to add support for it
<xoritor> hmm
<xoritor> i have my reservations on using it anyway...
<xoritor> i dont fully trust emc
<xoritor> ;-)
<marcoceppi> rick_h_: it showed up fwiw
<rick_h_> marcoceppi: ah that's good. /me checks when you pushed it up
<beisner> jamespage, gnuoy - i've changed the stance of bug 1475320 to be:   The rabbitmq-server next charm works on bare metal but not in a container; the stable charm works fine on both.
<mup> Bug #1475320: rmq next charm:  pkg install fails when deployed to lxc <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
<beisner> also added a reproducer, and that enviro is still up @ dellstack for a bit while i collect logs.
<rick_h_> NOTICE: jujucharms.com is having a webui outage due to a failed redis. Charm deploys should work as normal and the API is available.
<xoritor> aaah redis
<xoritor> how i love/hate thee
<rick_h_> NOTICE: jujucharms.com webui is back up
<xoritor> can juju and maas be run in docker containers?
<lazyPower> xoritor: the juju client can
<lazyPower> xoritor: i haven't tested MAAS in a docker container, but in theory you should be able to isolate the applications into containers. I know it can be driven from a LXC container - as that has been tested to drive VMAAS. set up your MAAS region/cluster controller in lxc, and drive KVM virtual machines as the nodes.
<marcoceppi> lazyPower: maas needs at least a KVM, as it needs - or works best - with two nics
<lazyPower> marcoceppi: you can  create two virtual nics in a container and make it work.
<xoritor> i have found a few docker files to build maas server(s)
<xoritor> interesting idea
<xoritor> i am trying to figure out the easiest approach to getting maas+juju up and running with my limited hardware
<xoritor> i will run everything everywhere if i need to that does not bother me ;-)
<xoritor> just need to get things up and running
<xoritor> i also cant spare a machine or three for just doing maas
<xoritor> sorry for bringing that up here not much response over in #maas when i asked
<jcastro> well, if you don't have a ton of hardware then you don't really need maas for anything right?
<xoritor> i would like to use it for the initial setup and the "in case i get hit by a bus" scenario
<xoritor> i like to be a bit pro active for things like that
<xoritor> heck its the whole reason i am looking into this in the first place
<xoritor> if it were just me i would hand roll a custom solution that fit my needs and worked exactly how i wanted it to work as minimal as i could make it
<xoritor> ;-)
<xoritor> but thankfully i am not immortal
<marcoceppi> xoritor: so MAAS is really geared for if you have 10 or more machines, though that's not to say you can't use it for your use case (I have 4 machines total at home, 3 I use to deploy onto and 1 for MAAS)
<marcoceppi> but MAAS has some pretty specific requirements, the first being that it really /really/ will want two nics
<marcoceppi> but you don't need a giant beefy machine to run maas, at home I use an old optiplex, and regularly people have used $300 little Intel NUCs
<xoritor> i have an 8 core system i can use a VM for and give it 16GB ram 200 GB SSD and up to 4 1Gb nics via passthrough would that work?
<xoritor> i just cant give it the whole machine
<marcoceppi> xoritor: yeah, that should work fine
<xoritor> it would have to be kvm
<marcoceppi> right
<marcoceppi> I mean, you only need like 2-4 gb
<xoritor> cool
<marcoceppi> and a core
<marcoceppi> and maybe 50-80GB
<marcoceppi> depending on how many images you want to roll out
<xoritor> i really do not have much ;-)
<xoritor> but i want to be able to have something in place should i have a heart attack
<marcoceppi> well if it's just Ubuntu 14.04, that's one image
<xoritor> it has happened to us before
<xoritor> we are overly paranoid
<xoritor> im 43 and the YOUNGEST employee
<xoritor> ;-)
<xoritor> heh
<xoritor> you would never believe how many people drop a load on that one
<tasdomas> hi
<tasdomas> amulet does not support actions yet, does it?
<tvansteenburgh> tasdomas: no
<tasdomas> is there a good e
<tasdomas> sorry
<tasdomas> tvansteenburgh, thanks
<tasdomas> what version of juju are tests currently being run against?
<tvansteenburgh> tasdomas: which tests? :)
<tasdomas> tvansteenburgh, well, when a charm goes through the approval process
<tvansteenburgh> charm tests run against latest, so currently 1.24.2
<tasdomas> tvansteenburgh, ah great - thanks
<tvansteenburgh> btw, fwiw, adding action support to amulet is on the todo list. should be pretty straightforward once we get a little time to do it
<tasdomas> tvansteenburgh, hm - I'll take a look at it, have a charm that uses actions extensively (lp:~tasdomas/charms/trusty/git/trunk)
<tvansteenburgh> tasdomas: sure if you wanna submit a PR that'd be awesome :)
<tasdomas> hm, running charm test -e local returns:
<tasdomas> ERROR there was an issue examining the environment: failure setting config: mkdir /.juju: permission denied
<marcoceppi> tasdomas: try using bundletester instead
<tasdomas> marcoceppi, bundletester?
<marcoceppi> tasdomas: yes, bundletester is actually what we use in CI
<marcoceppi> https://pypi.python.org/pypi/bundletester
<coreycb> marcoceppi: any chance you could review this?  the other guys are eod.   https://code.launchpad.net/~corey.bryant/charm-helpers/upgrade-pip/+merge/264886
<marcoceppi> coreycb: sure, giv me a few mins
<coreycb> marcoceppi, thanks
<marcoceppi> coreycb: LGTM, this may actually be the errors I was hitting with neutron
<marcoceppi> it was something about version strings
<marcoceppi> never thought it was because pip was too old
<coreycb> marcoceppi, thanks.  could be if you were using master.  could you land that if you weren't going to already?
<marcoceppi> coreycb: merged, will this be in the 07.2015 release of the openstack charms?
<coreycb> marcoceppi, it will be now!
<marcoceppi> err, rather, can you let me know when these land in trunk?
<marcoceppi> s/trunk/devel
<marcoceppi> I'll give the deployment another go
<coreycb> marcoceppi, yep will do
<marcoceppi> thanks!
<tasdomas> marcoceppi, thanks
<tvansteenburgh> tasdomas: we also have a docker container with bundletester already installed, so you can, for example:
<tvansteenburgh> sudo docker run --rm --net=host \
<tvansteenburgh>     -u ubuntu \
<tvansteenburgh>     -v ${JUJU_HOME}:/home/ubuntu/.juju \
<tvansteenburgh>     -t jujusolutions/charmbox:devel \
<tvansteenburgh>     sudo bundletester -F -e $env -t $url -l DEBUG -v
<tasdomas> tvansteenburgh, interesting - where can I find that container?
<tvansteenburgh> jujusolutions/charmbox in the docker registry
<tvansteenburgh> tasdomas: you may also like lazyPower's new blog post on charm testing with drone (uses this same docker container) http://blog.dasroot.net/2015-continuous-integration-with-juju-and-drone-ci.html
<tasdomas> tvansteenburgh, ah, thanks
<lazyPower> tasdomas: full rundown of context management with that container: http://blog.dasroot.net/2015-local-isolation-with-docker-and-juju.html
<lazyPower> but thats extra credit :)
<tasdomas> lazyPower, thanks ;-]
<tasdomas> lazyPower, are unit tests for charm functionality required to get the charm approved?
<lazyPower> its best practice and will expedite any reviews
<lazyPower> but its not strictly required, only preferred
<lazyPower> the requirement is an amulet level integration test so we ensure it continues to deploy as the charm ages
<lazyPower> which involves standing up your charm, any ancillary charms related to it, and inspecting data sent over the wire + host configuration. eg: if you alter networking settings, verify they did indeed happen.
<marcoceppi> tasdomas: unit tests are a super ++ in the review process, since it gives us an idea of how much coverage is done for the code, but like lazyPower said not required to be in the store
<tasdomas> marcoceppi, lazyPower: thanks
<xoritor> thanks everyone
<thumper> tvansteenburgh: hey there
<thumper> tvansteenburgh: still around?
<tvansteenburgh> thumper: hey
<thumper> hey
<thumper> do the python-jujuclient tests pass for you?
<tvansteenburgh> hee, hot topic!
<tvansteenburgh> i haven't tried, but whit just submitted an MP that allegedly fixes them
<tvansteenburgh> wanna review it? :P
<thumper> I'll look too, but I need something to run it in
<tvansteenburgh> thumper: running tests now
 * beisner stands by waiting eagerly for rabbitmq-server next charm bare metal + lxc unbreak validation
<tvansteenburgh> thumper: 18 passed, 5 skipped
<tvansteenburgh> py27 and 34
<whit> thumper: ported them to py-test
<whit> you still have to set a env var
<thumper> whit: is there instructions on how to run the tests anywhere?
<tvansteenburgh> $ JUJU_TEST_ENV=local tox
<tvansteenburgh> that's it
<tvansteenburgh> just bootstrap local env
<whit> thumper: should be in the readme iirc
<whit> thumper: I've set the tests to skip some dodgy incomplete implementation and some tests with broken cleanups
<whit> thumper: until the implementations and cleanups are complete
<whit> hazmat: do you have a commit waiting to be pushed somewhere for the facade stuff?
<tvansteenburgh> facades are already in
<whit> tvansteenburgh: removed that print statement
<whit> tvansteenburgh: I marked several tests as skips because of what appeared to be incomplete implementation that caused the tests to error
<whit> tvansteenburgh: but maybe there was some magic I missed
<whit> tvansteenburgh: I did add comment to the skips so it should be obvious
<whit> <3 py.test
<tvansteenburgh> whit: ack
<beisner> dear jamespage and/or gnuoy scrollback,
<beisner> bug 1475320 is sorted, new proposal ready for you
<mup> Bug #1475320: rmq next charm:  config-changed hook fails when deployed to lxc <openstack> <uosci> <rabbitmq-server (Juju Charms Collection):New> <https://launchpad.net/bugs/1475320>
<marcoceppi> beisner: UHG
<beisner> marcoceppi, uhgtastic?
<marcoceppi> that bug was kicking my ass
<beisner> ditto
<beisner> lp:~1chb1n/charms/trusty/rabbitmq-server/fixup-configure-nodename2
<marcoceppi> I would have to ssh in, force kill rabbit, then service rabbitmq-server start, then resolved --retry
<beisner> ^ that should do it for now, until reviewed/landed
<marcoceppi> glad this is going to be fixed!
<marcoceppi> should be able to do a smoosh without any issues now
<beisner> yes
<beisner> happy smooshing, gotta run
<thumper> whit: where is the main code for python-jujuclient now?
<thumper> whit: I assumed it was LP
<thumper> whit: or is this fix just proposed ATM?
<thumper> whit: I hit problems trying to run tests with your branch
<thumper> whit: no idea what I'm doing wrong
<tvansteenburgh> thumper: sorry i saw your comment right after i merged it :/
<thumper> :)
<tvansteenburgh> thumper: pip install -U tox
<thumper> tvansteenburgh: I did that earlier
 * thumper tries again
<thumper> I currently have 2.0.2
<thumper> tvansteenburgh:  E   EnvironmentNotBootstrapped: Environment "test" is not bootstrapped
<thumper> lots of that
<tvansteenburgh> juju bootstrap -e test
<thumper> so I'm guessing it expects an environment called "test" to be available?
<thumper> tvansteenburgh: should that exist first?
<tvansteenburgh> no you have to tell it the name of your juju env
<tvansteenburgh> $ JUJU_TEST_ENV=local tox
<tvansteenburgh> but you must bootstrap that env first
<thumper> tvansteenburgh: would be nice to have that written down somewhere
<tvansteenburgh> agree, not user friendly
<thumper> tvansteenburgh: however I think I have enough to get this working with JES now
<tvansteenburgh> thumper: i'll make a proper Makefile/readme when i run out of other things to do :D
<thumper> haha
<thumper> yeah...
<thumper> like that'll happen
<tvansteenburgh> thumper: check the HACKING doc
<tvansteenburgh> turns out there are some instructions
<thumper> :)
#juju 2015-07-17
<fcorrea> hello everyone. I'm running juju 1.24.2 here and I'm getting this upon bootstrap with the local provider: http://pastebin.ubuntu.com/11890609/
<fcorrea> anyone seen it?
<fcorrea> that's syslog fwiw
<thomi> can anyone suggest a charm that requires mongodb and uses charmhelpers? I'm hoping to steal..err.. learn from another charm to see what I need to do to
<thomi> more specifically, it's not clear to me how to adapt the example at http://pythonhosted.org/charmhelpers/examples/services.html#service-definitions-overview to something other than mysql. How do I know what  keys the 'mongodb' relation has?
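thomi's question — how do you know what keys a given relation provides — comes down to the idea behind charmhelpers' `RelationContext`: a service definition only renders once the remote side has set every required key. The sketch below illustrates that gating in plain Python; the mongodb key names used here (`hostname`, `port`) are an assumption for illustration — check what the mongodb charm actually sets with `relation-get` on a live unit.

```python
# Minimal sketch of the "required keys" idea behind charmhelpers'
# services framework: a relation context is usable only when the
# remote unit has set every key the consumer needs.
# Key names below are hypothetical, not taken from the mongodb charm.

def relation_complete(relation_data, required_keys):
    """Return True if every required key is present and non-empty."""
    return all(relation_data.get(k) for k in required_keys)

MONGODB_REQUIRED = ['hostname', 'port']  # assumed key names

incomplete = {'hostname': '10.0.3.7'}                      # port not set yet
complete = {'hostname': '10.0.3.7', 'port': '27017'}       # both keys set

print(relation_complete(incomplete, MONGODB_REQUIRED))     # False
print(relation_complete(complete, MONGODB_REQUIRED))       # True
```

In the real framework the same check decides whether a `required_data` source blocks template rendering until the relation has converged.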
<coreycb> gnuoy, jamespage: hey, could I get a review of this?  it's tiny.  https://code.launchpad.net/~corey.bryant/charms/trusty/nova-cloud-controller/neutron-db-manage/+merge/265065
<gnuoy> coreycb, lgtm, I'll land
<coreycb> gnuoy, thanks!
<marcoceppi> fcorrea: could you paste the output from /var/log/upstart/juju-*
<marcoceppi> actually, bleh, that won't work, you're using systemd. I have no idea where those init jobs log to. But if you do, that's what would be interesting for us
<fcorrea> marcoceppi, I ended up giving up after I found this: https://bugs.launchpad.net/juju-core/+bug/1450092
<mup> Bug #1450092: juju fails to bootstrap with vivid and upstart <systemd> <upstart> <vivid> <juju-core:Triaged> <juju-core 1.23:Won't Fix> <juju-core 1.24:Won't Fix> <https://launchpad.net/bugs/1450092>
<fcorrea> marcoceppi, it seems it will be fixed in 1.25 and not in 1.24, so might as well just use some other substrate
<lazyPower> tvansteenburgh: ta for the review on the docker charm!
<lazyPower> how did the new testing structure work out for you? I moved it to tox in the hopes it would be way more consistent
<tvansteenburgh> lazyPower: ran it with charmbox + bundletester, worked fine
<lazyPower> solid
#juju 2015-07-19
<R3ido101> Hi, can i restart juju after restarting my dedicated server? i am using an LXC environment
#juju 2016-07-18
<jobot> hello, is it possible for a charm to know that it is related to a mysql db vs a postgresql db, or is that something that would need to be in the configuration?
<marcoceppi> magicaltrout: which meeting was that?
<Prabakaran> Could somebody help me juju 2.0 uninstallation steps on ubuntu?
<magicaltrout> uninstallation?
<magicaltrout> apt remove --purge juju-2.0 (ish)
<magicaltrout> then rm -rf ~/.local/share/juju if anything is left there
<MonsieurBon> Hi all
<MonsieurBon> I have bootstrapped the bootstrap node and am now trying to access the juju gui. I can reach the login page, but nothing ever happens when clicking the login button. Does anyone know, what could be the problem?
<MonsieurBon> I have checked with the developer tools in firefox and I can't see any activity
<MonsieurBon> Mainly I'm using Firefox, but I have the same effect with Chrome
<babbageclunk> Bonjour MonsieurBon! Hmm. Can you see anything interesting in the logs? Try "juju debug-log" while visiting the login page?
<MonsieurBon> No, nothing. Should I execute juju debug-log on the host, where juju was installed initially or on the bootstrap node?
<MonsieurBon> The dev tools don't show any activity at all. No logs, no network activity, nothing. I also can't see any errors there.
<babbageclunk> On the host - whereer
<babbageclunk> Oopd
<babbageclunk> gah
<babbageclunk> Wherever you are running juju status.
<MonsieurBon> ok, I did that. It doesn't show anything
<babbageclunk> So you don't even see a request back to the server when you click login?
<MonsieurBon> Nope, nothing at all
<magicaltrout> i've not been keeping up with the gui compatibility. is there a gui <> beta offset currently or should it work?
<MonsieurBon> It does call /api when loading the page, but that just returns a 101 response. And no XHR calls after that
<MonsieurBon> just to let you know, I did not install the juju-gui charm manually, but am accessing the automatically deployed gui on the bootstrap node.
<babbageclunk> magicaltrout: good point - there was that breaking api change.
<babbageclunk> MonsieurBon: What version of juju are you running?
<MonsieurBon> 2.0-beta7-xenial-amd64
<MonsieurBon> The bootstrap node is running trusty though. Might this be a problem?
<babbageclunk> I don't think so. That's a pretty old beta though - 12 is just out. There have definitely been some bugs with the GUI, so it would be worth upgrading and rebootstrapping.
<MonsieurBon> this is just a test setup, so I can switch the juju node to trusty any time
<MonsieurBon> apt-get upgrade does not show me any updates!
<magicaltrout> its probably in a ppa MonsieurBon
<MonsieurBon> I'm using the ppa
<magicaltrout> 2.0 beta12
<MonsieurBon> deb http://ppa.launchpad.net/juju/stable/ubuntu xenial main
<magicaltrout> nope
<magicaltrout> devel ppa
<MonsieurBon> I'll give that a try
<MonsieurBon> why is stable installing beta packages anyway?
<babbageclunk> Yeah, that does seem weird.
<magicaltrout> well
<magicaltrout> you don't need to install 2.0 you can still install 1.25
<MonsieurBon> well this is just testing, so i'll give 2.0 beta 12 a go and will reinstall with 1.25 if it doesn't work
<MonsieurBon> How do I "rebootstrap"? destroy and recreate?
<magicaltrout> erm
<magicaltrout> yeah juju kill-controller
<babbageclunk> Yes, unfortunately these betas aren't upgradable yet.
<magicaltrout> or something
<magicaltrout> the names keep changing:)
<babbageclunk> juju destroy-controller is the currently blessed command.
<babbageclunk> I mean, that's always been the command! ;)
<MonsieurBon> I'll give destroy-controller a chance before killing it :)
 * magicaltrout clearly just gets bored and goes hardcore
<magicaltrout> didn't know remove-controller existed :)
<babbageclunk> Yeah, we're trying to get people not to use kill-controller - if destroy-controller isn't working for something that's a bug.
<magicaltrout> pfft
<babbageclunk> :)
<magicaltrout> kill kill kill!
<MonsieurBon> Well it's nice to at least ask politely first :)
<babbageclunk> Apparently it's too easy for kill-controller to leave some instances running in the cloud and people getting billed without realising!
<babbageclunk> Ooh, I finally see how you can use chrome dev tools to spy on websocket traffic - that might come in handy if you're still having a problem after bootstrapping on beta12, MonsieurBon.
<MonsieurBon> of course now destroy-controller does not seem to do anything... :)
<babbageclunk> Uh oh.
<babbageclunk> So what's happening?
<MonsieurBon> nothing
<MonsieurBon> kill did the job :)
<babbageclunk> Ah well.
<magicaltrout> KILL!
<babbageclunk> Were you bootstrapping to lxd?
<MonsieurBon> Unable to open API: open connection timed out
<MonsieurBon> nope, an old laptop :)
<babbageclunk> But what cloud?
<MonsieurBon> it's kind of a weird test setup
<babbageclunk> Using maas? Or manual provider?
<MonsieurBon> juju and maas are running on vm's
<MonsieurBon> maas yes
<MonsieurBon> and there are two laptops commisioned through maas I'm playing around with
<babbageclunk> Right - it's worth making sure that all the nodes are released in the maas ui.
<MonsieurBon> yes sure
<MonsieurBon> they were
<babbageclunk> Cool cool
<MonsieurBon> and it works!
<MonsieurBon> thx for your help
<magicaltrout> MonsieurBon: the later beta?
<magicaltrout> or 1.25?
<MonsieurBon> beta 12
<magicaltrout> cool
<MonsieurBon> I'll make sure to use the stable once we're deploying maas/juju for production :)
<magicaltrout> this is what happens when the hackers keep changing API ! :P
<babbageclunk> \o/
<MonsieurBon> and it is blazingly fast!
<magicaltrout> if you're interested in local testing as well MonsieurBon
<magicaltrout> make sure you check out the lxd provider
<babbageclunk> +1
<MonsieurBon> well I was wondering, can I bootstrap the controller for maas into an lxd container?
<MonsieurBon> Or will I always "lose" one node to bootstrapping whatever cloud I'm using?
<babbageclunk> Well, if you're using lxd you don't actually need maas - you can just deploy to lxd directly.
<babbageclunk> Or you can deploy to lxd containers on maas nodes.
<MonsieurBon> yes, but I would still like to use physical machines to deploy to
<babbageclunk> I don't think you can deploy to a container on the node, sorry.
<babbageclunk> You can deploy things to the controller machine though, if you want to.
<babbageclunk> Sorry, should have said the controller model.
<babbageclunk> juju deploy -m controller <application> --to lxd:0
<babbageclunk> The only problem with that is that you can't make relations between applicatons in the default model and others in the controller model.
<Rajith> Hi, I tried uninstalling and re-installing juju, but I am getting version 2.0-beta7; I need the latest 2.0-beta12. for installation I am following https://jujucharms.com/docs/devel/getting-started
<magicaltrout> Rajith: you need the devel ppa
<Rajith> let me know the steps for getting devel ppa
<magicaltrout> http://lmgtfy.com/?q=juju+devel+ppa :)
<babbageclunk> Rajith: sudo apt-add-repository ppa:juju/devel
<babbageclunk> then: sudo apt update; sudo apt install juju-2.0
<magicaltrout> lazy question: anyone have a one liner to shutdown all lxd containers?
<Odd_Bloke> $ sudo halt  # ;)
<babbageclunk> I feel like this is a trick, but isn't it `sudo lxd shutdown`
<babbageclunk> ?
<magicaltrout> sorry
<magicaltrout> s/shutdown/delete
<babbageclunk> Huh, no - they come back up again.
<magicaltrout> i have a bunch of stale juju lxd images and I can't be bothered deleting them all one by one
<magicaltrout> docker you can pipe a list in
<magicaltrout> I was hoping there was a similar lxc trick
<babbageclunk> lxc delete can take multiple containers - you can pipe the list into xargs lxc delete, I think.
<magicaltrout> yeah I  can't figure out how to get just a list of container names
<magicaltrout> lxc list gives you all the extra crud
<Odd_Bloke> magicaltrout: `lxc list --format json | jq -r '.[] | .name'` looks like it will do it.
<magicaltrout> ah Odd_Bloke cool! I was playing around trying to find an output that I could manipulate
<magicaltrout> you beat me to it
<Odd_Bloke> :)
 * magicaltrout will blog it for posterity
<babbageclunk> Odd_Bloke: nice - I normally just use grep and cut, but it's a bit fiddly.
<Odd_Bloke> Yeah, I was using jq for something else last week, so it was fresh in my mind as a possibility.
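For reference, what Odd_Bloke's jq filter (`.[] | .name`) extracts from `lxc list --format json` can be sketched in Python as well. The JSON sample below is illustrative, not a verbatim lxc payload:

```python
# Extract container names from `lxc list --format json` output, the
# way the jq filter `.[] | .name` does. Sample payload is made up --
# real lxc output has many more fields per container.
import json

sample = json.dumps([
    {"name": "juju-trusty-machine-1", "status": "Running"},
    {"name": "juju-trusty-machine-2", "status": "Stopped"},
])

names = [container["name"] for container in json.loads(sample)]
print("\n".join(names))
```

The resulting name list is what gets piped to `xargs lxc delete` in the one-liner discussed above.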
<Rajith> Trying to uninstall juju getting error:  cannot destroy 'lxd-pool': pool is busy
<magicaltrout> kwmonroe: cory_fu FYI I deployed the hadoop processing bundle on LXD earlier and it failed miserably waiting for Java
<MonsieurBon> hi guys
<magicaltrout> uh oh
<MonsieurBon> I know now why juju 2.0 beta4 was installed even though I had the stable ppa added
<MonsieurBon> because that's the version in the ubuntu repositories!
<MonsieurBon> for xenial at least
<MonsieurBon> same for maas: beta2 (or so...)
<MonsieurBon> I think it's a bit weird to supply beta versions in an LTS server operating system!
<MonsieurBon> How can I make sure, only the stable version will be installed for juju?
<magicaltrout> erm
<magicaltrout> you're on Xenial?
<MonsieurBon> yes
<magicaltrout> apt install juju-1.25
<MonsieurBon> same applies to maas I guess?
<magicaltrout> dunno never used maas
<magicaltrout> probably
<MonsieurBon> hum, there's no maas-version package...
<magicaltrout> there is a stable PPA
<magicaltrout> oh
<magicaltrout> no xenial build
<magicaltrout> dunno, you require the support of a canonical person :)
<MonsieurBon> or from the #maas channel! :)
<magicaltrout> babbageclunk: --^
<magicaltrout> the us folk will start appearing shortly MonsieurBon
<MonsieurBon> hehe
<magicaltrout> the traffic will pick up with brainy folk
<magicaltrout> rick_h_ will know someone who knows
 * rick_h_ knows nothing!
<MonsieurBon> is there a problem with juju 1.25 talking to maas 2.0?
 * magicaltrout shrugs 
<magicaltrout> :)
<rick_h_> 1.25 doesn't talk maas 2.0
<magicaltrout> rick_h_ will know someone who knows ;)
<rick_h_> only juju 2.0
<MonsieurBon> You know nothing, rick_h_
<MonsieurBon> :)
<magicaltrout> rick_h_ knows.....
<rick_h_> and yes, juju 2.0 is in xenial as juju, and a new beta is coming  today
<rick_h_> beta12
<MonsieurBon> I feel like beta12 is still giving me troubles.
<MonsieurBon> lets see how to install maas < 2.0 then...
<MonsieurBon> might have the added benefit of wake-on-lan! :D
<magicaltrout> kjackal: you available?
<MonsieurBon> apparently I need to use 14.04
<rick_h_> MonsieurBon: what with beta12 is giving you trouble?
<rick_h_> MonsieurBon: yes, maas 1.9 needs 14.04, but you can run 16.04 on the nodes on top of that maas 1.9 ok. So only the one machine maas runs on needs to be 14.04 if you're using maas 1.9
<MonsieurBon> don't know. mysql charm seems to deploy to lxd on machine x but state always shows as unknown
<rick_h_> MonsieurBon: the machine state or the application status?
<MonsieurBon> the application state
<MonsieurBon> machine boots fine
<rick_h_> may just be the charm isn't setting anything awesome or useful in there I guess
<MonsieurBon> nothing else runs on that machine (yet) so no relations. Just the mysql charm
<magicaltrout> MonsieurBon: try the mariadb charm as well
<magicaltrout> if the mysql is giving you problems
<MonsieurBon> is that completely interchangeable?
<MonsieurBon> i.e. it'll talk to the likes of nova-compute?
<magicaltrout> ah you doing openstack stuff
<magicaltrout> no in that case :)
<MonsieurBon> ok, I'll try a downgrade then. According to the maas channel I shouldn't use 16.04 until the point 1 release :)
<magicaltrout> wimps
<magicaltrout> they need to man up
<MonsieurBon> hehe
<magicaltrout> cory_fu: as kjackal isn't talking to me, ping me when you have 2 mins, just got a quick Hadoop Q
<magicaltrout> (please)
<kjackal> magicaltrout: hey
<kjackal> let me read what's up
<magicaltrout> not the backlog
<magicaltrout> i've not asked my question yet
<kjackal> go on
<magicaltrout> kjackal: i just wondered, I'm finishing up this PDI charm, and the Hadoop configuration is optional but obviously I'd like it in place for interaction with your bigtop stuff
<magicaltrout> if I told it to use the hadoop-client layer stuff, I could get access to the *-site.xml config files it needs I guess, but would that make it a mandatory relation with Hadoop?
<magicaltrout> i'm trying to avoid having a hadoop charm and a non hadoop charm version
<kjackal> magicaltrout: cool! making a relation mandatory or not has to do with the logic you put on the reactive part of your charm
<kjackal> you can completely ignore a relation by not having any hook acting on joining and/or departing
<kjackal> there are a couple of techniques that you may find interesting in case of optional relations
<kjackal> first you can test for a state
<kjackal> and then you can build the interface from any existing relation
<kjackal> magicaltrout: give me a moment to find an example
<magicaltrout> well i want to act on  the relation, but I don't want the charm to be subordinate to hadoop being available
<magicaltrout> basically
<magicaltrout> which maybe just  how it is, i've not yet tested it
<magicaltrout> i'm just making sure PDI can talk to hadoop  then i'll dump it all in the charm logic
<kjackal> ok, let me understand what you have right now
<kjackal> right now you have PDI that does not relate to hadoop, right?
<magicaltrout> it doesn't
<kjackal> awesome
<magicaltrout> but to do so all i need to do is copy in the hadoop configs
<magicaltrout> and set a couple of parameters
<kjackal> ok, cool! So you need to build on this layer:  'layer:hadoop-client'
<magicaltrout> okay cool
<kjackal> you'd better set the option silent: True so that the hadoop client does not report "ready"
<magicaltrout> i'm  sure i can manage that
<kjackal> have a look at what spark is doing: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/layer.yaml
<kjackal> spark will deploy as standalone and if you relate it to a hadoop-plugin it will be able to use hadoop and hdfs
<magicaltrout> ah
<magicaltrout> marvelous
<magicaltrout> just what i need
<kjackal> so you add the hadoop-client layer
<kjackal> then you react to whatever might change your configuration: https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/reactive/spark.py#L104
<kjackal> magicaltrout: the fact that hdfs is available is spotted by https://github.com/juju-solutions/bigtop/blob/spark/bigtop-packages/src/charm/spark/layer-spark/reactive/spark.py#L123
<kjackal> magicaltrout: there is also the option to create an interface object from a state: https://github.com/juju-solutions/bigtop/blob/zookeeper/bigtop-packages/src/charm/zookeeper/layer-zookeeper/lib/charms/layer/zookeeper.py#L81
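The pattern kjackal outlines — handlers gated on states, so a relation stays optional and the charm reacts only if the state ever appears — can be illustrated with a toy dispatcher. This is a plain-Python simulation of the idea, not the actual charms.reactive API, and the state names are hypothetical:

```python
# Toy simulation of reactive state gating: each handler declares the
# states it needs; dispatch fires only handlers whose states are all
# active. An optional relation simply contributes a state that may
# never be set. Not the charms.reactive API -- illustration only.

handlers = []

def when(*states):
    """Register a handler gated on the given states."""
    def register(fn):
        handlers.append((set(states), fn))
        return fn
    return register

def dispatch(active_states, log):
    """Fire every handler whose required states are all active."""
    for required, fn in handlers:
        if required <= active_states:
            fn(log)

@when('pdi.installed')
def standalone(log):
    log.append('pdi running standalone')

@when('pdi.installed', 'hadoop.ready')
def with_hadoop(log):
    log.append('pdi configured against hadoop')

log = []
dispatch({'pdi.installed'}, log)                   # no hadoop relation yet
dispatch({'pdi.installed', 'hadoop.ready'}, log)   # relation joined later
print(log)
```

The second handler only ever runs if `hadoop.ready` is set, which is exactly why the hadoop relation stays optional: with no relation, the charm keeps operating on the first handler alone.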
<Prabakaran> Hi marcoceppi: do have any updated on this issue https://github.com/marcoceppi/charm-mysql/issues/4
<Prabakaran> Hello Team, Could someone help me with Juju 2.0 Uninstallation steps?
<MonsieurBon> juju deploy juju-gui returns cannot retrieve charm network is unreachable
<MonsieurBon> But I can telnet to the displayed IP address / port
<MonsieurBon> what might be wrong?
<magicaltrout> routing, dns,  something else?
<MonsieurBon> I can resolve api.jujucharms.com
<magicaltrout> from the controller?
<MonsieurBon> this is juju 1.25 now :)
<MonsieurBon> not from the bootstrap node
<MonsieurBon> should I try this from the node I'm deploying juju-gui to?
<MonsieurBon> i.e. the bootstrap node?
<magicaltrout> yeah
<MonsieurBon> how do I connect to the node?
<MonsieurBon> what ssh user?
<MonsieurBon> oh, found juju ssh
<MonsieurBon> nope, no connectivity on that host. I'll go and check the FW
<MonsieurBon> magicaltrout, stupid me: missing default GW in maas interface configuration!
<magicaltrout> \o/
<MonsieurBon> I should have noticed this before, as bootstrapping had some troubles already!
<lazyPower> MonsieurBon really glad you got it figured out though :)
<magicaltrout> he's no MonsieurBon, dont believe  a  word lazyPower says!
<lazyPower> O_o
<MonsieurBon> lazyPower, me too :)
<lazyPower> magicaltrout - who gave you access to the peanut gallery today?
 * lazyPower throws peanuts @ magicaltrout 
<magicaltrout> hehe
<MonsieurBon> magicaltrout, no I believe it. Because if I hadn't he would have to try to support me. So I'll bet he's glad :)
<magicaltrout> i'm allowed, I've been trying to finish my pdi charm all afternoon
<magicaltrout> i believe that buys me peanut credits
<lazyPower> magicaltrout - http://traefik.dasapp.co -- i've managed to figure out proper reverse proxying and networking in our k8s bundle(s)
<lazyPower> and you bet, I'll give you 36 hours of peanut gallery access for the mention of completion of the current iteration of the pdi charm(s)
<lazyPower> in other words, i have no idea what i am talking about
<magicaltrout> hehe
<lazyPower> magicaltrout - how have you been addressing networking in mesos?
<magicaltrout> i've not (yet)
<lazyPower> does marathon/dcos bring with it, its own flavor of SDN business?
<lazyPower> ah ok
<lazyPower> this wasn't nearly as painful as I was expecting.
<magicaltrout> the problem is, you dump stuff into the cluster, and locally it will spin up the stuff I need and create endpoints
<lazyPower> granted its not fully spaces aware so its completely possible to model yourself into a corner as it stands today
<magicaltrout> but there is the conundrum of the juju managed firewall
<lazyPower> exactly
<lazyPower> juju run --service foobar "open-port 3600"
<lazyPower> thats schenanigans
<lazyPower> i have spoken with marcoceppi about spiking on a daemon that runs to handle that on behalf of the user. Enable it via config/etc.
<magicaltrout> yeah, but its not nice and doesn't close if you undeployed a serive
<magicaltrout> service
<lazyPower> well the idea is the daemon listens on the service level and opens what's defined there
<magicaltrout> that said, dc/os has a command line util I believe you can use to deploy stuff, so if you didn't use the dashboard and instead used a juju action and cli
<magicaltrout> that could manipulate the fw
<lazyPower> or closes respectively
<lazyPower> like every minute, scrape the api, and determine if an action is required
<magicaltrout> yeah
<magicaltrout> that would be good
<lazyPower> ok i'll draft that up to the mailing list so we can get more feedback on this.
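The reconciliation step lazyPower describes — scrape the API every minute and decide whether an open/close action is required — reduces to a set difference between desired and currently open ports. A sketch of that core logic, purely illustrative and not an existing juju tool:

```python
# Reconcile the ports a workload advertises against the ports juju
# currently has open, producing the open-port/close-port actions a
# hypothetical daemon would issue each cycle. Illustration only.

def port_actions(desired, currently_open):
    """Return (to_open, to_close) as sorted lists of port numbers."""
    desired, currently_open = set(desired), set(currently_open)
    return (sorted(desired - currently_open),
            sorted(currently_open - desired))

to_open, to_close = port_actions(
    desired=[3600, 8080],         # what the workload says it needs
    currently_open=[8080, 9090],  # what juju reports as open
)
print(to_open)   # [3600]
print(to_close)  # [9090]
```

Closing stale ports when a service is undeployed falls out of the same diff, which addresses magicaltrout's objection to the bare `juju run "open-port 3600"` approach.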
<magicaltrout> cool
<lazyPower> thanks for riffing :D
<magicaltrout> no worries, i'm working on cleaning stuff up this week when i'm not being annoyed by NASA bods
<magicaltrout> so I'm gonna try and get PDI & DC/OS Master scalaing done
<magicaltrout> once the master scaling is done, the rest of the basic stuff is pretty straightforward
<magicaltrout> just need a few actions to sort out logins and automate the cli installation
<magicaltrout> I'm  sat in the Snappy Sprint roundup and there is a guy with a far too sculpted beard for a developer
<magicaltrout> I'm not sure whats going on in the world any more ..... ;'(
<magicaltrout> MonsieurBon do you sport a beard?
<MonsieurBon> magicaltrout, sort of. you?
<magicaltrout> only when I've woken up too late to shave.
<MonsieurBon> why are you asking?
<magicaltrout> beards are acceptable, but if you have time to sculpt your beard you're clearly not overworked enough ;)
<MonsieurBon> hehe :)
<MonsieurBon> but wouldn't being shaved hint into the same direction?
<mskalka> magicaltrout, I think you're onto something there. Beard-groomedness could be a positive indication of employee health
<magicaltrout> lol
<magicaltrout> maybe
<mskalka> following the same logic, all developers should grow beards
<MonsieurBon> mskalka, or that it's time for some downsizing :)
<mskalka> downsize the beards or the employees?
<MonsieurBon> when there's too much beard downsizing then there's not enough employee downsizing, I'd say!
<magicaltrout> i can see a gap in the silicon value market
<magicaltrout> sod onsite nurseries and basketball courts
<magicaltrout> get a beard groomer on the google campus and I'm there!
<mskalka> all I'm getting out of this is to let me beard run wild. That way my employer thinks I'm overworked and thus productive, and I get to keep my beard and my job
<Odd_Bloke> I'm not sure Silicon Valley needs any more ways of signalling to women that they aren't welcome.:p
<magicaltrout> women can sport beards if they so choose.....
<mskalka> women are welcome to sport beards as well
<MonsieurBon> well you could always include the beard groomer in some general wellness facility that would supply other hairy services
<lazyPower> ... this went in the weeds pretty quickly
<magicaltrout> lol
<mskalka> we're on the cutting edge of HR here lazyPower
 * lazyPower silently closes the window and resumes hacking on a blog post
 * lazyPower shakes head
<magicaltrout> lazyPower is jealous
<magicaltrout> because he doesn't have a groomed beard
<MonsieurBon> and I still don't know why exactly magicaltrout asked the initial question :)
<lazyPower> considering how dishevelled i look all the time, i think i'm qualified to no-op on this one.
<magicaltrout> cause i'm sat in a sprint roundup and someone with a groomed beard stood up
<lazyPower> thats probably stokachu
<stokachu> o/
<magicaltrout> stokachu: got a beard?
<lazyPower> well, he did anyway :)
<lazyPower> not sure if thats still the case
<MonsieurBon> I have to say, all the stable version work much better, than the beta ones! :)
<stokachu> haha
<MonsieurBon> thanks guys for all your help today
<MonsieurBon> cu later
<magicaltrout> no problem
<magicaltrout> happy grooming!
<MonsieurBon> yes, same to you!
<Prabakaran> Hi marcoceppi: do you have any update on this issue https://github.com/marcoceppi/charm-mysql/issues/4
<marcoceppi> Prabakaran: no. When I do I will reply to that thread.
<Prabakaran> thanks marcoceppi :) i am stuck on this point ... please confirm to me once you are done making these changes at your earliest convenience
<magicaltrout> *chuckle*
<x58> Are there any charm helpers that grab all network interface names, and their assigned IP's?
<x58> Or any charm helpers that parse /etc/network/interfaces
<mskalka> what language are you working in?
<x58> Python
<x58> mskalka: ^
<mskalka> x58: https://pythonhosted.org/charmhelpers/api/charmhelpers.contrib.network.html might lead you somewhere
<x58> Perfect.
<mskalka> I ran into the same issue developing a charm in Rust, not super familiar with the python charm libraries
<x58> mskalka: Looks like it has part of what I need. Will just go steal some code :P
<mskalka> x58, glad to help. Might also want to look into python's sh package
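(For readers following along: the charmhelpers module linked above covers part of this, but a dependency-free approach is also possible. A minimal sketch that parses the JSON output of `ip -j addr` instead of using netifaces — Linux/iproute2 assumed, and the function names here are illustrative, not charmhelpers API:)

```python
import json
import subprocess

def parse_ip_addr(output):
    """Map interface name -> list of assigned IPs from `ip -j addr` JSON."""
    return {
        iface["ifname"]: [a["local"] for a in iface.get("addr_info", []) if "local" in a]
        for iface in json.loads(output)
    }

def interface_addresses():
    """Query the running system (Linux only, requires iproute2)."""
    out = subprocess.check_output(["ip", "-j", "addr"], text=True)
    return parse_ip_addr(out)

# Parsing demo with canned output, so it runs anywhere:
sample = '''[{"ifname": "lo", "addr_info": [{"local": "127.0.0.1"}]},
             {"ifname": "eth0", "addr_info": [{"local": "10.0.0.5"}]}]'''
print(parse_ip_addr(sample))  # {'lo': ['127.0.0.1'], 'eth0': ['10.0.0.5']}
```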
<magicaltrout> kjackal: I think we've had this convo before
<magicaltrout> does hadoop bundle stuff run ZK or not?
<magicaltrout> i vaguely remember some HA thing
<cory_fu> magicaltrout: kjackal is EOD, but the answer to your question depends on which Hadoop bundle you mean.  The Bigtop one (https://jujucharms.com/hadoop-processing/) doesn't currently include ZK because it doesn't currently include HA, but kjackal is currently working on adding that back in.  The slightly older vanilla Apache bundle (https://jujucharms.com/apache-processing-mapreduce/) does include HA and thus does include ZK
<magicaltrout> cory_fu no worries just filling in the blanks in the PDI  charm, thanks
<bdx> whats up all?
<bdx> has anyone used https://jujucharms.com/u/stub/storage with aws successfully?
<bdx> I'm trying to use the storage charm to get my elasticsearch datadir mounted on an ebs vol ...
<bdx> it seems the storage charm should be capable of this, but I can't seem to get it to attach a pre-existing vol, or to create and attach a new vol
<bdx> anyone use this yet, at all?
<bdx> stub: ping
<cory_fu> bdx: Unfortunately, it's almost 2am stub's time, so you likely won't get a response from him for a few hours.
<cory_fu> Also unfortunately, I haven't used that charm, so I can't really offer any advice.
<cory_fu> lazyPower: I don't suppose you've used it before?
<magicaltrout> timezones are for wimps
 * lazyPower reads backscroll
<lazyPower> bdx ah - yeah
<lazyPower> bdx - you need to pair that charm with the block-storage-broker charm
<lazyPower> however, that stack is quite old, and is deprecated in favor of modeled storage with juju
<bdx> ooooh
<lazyPower> it was based on the euca2tools python lib, which targets a very old implementation of the AWS API
<bdx> i see
<lazyPower> it's rife with mismatches that may potentially cause headaches, i'm not certain if we're using that anymore. I should really mail the list about it
<cory_fu> lazyPower: But does Juju storage support connecting to an existing EBS volume?
<lazyPower> cory_fu - only ebs volumes it has provisioned
 * lazyPower will craft a quick post to see if anyone is still using it, and what the status/feedback is
<bdx> cory_fu, lazyPower: how are you guys dealing with elasticsearch datadir mounts then?
<cory_fu> bdx: Re-reading your ask, I'm not clear if you're trying to connect to a pre-existing, non-Juju provisioned EBS volume or just want it stored on EBS instead of transient storage
<lazyPower> bdx - at the moment, the charm doesn't support storage. So its per-instance. it would be a good idea to patch the charm to work with storage however.
<bdx> cory_fu: in all reality, I just need my datadir on a separate vol
<lazyPower> bdx - add the storage bits to metadata, and give it a storage-attached hook to format and mount the volume accordingly
<cory_fu> lazyPower: If Juju provisions the storage mount, and you need to reconnect it for some reason, is that possible?
<lazyPower> cory_fu - i'm pretty sure that is TODO. you can attach it manually
<lazyPower> but the storage feature itself needs modifications to support that
<bdx> I've provisioned all my es instance data dirs manually so far ... I'm going to be spinning a bunch of these up and down ... I stumbled across the storage charm and respective hooks in the es charm over the weekend
<cory_fu> lazyPower: That charm was updated by stub not too long ago, so it seems he's still maintaining it.  Perhaps it has features that Juju storage doesn't yet support?
<bdx> I was super pumped to spin up my next few elastic clusters using it ... bah
<bdx> yeah - it seems fairly recent
<lazyPower> well thats promising
<magicaltrout> yeah the storage stuff is pretty cool, we've been doing a bunch of stuff at JPL in docker which I'd like to port to juju
 * lazyPower stops composing mail
<magicaltrout> makes sense to keep persistent storage though
<bdx> magicaltrout: what do you do for your es data dirs?
<x58> Does config.changed in the reactive always fire after the service is installed?
<bdx> magicaltrout: manual prov?
<magicaltrout> yeah bdx hacked up mounts
<bdx> shnacks
<bdx> https://jujucharms.com/u/stub/storage
<bdx> looks so promising
<bdx> lol
<bdx> great idea
<magicaltrout> well the native storage stuff in juju makes sense
<magicaltrout> depends on charms supporting it though
<bdx> magicaltrout: ya - what do you spec your elasticsearch instances at?
<cory_fu> x58: It will fire at least once, yes, but it will probably happen during the very first hook invocation and then go away.  If your service blocks installation for any reason (e.g., waiting on a relation, or storage) it might miss the initial config.changed state.
<cory_fu> bdx, lazyPower: I'd really like to know from stub what advantage that charm has over the built-in storage support in Juju
<cory_fu> bdx: You should consider the built-in storage support.  It may well do everything you need
<lazyPower> cory_fu - i know when we were talking about deprecating the storage charm before... ther was some talk of production deployments that needed the bsb and storage charm
<lazyPower> and thats why it hung around
<cory_fu> bdx: https://jujucharms.com/docs/devel/developer-storage
<magicaltrout> we have an ES cluster for genomics stuff, I know it's 15 nodes, not entirely sure of the spec, I think they are m4.4xlarge but i'm not entirely sure
<lazyPower> it may be time to revisit that conversation and start talking about how to start migrating people using bsb over to the storage feature. and as we've covered, there's mismatch there so who knows
<cory_fu> lazyPower: But then why update it with xenial support?
<x58> cory_fu: Better question, do I need to have a service.installed state that gets set if I don't have any installations?
<x58> cory_fu: I only care about config, that then renders config, and adds/removes some network state. But there is no installation step.
<cory_fu> x58: No, you don't need to set any states unless they're relevant to your charm, or part of the API of a layer your charm is using
<x58> I am using the basic layer.
<cory_fu> Yeah, there aren't any states required by that
<cory_fu> So you should be able to just watch for config.changed and handle that, and it will get triggered at least once
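(Editor's sketch of the reactive pattern cory_fu describes. The no-op `when` decorator below stands in for `charms.reactive.when` so the snippet runs standalone; in a real charm you'd `from charms.reactive import when` and the body would render config rather than return a string:)

```python
# Stand-in for charms.reactive.when so this sketch is self-contained.
def when(*states):
    def wrap(fn):
        fn.watched_states = states  # record which states trigger the handler
        return fn
    return wrap

@when("config.changed")
def render_config():
    # In a real charm: read hookenv.config(), render templates, restart
    # the service.  No *.installed state needs to be set first --
    # config.changed is raised at least once, during the first hook run.
    return "rendered"

print(render_config.watched_states)  # ('config.changed',)
```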
<x58> Awesome.
<x58> Thanks!
<bdx> cory_fu: when I try to deploy es with the '--storage' it fails with
<bdx> ERROR cannot add application "elasticsearch": charm "elasticsearch" has no store called "data"
<bdx> the command I ran: `juju deploy elasticsearch --storage data=ebs,30G`
<cory_fu> bdx: You have to define what storage your charm supports in metadata.yaml
<bdx> cory_fu: ahh ... so I see https://api.jujucharms.com/charmstore/v5/trusty/elasticsearch-16/archive/metadata.yaml
<bdx> has a block-storage interface defined
<bdx> ooooh, it should be under 'requires' eh?
<cory_fu> No, it should not be under requires
<bdx> oh, well then
<cory_fu> bdx: Let me find you an example.  That one is not good for built-in storage
<bdx> ok, thx
<bdx> cory_fu: got it
<bdx> cory_fu: juju deploy elasticsearch elasticsearch2 --storage elasticsearch:data=ebs,30G
<bdx> cory_fu: whoops, `juju deploy elasticsearch --storage elasticsearch:data=ebs,30G`
<cory_fu> bdx: Odd.  I don't see a storage stanza in https://api.jujucharms.com/charmstore/v5/elasticsearch/archive/metadata.yaml
<bdx> cory_fu: is it not the 'data' interface?
<bdx> or, 'data' provider, 'block-storge' interface
<cory_fu> bdx: No, it's a specific storage section, separate from relations, as described on https://api.jujucharms.com/charmstore/v5/elasticsearch/archive/metadata.yaml
<bdx> oooh
<bdx> yea, es deployed, but no storage was created or attached
<cory_fu> lazyPower: Where is the version of elasticsearch that supports storage hosted at?
<bdx> gotcha
<cory_fu> bdx: The kubernetes charm has a storage block: https://api.jujucharms.com/charmstore/v5/kubernetes/archive/metadata.yaml
<cory_fu> So that would be `juju deploy kubernetes --storage disk-pool=ebs,30G`
<cory_fu> I think
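(For reference, a storage stanza along the lines cory_fu describes would sit in the charm's metadata.yaml, alongside but separate from the relations sections. The `data` store name and mount location below are hypothetical, following the devel storage docs linked above:)

```yaml
# metadata.yaml (hypothetical stanza; "data" is an illustrative store name)
storage:
  data:
    type: filesystem
    location: /srv/elasticsearch
```

With a stanza like that in place, a deploy such as `juju deploy elasticsearch --storage data=ebs,30G` should resolve the store name.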
<bdx> ahh, very nice
<bdx> cory_fu: thank you
<magicaltrout> what's with all the ec2 bdx.. i thought you were just mr openstack
<bdx> magicaltrout: I got swooped on by a new employer
<bdx> :-) :-)
<magicaltrout> ooooh
<magicaltrout> nice
<bdx> now I'm in aws land  ...
<magicaltrout> at least you got to keep with the juju stuff, for better or worse ;)
<bdx> I am making a strong push for openstack in my new company though ... our aws bill is enormous ..... lets just say I could stand up a new ha openstack every month for what we pay aws ... kinda ridiculous
<magicaltrout> yeah, its the easy option though isn't it
<bdx> yea, sadly
<bdx> some customers require content hosting via not aws though  ....  openstack will have its day here yet
<bdx> :-)
<magicaltrout> jesus h christ
<magicaltrout> my hadoop hacks to my pdi charm worked first time
 * magicaltrout needs to save that moment and dine off it for years
<cory_fu> magicaltrout: Whoa.  Nice
<magicaltrout> never happens cory_fu
<magicaltrout> i must have missed something
<cory_fu> magicaltrout: https://cdn.meme.am/instances/66241182.jpg
<magicaltrout> thanks cory_fu :P
<magicaltrout> you're like my project manager, he has an animated gif for everything
<magicaltrout> in fact the sys admins quite often miss my posts in slack because he follows up my requests with ridiculous gifs
<magicaltrout> i'll also get to test out merlijn's claim of fixed resources shortly when i try and push 700mb upstream
<hatch> with the latest Juju tip has how you define a default series changed when bootstrapping? ERROR unknown config field "--default-series"
<magicaltrout> yeah hatch
<magicaltrout> --bootstrap-series
<magicaltrout> i think you're after
<hatch> juju bootstrap aws aws --upload-tools --config --bootstrap-series=trusty
<hatch> like so?
<magicaltrout> that be the one
<magicaltrout> dunno what --config does, i'll take your word on that one
<hatch> thanks, I'll give that a go once this bootstrap completes
<hatch> lol
<hatch> maybe it's also no longer required :)
<magicaltrout> i do
<magicaltrout> juju bootstrap jujudev aws/eu-west-1 --bootstrap-series=trusty
<hatch> ahh ok ok so that's changed I guess
<hatch> any idea why the name change to bootstrap-series?
<magicaltrout> probably because you use it to bootstrap and not any other time :P
<hatch> haha, reasonable :D
<magicaltrout> i think they went through a big vocab clean up
<magicaltrout> I guess that got swept up in it
<hatch> appears to have worked, thanks magicaltrout
<magicaltrout> no probs
<magicaltrout> i should sit around and not do much most days.... i answer many more questions on IRC
<hatch> lol
<magicaltrout> https://www.kickstarter.com/projects/328582971/bakblade-20-the-ultimate-diy-back-and-body-shaver/?
<magicaltrout> just when you thought you had everything.....
<magicaltrout> don't tell me i don't bring useful information to this channel!
<arosales> magicaltrout: a constant wealth of info :-)
<magicaltrout> exactly arosales !
<magicaltrout> I knew you'd find that kickstarter useful
<magicaltrout> i bet your mrs is sick of shaving your back!
<arosales> magicaltrout: and you can read minds too, amazing
<magicaltrout> lol
<magicaltrout> i wont lie... its been a slow day ;)
<arosales> magicaltrout: but you discovered the bakblade
<arosales> so seems like a productive day
<magicaltrout> i'm trying to do anything but finish off this charm
<magicaltrout> focus has been lacking today
<arosales> magicaltrout: which charm?
<magicaltrout> pentaho data integration arosales , hooked it up to Hadoop so it can do HDFS put/get, and run MapR jobs
<magicaltrout> gonna add a few DB hooks as well so it can register database details
<arosales> magicaltrout: ah nice
<magicaltrout> but I needed to get it done as Bluefin is hosting the next pentaho user group in London and I have a demo at the european community meetup in antwerp in nov
<magicaltrout> so for that I'm sorting out this PDI charm and charming their BI server which is easy
<magicaltrout> making sure PDI registers and scales properly was more of an issue
<arosales> when is the bluefin pentaho meetup?
<magicaltrout> I was saying earlier, I want to get DC/OS master scaling fixed this week as well
<magicaltrout> Sept 1st
<arosales> ya, I also need to follow up with SaMnCo's email on the DC/OS charming
<magicaltrout> i spoke with the mesosphere chap sam introduced me to
<magicaltrout> he was interested in juju for MAAS deployments
<magicaltrout> I'll fix up my charms and get them production ready, but I could do with some tie in with Mesosphere  to make some of the build process easier
<arosales> ya I chatted with some of the DC/OS folks at MesosCon and they seemed interested in the charm
<magicaltrout> i have a talk proposed for mesoscon europe
<magicaltrout> dunno if it'll get accepted, i'm supposed to find out today
<arosales> magicaltrout: I haven't had any technical conversations with Mesosphere, but I had planned to. When I do I'll be sure to make you aware of those efforts
<magicaltrout> ta
<arosales> cool to hear re mesoscon, juju related?
<magicaltrout> yeah
<arosales> right on
<magicaltrout> i pitched to mesoscon and linuxcon
<magicaltrout> along with bigdata spain
<arosales> I think Canonical will be at ContainerCon and Apache BigData Europe
<magicaltrout> and I'll submit to apachecon europe as well
<arosales> cool
<magicaltrout> and then hopefully back out in pasadena with you guys. dumped 2 proposals into the charmer summit i think
<magicaltrout> and a few uk user groups
<magicaltrout> oh here's the mesoscon talk title: Flexibility across the cloud - Managing and scaling your High Availability DC/OS cluster using Juju
<magicaltrout> hopefully it gets accepted
<arosales> magicaltrout: you may have a busy travel season between sept - nov
<magicaltrout> hehe, arosales, working from home, travelling keeps me sane
<magicaltrout> at least i get to interact with people :P
<arosales> magicaltrout: I hear you on that one
<magicaltrout> i'm sat in Heidelberg currently having my ear bent on how many products I can bring to Snappy :P
<arosales> ha
<magicaltrout> be good though, i was thinking about a bunch of stuff like databases at the ASF which are a pain in the ass to package
<magicaltrout> so most distros don't bother
<magicaltrout> be great if we could snap them and have juju deploy that stuff somehow
<arosales> magicaltrout: I was thinking of caffe as a snap
<arosales> http://caffe.berkeleyvision.org/
<magicaltrout> ah yeah caffe, we tinkered with some of that earlier in the year
<magicaltrout> yeah that would be cool
<arosales> on the to do list which keeps getting longer :-) which I am sure you can relate to
<arosales> lots of good stuff to do
<arosales> little time
<magicaltrout> i deal mostly in java apps, generally speaking they should be cross platform enough, but the C/C++ type stuff at the ASF would be great for snaps
<magicaltrout> save on build  and deployment effort if 1 build runs everywhere
<arosales> that is what I was thinking
<arosales> and no pkgs yet for caffe
<magicaltrout> yeah
<magicaltrout> makes perfect sense
<magicaltrout> just pinged my JPL boss arosales about caffe asked him if he'd used it
<magicaltrout> said  he had, but they use Tensor Flow more
<magicaltrout> so there's another one for your list ;)
<jose> hello, who can I talk about the revq not updating?
<magicaltrout> marcoceppi:
<jose> I believe he's away
<magicaltrout> the master  of the review queue
<marcoceppi> jose: one second
<magicaltrout> he's like the grand wizard
<jose> oooh, magicaltrout is in fact magic!
<magicaltrout> indeed
<magicaltrout> don't doubt a fish
<cory_fu> bcsaller: Thanks for your review on https://github.com/juju/charm-tools/pull/235  I added a reply to explain my reasoning for the admittedly more invasive than strictly necessary refactor.  Let me know what you think?
<arosales> magicaltrout: ah yes tensor, that should be pretty easy to integrate into the spark charm
<magicaltrout> i never thought the live text commentary from a republican convention would keep me entertained on a monday night.. but it is
<magicaltrout> arosales: yeah there is work underway to integrate it into apache tika as well
<magicaltrout> I should charm the tika webapp one day that would take about 10 minutes
<arosales> Interesting re Tika
<magicaltrout> yeah there's a bunch of stuff there...
<magicaltrout> i WILL finish my backlog first! :P
<magicaltrout> when it gets delivered, i'm building a poor man's orange box as well arosales
<magicaltrout> https://www.kickstarter.com/projects/udoo/udoo-x86-the-most-powerful-maker-board-ever/ bought a bunch of these
<arosales> Ohhh poor man OB
<magicaltrout> hehe
<magicaltrout> well i can't afford a rich mans orange box.. but it looked like a fun project
<magicaltrout> build a few x86 boards and tie them together to prototype juju/maas stuff
<arosales> Ya I looked at a Maas pi
<arosales> But it had its limits
<arosales> This looks much more useful
<magicaltrout> yeah this has a bit more horsepower
<arosales> And x86
<magicaltrout> but would certainly be cool for usergroup meetings etc
<magicaltrout> just like the OB
<arosales> For sure
<arosales> Experimenting with MAAS as well
<magicaltrout> yeah its not something i've had the chance to do yet
<magicaltrout> all these people swinging by talking about maas and openstack
<magicaltrout> i feel left out ;)
<arosales> Ha
<arosales> You can do openstack on LXD
<arosales> Less maas
<magicaltrout> does that work these days?
<arosales> But Maas is pretty cool to transform a rack into a cloud endpoint
<magicaltrout> i came back from ghent wanting to test that and jamespage said he had to hack some script so I left it
<arosales> It does :-)
<magicaltrout> cool
<magicaltrout> yet another thing to add to the list *sob*
<arosales> You still need to update the LXD profile but works other than that
<arosales>  magicaltrout https://github.com/openstack-charmers/openstack-on-lxd
<arosales> Ref from http://docs.openstack.org/developer/charm-guide/getting-started.html
<magicaltrout> aww rubbish
<magicaltrout> now i'm going to bed at like 4am or something
<arosales> It will use some resources
<arosales> Just an fyi
<arosales> Just tack it on the to-do list :-)
<magicaltrout> if jamespage can run it on his laptop i'm pretty sure my dev server with 64gb of ram will cope
<magicaltrout> of course that means i'll then feel compelled to throw as much at it as possible
<arosales> Oh your server for sure can handle it
<magicaltrout> my host has disused server auctions
<magicaltrout> I got this 16 core 64gb ram box for £15/month
<magicaltrout> I could build the worlds most cost efficient MAAS cloud service if i had the bandwidth ;)
<magicaltrout> arghhh the docs template changed
 * magicaltrout wonders how long it will take to get over it
<magicaltrout> google is failing me
<magicaltrout> where is some sample code for resources in python
<magicaltrout> cory_fu knows this stuff
<cory_fu> magicaltrout: 2.0 resources?  I'm not sure, TBH.  mbruzek might know, though
<magicaltrout> you don't know cory_fu
<magicaltrout> what's going on with this world?
<magicaltrout> anyone code a snippet of python doc for getting a file from a resource?
<mbruzek> I got you magicaltrout
<magicaltrout> thanks beardy
<marcoceppi> magicaltrout: this charm makes simple use of them https://github.com/marcoceppi/layer-charmsvg
<mbruzek> https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L179
<magicaltrout> there we go, i knew there was a sample
<magicaltrout> thanks chaps, blame merlijn and his claim resources work now
<mbruzek> magicaltrout: Our use of it is a little more advanced, but we do it in a try/except to gracefully fail on older juju versions
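(The linked etcd layer does roughly this. A self-contained sketch of the pattern mbruzek describes — `fetch` stands in for `charmhelpers.core.hookenv.resource_get`, which fails on controllers that predate resources; the simulated controllers below are purely illustrative:)

```python
def get_resource(fetch, name):
    """Fetch a charm resource, degrading gracefully on old controllers.

    `fetch` stands in for charmhelpers.core.hookenv.resource_get.
    Returns the path to the fetched file, or None if unavailable.
    """
    try:
        path = fetch(name)
    except Exception:
        return None          # e.g. juju 1.25: resource-get doesn't exist
    return path or None      # juju 2.x: falsy result means nothing attached

# Simulated controllers for demonstration:
new_juju = lambda name: "/var/lib/juju/resources/%s" % name
old_juju = lambda name: (_ for _ in ()).throw(OSError("no resource-get"))

print(get_resource(new_juju, "etcd"))  # /var/lib/juju/resources/etcd
print(get_resource(old_juju, "etcd"))  # None
```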
<magicaltrout> yeah thats helpful mbruzek i was wondering about that stuff
<mbruzek> magicaltrout: 1.25 is dead to me.
<magicaltrout> i used 1.25 about 6 months ago
<mbruzek> but some others still use it
<magicaltrout> but some people seem to use it
<mbruzek> jinx
<magicaltrout> on that note regarding terms, if I add terms to the saiku analytics charm does that  mean it just explodes on 1.25?
<jose> what does test status 'retry' mean on the revq?
<jose> marcoceppi: is something going on with juju-ci?
<mattrae> hi, i'm trying to enable-ha with juju 2.0 beta and running into this bug.. is there a workaround for enabling ha, or will ha not be working until 2.1? https://bugs.launchpad.net/juju-core/+bug/1563705
<mup> Bug #1563705: cmd/juju: "juju enable-ha" fails if you're not operating on the admin model <bitesize> <juju-release-support> <papercut> <juju-core:Triaged> <https://launchpad.net/bugs/1563705>
<rick_h_> mattrae: it might get updated but for the moment you need to juju switch controller
<rick_h_> mattrae: and then the command will succeed
<mattrae> rick_h_: great thanks!
<rick_h_> mattrae: updated the bug with the workaround, thanks for pointing out that it wasn't helpful there.
<mattrae> rick_h_: great thanks, i was just about to do the same thing :D
<cholcombe> thedac, thanks for the mojo video :)
<thedac> no problem
<magicaltrout> nooooo
<magicaltrout> resource upload fail
<lazyPower> repost from way back, because i had no idea my bouncer had tanked
<lazyPower> [15:28:08] lazyPower:	cory_fu - ah, there is no elasticsearch with storage support yet. I was proposing to make the modification and contribute it back
<x58> I'm attempting to test my charm, however when it fails and I try to do juju upgrade-charm my updates never seem to make it to the test machine...
<magicaltrout> x58: is it in a running state prior to the upgrade?
<x58> magicaltrout: Yeah, failed due to a missing import.
<x58> So I emptied out my .py, then all "hooks" succeed, and then it would download the updated version install it and run it.
<x58> Kind of a shame I can't force upgrade when it has failed.
<magicaltrout> x58: right so if its not in a running state
<magicaltrout> you can resolve the failed hooks
<magicaltrout> and then run the upgrade
<magicaltrout> i suspect the post by marcoceppi still holds true as well http://marcoceppi.com/2015/01/force-upgrade-best-juju-secret/
<x58> I tried resolving the failed hook
<x58> with juju resolved -r
<magicaltrout> dunno what -r does
<magicaltrout> but yeah
<magicaltrout> juju resolved service/unit
<magicaltrout> you might have to run it a few times as it runs through a bunch of failed hooks
<magicaltrout> resolve failed hook one, resolve failed hook 2 etc
<x58> ah
<x58> I am using the charmhelpers.contrib.network.ip and when it tries to import netifaces it fails, it installs it, but once installed the import still fails.
<x58> I am using layered charms, do I need to specify netifaces in my wheelhouse?
<x58> or as a package?
<x58> charmhelper installs python-netifaces instead of python3-netifaces, and all charms are run with Python 3.
<x58> So that's a problem.
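(One way around the python2/python3 mismatch x58 hit: declare the python3 apt package explicitly in the charm's layer.yaml, which layer:basic supports via its `packages` option. Sketch, assuming the xenial package name:)

```yaml
# layer.yaml (hypothetical): install python3-netifaces up front instead of
# relying on charmhelpers' python2 fallback.
includes: ['layer:basic']
options:
  basic:
    packages: [python3-netifaces]
```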
#juju 2016-07-19
<lazyPower> x58 - yeah you're finding some rough edges with charmhelpers vs reactive
<lazyPower> really sorry about that too :( We've been moving kind of fast, and it's left a trail in the tooling that we haven't been able to circle back and get either updated, or deprecated, just yet.
<kjackal> hello juju world
<gnuoy> hey stub, I see that https://code.launchpad.net/~stub/charms/trusty/nrpe/py3/+merge/300153 has a prereq branch which isn't proposed yet. Is that an oversight? I assume that the normal workflow here is to merge the prereq branch then this one.
<neiljerram> Morning
<neiljerram> Does anyone happen to know if I can control the names of the machines that Juju creates when I ask it to deploy a bundle?
<neiljerram> In other words, I'd prefer something more meaningful than 'juju-466b6192-d900-4e7a-8fa4-e3935e5c25f5-machine-0' or 'juju-26a5f7-0'
<bryan_att> gnuoy: ping
<bryan_att> any charm experts here that can help with a layer question?
<marcoceppi>  bryan_att ask away
<bryan_att> marcoceppi: I have an issue where a value is not getting set in one of the config templates. It seems that the "layer"  is not passing the right value, or the value name changed. How do I debug this?
<marcoceppi> bryan_att: is this a layer.yaml option or a config.yaml option?
<bryan_att> https://github.com/blsaws/charm-congress - see https://github.com/blsaws/charm-congress/blob/master/src/templates/liberty/congress.conf
<bryan_att> the value "connection = {{ shared_db.uri }}" is not getting set
<bryan_att> this comes from I think includes: ['layer:openstack-api'] in https://github.com/blsaws/charm-congress/blob/master/src/layer.yaml
<bryan_att> But I don't know how to check this "layer" to see if the variable has changed somehow.
<bryan_att> I further need to document this dependency (if the root of my current build/deploy issue) if it can result in CI/CD failure - so I need to know what is the root issue here and how to debug such (potential) issues with layers
<marcoceppi> bryan_att: ah, so shared_db.uri comes from the mysql-shared interface layer
<marcoceppi> bryan_att: the openstack-api layer pulls in interface:mysql-shared: https://github.com/openstack/charm-layer-openstack-api/blob/master/layer.yaml
<marcoceppi> bryan_att: I don't see a .uri key for that library though
<bryan_att> marcoceppi: sorry, I don't know what a .uri key is, and can't see from the file what is being referenced (or at least where to find the real code)
<marcoceppi> bryan_att: sorry, is shared_db.uri a key in your template or a key in the openstack-api template?
<bryan_att> I don't know
<jamespage> bryan_att, hey - can you join #openstack-charms please
<bryan_att> sure
<acovrig> I get this error running conjure-up openstack: cannot find network interface "lxdbr0": route ip+net: no such network interfaceERROR invalid config: route ip+net: no such network interface
<stokachu> acovrig: try running lxd init
<acovrig> stokachu: still get the error...
<acovrig> and I don't see lxdbr0 in ifconfig
<stokachu> acovrig: ugh, such a pain point for users
<acovrig> stokachu: what if I install docker first?
<stokachu> acovrig: that has nothing to do with lxd
<stokachu> acovrig: sec
<acovrig> yea, but wasn't sure if it would configure the if, but I guess it uses docker0 so I guess not
<stokachu> acovrig: what does, sudo ifup lxdbr0 do?
<acovrig> stokachu: Unknown interface lxdbr0
<stokachu> acovrig: that bridge is socket activated so something has to touch it to turn on
<stokachu> stub: one sec
<stokachu> acovrig: one sec
<stokachu> stub: disregard
<stokachu> acovrig: try running 'lxc list' as a normal user
<stokachu> see if that activates the bridge
<acovrig> stokachu: it doesn't show anything in the list, would a reboot help?
<lazyPower> stokachu - wasn't there a bit where if you had lxc prior, you had to dpkg-reconfigure to get the proper bridge?
<stokachu> acovrig: now what does `ifconfig` show
<stokachu> lazyPower: yea we do that for them but sometimes it doesn't activate the new bridge :(
<lazyPower> :|
<acovrig> stokachu: 2 physical, lo, and openstack0
<stokachu> acovrig: :((((
 * stokachu sad
<stokachu> acovrig: ok give me another second
<stokachu> acovrig: for kicks try running `conjure-up openstack` again
<stokachu> acovrig: what about `ifconfig -a`
<acovrig> stokachu: same as ifconfig; should I pick OpenStack or OpenStack with Nova-LXD?
<stokachu> acovrig: if youre doing it on a single machine go with novalxd
<acovrig> stokachu: intend to run on a cluster of 3 machines (once I get an update to fix a bug in maas)
<stokachu> acovrig: the other openstack spell requires i think 4-5 machines
<acovrig> stokachu: same error… (cannot find network interface "lxdbr0")
<stokachu> acovrig: bah
 * acovrig sudo reboot pending
<stokachu> acovrig: yea try that :|
<acovrig> stokachu: ifconfig shows lxdbr0, running conjure now
<stokachu> acovrig: yay
<stokachu> acovrig: it shouldn't be this difficult to start that bridge
<stokachu> we may try to handle it better in conjure-up
<stokachu> acovrig: are you running the pre-release?
<acovrig> lol, yea; also, when installing it added me to the lxd group, but I had to logout and back in to get it to accept that, I wonder if I should have rebooted then
<acovrig> stokachu: I'm running ubuntu 16.04
<stokachu> acovrig: yea or log completely out
<stokachu> acovrig: what does dpkg -l conjure-up show
<acovrig> stokachu: 0.1.2 and juju 2.0~beta7-0ubuntu1.16.04.1
<acovrig> still bootstrapping
<stokachu> acovrig: ok, do you want to test the latest stuff?
<stokachu> acovrig: if you're curious https://www.irccloud.com/pastebin/91OmlH1S/
<acovrig> stokachu: I presume that last line should be "conjure-up openstack" instead of "conjure-up openstack|"
<stokachu> yea sorry
<acovrig> I may look at it myself, I'm currently setting this up at work to compare an openstack cluster with a windows hyper-v cluster, so I may set it up on the servers at my house to tinker with it.
<acovrig> stokachu: this looks promising, it's deploying 17 services and is ~1/2 done
<stokachu> acovrig: cool, should be good to go
<acovrig> stokachu: yup, now if I can get a bug fixed in maas, I'd scale this out across 3 machines lol
<stokachu> :D
<stokachu> what bug in maas?
<acovrig> stokachu: https://bugs.launchpad.net/maas/+bug/1604128
<mup> Bug #1604128: [2.0RC2] Unable to add a public SSH Key due to lp1604147 <cdoqa-blocker> <verification-done> <MAAS:Fix Committed by allenap> <MAAS 2.0:Fix Committed> <MAAS trunk:Fix
<mup> Committed by allenap> <maas (Ubuntu):Fix Released> <maas (Ubuntu Xenial):Fix Released> <maas (Ubuntu Yakkety):Fix Released> <https://launchpad.net/bugs/1604128>
<stokachu> ah ok rc2
<stokachu> shouldnt be long for that
<acovrig> stokachu: yea, I'm talking to someone on #maas and they said it's fixed in -updates, I apt upgrade'd and got 2.0.0~rc2+bzr5156-0ubuntu2~16.04.2, but it still isn't letting me put a key in...
<stokachu> hmm
<acovrig> yea, and I think they're AFK (~1hr)
<stokachu> acovrig: is this just putting the key in the UI
<stokachu> copy and pasting?
<acovrig> stokachu: yup
<stokachu> acovrig: that is weird
<acovrig> a different install (VM) (I think it was the same version) accepted this key
<acovrig> lol, was about to ask how long I should expect this to take (conjure-up openstack), but then I looked at top… 44% iowait, 12% idle… *sigh* slow hardware...
<geetha> Hi, My IBM WAS Base charm needs 2 fixpack packages. When a fixpack is already installed and the user wants to upgrade using the latest fixpack packages, I have written code like this: http://paste.ubuntu.com/20056926/. But when I run the juju attach command for the latest fixpacks, the upgrade_charm hook is triggered and compares checksum values for the existing fixpack packages and the new fixpack packages. Here the 2nd package is not getting updated with the latest
<geetha> when I try to verify the checksum, for the second package it's showing the same checksum value for the old and new fixpack package
<stokachu> acovrig: yea it takes a toll on the hardware for a single install
<stokachu> acovrig: we usually recommend at least 16G ram for that spell
<geetha> could anyone please suggest how to check whether the packages are old or new in a multiple-package scenario
<acovrig> stokachu: lol, yea I have exactly 16
<stokachu> acovrig: ok yea, should take about 45 minutes
<acovrig> stokachu: yea, it's been going for ~1.25hr, it's done with 9/18, 8 installing apt packages, 1 incomplete relations: identity
<stokachu> acovrig: ok
<stokachu> acovrig: if it takes that long i would kill it and try the latest from our ppa
<stokachu> update juju to beta12 as well
<stokachu> tons of fixes
<acovrig> stokachu: âour ppaâ being âppa:juju/develâ and âppa:conjure-up/nextâ?
<stokachu> acovrig: yea
<acovrig> stokachu: apt purge juju conjure-up && apt install juju conjure-up or just upgrade? (I.E. will there be any broken leftovers if I ^c the current conjure process)
<stokachu> shouldn't be any leftovers if you ^c
<stokachu> acovrig: your first one would work
<stokachu> or apt dist-upgrade
<acovrig> stokachu: apt upgrade? wouldn't dist-upgrade upgrade my distribution of ubuntu?
<x58> Any guides on publishing stuff to the charm store?
<magicaltrout> https://insights.ubuntu.com/2016/04/16/charm-charm-tools-2-0/
<magicaltrout> the docs currently are a bit lousy because it changes
<magicaltrout> you need to charm push
<magicaltrout> charm publish
<magicaltrout> charm grant
<magicaltrout> the help on the cli is pretty useful
<x58> I'm logged into the charmstore and there is a button that says "create new" but clicking it does all of nothing.
<magicaltrout> dunno what that is
<magicaltrout> on the command line
<magicaltrout> charm login
<magicaltrout> then you can interact with the CS from the cli
<magicaltrout> and push your stuff
<x58> Found some docs, will follow, will report back if I fail.
<magicaltrout> you don't need to setup anything on the web
<x58> https://jujucharms.com/docs/devel/authors-charm-store
<x58> This tells me I need to login to the charm store in my browser.
<magicaltrout> yeah
<magicaltrout> but then login on the CLI
<magicaltrout> oh yeah they were the docs I was grepping for
<x58> Published my first charm =)
<x58> https://jujucharms.com/u/bertjwregeer/staticroutes
<x58> I read the docs, and the README could be a rst file, yet it is not being rendered correctly :-(
<magicaltrout> woop
<magicaltrout> change rst to md
<magicaltrout> ignore the lies!
<x58> I'm a python dev... rst is my life.
<magicaltrout> sad times!
<magicaltrout> better than this crazy go stuff :)
<x58> configuration also loses my formatting (specifically newlines that make it readeable and understandable :-()
<magicaltrout> okay 2 things a) not sure: https://jujucharms.com/u/spicule/pentahodataintegration/trusty/8 that's one of mine, the formatting is acceptable
<magicaltrout> b) you've not granted public read rights
<magicaltrout> so I can't see it and have no clue :)
<x58> How do i grant it?
<x58> I can see it, clearly I am the most important person ;-)
<magicaltrout> charm grant cs:~spicule/wily/dcos-master --acl=read everyone
<x58> magicaltrout: done.
<x58> charm grant cs:~bertjwregeer/staticroutes everyone (from the docs)
<magicaltrout> yeah
<magicaltrout> not sure how rst works
<magicaltrout> but for blocks
<magicaltrout> just indent 4
<magicaltrout> and headings use #, ##
<magicaltrout> usual markdown stuff
<x58> Yeah, that's markdown... rst has headings, so ------- underneath a line is a top-level heading <h1>, ~~~~~~ underneath a ------- heading is a <h2>
<x58> .. code:: is valid rst.
<magicaltrout> yeah, but rst has never rendered for me on the charms store
<x58> See rendering here: https://gitlab.com/bertjwregeer/juju_staticroutes
<x58> Ah
<magicaltrout> so unless you're magic, it'll be broke :)
<x58> That's a different story. I followed docs. Docs said md or rst.
<magicaltrout> yeah i have no clue
<magicaltrout> the sample charm is a .rst file
<magicaltrout> but it never formats when it's deployed
<x58> Where can I file bugs against the charmstore?
<magicaltrout> https://github.com/juju/charmstore
<magicaltrout> probably
<lazyPower> https://github.com/CanonicalLtd/jujucharms.com/issues
<magicaltrout> i lied
<lazyPower> x58 - link is in the footer of the page :)
<magicaltrout> there be no bugs
<magicaltrout> its written by professionals
<jrwren> where do docs say RST? that is incorrect and has never been supported AFAIK.
<lazyPower> jrwren - RST was a supported format back in the juju 1.x days, in the old store. Kapil made heavy use of RST
<magicaltrout> jrwren: the charm template creates you a RST
<magicaltrout> or it did last time i tried
 * jrwren is disappointed.
<x58> Well I just filed two bugs in the wrong place :P
<magicaltrout> lol
<x58> https://jujucharms.com/docs/devel/authors-charm-policy#readme.md
<x58> that is where .rst is mentioned.
<magicaltrout> lies all round
<magicaltrout> chatting with Bill on Thursday lazyPower, got most of my PDI charm done.. and then I get sidetracked writing a maven plugin for snappy *boom*
<magicaltrout> I really need to stop doing new stuff
<lazyPower> Whats the fun in that?
<magicaltrout> hopefully still get DC/OS polished off this week
<x58> People tend to complain when people DON'T read docs, and now I read the docs and it's wrong...
<x58> That just hurts.
<lazyPower> x58 we apologize, and will take steps to address the bug
<magicaltrout> blame it on evilnickveitch
 * magicaltrout runs for the hills
<magicaltrout> lazyPower: well there is no fun in that, but apparently no one had thought of adding snapcraft to existing build chains
<magicaltrout> which seems rather weird
<magicaltrout> so i'm taking some steps to address that
<lazyPower> :D
<lazyPower> thats phenomenal
<magicaltrout> well if you have a build setup like a big maven multi module project or something, why would you replace that with a snapcraft maven weird thing?
<magicaltrout> especially if you've got CI, reporting etc
<magicaltrout> and I'm sure its not just me in java world
<magicaltrout> but it seems reasonable to build artifacts then programmatically create a snap definition to create your image post build
<magicaltrout> a bit like with juju where you might want to replace specific bits of infrastructure but not replace all your nicely crafted puppet scripts or cookbooks
<x58> lazyPower: lol. I've filed bugs :-)
<lazyPower> x58 thank you, sincerely :) i appreciate the effort
<x58> lazyPower: I maintain WebOb (python request/response library) and work heavily with the Pylons project on Pyramid. I understand that docs can be out of date or wrong. No-one likes writing docs.
<magicaltrout> see the funny thing is if i said a sentence like that i'd be taking the piss
<magicaltrout> lazyPower is being honest though I think
<x58> magicaltrout: haha
<magicaltrout> x58: where you based?
<lazyPower> i am :) I have no idea how many users come by and dont file bugs when we're just plain wrong on something
<lazyPower> to see someone take a couple minutes and file a bug, which makes us better makes me happy
<magicaltrout> see there's the american sincerity shining through
<magicaltrout> you don't get that in the uk ;)
<magicaltrout> we might think it, we just don't articulate it in that manner ;)
<lazyPower> magicaltrout - i appreciate you too sir
<lazyPower> :D
<magicaltrout> lol
<magicaltrout> *cringe*
<x58> magicaltrout: Denver, CO. Currently got a mattrae sitting next to me working on our production deployment.
<x58> magicaltrout: I am from The Netherlands though. So we like to tell people how it is, Dutch Directness.
<magicaltrout> hehe
<magicaltrout> maybe but the dutch isn't as scary as german
<magicaltrout> s/isn't/aren't
<x58> Germans are scary because their language always sounds angry. Even when expressing love.
<x58> We Dutch have toned down that "anger" in our language.
<magicaltrout> yeah i'm sat in a bar now, and every 10 minutes the woman walks past and shouts "1 more?" at me
<magicaltrout> i feel like i should say yes everytime
<x58> magicaltrout: Where do you work? Found an email address on one of your charms and the bare naked domain takes me to an ESXi welcome screen :P
<magicaltrout> does that still happen?
<x58> analytical-labs.com
<magicaltrout> I don't even know who owns that box :)
<magicaltrout> i'm part self employed data guy based in London, part NASA JPL data scientist based in Pasadena and part time Juju/Snappy hacker
<magicaltrout> did apachecon in denver a few years ago, jesus christ, i'm never drinking in denver again
<x58> magicaltrout: Welcome to altitude.
<x58> Learn to drink up here, and you can drink like a fish at sea level.
<magicaltrout> lol
<magicaltrout> i'm sure
<magicaltrout> the only place i've switched on a satnav in a hotel basement parking lot and it warns me I should get lower
<x58> I don't drink anymore, but in Denver I could put away a fair bit. Ended up in Vegas for DEF CON... let's just say I spent a lot of money.
<magicaltrout> hehe
<petevg> cory_fu: I'm chewing on a thing with the interaction between puppet and juju in the Bigtop charms.
<petevg> The idea behind puppet is that it should be safe to run arbitrarily on a running system. It will reset config files that you've manually modified, but that's expected -- you're supposed to change the puppet config if you want stuff to change.
<petevg> So the question is: I have a config value that I may want to set at deploy, or may want to change while my server is running.
<petevg> What's the best practice for handling that situation?
<petevg> Basically, I want to wind up in a situation where I can do juju upgrade, and have the value stay set.
<petevg> cory_fu: you mentioned having a juju action manually update the juju config on the host machine. Do I have to worry about updating the config locally, too? Or will juju respect the config on the host if it's re-run?
<cory_fu> petevg: I did not mention having an action update config.  I mentioned having a config option update the config.  ;)  But yes, the puppet scripts ought not wipe out any data but c0s suggested that they might (for HDFS, at least).  If they do, the most correct (if perhaps not the most expedient) solution would be to fix the Bigtop puppet scripts
<cory_fu> For the case of Kakfa, it seems unlikely that the scripts will wipe out any data, but it's worth verifying
<petevg> cory_fu: Got it. So the charming way to do things would be to update your config and re-run juju deploy, but there may be broken behavior in Bigtop. I'll try to test for it. Thx.
<cory_fu> petevg: I assume you mean re-run puppet apply?
<petevg> cory_fu: no, I didn't. But that's why I typed it -- I wanted to make sure that I wasn't walking away with wrong things to do :-)
<petevg> What I'm intending to do is to have a layer option that sets the "bind" address for kafka.
<petevg> If I want to change that, I can update the layer option, but then what command do I want to run to tell juju to update the config on the host?
<petevg> *sets should be "specifies"
<cory_fu> petevg: No, no.  Not a layer option, a config option.  Those can be changed after deployment (and on a per-deployment basis), while layer options cannot
<petevg> cory_fu: got it. So I do juju deploy --config <some config>.yaml kafka
<petevg> ... and then later, I change my mind about the config. What do I run to update it?
<stokachu> cory_fu: just making sure you saw some PR's i created https://github.com/juju-solutions/bundle-apache-processing-mapreduce/pulls
<cory_fu> petevg: juju set-config listener=0.0.0.0
<cory_fu> petevg: juju set-config kafka listener=0.0.0.0
<petevg> cory_fu: Ah, cool. And then we call the config-changed handler, which I believe runs puppet, and all is good.
<petevg> Cool. Thank you :-)
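[editor's note: the flow the two settle on above (juju set-config fires the config-changed hook, whose handler re-runs puppet so the rendered config picks up the new value) can be sketched as below. All the names here (`on_config_changed`, the dict-based config, `run_puppet`) are hypothetical stand-ins, not the real charms.reactive or Bigtop APIs.]

```python
def on_config_changed(config, previous, run_puppet):
    # Re-apply puppet only for options that actually changed since the
    # last run, so the rendered site config picks up the new values.
    changed = {k: v for k, v in config.items() if previous.get(k) != v}
    if changed:
        run_puppet(changed)
    return changed

applied = []
changed = on_config_changed(
    {'listener': '0.0.0.0'},   # new charm config (after juju set-config)
    {'listener': '10.0.0.1'},  # config as of the last puppet run
    applied.append,            # stand-in for invoking `puppet apply`
)
print(changed)  # {'listener': '0.0.0.0'}
```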
<cory_fu> stokachu: Yeah, sorry, I saw it.  It seems fine to me, but I'm still a bit unclear about the drive behind conjure.  Also, we're deprecating that bundle in favor of hadoop-processing, which lives in Apache Bigtop
<stokachu> cory_fu: if you run the instructions in the PR it'll give you a better idea
<stokachu> is apache bigtop a different repo?
<stokachu> cory_fu: im guessing this is it https://github.com/juju-solutions/bigtop/tree/master/bigtop-packages/src/charm
<stokachu> cory_fu: and what hoops do i have to jump through to make contributions?
<petevg> stokachu: juju-solutions/bigtop is our fork of apache/bigtop. You can make and push branches off of it, but you want to make your pull requests against apache/bigtop. (github will do that by default).
<petevg> As for hoops, there are a few more :-)
<magicaltrout> will you get sucked up into apache/bigtop without an ICLA etc signed?
<petevg> No.
<stokachu> i just care about the charms, is opening a PR against that repo enough?
<petevg> Hang on. Typing up an explanation :-)
<petevg> Basically, the Bigtop folks will be happiest if you first file a ticket in their Jira: https://issues.apache.org/jira/browse/BIGTOP/
<petevg> Then when you create your PR, you want it to have a title exactly in the following format:
<petevg> BIGTOP-<ticket number> <Ticket Title>
<petevg> ... where <ticket number> is the four digit (so far) ticket number, and the title is the same string as you put in the title of your ticket.
<petevg> That way, the PR will get slurped up automatically into the Jira issue, and people will be able to find it easily.
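[editor's note: the title format petevg describes can be illustrated as below; the helper name, ticket number, and title are made up, the "BIGTOP-<ticket number> <Ticket Title>" format string is the part stated above.]

```python
def bigtop_pr_title(ticket_number, ticket_title):
    """Build a PR title that Apache's Jira/GitHub integration will
    auto-link to the BIGTOP ticket: 'BIGTOP-<number> <title>'."""
    return "BIGTOP-{} {}".format(ticket_number, ticket_title)

print(bigtop_pr_title(9999, "Example ticket title"))  # BIGTOP-9999 Example ticket title
```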
<stokachu> ok
<magicaltrout> just caring about charms makes the apache software foundation sad.....
<magicaltrout> ;)
<petevg> They actually seem to be pretty happy about charms ... so long as we play nicely with their Jira automagic github integration thing :-)
<cory_fu> stokachu: Also, even though new dev is focused on Bigtop, the vanilla Apache bundle isn't going to go away, so we can accept the PR into that in the meantime
<magicaltrout> petevg: i can speak as an insider, it's pretty important, else you just end up with a bunch of unclosable pull requests
<magicaltrout> and its a right pain in the balls
<stokachu> so just to clarify, a new contributor will need to first learn charm development with the reactive layer, then try to figure out where the charms live under different projects and learn their submission process?
<stokachu> obviously the charmstore doesn't show this information
<magicaltrout> well there is talk of making charms live with their parent projects
<magicaltrout> with the new push mech
<petevg> stokachu: when you put it that way ... :-/
<magicaltrout> doubt that flies with cory_fu
<stokachu> right, where do you go to find all this out?
<magicaltrout> but you know, technically the charms could live in the BigTop
<stokachu> right now im going to 3 different places to figure out how to make a contribution to a charm
<magicaltrout> well if you found the source, and they live with their projects, you're in the right place?
<stokachu> https://jujucharms.com/apache-processing-mapreduce/
<stokachu> thats not the source of bigtop which seems to be where development is heading, right?
<magicaltrout> yeah, it is true, but it's also legacy
<magicaltrout> this stuff will iron itself out over time
<stokachu> how do i know that
<magicaltrout> well you don't
<stokachu> :\
<magicaltrout> but
<magicaltrout> since the new charm push/charm publish stuff is available now
<magicaltrout> it makes charms in their respective projects much easier
<stokachu> just b/c you dont have to push to launchpad first before it gets in the charmstore doesn't make discovery easier
<stokachu> for people who want to contribute
<petevg> stokachu: I'm definitely paying attention and thinking about it ...
<magicaltrout> no, but if you go to a charm, it will give you a link to the source (doesn't it?)
<stokachu> https://jujucharms.com/apache-processing-mapreduce/
<stokachu> is that the same source as bigtop??
<petevg> Putting stuff in the bigtop repo is meant to centralize the big data charms and make them easier to find.
<magicaltrout> oooh
<petevg> But you are having to deal with two separate communities, which is a little tricky.
<magicaltrout> jujucharms directs me to LP
<stokachu> so you have 2 big barriers: getting people to learn charm layers, then having them go through a potentially new process every time they want to submit something
<magicaltrout> okay thats a bit shit
<magicaltrout> the charm layer contains a repo path
<magicaltrout> (i may just have not filled mine in)
<magicaltrout> it should direct you to the source
<magicaltrout> and if you manage the source of the charm with the source of the project then thats fine
<stokachu> petevg: a start would be to update the README in the toplevel dir about how to contribute a charm
<petevg> stokachu: That's an excellent idea.
<petevg> I'm also going to badger the big data team about updating our metadata so that there's always a link to click to view source from the charm store.
<stokachu> petevg: ok thanks
<petevg> No problem. Thank you for the feedback!
<stokachu> petevg: the readme could be as simple as what you told me here on irc
<stokachu> cory_fu: ok, give that PR a test run and let me know if you have questions
<stokachu> cory_fu: once you run it once it'll give you a good idea what conjure-up does
<magicaltrout> on a more useful note... i asked my colleagues if my diet of beer and hotel peanuts will kill me
<magicaltrout> http://www.ncbi.nlm.nih.gov/pubmed/12119580
<magicaltrout> i got sent that as a response
<petevg> it sounds like the paper says eat as many peanuts as you want: you won't even get tired of them. I approve of these findings :-)
<magicaltrout> hehe
<magicaltrout> i'm working on an authentic conclusion
<lazyPower> "The effects of hotel peanuts and beer on magicaltrout over a decade, as computed by apache-realtime-streaming"
<lazyPower> "and visualized with zepplin"
<magicaltrout> i think it would likely show a sharp decline in something
<magicaltrout> productivity probably
<magicaltrout> that said, i've been sat here doing nothing whilst waiting on infra for about 3 hours
<magicaltrout> luckily
<magicaltrout> i'm paid by the hour
<lazyPower> or, you consistently reach the ballmer peak and you gain superhuman coding powers for 4 hours a day, and manage to charm all the things, thus rendering the rest of us puny mortals insignificant in your charming wake
<lazyPower> clearly i've thought about this...
<magicaltrout> well... yesterday i tested my pdi <-> hdfs tweaks to the charm
<magicaltrout> and they worked first time
<lazyPower> banking on unnatural phenomenon
<magicaltrout> so maybe you have a point
<lazyPower> see?
<lazyPower> may the lesson be:  feed magicaltrout more beer
<lazyPower> and hotel peanuts*
<magicaltrout> oh i can vouch for the beer part
<magicaltrout> I did all my coding at uni whilst drunk and that was one of the few modules i passed with a 1st
<magicaltrout> and lets face it, i work a lot for NASA in the evenings.....
<lazyPower> anecdotal evidence is still evidence right?
<magicaltrout> indeed
<magicaltrout> of the highest order
#juju 2016-07-20
<kjackal> Hello juju world!
<magicaltrout> lightning talks today....
<magicaltrout> hopefully mark has a good sense of humour.... https://imagebin.ca/v/2oh1GLMMHBVk
<sharan> The Data Server Manager product monitors all the DB2 instances present. I have deployed multiple units of DB2. My Data Server Manager charm has a relation with the DB2 charm. It should get all the DB2 units' details: private-address as well as db2Port, db2username, db2password of each unit in the DSM charm. As the DB2 interface uses Scope.Service, I am getting only one DB2 unit's information. How do I get multiple DB2 units' information? Please help me on this
<lazyPower> magicaltrout BRILLIANT!
<lazyPower> sharan - you will need to change the scope of the relationship to scope: unit, and handle each conversation
<magicaltrout> hehe
<magicaltrout> https://docs.google.com/presentation/d/1UdKSsuXpYSy25V9HuxnqgzQRmlC1DEtLJUgEY-5PDoc/edit?usp=sharing
<lazyPower> magicaltrout - https://www.sunfrog.com/LINUX-SYSTEM-ADMINISTRATION--WE-DO-T4-Dark-Grey-Guys.html?70283
<magicaltrout> lol
<lazyPower> nice deq
<magicaltrout> i feel like I should own one
 * lazyPower approves
<lazyPower> very tongue in cheek :)
<magicaltrout> hehe
<magicaltrout> the presentation tonight is a similar thing to what we discussed in juju re: adoption
<magicaltrout> if you tell everyone they're going to have  to rip their existing stuff out
<magicaltrout> they'll tell you to get lost
<magicaltrout> which hinders adoption
<magicaltrout> so i figured i might as well make an amusing lightning talk out of it
<jrwren> that is a REALLY good point. I've thought for a while that a nice pattern would be to allow any setting that is set by a relation to be overridden by config. That way juju deployed apps could play nicely with existing apps.
<sharan> lazyPower - will it work if i change scope:unit? handle each conversation means i need to handle that in my DSM charm by having a for loop?
<magicaltrout> yeah jrwren, and that's why, in a juju context, puppet/chef etc interoperability, either by embedding modules or whatever, is important
<magicaltrout> because companies already have all their automation stuff
<sharan> i have to update the DB2 interface from scope:service to scope:unit, right?
<magicaltrout> if you tell them they need to throw it all in the bin, they'll just ignore you
<lazyPower> sharan - right, any unit that's not currently broadcasting its information on the relationship will need to be updated to scope:unit, and you'll need to iterate over each conversation and handle it appropriately
<lazyPower> sharan - mbruzek and i have an example of this in some of our relationship interfaces we've done. I'm sure there are others though.  as an example, here is a peer relationship that is scope unit  - https://github.com/juju-solutions/interface-etcd/blob/master/peers.py
<sharan> ok let me go through this link, thanks
<lazyPower> and for a slightly more complex example, here is the peering interface for layer-tls  https://github.com/juju-solutions/interface-tls/blob/master/peers.py  - its managing certificate signing requests, and managing host details.
<sharan> so after updating the scope in the interface, i need to change some code in the DB2 charm also, right?
<lazyPower> yeah, mostly you'll need to iterate over the conversations and address as appropriate.
<lazyPower> conv = self.conversations(); for unit in conv: unit.set_remote(data={'foo': 'bar'})  # do something useful here per unit
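[editor's note: the one-liner above can be written out as a runnable sketch. `Conversation` here is a hypothetical stand-in for the conversation objects charms.reactive returns from `self.conversations()`, used only to show the shape of the scope:unit loop.]

```python
# Hypothetical stand-in for a charms.reactive conversation object.
class Conversation:
    def __init__(self, unit_name):
        self.unit_name = unit_name
        self.remote_data = {}

    def set_remote(self, data):
        # The real API publishes this data on the relation wire.
        self.remote_data.update(data)


def publish_to_all(conversations, data):
    # With scope: unit there is one conversation per remote unit,
    # so the data has to be set on each conversation in turn.
    for conv in conversations:
        conv.set_remote(data=data)


convs = [Conversation('db2/0'), Conversation('db2/1')]
publish_to_all(convs, {'foo': 'bar'})
print([c.remote_data for c in convs])  # [{'foo': 'bar'}, {'foo': 'bar'}]
```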
<lazyPower> magicaltrout - today kicks off with a phil collins diddy - https://www.youtube.com/watch?v=sRY1NG1P_kw
<magicaltrout> hehe
<magicaltrout> actually I was listening to a bit of phil collins today
<magicaltrout> but with a slight twist
<magicaltrout> https://www.youtube.com/watch?v=YIVUCwRVUBY
<magicaltrout> the guy playing the final solo at 17mins is pretty epic
<sharan> lazyPower - in the DB2 charm i have to use self.conversations() to iterate over each unit, right? I have never used this in any of my charms but i saw this code in the DB2 interface "provides.py" and "requires.py" files
<lazyPower> sharan - link to your interface code?
<lazyPower> oooo magicaltrout :fire:
<sharan> lazyPower - http://bazaar.launchpad.net/~ibmcharmers/interface-ibm-db2/trunk/files
 * lazyPower looks
<lazyPower> http://bazaar.launchpad.net/~ibmcharmers/interface-ibm-db2/trunk/view/head:/provides.py#L73  -- yep you're already working with the same primitives i was speaking to. this line here http://bazaar.launchpad.net/~ibmcharmers/interface-ibm-db2/trunk/view/head:/provides.py#L73
<lazyPower> you're using that same iterator and pulling the charm unit-name from the conversation.
<lazyPower> and sorry about the double link paste.... hurr durr i need coffee
 * lazyPower goes to put the kettle on
<lazyPower> sharan - make sure you're familiar with https://jujucharms.com/docs/devel/developer-layers-interfaces#communication-scopes
<lazyPower> pay special attention to the example that is scope unit
<lazyPower> https://jujucharms.com/docs/devel/developer-layers-interfaces#writing-the-requires-side
<marcoceppi> cory_fu kwmonroe it's demo time, what's the best bundle to deploy for big software
<marcoceppi> lots of bigdata people here
<magicaltrout> where you at marcoceppi ?
<lazyPower> DOD Minnesota
<lazyPower> er Minneapolis
<magicaltrout> close enough ;)
<marcoceppi> magicaltrout: if you've got something too ;)
 * magicaltrout adds "finishing off apachedrill" to his todo list
<sharan> lazyPower -  thanks, i will look into the link shared by you
<lazyPower> sharan np. let me know how you get along after reviewing the source material. I think it's pretty straight forward but if there's anything we can improve there for clarity i'd love to get your feedback
<cory_fu> marcoceppi: Sorry, was afk.  https://jujucharms.com/hadoop-processing/ is good.
<cory_fu> It's the Bigtop Hadoop bundle
<cory_fu> https://jujucharms.com/apache-processing-mapreduce/ if you want more units
<lazyPower> magicaltrout - ok my response to your big band https://www.youtube.com/watch?v=KKXBAw5K6Fg
 * lazyPower wishes we still had turntable.fm now
 * magicaltrout wonders how far he can push the lighthearted trolling without getting kicked out of the sprint
<acovrig> I have 3 machines running 16.04 from MAAS; I would like to use them as a VM failover cluster; is juju->openstack-base/43 the right way to go?
<magicaltrout> woop
<magicaltrout> jcastro: i got turned down for linuxcon.... sad times
<magicaltrout> jcastro: i got accepted for mesoscon..... good times
<magicaltrout> so you're dispatching me to amsterdam
<magicaltrout> lazyPower: --^
<lazyPower> Nice!
<lazyPower> I wish i could come with
<lazyPower> It would be great to tag-team the casuals
<magicaltrout> hehe
<magicaltrout> does mean I have some work to do... only a month away :P
<lazyPower> wooo thats true
<lazyPower> again, if there's anything I can do to lend a hand and make your presentation a success lmk
<lazyPower> cory_fu https://github.com/juju-solutions/layer-tls/pull/42/files
<lazyPower> mind taking a look there? I think this was along the lines of what we had discussed
<cory_fu> lazyPower: +1
<lazyPower> ta
<cory_fu> lazyPower: Oops.  https://github.com/juju/plugins/pull/68
<lazyPower> +1'd
<cory_fu> Anybody seen this error before?  http://pastebin.ubuntu.com/20203784/
<cory_fu> (This is on AWS)
<lazyPower> i have not, and thats an odd duck cory_fu
<lazyPower> that looks like the image is missing core system components
<cory_fu> It was working on this instance a few minutes ago
<lazyPower> was this a single hiccup or is this consistently happening?
<cory_fu> It worked, and now it's failing consistently
<cory_fu> Ah!
<cory_fu> http://www.sillycodes.com/2015/06/quick-tip-couldnt-create-temporary-file.html saved my bacon
<lazyPower> ah so the rest of that was a red herring
<lazyPower> cleaning up the apt bits worked, good to know
<magicaltrout> here's a very weird observation..... its funny how different the juju dev team is to the snaps dev team
<mgz> in what way?
<magicaltrout> the type of person
<magicaltrout> thats not a bad thing in either case, but the juju team is much more US based I guess than the snaps team and of course that makes a difference in interaction
<lazyPower> you mean we're more optimistic? :D
<magicaltrout> lol
<magicaltrout> if i'm  honest i'd probably call you more "ballttle hardened"
<magicaltrout> (except for marcoceppi )
<magicaltrout> stupid wifi lag: battle hardened
<lazyPower> haha
<lazyPower> we're battle hardened
<lazyPower> hear that jrwren? He called us a noble steed
<marcoceppi> magicaltrout: great observation
<jrwren> well, I know /I/ am.
<jrwren> but I'd have called marcoceppi the most battle hardened.
<jrwren> not the exception.
<lazyPower> you're not wrong :)
<petevg> cory_fu, kjackal, kwmonroe: I think that we may need to come up with a better way of ensuring that the patches in the bigtop base layer actually match up with the code that we merge upstream.
<petevg> I'm kind of in tool hell with the kafka patches right now -- in order to get the charm to deploy, I need to patch on top of what's in the patches directory, but that's different than the code that I actually want to submit upstream.
<petevg> (I think that the solution is probably to sync up the BIGTOP-1624.patch file first; I think that I can do some git magic to make that process a little less painful.)
<petevg> Actually, github does the magic for me -- just needed to add .patch to the url of the old PR. Awesome :-)
<arosales> any ~charmers around?
<arosales> based off recent reviews in
<arosales> https://code.launchpad.net/~stub/charms/trusty/nrpe/py3/+merge/300153
<arosales> and https://code.launchpad.net/~jamesj/charms/trusty/haproxy/xenial-support/+merge/299196
<arosales> it looks like we can land these
<arosales> just needs a charmer, and I think we need to update the charms to get them in a non-lp ingestion world
<arosales> lazyPower: ? ^
 * lazyPower looks
<lazyPower> arosales - have we found resolution on https://code.launchpad.net/~jamesj/charms/trusty/haproxy/xenial-support/+merge/299196/comments/772700?
<arosales> ah multi-series and charm push
<lazyPower> who needs access to it?
<lazyPower> i can go ahead and start the process but i think there's more to be done here,
<arosales> lazyPower: so we can't just bzr branch the latest and then push because it would be under your name space correct?
<lazyPower> well, thats what i'm saying. we can create the new lp group and push it, and i am happy to add any maintainers.   This will wind up unpromulgating the precise charm however
<arosales> for haproxy, James Jesudason (jamesj) needs it according to the MP, and I think the landscape team also needs it according to dpb1_
<lazyPower> is that something we're super concerned about?
<arosales> does this one support precise?
<dpb1_> I don't care about precise, no
<lazyPower> ok, give me a bit to take a look and i'll work those tickets
<arosales> lazyPower: it doesn't look like it does
<arosales> dpb1_: well I wonder if others do though
<arosales> hmm, 36 deploys with https://jujucharms.com/haproxy/precise/36
<arosales> why can't precise be eol already
<dpb1_> arosales: they can use old versions just fine
<arosales> dpb1_: old version of the charm?
<dpb1_> yes
<dpb1_> the readme could even be updated if that is really something we are concerned about...
<arosales> lazyPower: what name space would https://jujucharms.com/haproxy/precise/36 go to if you were to charm push the trusty/xenial charm?
<arosales> dpb1_: I am guessing the precise one isn't in use, but trying to have some due diligence
<dpb1_> sure, I respect that
<dpb1_> still
<dpb1_> precise.
<dpb1_> :)
<arosales> I hear you :-)
<dpb1_> but, yes, thanks for giving it a good think
<arosales> precise is killing us in reactive too
<arosales> perhaps we email the juju list and just ask if we can deprecate precise charms
<arosales>  lazyPower same issues with https://code.launchpad.net/~stub/charms/trusty/nrpe/py3/+merge/300153 ?
<arosales> nrpe doesn't look to touch the series
<lazyPower> no i think nrpe is just a push
 * lazyPower double checks
<lazyPower> https://jujucharms.com/nrpe/ - yeah, there's already a push target for xenial
<lazyPower> oh wait, no, this is the older lp ingested target, so that will require another owner
<lazyPower> which in turn creates the same problem if nrpe is not multi-series, iirc, promulgation works across series, and this would cause the trusty nrpe to drop unless we copy/push to the same owner org
<lazyPower> jrwren ^ check my math?
<arosales> ya https://jujucharms.com/q/nrpe looks to support all series
<arosales> so we would need to add a series to the meta-data as well
<arosales> lazyPower: can you school me on why we  can't just push then promulgate?
<arosales> why we need to move the charm under a new name space
<arosales> I guess cause the owner of the charm would show as the person who pushed the charm, correct?
<lazyPower> the short answer - is ~charmers shouldn't own anything. we're a fixture of the store, not the owners of everything good :)
<arosales> lazyPower: so the process should look like
<lazyPower> but there's another answer thats longer winded that i'm not fully certain i remember. it has to do with push targets though, and how ownership works in the newer store model
<arosales> 1. make a new lp group with charm-<service-name>
<arosales> 2. add current maintainer(s) of the charm as admins including ~charmers
<arosales> 3. push the charm under the new group name
<arosales> 0. See if the new push unpromulgates any other charms
<marcoceppi> arosales: you don't need 2 anymore, for ~charmers
<marcoceppi> since we don't ingest anymore
<marcoceppi> arosales: also, <service-name>-team is a better name for lp
<lazyPower> i did <app>-charmers
<marcoceppi> lazyPower: don't
<lazyPower> but i'm not married to any of that
<marcoceppi> charmers is too confusing for people not familiar with "charmers" as a concept
<arosales> I have seen <app>-charmers more
<marcoceppi> it's tested poorly with user testing
<marcoceppi> <app>-team or something similar is better
<arosales> ok, we should document this somewhere
<arosales> marcoceppi: doesn't ~charmers need to be a part of the new team so folks like lazyPower can do the initial push?
<marcoceppi> https://irclogs.ubuntu.com/2016/07/20/#juju.txt
<marcoceppi> arosales: no
<arosales> so what team would lazyPower  push nrpe under then?
<marcoceppi> <app>-team
<lazyPower> nrpe-team i assume
<arosales> marcoceppi: sorry I meant docs some where other than obscure irc logs
<lazyPower> or nagios-team
<marcoceppi> arosales: file a bug on docs, this just shook out last week from the design folks
<arosales> will do
<arosales> for docs
<arosales> but on the team name
<marcoceppi> tbh, the <blah>-charmers isn't documented anywhere either
<lazyPower> i'm less concerned with the team name
<lazyPower> thats painting the bikeshed
<arosales> lets say we create <nrpe>-team
<lazyPower> more concerned with validating my statements above that i remember this correctly
<arosales> team there just seems redundant
<arosales> how does lazyPower push a new charm under nrpe-team if he or charmers isn't included as a member of the new group?
<arosales> sorry I am being dense here
<marcoceppi> arosales: well, he needs to be a member.
<arosales> ah ok
<marcoceppi> charmers doesn't push for people, we simply promulgate
<arosales> I was just saying add ~charmers in general
<marcoceppi> arosales: why though?
<marcoceppi> we have no dog in that teams charm
<arosales> for initial promulgation
<marcoceppi> don't need it
<marcoceppi> we can promulgate even without being in the team
<marcoceppi> promulgation is an implicit action, allowed to anyone in ~charmers, against an entity
<arosales> I guess we could just add the person doing the initial push if that person isn't an active maintainer of said charm
<arosales> ack on promulgation
<arosales> I am just hung up on the initial push
<arosales> it sounds like in that case we just make lazyPower a member of nagios-team
<arosales> lazyPower: does the initial push under nagios-team
<lazyPower> arosales - so your steps were fairly spot on
<marcoceppi> arosales: let the team do the push
<lazyPower> but yeah, ^
<marcoceppi> why are we doing things for people, if they're going to own, let them own it
<arosales> cause its an infrastructure change
<marcoceppi> it's not
<arosales> it is
<marcoceppi> anyone in ~nagios-team can run `charm push . ~nagios-team/nagios`
<marcoceppi> you don't need a charmer for that
<lazyPower> in an ideal world, jamesj will have pushed this to nagios-team/nrpe
<arosales> but sure, we can also ask the maintainers to do it
<lazyPower> and i just come along and issue charm promulgate on its published revision
<marcoceppi> exactly
<arosales> marcoceppi: agreed, any person in nagios-team can push; that is my point in adding lazyPower
<lazyPower> arosales - this is in the copy of the new revq
<arosales> but it is a infrastructure change
<arosales> from LP ingestion to charm push
<marcoceppi> the teams going to have to do the next push
<marcoceppi> they need to learn how to do it
<arosales> just looking for ways to ease that on charm authors
<marcoceppi> us doing it affords us nothing in the long term
<arosales> we can certainly ask the maintainer to do the push and even set up the team
<arosales> but we should note that it is due to the move to charm push, which is infrastructure change
<arosales> fair point on maintainers knowing where their code lives
<lazyPower> https://www.evernote.com/l/AX4egQ_nhRRCSaoWHZgwEnxC-k7lQc_EBI8B/image.png  -- as we're working towards this end goal, here's a clip of reference material that will help soften the distribution of knowledge here
<arosales>  I think we just need to refer folks to some sort of page that tells them what to do
<marcoceppi> arosales: the documentations.
<arosales> 1. make an lp team with <app-name>-team
<marcoceppi> https://jujucharms.com/docs/stable/authors-charm-store
<arosales> 2. add maintainers to the team
<arosales> 3. charm push
<arosales> 4. ping in here for promulgation until new review queue is running
<arosales> correct marcoceppi ^
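Spelled out as commands, the steps above might look like the following. This is only a sketch: the team and charm names mirror the nrpe example, the revision number is illustrative, and the exact `charm` subcommand flags are assumptions based on the charm-tools CLI of the era.

```shell
# 1. Create the Launchpad team (in the LP web UI), e.g. ~nrpe-team,
#    and add the charm's maintainers to it.

# 2. As a team member, push from the charm's root directory into the
#    team namespace:
charm push . cs:~nrpe-team/nrpe

# 3. Publish the uploaded revision and make it world-readable so it is
#    eligible for promulgation (revision number is illustrative):
charm publish cs:~nrpe-team/nrpe-0
charm grant cs:~nrpe-team/nrpe --acl read everyone

# 4. Ask a ~charmers member in #juju to promulgate it:
#    charm promulgate cs:~nrpe-team/nrpe
```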
 * marcoceppi eod
<arosales> marcoceppi: I don't think https://jujucharms.com/docs/stable/authors-charm-store explains the steps we specifically need here to move a charm from LP auto-ingestion to charm push
<arosales> but I'll file a bug and take it from there have a good evening, thanks for the info
<arosales> lazyPower: I'll ping maintainers of nrpe and haproxy and give them the instructions to move to non-lp charm push
<arosales> until the review queue is ready I'll ask folks to ping in here for promulgation
<blahdeblah> While we have a few ecosystem folks around, I have a question: do you have any guidelines for writing reactive charms in a "defensive" manner that guards against race conditions?  Yesterday I spent most of my day fixing those in a couple of our charms.
<lazyPower> arosales - ok so, pause working those tickets until tomorrow?
<lazyPower> its nearly 6, so i'm good to hold until tomorrow ;)  but if they are emergent i can triage and deal with whatever mess i create :)
<arosales> lazyPower: ya per marcoceppi suggestion we need to get the maintainers to push
<blahdeblah> Also, any best practices on how to automate branch management (in both bzr & git) of layer source vs. built layers?
<lazyPower> ok, i'll follow up in the morning then.
<arosales> lazyPower: no many thanks for the help and sticking around to make sure its all wrapped up well
<lazyPower> blahdeblah - we haven't been versioning the assembled charms outside of what is pushed to the store. we only concern ourselves with keeping the layer in DVCS
<lazyPower> s/dvcs/vcs/
<blahdeblah> lazyPower: So that works for public charms; what about private ones?
<lazyPower> blahdeblah - same :)
<lazyPower> you don't have to add --acl=READ everyone
<lazyPower> thats only if you *want* the charm to be public. You can give a select few people read access to your charm, which will prevent it from showing up in search/etc.
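A sketch of the private-charm flow lazyPower describes; the team, charm, and user names are placeholders, and the `charm grant` syntax is an assumption about the charm-tools CLI:

```shell
# Push to your own (or your team's) namespace; without a public grant
# the charm stays out of search and visible only to those you allow:
charm push . cs:~my-team/internal-app
charm publish cs:~my-team/internal-app-0

# Grant read access to specific users instead of "everyone":
charm grant cs:~my-team/internal-app --acl read alice bob
```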
<blahdeblah> Hmmm.  Doesn't it require charmers permission to push charms to the store?
<lazyPower> naw
<lazyPower> you can push charms to your namespace
<blahdeblah> Including team namespaces?
<lazyPower> when its g2g for promulgation, we will ask you add the everyone read, and we then promulgate from your namespace
<arosales> as long as you are a member
<lazyPower> ^
<blahdeblah> OK - might have to look into that.  stub, mthaddon ^ FYI
<arosales> blahdeblah: https://jujucharms.com/docs/stable/authors-charm-store should explain it
<blahdeblah> thanks
<arosales> but if it doesn't file a bug at https://github.com/juju/docs/issues/new
<blahdeblah> arosales, lazyPower: What about the issue of writing reactive charms "defensively"?  Any good resources?
<arosales> we know we can make it better, just need feedback
<arosales> blahdeblah: that is a tougher one
<blahdeblah> yer not wrong :-)
<lazyPower> blahdeblah - when you say defensively, what d you mean?
<arosales> I don't think we have any docs on it, but seems like a good candidate for best practices
<blahdeblah> lazyPower: I mean at the moment it's really easy to shoot yourself in the foot
<lazyPower> i'm still not sure i follow
<arosales> in creating states what are the best practices so one doesn't deadlock themselves
<blahdeblah> ^ that right there
<blahdeblah> deadlock is bad; race conditions that mean that charms sometimes work and sometimes randomly fail in CI is even worse, IMO
<blahdeblah> lazyPower: example: https://code.launchpad.net/~paulgear/charms/trusty/grafana/layer-grafana-reactive-states-fix/+merge/300547
<arosales> ya, deadlock we can usually reproduce and debug; a race, that is just hard to debug
<blahdeblah> lazyPower: if you read through the target branch of that merge, there are some conditions that simply weren't guaranteed to work in the right order
<lazyPower> races can be guarded against, but your reactive modules become a huge stack of states
<lazyPower> i encountered this very problem with ETCD earlier today and had mattrae help debug it
<blahdeblah> *huge stack of necessary states <-- FTFY :-)
<blahdeblah> So in that particular example we had the admin password of grafana getting reset on every run of the update-status hook, and the db initialisation getting tried before the db existed
<lazyPower> so the admin pasword being set - why isn't that a state?
<blahdeblah> It is now :-)
<lazyPower> \o/
<lazyPower> thats basically my answer though, is to split apart your states to be more granular and even adopt some singleton states
<lazyPower> meaning they only have a single context and are quite silly, but keep you from running the same method block 10000000x
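lazyPower's "singleton state" guard can be sketched in plain Python. Note this is a self-contained stand-in: a real charm would use `charms.reactive`'s `set_state`/`remove_state` and the `@when`/`@when_not` decorators instead of the inline checks, and the grafana state names are hypothetical.

```python
# Minimal sketch of the "singleton state" guard pattern. A plain set
# stands in for charms.reactive's state tracking so the idea is runnable
# on its own.
_states = set()

def set_state(name):
    _states.add(name)

def is_state(name):
    return name in _states

calls = []  # records how many times the expensive step actually runs

def set_admin_password():
    """Idempotent handler: in decorator form this would be
    @when('grafana.installed') @when_not('grafana.admin-password-set')."""
    if not is_state('grafana.installed'):
        return  # precondition not met yet; wait for a later dispatch
    if is_state('grafana.admin-password-set'):
        return  # singleton state: never reset the password again
    calls.append('reset-admin-password')
    set_state('grafana.admin-password-set')

# Hook dispatches happen in no guaranteed order and repeat on every
# update-status; the guards make that safe:
set_admin_password()             # too early: install state not set yet
set_state('grafana.installed')
for _ in range(3):               # e.g. repeated update-status hooks
    set_admin_password()

print(calls)                     # the password step ran exactly once
```

The same guard fixes both of blahdeblah's symptoms: the password reset on every update-status (the singleton state) and the db init racing ahead of db creation (the precondition check).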
<blahdeblah> My point is that going into writing reactive charms without a really clear plan and knowing the pitfalls can result in this.
<lazyPower> yes
<lazyPower> yes it can
<lazyPower> you are not wrong at all
<blahdeblah> And it would be good to have some patterns & anti-patterns documented somewhere
<lazyPower> we need to just put cory_fu on paper. he's got a ton of patterns and anti-patterns
<arosales> but at least collecting a few them in best practices would help folks
<arosales> as a starting point
<blahdeblah> Yep
<lazyPower> arosales - you're really pounding the drum of docs today...
<arosales> blahdeblah: I'll also file a docs bug on that
<lazyPower> does this mean i'm going to see non-bruzer and non-lazy doc commits soon?
<blahdeblah> If I get a chance to start on this, where and in what form would you like my thoughts?
<arosales> lazyPower: if users don't know about it then its not a feature
<lazyPower> (dont mind me nic and team, you guys do it all <3)
<arosales> blahdeblah: https://github.com/juju/docs/issues/new
<blahdeblah> arosales: So just start an issue for best practices docs and throw things in there as they come up?
<arosales> blahdeblah: I'll start the issue, and you can comment. We'll probably start with an initial page then take pulls and issues to continually update as we learn more.
<blahdeblah> Is there any ability to add graphviz (or graphviz-like) diagrams to the docs that way?
<arosales> lazyPower: we all need to do more doc commits :-)
<lazyPower> <3
<arosales> juju 2.0 is the default now in the store and in docs
<lazyPower> blahdeblah - so, there's some interesting conversation around that
<lazyPower> blahdeblah - i was using omnigraffle, but i did stumble across a markdown javascript lib that gives you the ability to make inline flow/uml charts
<lazyPower> i know, markdown javascript lib?  give me a sec to find the link, it'll blow your mind
<blahdeblah> lazyPower: omnigraffle - that's like a proprietary package that's only available on proprietary OS platforms, right? :-P
<lazyPower> yep
<lazyPower> http://markdown.dasapp.co/editor
<lazyPower> its embedded in this markdown editor -- i think its the mathjax lib
<blahdeblah> I'll be giving omnigraffle a flying miss... :-)
 * lazyPower redirects the obvious trolling about my choice in diagramming software to the real conversation about the js lib embedded in this markdown editor
<arosales> blahdeblah: https://github.com/juju/docs/issues/1263
<blahdeblah> lazyPower: Sorry dude; for me, it's still all about the licensing. :-)
<blahdeblah> I just want to be able to put stuff like http://graphviz.org/content/fsm or http://graphviz.org/content/traffic_lights in docs
<lazyPower> there's nothing stopping you from comitting an svg and embedding it as an image
<blahdeblah> Looks like that's what existing docs use, e.g. https://jujucharms.com/docs/stable/developer-getting-started
<blahdeblah> thanks arosales - will put some thoughts there
<lazyPower> oh you like my graphs? :)
<lazyPower> i did those
<blahdeblah> With some proprietary app, no doubt. :-P
<arosales> blahdeblah: thanks for adding your thoughts to it
<lazyPower> !!!
<lazyPower> Why you gotta hate on my non-proprietary pngs
 * lazyPower sulks in the corner
<blahdeblah> lazyPower: The results are just fine. :-)
<lazyPower> <3 ;)
<lazyPower> arosales - we good then on all of the above? i need to run to catch a dinner date
<arosales> lazyPower: I think we are good for today
<arosales> have a good evening
<arosales> thanks for the help
<lazyPower> aight, same bat channel /time tomorrow then
 * lazyPower doffs hat
<arosales> you know it
<blahdeblah> thanks guys
<jrwren> lazyPower: i'm not sure, but I think so... 1hr ago :]
<magicaltrout> look what happens when my battery goes flat and canonical employees buy me beer
<magicaltrout> you lot have a big fat irc chat
<lazyPower> :)
<lazyPower> its like we knew
<lazyPower> jrwren - gracias
<magicaltrout> sickening
<magicaltrout> instead someone from opensuse told me brexit was cool! ;)
<lazyPower> what conference are you at?
<magicaltrout> lol
<magicaltrout> Snappy Sprint
<lazyPower> oh right, i knew this
<lazyPower> s/this/that/
<magicaltrout> hehe
<magicaltrout> well got invited by dustin
<magicaltrout> showed up
<magicaltrout> flamed ya boss
<magicaltrout> have a hangout with Bill Bauman tomorrow
<magicaltrout> sounds like a regular week
<magicaltrout> oooh lazyPower unlike kostas Mark knew who you were and in fact told me to speak to you re DC/OS.....
#juju 2016-07-21
<lazyPower> magicaltrout :)
<magicaltrout> lazyPower: as a  non employee showing those slides of mark makes my life so  much more entertaining ;)
<magicaltrout> ironically i was the last talk of the night
<jrwren> brexit was cool if all you care about is your currency purchasing power when you visit UK.
<magicaltrout> hey jrwren i get paid in USD... if i ignore the political side of it, i'm fine ;)
<magicaltrout> i dont' visit the uk either, I live there.. being english n'all ;)
<jrwren> so... if you live in UK and get paid in USD... you got a pretty hefty raise when the pound sunk?
<magicaltrout> yeah
<magicaltrout> i voted remain..
<magicaltrout> shows what i know
<blahdeblah> Anyone around to give me some initial feedback on https://code.launchpad.net/~paulgear/charm-helpers/nrpe-service-immediate-check/+merge/300682 ?
<blahdeblah> I'm not quite sure if I've updated the test suite correctly.
<stub> arosales: I have already set up a new nrpe upstream - https://bugs.launchpad.net/charms/+bug/1603891
<mup> Bug #1603891: Promulgate cs:~nrpe-charmers/nrpe <Juju Charms Collection:New> <https://launchpad.net/bugs/1603891>
<stub> gnuoy: There is nothing to review in the prereq branch - it is just pulling in updated charmhelpers
<stub> cory_fu, bdx : That charm is just cs:trusty/storage (the storage subordinate) with an updated series. I use it for testing PostgreSQL with Xenial.
<bdx> stub: does it currently support aws ?
<stub> bdx: I don't know. I just use it for testing locally. The storage subordinate (cs:storage) is dumb. The actual backend integration is done by https://jujucharms.com/block-storage-broker/. So whatever it supports.
<stub> It claims to support EC2
<bdx> totally
<bdx> stub: I can't seem to get it to work .. no one else can verify that it works ... would you mind giving it a whirl using aws if you get a chince?
<bdx> chance*
<stub> I'm not setup for aws.
<gnuoy> stub, merged. Shall I create a mp for updating maintainer in metadata.yaml ?
<stub> gnuoy: That is https://bugs.launchpad.net/charms/+bug/1603891 really.
<mup> Bug #1603891: Promulgate cs:~nrpe-charmers/nrpe <Juju Charms Collection:New> <https://launchpad.net/bugs/1603891>
<gnuoy> stub, ok. Are you going to sync the lp branch over to the github one since I've just merged your mp into the lp branch?
<stub> gnuoy: There is a github branch? I created a git branch on lp.
<gnuoy> stub, oh, sorry, misread the bug
<stub> I can resync
<stub> (or can I?)
<gnuoy> stub, Are you going to sync lp-bzr branch to lp-git branch ?
<gnuoy> sorry, out-of-sync
<gnuoy> ack
<stub> I was hoping to have this promulgated last week
<stub> gnuoy: resynced
<parad0xz> hi!!
<parad0xz> is there any tutorial how to install juju on maas on ubuntu 16 xenial?
<parad0xz> because as I understand things have changed
<parad0xz> how to setup juju on maas 2??!!
<Shruthima> Hello Team,
<Shruthima> can anyone suggest on these issue https://github.com/juju/amulet/issues/141?
<Shruthima> Hello Team, Can anyone suggest how to resolve these issue  https://github.com/juju/amulet/issues/141 ?
<Shruthima> tvansteenburgh: Hi, Can anyone suggest how to resolve these issue  https://github.com/juju/amulet/issues/141 ?
<shruthima> Hello Team, Can anyone suggest how to resolve these issue  https://github.com/juju/amulet/issues/141 ?
<lazyPower> shruthima - The tool merges referenced in that bug are still in flight and have not landed as best i can tell. We will certainly update bug 141 on amulet when everything has landed and is easy to install/upgrade. right now its a manual process and unless you know what you're doing can leave you in an inconsistent state.
<mup> Bug #141: Revisions created with baz 1.1 are corrupt after archive-mirror <Baz (deprecated):Won't Fix> <https://launchpad.net/bugs/141>
<lazyPower> thanks mup
<shruthima> oh k Thankyou lazypower :)
<godleon> hi all, is juju 2.0 + OpenStack with HA ok now ?
<lazyPower> godleon - juju 2.0 is still in beta
<godleon> I just tried to deploy mitaka openstack with HA (using HAProxy) via juju 2.0 and maas 2.0, but it seems it didn't work properly.
<godleon> lazyPower: so......not suggest use juju 2.0 to deploy openstack ?
<lazyPower> godleon - we dont currently recommend production deployments for 2.0 until its released as GA
<lazyPower> otherwise, go for it :)
<godleon> lazyPower - ok, but...... can juju 1.25 deploy OpenStack mitaka with HA ?
<lazyPower> I do believe we have several mitaka HA deployments in the wild deployed with 1.25 yes
<godleon> lazyPower - oh......do you have a bundle configuration that could be a reference.....?
<lazyPower> godleon - there's a couple options here. you can use autopilot (this ships with landscape) to build and deploy a custom openstack configuration. Or you can use the openstack base bundle  - https://jujucharms.com/openstack
<godleon> lazyPower - yap, I have tried base bundle, it can work. But I can not find OpenStack with HA example on charm store. So I tried to add haproxy charm for every openstack service, but the result did not work properly.
<godleon> lazyPower - So that's the reason I ask if there is Openstack with HA bundle configuration for reference.
<lazyPower> Well that's the rub godleon, some of that is highly dependent on your infra and how that is setup. A bundle for ha I don't think is one size fits all. We could use more examples sure but I know a lot of effort has been taken with autopilot to make modeling that easy.
<lazyPower> If that's not an option, I would recommend mailing the juju list to see if someone can share their bundle or lend a hand. I'm pretty sure we have some write ups on this subject but my Google fu is failing
<godleon> lazyPower - hmm... got it. Thanks for your suggestion. I will talk about this with my coworkers. :)
<lazyPower> Godleon: np. Highly recommended you ping the list though. A lot of our open stackers are still busy with the rollover from becoming an official project upstream. So chances are your question will be answered there once they see the post
<lazyPower> And as always were here to help and guide you in the mean time :)
<godleon> lazyPower - Good point! I'll take your advice, thank you very much! :)
<godleon> Can juju 1.25 be combined with MAAS 2.0 rc2? I can not even bootstrap after setting the environment.
<lazyPower> godleon - juju 1.25 does not support maas 2.0
<lazyPower> i missed that portion of your question, i apologize. When you asked if 1.25 would work with mitaka i didn't connect you were using a maas 2.0
<godleon> lazyPower - oh......my....god... :(
<magicaltrout> now you've gone and done it lazyPower .....
<godleon> lazyPower - that's ok. I can reinstall MAAS 1.9.3
<lazyPower> i know, this is all clearly my fault magicaltrout
<lazyPower> i should be flogged
<magicaltrout> hehe
<magicaltrout> in the stocks for you!
<lazyPower> http://www.reactiongifs.com/wp-content/uploads/2012/11/chicken_regret_nothing.gif
<magicaltrout> thats making me dizzy
 * lazyPower underlines the text 
<magicaltrout> woop after 3 weeks of messing around with stupid sysadmin rules we finally have our docker based CI/CD environment setup on our Genomics project
<magicaltrout> now I just need to convince them to use Juju ;)
<lazyPower> not bad magicaltrout
<lazyPower> what did you backend it with? swarm, k8s, or dc/os?
<magicaltrout> swarm for now
<magicaltrout> i have a DC/OS  plan in the works but I need to get onsite to chat with the guys so i'll probably try and figure that lot out when Im out in pasadena
<magicaltrout> i had a chat with Bill today, this type of stuff is exactly what projects like this one I'm on need. I need to be able to deploy the same stuff on my laptop, on another developers laptop, in the public dev system or in production without changing loads of stuff
<magicaltrout> whether thats done with Juju, LXD, Docker etc, doesn't really matter, but until we did this there was just the worlds largest puppet script and a 30 minute build to dev only system
<magicaltrout> you couldn't stand this stuff up locally
<magicaltrout> and there are 100's of projects like this that would benefit wildly from CI/CD container development
<magicaltrout> as  a slight aside lazyPower you know you can run k8s on DC/OS?
<lazyPower> in a meeting, bbiaf
<magicaltrout> :'(
<magicaltrout> no one to talk rubbish to
<magicaltrout> http://kubernetes.io/docs/getting-started-guides/dcos/
<jrwren> magicaltrout: i'll listen. so... why would I run k8s on dc/os? does it use dcos features? what features?
<arosales> stub: gnuoy for nrpe we still need to add a series to the metadata yaml, then ping in here to have a ~charmer promulgate once you have pushed you latest version
<magicaltrout> jrwren: i don't have a real answer as for the "why", choice I guess :)
<magicaltrout> they did write a pretty good blog post about it
<magicaltrout> https://mesosphere.com/blog/2015/09/25/kubernetes-and-the-dcos/
<jrwren> whoa, 10 months ago!
<jrwren> i'm so behind the times.
<magicaltrout> lol
<magicaltrout> don't worry i have this presentation at mesoscon in a month with charms that don't fully work yet
<magicaltrout> i'm the one behind ;)
<lazyPower> magicaltrout - its typically for marathon support
<lazyPower> people are in love with that scheduler
<lazyPower> sorry jrwren ^
<lazyPower> magicaltrout - there's an interesting crossover opportunity here, we could take the current k8s bundle and gank the scheduler in lieu of marathon and release another bundle with both our work, which might be cool
<magicaltrout> yeah i was wondering that the other day lazyPower
<magicaltrout> see how much of a crossover we can get
<lazyPower> we can do it all G-funky, it just takes time
<lazyPower> :)
<jrwren> we could also make juju the best way to deploy dcos
<magicaltrout> jrwren: we already have one of their guys interested for DC/OS MAAS deployments
<magicaltrout> so there is certainly an opportunity there
<bladernr> Hey, what's the trick for making juju 2.0 talk to MAAS 2.0?
<bladernr> ISTR there was something specific you had to do to enable it...
<bladernr> but can't remember what that was.
<magicaltrout> jrwren: apart from some random config stuff i need to put into actions, the only real big issue with my charms currently is the unscalability of master, other than that they work. But i need to get mesosphere on the hook to help with the packaging because I had to munch up their really weird installation procedure to build a deployable package
<lazyPower> bladernr - unless i'm mistaken, beta-12 once you've added the cloud it just works with maas 2.0-rc2
<bladernr> lazyPower, I followed this https://jujucharms.com/docs/2.0/clouds-maas#defining-maas-clouds
<bladernr> but when I do an add-credentials, it complains that oauth is not supported
<bladernr> ERROR auth type "oauth" for cloud "maas" not supported
<bladernr> haha, nevermind... sheesh.  oauth != oauth1.  head->desk
<magicaltrout> picnic!
<lazyPower> bladernr  :) glad we got it sorted
<bladernr> Yeah, I should probably put my glasses back on :/
<lazyPower> also, cool nick :) how do we know you're not a replicant?
<bladernr> That IS the question... I'm pretty sure the sheep I dream of are not electronic.
<bladernr> because I like puppies?
<lazyPower> I'll reserve judgement until you submit a charm/bundle for review
<lazyPower> that'll tell me if you originate from the Tyrell corp or not
<bladernr> Heh... my charm is weak.  All it does right now is install one package.  It needs some enhancement.
<lazyPower> bladernr - if you're interested, we do office hours and it might be fun to have an interactive session during the office hours to peek at your charm and make suggestions. get some real time engineering support/face-time with charmers
<bladernr> perhaps, we'll see what happens (I'm about to go on vacation, so focusing on this is becoming difficult).
<magicaltrout> come to pasadena \o/
<lazyPower> i completely understand that too
<lazyPower> and as magicaltrout suggested, there's also a charmer summit coming up - http://summit.juju.solutions
<bladernr> Yeah, I actually would LOVE to attend a charmer summit (I missed the one in Feb)
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Charmer Summit Sept 12-14th  http://summit.juju.solutions || Juju 2.0 beta-12 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<bladernr> but that one happens to end the day before I fly to .EU for a family trip
<magicaltrout> sod the family... ! ;)
<lazyPower> bummerrrr, scheduling conflicts :(
<bladernr> magicaltrout, I could say that, but my wife would be ... displeased... besides, charmer summit in Pasadena vs 5 days in London and 6 days in Malta?  really no choice there.
<magicaltrout> hmm malta seems acceptable
<magicaltrout> I can vouch for london being nothing special ;)
<bladernr> Well, she's never been to London.  I can agree with you there.  But I LOVE Malta, I'd live there if I could.  heh
<magicaltrout> hmm
<magicaltrout> might have to pay it a visit then
<magicaltrout> London's okay, Im just messing, I live about an hour outside. Certainly there is plenty to do for families
<magicaltrout> all the free museums, old stuff etc
<lazyPower> Malta is tons of fun
<lazyPower> i really enjoyed our sprint there
<bladernr> Yeah, I actually like London, it's fun, but ultimately, it's just another large city.  They're all basically the same, with different wrappers and accoutrement
<bladernr> lazyPower, yeah, me too, I spent an extra week after just to explore it.  Hence the trip back now with the spousal unit.
<lazyPower> All the Tolkien bars ^_^  but i think thats around oxford
<lazyPower> magicaltrout - i'd like to be pro-active about talking to you re dc/os efforts. when your schedule has settled to where we can find a half hour to kick off, we should do that. I'd like to bring along mbruzek too as he's my co-pilot with containers
<magicaltrout> indeed lazyPower sounds good
<magicaltrout> i'll get this master charm squared away so its usable, and then we can figure out where to go next
<lazyPower> well i have ulterior motives to try and fold you into the /u/containers namespace :)
<lazyPower> because we <3 community contributions
<mattrae> hi, with juju 2.0 is it possible to deploy a service to a controller? previously with 1.x i could deploy --to 0 which would put services on the controller
<bladernr> mattrae, I was about to ask that exact question.
<bladernr> I tried with --to=<controller name> but that didn't seem to work
<bladernr> it accepted that, but failed to deploy...
<mattrae> bladernr: ahh cool, i haven't tried that yet
<mattrae> bladernr: ah ok i'm getting an error as well 'cannot run instance: No available machine matches constraints: name=jumpiest-yuko'. the node is already deployed and its not realizing its the controller
<lazyPower> mattrae  you can
<lazyPower> mattrae - juju switch admin && juju deploy foo --to 0
<lazyPower> mattrae - in juju 2.0, the admin model is a reserved case, we dont recommend colocating on the controller, but if you want to - thats how you can do it. Note: you cannot relate anything to this application unless it too is deployed in the admin model
<mattrae> lazyPower: thanks! we plan to have 3 controllers, but we want each of those controller nodes to have some services because they are big machines
<bladernr> lazyPower, i'd prefer to run instances on the bootstrap node as well. I only have two physical nodes, and need to run one servince on one, and one on the other, and relate them.
<bladernr> so juju switch admin && juju deploy foo --to 0 && juju deploy bar?
<bladernr> which I think means foo goes on bootstrap node and bar goes on machine 1?
<lazyPower> correct
<bladernr> lazyPower, thanks
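The sequence bladernr and lazyPower settle on can be sketched as follows. Since this is an illustration rather than a live session, `juju` is stubbed with a shell function so the commands are only echoed; model and application names are taken from the conversation above:

```shell
# Sketch of colocating an application on the Juju 2.0 controller.
# 'juju' is stubbed so this prints the commands instead of running them.
juju() { echo "would run: juju $*"; }

juju switch admin        # the controller model ('controller' after beta-10)
juju deploy foo --to 0   # machine 0 is the bootstrap/controller node
juju deploy bar          # bar is placed on a fresh machine (machine 1)
```

Remember the caveat above: foo can only be related to applications deployed in the same model.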
<petevg> cory_fu, kwmonroe: I made some PRs for binding kafka to a non default interface, and kicked the ticket into review. I still need to write the same thing, but for Zookeeper, but I'd appreciate some feedback on the Kafka side. Does my approach look correct?
<kwmonroe> petevg: got a pr link for me?  me too lazy to open the board
<petevg> kwmonroe: there are three PR links, which is why I pointed you at the board :-p  Hang on ...
<kwmonroe> no no petevg, i'm there now :)
<lazyPower> bladernr mattrae - ah, for clarity, post beta-10 (i think) the admin model was renamed to 'controller' to better clarify that it's intended for juju internals. i just ran list-model and realized i was stale :)
<magicaltrout> like an old biscuit
<bladernr> So, dumb question, but how do you debug a hook in 2.0?
<bladernr> AIUI, juju debug-hooks <unit> <hook> would open a tmux session in the unit in the hook context, yes?
<bladernr> all I get now is just a root session and no ability to run hook tools or hooks manually.
<magicaltrout> when the hook fires
<magicaltrout> you'll drop into it
<magicaltrout> i don't believe it's changed
<bladernr> Hrmmm, ok.  All I get is a root login, and not dropped into it.
<bladernr> hrmmm... ok, worked when "start" was called, but took several minutes before that happened.  How can you trigger a hook (outside of the debug console)?
<magicaltrout> yeah so stuff like that works on a 5 minute timer
<magicaltrout> ... and i'd tell you how to trigger it manually
<magicaltrout> but the dev server i have isn't responding and its not in my bash history :):
<magicaltrout> lazyPower: help me out here
<magicaltrout> ah
 * lazyPower reads backscroll
<lazyPower> 1 sec, in/out of a meeting thats almost done
<magicaltrout> juju run --unit ss/0 "hooks/install"
<magicaltrout> or whatever
<magicaltrout> is what you want bladernr
<bladernr> magicaltrout, aiii, thanks
<lazyPower> ok back
 * lazyPower reads backscroll for real this time
<lazyPower> ahhh no
<lazyPower> juju run is executed in an anonymous hook context
<lazyPower> that'll work for primary hooks like install/config-changed et-al
<lazyPower> but you wont want to try a relationship hook that way, it'll fail on you every time, as it doesn't have the env setup for the conversation
<lazyPower> bladernr - typically what we do, is either "break a hook" (not ideal) or attach and wait for the next hook execution. Its much easier to intercept the hook you're intending to debug, if you break that hook so you can attach and juju resolved --retry application/unit.  We've got some older bugs referencing requests to execute hooks like so: "juju debug-hooks mysql/0 upgrade-charm" and it drops you in the upgrade-charm hook context, but it
<lazyPower>  hasn't been addressed yet. Maybe some day when we're less focused on stabilizing 2.0 and putting the tint and chrome on the release.
<lazyPower> magicaltrout - but decent hack :) i like it
<magicaltrout> yeah i believe it came from cory_fu and kwmonroe
<lazyPower> big data holding out on the good tricks these days eh?
<lazyPower> typical cory_fu and  kwmonroe - silently genius
<magicaltrout> thats lies
<magicaltrout> i know that trick
<lazyPower> lies, damn lies, and statistics
<magicaltrout> because kwmonroe kept telling lies when writing the pdi charm
<magicaltrout> and breaking stuff
<lazyPower> well, i do a fair amount of breaking myself
<lazyPower> just to be fair
<kwmonroe> i didn't lie magicaltrout, i just made stuff up about how python worked.  totally different.
<magicaltrout> lol
<kwmonroe> hey petevg, the change to BIGTOP-1624 on bigtop base is just to bring it in-line with what we had upstream, right?
<petevg> kwmonroe: correct.
<petevg> ... and in line with the patch that is in master.
<petevg> ... though the patch in resources/master should go away, now that we have it merged.
<bdx> how does  __init__.py make it into lib/ on `charm build` ?
<bdx> it seems, if lib/__init__.py doesn't exist in any layers included by the charm being built, then it never makes it to build/<charmname>/lib
<bdx> this causes `charm build` to fail
<bdx> :-(
<kwmonroe> bdx: there's an __init__.py in layer-basic: https://github.com/juju-solutions/layer-basic/tree/master/lib/charms/layer
<bdx> shoot
<bdx> darn it
<bdx> ooho
<bdx> no
<kwmonroe> bdx: you're talking crazy.. what's wrong?
<bdx> kwmonroe: it needs to be in lib though right?
<kwmonroe> charm build will pull it in
<bdx> to lib/ ?
<bdx> from where?
<kwmonroe> assuming you include layer:basic in layer.yaml
<kwmonroe> wait, you mean lib/__init__.py or lib/charms/layer/__init__.py?
<bdx> lib/__init__.py
<kwmonroe> hm.. bdx where are you seeing __init__ in ./lib?  is it possible you have a local layer-basic that didn't stick it in ./lib/charms/layer?
<bdx> kwmonroe: omp, I'm grabbing your debug output
<bdx> kwmonroe: paste.ubuntu.com/20366435/
<kwmonroe> bdx: thanks for not making that link clickable
<bdx> aha
<bdx> http://paste.ubuntu.com/20366435/
<kwmonroe> bdx: blow away /home/bdx/allcode/charms/builds/kibana first.. if you can
<kwmonroe> then charm build it
<bdx> #$(&*@
<bdx> kwmonroe: that worked
<bdx> kwmonroe: thx
<bdx> my bad
<kwmonroe> bdx: please deposit $11 into my paypal account
<bdx> I didn't want to blow it away, bc its my .bzr
<bdx> aha
<bdx> kwmonroe: you rock
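The root of the confusion above: a built charm's `lib/charms/layer/` tree carries `__init__.py` files (layer-basic is what ships them, per kwmonroe's link), making `charms.layer` importable as a regular package, which charm tooling of this era (still Python 2 friendly) relied on. A runnable toy version, with made-up module names and a temp directory standing in for the `charm build` output:

```python
# Toy version of a built charm's lib/ tree; names and values are illustrative.
import os
import sys
import tempfile

libdir = tempfile.mkdtemp()
pkg = os.path.join(libdir, "charms", "layer")
os.makedirs(pkg)

# layer-basic normally ships these __init__.py files into the build,
# making charms/ and charms/layer/ regular importable packages.
for d in (os.path.join(libdir, "charms"), pkg):
    with open(os.path.join(d, "__init__.py"), "w"):
        pass

with open(os.path.join(pkg, "basic.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.insert(0, libdir)
from charms.layer import basic  # works: the package tree is complete
print(basic.VALUE)
```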
<kwmonroe> bdx: if you want to keep a built charm in bzr, i suggest you 'bzr branch <foo> foo-bzr"
<kwmonroe> then after every charm build, cp -a foo-bzr/.bzr foo
<bdx> yea, thats where I figured this was headed ... ok
<kwmonroe> that's not real syntax, but basically, copy the .bzr dir from your bzr charm to the output dir of your 'charm build' and you'll be able to bzr commit that
<bdx> totally
<bdx> ok
<bdx> thanks man
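kwmonroe's `.bzr` carry-over can be simulated with plain directories. bzr itself isn't assumed installed here, and the paths are placeholders; `.bzr` is just the VCS metadata directory being preserved across rebuilds:

```shell
set -e
work=$(mktemp -d) && cd "$work"

mkdir -p foo-bzr/.bzr               # stands in for: bzr branch <foo> foo-bzr
echo metadata > foo-bzr/.bzr/README
mkdir -p builds/foo                 # stands in for: the 'charm build' output dir
echo built > builds/foo/README

# After every 'charm build', re-attach the VCS metadata to the output dir
# so the built charm can be committed:
cp -a foo-bzr/.bzr builds/foo/
ls -a builds/foo
```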
<petevg> kwmonroe: darn it. I think that I rebased my kafka-bind-address branch from master this morning, when I was troubleshooting, and that means bad things for merging back to the kafka branch :-(
<petevg> Time to brush up on cherry picking ...
<kwmonroe> bdx: consider the value of keeping a built charm in bzr (or any vcs).  we're not ingesting anymore, so when you 'charm push', you are basically preserving a copy of your charm..
<magicaltrout> no one really wants to use  bzr...... ;)
<lazyPower> bdx - yep kwmonroe is on to something there. The current charm push model is very much akin to an image based workflow, where your image is the assembled charm you've versioned in your namespace's charm stream.
<kwmonroe> right, especially with layered charms.. the layer lives under some VCS, but the output of 'charm build' is what you 'charm push[; charm publish <x>]' to the store and is therefore locked to a point in time when you want that charm to be available.
<kwmonroe> petevg: please don't worry about re-syncing the kafka branch with master
<petevg> kwmonroe: that's exactly the opposite of what I'm worrying about.
<kwmonroe> lol
<petevg> I'm working on getting a version of my branch that won't introduce a bunch of muddled stuff from master.
<petevg> I think that I have it -- cherry picking turns out to be simpler than I've always thought it to be.
<petevg> I'm just re-running tests before I push, to make sure that I'm not introducing new bugs.
<kwmonroe> petevg: i think the crux of the issues happened because master's 1624 patch wasn't in sync to begin with
<petevg> kwmonroe: That's one of the problems, yes :-)
<bdx> tracking
<kwmonroe> whenever i find myself in these types of problems, i just commit, force rebase, take a few days off, and cory_fu seems to have it cleaned up when i return.
<kwmonroe> i think he likes it
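petevg's cherry-pick rescue, as a self-contained toy repo (branch, file, and commit names are made up): instead of rebasing the feature branch onto master and dragging in unrelated history, pick just the wanted commit across.

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > file.txt
git add file.txt && git commit -qm "base"
git branch kafka                       # the long-lived feature branch

echo unrelated > noise.txt             # master moves on with unrelated work
git add noise.txt && git commit -qm "unrelated master change"
echo fix > fix.txt                     # the one commit we actually want
git add fix.txt && git commit -qm "bind-address fix"
fix_sha=$(git rev-parse HEAD)

git checkout -q kafka
git cherry-pick "$fix_sha" >/dev/null  # only the fix lands; noise.txt does not
ls
```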
<petevg> kwmonroe: new PR. It's a thing of beauty: https://github.com/juju-solutions/bigtop/pull/28
<petevg> kwmonroe: I guess the other question is whether the puppet changes should still get merged to master or not (https://github.com/juju-solutions/bigtop/pull/27/files).
<petevg> ... and that also ties into the discussion on the bigtop mailing list about how hard it is to figure out how to review our merges, with things split between PRs containing charms, and PRs containing patches to the puppet scripts.
<petevg> For now, though, I've gotta EOD -- I'm on deck for dinner tonight. Catch ya later!
<kwmonroe> petevg: i am firmly in the camp of PR 27 being redundant.
<kwmonroe> we'll chat manana
<kwmonroe> have a great dinner!
<magicaltrout> dinner?!  I had a burger in the bar for the 3rd night out of 4 ;(
<bdx> lazyPower: should I just iterate over the templates with *.spvsr.conf and extract the appname from them?
<bdx> lazyPower: for layer-supervisor
<magicaltrout> no one give bdx a new job
<magicaltrout> he asks too many questions!
<bladernr> hrmmm... another dumb question, how do you define the default series for a charm?  is series: in metadata.yaml valid?
<magicaltrout> i believe it is bladernr
<magicaltrout> i still use it at least
<magicaltrout> some of it is all a bit weird though
<mgz> bladernr: it takes a list now
<bladernr> ok.. I can keep using -s, but it makes more sense for me to learn how to do so (and i guessed that series: was a valid key)
<magicaltrout> like pushing a multiseries charm is funky
<bladernr> mgz, so... series: xenial,trusty
<mgz> series:
<mgz>   - xenial
<mgz>   - trusty
<bladernr> ahhh... yeah..
<bladernr> thanks
<mgz> first one is the default
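Put together, mgz's answer as a metadata.yaml fragment (the charm name and summary are placeholders; the first entry in the list is the default series):

```yaml
name: iperf3
summary: placeholder summary
description: placeholder description
series:
  - xenial   # default: used when no series is given at deploy time
  - trusty
```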
<magicaltrout> bdx: you planning to be in Pasadena?
<bdx> you know it
<bdx> magicaltrout: you?
<magicaltrout> indeed. After your drunken night in ghent, i wouldn't miss it ;)
<bdx> magicaltrout: baha thx
<bdx> yea...
<bdx> no
<magicaltrout> also as a caveat although i live outside london, my office is really JPL in Pasadena, so I'm just combining a bunch of trips if  I can make it all align
<magicaltrout> although my eldest kid starts school that week as well, so its going to be a rush
<magicaltrout> as is my life
<magicaltrout> actually bdx if you didn't see this you'll like it
<magicaltrout> https://docs.google.com/presentation/d/1UdKSsuXpYSy25V9HuxnqgzQRmlC1DEtLJUgEY-5PDoc/edit#slide=id.g15623877ea_0_95
<magicaltrout> i did a presentation yesterday with Mark in the room
<magicaltrout> and unleashed this slide
<magicaltrout> not sure if I'll be allowed into the charmers summit ;)
<bladernr> YAY, charm is working!
<magicaltrout> woop
<bladernr> Ok, before I quit for the day... one last question... is there any way to set config on a per-unit basis?
<magicaltrout> erm
<bdx> magicaltrout: wow ... yea ... I was in the process of swapping out the job, woman, and house immediately before belgium .... errrgg I was hustling real hard around that time to finish school, maintain multiple openstacks, a damn dungeon dweller I was in the months preceding that event
<bdx> I had no idea what that european vino would do to me
<bdx> aha
<magicaltrout> hehe!
<magicaltrout> we should've got a video
<bladernr> the charm I wrote is for iperf3, which can be run as either a server or a client.  I suppose the proper answer is to create two charms, one for each case, rather than one charm that can do both
<magicaltrout> bladernr: if it makes you happier i do exactly that
<magicaltrout> because config options will diverge
<bdx> treacherous
<bladernr> right... just wondering if two separate charms are more "proper".
<bladernr> because right now, I can't just add-unit and then change the config on the new unit.
<magicaltrout> bladernr: the other option is leader election etc, i don't know enough of what you're doing
<magicaltrout> but it sounds like separate charms makes sense if you have server -> client stuff
<magicaltrout> and yeah
<magicaltrout> scale out will screw you as you seem to have noticed :)
<bladernr> ack... I'm just happy it's finally working.  I can do more tomorrow.
<bladernr> tonight is food, beer and then Star Trek
<magicaltrout> never seen star trek
<magicaltrout> not seen that other one...
<magicaltrout> whats it called
<magicaltrout> oh yeah
<magicaltrout> star wars
<lazyPower>  bdx - nah that sounds too magical. I did however miss this gem https://github.com/jamesbeedy/layer-supervisor/blob/master/layer.yaml#L5
<lazyPower> i looked in the reactive module, saw the state and status, and grinned when there was no install bits there
<lazyPower> and fwiw i retract that statement about moving it into the module. What you have there is bueno
#juju 2016-07-22
<magicaltrout> argh balls
<magicaltrout> as if i'm  double booked for juju talks
<magicaltrout> how is that possible?!
<rcj> Is there a charmer who could do a quick review of an MP to restore function in the bip charm lost moving from precise to trusty?  7 line diff @ https://code.launchpad.net/~rcj/charms/trusty/bip/lp1604759/+merge/300585
<stub> bradm: You're listed as maintainer of that one
<stub> rcj: I've reviewed and merged it. It won't get in the charm store until the promulgation dances are done, which I think requires bradm since he is listed as the maintainer.
<rcj> stub, thanks
<ryebot> Is there some juju 2 command that will allow me to recover a registration key given a user name? e.g., if the originally provided key is lost
<ryebot> Also, is there a way to delete a user? Not just disable; I want to be able to reuse the name for testing purposes.
<petevg> @kwmonroe, @cory_fu: Pushed revised PRs for https://github.com/juju-solutions/bigtop/pull/27 and https://github.com/juju-solutions/bigtop/pull/28  lmk if they look good (I did re-run the tests, and they still pass.)
<balloons> there's two juju-core packages sitting in -proposed for xenial. Could the package from 7/18 be rejected, and the package from today be accepted?
<bdx> mbruzek: how are you installing bundletester? - I can't get past the cheetah dep on trusty or xenial ... `pip install bundletester` fails
<bdx> on the cheetah dep
<bdx> mbruzek: hmmm, ok, a fresh trusty container, and it installed correctly .... strange
<bdx> mbruzek: I'm guessing bundletester doesn't work with 2.0 yet either, so I need to get a 1.x env setup .... grrr
<tvansteenburgh> bdx: it'll work next week
<tvansteenburgh> it actually works now if you run it in charmbox:latest
<tvansteenburgh> lazyPower: is that the right tag? ^
<lazyPower> tvansteenburgh - its in charmbox:devel, but he'll need to grab the deps. we didnt publish a tag
<tvansteenburgh> oh ok
<bdx> tvansteenburgh, lazyPower: on it, thanks guys
<lazyPower> i should do that though
<kwmonroe> wait wat?  lazyPower, charmbox:devel will soon have bundletester back?
<lazyPower> kwmonroe - well, it technically has it now... but you have to manually upgrade some components
<lazyPower> i was going to push a tag release so we could hack with it until it gets in the right spots and it just shows up in devel
<lazyPower> kwmonroe https://github.com/chuckbutler/charmbox/commit/9e917fb4d43e91149870e3fdf6070b708d641a65
<lazyPower> sorry kwmonroe - i cleaned that up for legibility -- https://github.com/chuckbutler/charmbox/commit/cfa3b42b0ed74fded798f0b172881dedf2a202bc
<lazyPower> bdx tvansteenburgh - i just pushed lazypower/charmbox:beta-12 -- if you want to test with bleeding edge, that was just pressed ~ 5 minutes ago
<bdx> marcoceppi: I think I have some stale controllers bootstrapped in my aws charm dev account, would you mind clearing them for me?
<bdx> http://paste.ubuntu.com/20507893/ :-(
<marcoceppi> bdx: I will later tonight, thanks for the heads up
<bdx> thank you
<bdx> lazyPower: amulet
<bdx> amulet hasn't been refactored for 2.0 yet has it?
<lazyPower> bdx - i have all the latest deps put together for you in a container already :)
<lazyPower> bdx  lazypower/charmbox:beta-11
<lazyPower> er beta-12
<bdx> lazyPower: I'm running it
<lazyPower> otherwise, you'll need to fetch tip from the following:   bundletester, amulet, deployer, jujuclient
<lazyPower> and install them in that order
<lazyPower> additionally, if any of your amulet tests do something like apt-get install amulet or pip install amulet, the house of cards falls apart
<bdx> ahh, so https://github.com/jamesbeedy/layer-kibana/blob/master/tests/tests.yaml
<bdx> would bring her down
<bdx> ?
<lazyPower> yep
<bdx> ok, thats whats up then
<lazyPower> so, you can comment that out or just rm it while you get on with life until it lands in stable next week
<bdx> bah
<bdx> ok
<lazyPower> sorry its hinky but we're mid-flight, packaging solves this
<bdx> for the sake of finishing what I set out to do today
<bdx> no worries.
<bdx> perfect
<bdx> thanks for your help today
<lazyPower> anytime :)
<lazyPower> anything else before i run off for the weekend?
<lazyPower> magicaltrout - get some rest :) Big weeks coming up for you. Try to enjoy your weekend
<bdx> nah, I'm going to try to take some time away from the screen myself this weekend
<bdx> thanks tho - enjoy
<lazyPower> :) awesome. see you on Monday then
<lazyPower> cheers
<linnn> With juju Relations between charms what's the convention when a port needs to be exposed?
<linnn> I have a website charm and a database charm, so when I add the relation between the two the database port needs to be open.
<linnn> Is there a way to expose ports in relation hooks?
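linnn's question doesn't get answered in-channel, but the usual pattern is: the charm that owns the port calls the `open-port` hook tool (for example via charmhelpers' `open_port`) from its relation hook and publishes connection details on the relation; `juju expose` then controls public reachability. A sketch with the hook-tool wrappers replaced by recording stubs, since the real ones only work inside a Juju hook context (host/port values are illustrative):

```python
# Recording stubs; in a real charm these would be
# charmhelpers.core.hookenv.open_port / relation_set.
calls = []

def open_port(port, protocol="TCP"):
    calls.append("open-port {}/{}".format(port, protocol))

def relation_set(**settings):
    calls.append("relation-set {}".format(sorted(settings.items())))

def db_relation_joined():
    # The database charm opens its own port and tells the website charm
    # where to connect.
    open_port(3306)
    relation_set(host="10.0.0.5", port=3306)

db_relation_joined()
print(calls)
```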
#juju 2016-07-23
<jobot> hello, I am getting a lot of "update-status" hook (missing) in the debug-log, but in the documentation I don't seem to find this Hook: https://jujucharms.com/docs/stable/authors-charm-hooks
<godleon> Hi all, is there any way to flush MAAS's DHCP address pool? I found the IP range I configured on MAAS has almost run out after deploying OpenStack many times.
<hloeung> godleon: probably best ask your question in #maas ?
<godleon> oh...ok....thanks!
<flicflac> 185.30.166.37 @verne
<flicflac> admin !a!
<bbaqar> guys .. my rabbitmq-server units are giving "Unit has peers, but RabbitMQ not clustered" status
<bbaqar> refusing login to nova-compute "AMQPLAIN login refused: user 'nova' - invalid credentials"
#juju 2017-07-17
<cumulus_vert> Anybody here ?
<cumulus_vert> @wolverineav
<cumulus_vert> I want to know what kind of cool things I could do with jujucharms.com
<cumulus_vert> spinning programs up on clouds easily sounds sweet, but if I don't have exciting projects or ideas to offer anyone value based on the services, I am still at square one.
<cumulus_vert> What is everyone excited about with juju charms?
<cumulus_vert> If I could use this to move objects in the physical world it would be fun.
<NileshSawant> Hello All, Can someone please point me documentation for configuring neutron + ovs in juju  base setup ?
<ybaumy> hmm how do i redeploy a unit?
<ybaumy> the openstack dashboard is not working
<ak_dev> I am trying to code kubernetes-master and kubernetes-worker charms for now, since changing options in the already existing ones is pretty complicated
<ak_dev> I am installing services via snap, but for some reason the kube-apiserver always fails to start (due to apparmor i think)
<ak_dev> how do I start kube-apiserver then? Rest all start fine
<dakj> hello guys, why I can't open Juju Gui on a VPN connection? while I've not any issue with MAAS or Landscape? someone knows why? thanks
<dakj> I've opened a post here https://askubuntu.com/questions/936248/not-view-juju-gui-on-vpn-connection
<rick_h> dakj: can you open the web console and see what errors might be present?
<rick_h> dakj: you got to the base page so seems the path/port is accessible but for some reason it's not able to connect to the Juju controller websocket and it's hanging there.
<dakj> rick_h: the error on its dashboard is reported on that post. it rests in pending in Requesting code, and nothing else.
<rick_h> dakj: so there's likely to be a browser error in the browser console. On Linux you can use ctrl-shift-j I believe to open the console tools.
<dakj> rick_h: so let me make that, I'll post the result.
<rick_h> dakj: awesome ty
<dakj> rick_h: here is the result https://pasteboard.co/GBlNRzf.png
<rick_h> dakj: so that looks like the computer the GUI is running on can't reach the api.jujucharms.com url. Now, that shouldn't prevent the loading of the GUI though. That's a bug I'll get looked at.
<dakj> rick_h: ok thanks
<pranav_> Hello good folks. Need some assistance with subordinate charms
<pranav_> I am writing a subordinate charm which is subordinate of ceilometer application
<pranav_> but needs a regular relation with mysql:shared-db interface
<pranav_> is this possible to achieve?
<dakj> rich_h: can you help me with Openstack base? I've already deployed on my lab and about that I've not issue, but when I try to create a new instance the result is "no networks defined". Have you any idea?
<rick_h> dakj: hmm, that's going to be in the openstack setup. There's a bunch of questions around that online: https://askubuntu.com/questions/934780/no-networks-defined-on-openstack and https://ask.openstack.org/en/question/58923/error-http-404-no-networks-defined/ and such
<dakj> rick_h: the first one is mine.
<rick_h> dakj: oh heh, ok.
<rick_h> dakj: so looking at https://github.com/openstack-charmers/openstack-community seems the best bet might be an email to the openstack team email list.
<dpawar> pranav_: PM ?
<pranav_> tinwood: PM?
<tinwood> pranav_, ??
<stub> pranavs: The telegraf charm is a subordinate that mixes subordinate and regular relations : https://git.launchpad.net/telegraf-charm/tree/
<pranavs> stub: Many thanks for the pointer! I will take a look
<Dockerya> Hi
<Dockerya> Had a power outage on openstack infra provisioned using MaaS juju
<Dockerya> everything else came up except ceph-mon and osds
<Dockerya> https://thepasteb.in/p/oYhlWGZBnBZtZ
<Dockerya> was wonderng if someone can help resolve leader election issue
<Dockerya> https://jujucharms.com/ceph-mon/
<Budgie^Smore> o/ juju world
#juju 2017-07-18
<kjackal_> Good morning Juju world!
<magicaltrout> kubernetes hackers I have a snap pattern question for you
<magicaltrout> it looks to me like you provide the snap as a resource
<magicaltrout> how does the charm know an upgrade is available when you push a new snap k8s version?
<kjackal_> hey magicaltrout
<kjackal_> we provide the snap as a resource so as to deploy on network-restricted environments. The resource is usually a zero-sized file, and that causes the charm to go and fetch the snap from the snap store
<magicaltrout> ah cool
<magicaltrout> that solves that riddle
<magicaltrout> i snapped up openldap and wondered how best to ship it
<kjackal_> in the config of each charm we set a channel from which the snaps are fetched (eg, 1.7/stable). Everything we push to that channel you get it transparently. So if you deploy kubernetes now you will get the 1.7/stable channel and we will be pushing (through the snap store) all the 1.7.x updates
<kjackal_> if you want to upgrade from 1.7 to 1.8 you should follow the upgrade instructions that involve updating the channel in the charm's config
<kjackal_> magicaltrout: ^
<magicaltrout> ah yeah the multichannel stuff as well, cause 1.7 branch updates should happen automatically right?
<kjackal_> yes, that is the idea. 1.7.x releases of kubernetes should be compatible with each other
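The upgrade kjackal_ describes comes down to changing the charm's snap-channel config. The option name `channel` is an assumption based on the discussion (check the charm's config listing), and `juju` is stubbed to echo since this isn't a live session:

```shell
juju() { echo "would run: juju $*"; }

# 1.7.x point releases arrive automatically from the 1.7/stable channel;
# a 1.7 -> 1.8 jump is an explicit config change on the charm:
juju config kubernetes-master channel=1.8/stable
```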
<magicaltrout> what happens when you screw everything up like conjure-up? ;)
<kjackal_> we get fired... I guess
<kjackal_> what do you not like about conjure-up ?
<magicaltrout> i have no problem with conjure up
<magicaltrout> but there was an email sent out last night saying it was borked
<magicaltrout> and everyone will suffer cause of the rolling updates ;)
<magicaltrout> https://lists.ubuntu.com/archives/juju/2017-July/009211.html
<magicaltrout> that one :)
<stokachu> magicaltrout: to be fair it wasn't conjure-up, it was snapd
<magicaltrout> indeed stokachu, I'm only ribbing ya'll, but I am curious about the checks and balances
<stokachu> magicaltrout: yea obviously there are some additional validations that need to be made as this is a problem that affects a _ton_ of users
<stokachu> in the same vein as how apt packages are handled
<magicaltrout> yeah but also apt updates are generally user defined so you'd hope to find out about some massive issue on a small number of users before everyone gets it
<magicaltrout> snaps automatic updating makes that harder I guess
<rick_h> well it's trading one pain for the other. Otherwise you have folks with giant old xxx out there with holes in it left and right.
<rick_h> but since they're afraid of touching production, wheeeee
<rick_h> there was a post on that recently, let me find it. they're going to allow more control
<magicaltrout> nice
<rick_h> https://forum.snapcraft.io/t/disabling-automatic-refresh-for-snap-from-store/707/11
<magicaltrout> looks sensible
<rick_h> so the agreement is some additional knobs, but fundamentally the thing is servers ship with automatic apt updates as well and someone publishes a new apt package for something it hits.
<rick_h> I think it's generally in line on both the snap/apt side to try to help keep these production microservices/scale out stuff safe by default
<magicaltrout> thanks for those sage words rick_h
<rick_h> heh, haven't had my coffee yet so :P
<magicaltrout> I was saying on the drive in this morning that it makes sense, like "install charm a", forget about it, but keept it in sync via snap updates
<magicaltrout> i like it so its certainly a pattern I'm trying to construct for my charms going forward
<rick_h> yea, I think for a lot of folks it's something helpful. e.g. every self hosted wordpress ever lol
<rick_h> and it helps with that idea of CI/CD we all think is the holy grail
<rick_h> and hopefully folks use the different channels in there to beta test/etc
<ak_dev> https://www.irccloud.com/pastebin/cFegoDAR/
<ak_dev> There seems to be a problem with the above interface (peer)
<ak_dev> the states go as such : .connected -> .unsigned.cert.available -> .signed.cert.available
<ak_dev> .connected and .ip_request is set by the first charm and .master_ip.available and .signed.cert.available are set by theother
<ak_dev> somehow, the relation state does not change after".signed.cert.available" , anything I could be doing wrong?
<ak_dev> the ideal states should be such : .connected -> .unsigned.cert.available -> .signed.cert.available -> .ip_request -> .master_ip.available
<ak_dev> thank you!
<xnox> hi
<xnox> where is the source code for https://jujucharms.com/u/containers/kubernetes-master/ ?
<xnox> the git tree and/or bzr tree? i'm failing to find it.
<xnox> found it at https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/config.yaml
<xnox> finally
<kjackal_> hi xnox the code for the kubernetes charms are pushed upstream, give me a sec to get you the link
<kjackal_> ahh and now i saw your last two lines!
<xnox> kjackal_, still confused about snaps, and the "canonical distribution" charms they seem to be complicated.
<kjackal_> xnox: this is why we are here :) Tell me!
<kjackal_> xnox: What do you want to do?
<xnox> kjackal_, i'm confused what enable-dashboard-addons do and what is the contents of cdk-addons snap
<xnox> and what it translates upstream
<xnox> i guess my lack of understanding of kubernetes itself and their addons shows now.
<kjackal_> xnox: no, do not lose faith! The dashboard-addons are enabled here: https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L462
<kjackal_> they are essentially passed as an argument to the cdk-addons snap https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L462
<xnox> which i traced to deploying charms / installing snap which is based on https://github.com/juju-solutions/cdk-addons/blob/master/get-addon-templates
<kjackal_> cdk-addons
<kjackal_> https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L469
<xnox> but i'm confused as to what is inside the cdk-addons snap =) https://github.com/juju-solutions/cdk-addons
<kjackal_> you got it!
<kjackal_> Lets see
<xnox> oooh, but https://github.com/juju-solutions/cdk-addons/blob/master/Makefile is hm. building cdk-addons.yaml from kubernetes upstream ?!
<kjackal_> Yes, https://github.com/juju-solutions/cdk-addons/blob/master/Makefile will grab the addons from upstream and package them as snaps
<kjackal_> xnox: here is what gets enabled by the addons flag: https://github.com/juju-solutions/cdk-addons/blob/master/cdk-addons/apply#L30
<xnox> kjackal_, but that for example does not deploy influxdb or grafana services, these should be deployed by something else already right? E.g. grafana charm?
<xnox> hm.
<kjackal_> xnox: you need something like this: https://jujucharms.com/canonical-kubernetes-elastic/ ?
<xnox> unless enabling the dashboard plugin, pulls grafana k8s container image, and deploys grafana as a k8s container in the k8s cluster.
<xnox> kjackal_, i mean if i do juju config kubernetes-master enable-dashboard-addons=true as advised on
<xnox> Accessing the Kubernetes dashboard section
<xnox> where would influxdb and grafana come from. Do i need to provide them (e.g. my influxdb is available at influxdb.example.net on my network)
<xnox> or would grafana charm be deployed in my environment
<xnox> or would a grafana k8s container image be deployed in my juju environments' k8s cluster.
<kjackal_> xnox: based on https://github.com/juju-solutions/cdk-addons/blob/master/get-addon-templates#L92 the yaml files that would be applied are found in https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-monitoring/influxdb
<joedborg> hey all, is there a way to find a controller name, even if juju status won't respond (because the controller has gone)
<kjackal_> xnox: And that should include the services deployed within kubernetes
<xnox> kjackal_, ack.
<kjackal_> xnox: for example here is a docker image that gets deployed: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/cluster-monitoring/influxdb/influxdb-grafana-controller.yaml#L24
<kjackal_> joedborg: try cat ~/.local/share/juju/controllers.yaml
<joedborg> cheeers kjackal_
<kjackal_> anytime
<xnox> kjackal_, thanks.
<Budgie^Smore> morning 0/ juju world
<ak_dev> hello, could anyone please help me out on the peer relation bug which I posted?
<Purnendu> is it possible to disconnect the JUJU controller and connect it back again
<bdx> Purnendu: yea!
<ak_dev> kjackal:
<ak_dev> open_port did not work on the CENGN pod it seems
<ak_dev> dont know why exactly though
<infinityplusb> morning! How can I kill an application deployed into Juju? I have one that is stuck in some sort of loop and won't finish trying to install. I'd rather just kill it and redeploy if such a thing is feasible?
#juju 2017-07-19
<rick_h> infinityplusb: try doing a juju resolved xxxx and such to get it through the error so that it can start the destroy.
<rick_h> infinityplusb: if it's on a system where it's the only application on the machine you can remove-machine --force
<rick_h> and skip the application side of things
<ak_dev> juju remove-application <name> should do
<ak_dev> ^infinityplusb
<ak_dev> If u want to remove the entire machine, use
<ak_dev> juju remove-machine <machine_num> --force
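The escalation path rick_h and ak_dev describe can be summarised as a dry-run sketch (run() only echoes; the application name and machine number are placeholders):

```shell
# Dry-run sketch: unsticking and removing a looping application.
run() { echo "+ $*"; }
app="myapp"     # hypothetical application name
machine="2"     # hypothetical machine number
# 1. clear the error state so the destroy hooks can proceed
run juju resolved "${app}/0"
# 2. normal removal
run juju remove-application "$app"
# 3. last resort, if the app is alone on the machine: force-remove it
run juju remove-machine "$machine" --force
```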
<ak_dev> kjackal: are you online now?
<kjackal___> ak_dev: I am here!
<kjackal___> Could you explain where open_port is failing? I did not get that
<kjackal___> ak_dev: ^
<ak_dev> kjackal___: Hey!
<ak_dev> I tested the charm on CENGN pod, and I had open_port call in the principal master and worker charms
<jwd> morning
<ak_dev> i will forward you the bundle so you can see for yourself
<ak_dev> https://www.irccloud.com/pastebin/Z6EA1mtY/
<ak_dev> do change the "gateway-physical-interface" option if you are deploying
<ak_dev> kjackal___:
<ak_dev> kjackal:
<ak_dev> ^^
<kjackal___> ak_dev: I am not sure what "on CENGN" refers to. These acronyms do not ring a bell
<kjackal___> especially combined with the "pod"
<ak_dev> kjackal___: oh sorry, it is this https://wiki.opnfv.org/display/pharos/CENGN+Hosting
<ak_dev> something like GCE where we can test the charms
<kjackal___> you tested a charm on a pod? We are talking about a kubernetes pod
<ak_dev> even I was super confused the first time someone mentioned this
<kjackal___> ak_dev: Ah I see now, is it a openstack cloud?
<ak_dev> yeah, if I understood it correctly
<kjackal___> and you are saying the open_port does not work if you call it from within a charm but it works if you open-ports from the cli... strange...
<ak_dev> oh, no I did not try from cli
<kjackal___> do you get anything on the logs?
<ak_dev> how do I do that?
<kjackal___> juju run --application kubernetes-worker open-port "1234/tcp"
<kjackal___> ak_dev: ^
<ak_dev> kjackal___: oh okay, i will redeploy and try that and get back to you
<ak_dev> :-)
<kjackal___> lets see, which charm are you deploying?
<ak_dev> the bundle I forwarded you earlier
<ak_dev> kjackal___: ^
<kjackal___> ak_dev: I do not see open_port on the ovn-5 charm
<ak_dev> I put it in the kubernetes-master charm
<ak_dev> kjackal___: and the kubernetes-worker charm
<kjackal___> ok, I see!
<kjackal___> 8080, 6641 and 6642
<kjackal___> let me try to deploy
<ak_dev> kjackal___: yeah, sure, I too am trying here
<ak_dev> kjackal___: is it possible to open_port for a machine rather than an application?
<kjackal___> ak_dev: do a juju run --help
<kjackal___> there is a --machine option, it should work
<ak_dev> kjackal___: oh okay
<ak_dev> "juju run --machine 2 open-port "6641/tcp"
<ak_dev> says command not found
<ak_dev> kjackal___:
<kjackal___> probably because you are out of the context of an application...
<ak_dev> kjackal___: oh, I don't know what that means actually, is it that I am not allowed to run such a command on the machine?
<ak_dev> the OVN subordinate charm and the principal charm both require the same ports to be open
<kjackal___> I _think_ open-port is not in your path if you are not running it within an --application
<ak_dev> ah I see
<ak_dev> okay so if the subordinate OVN charm requires 6641 to be open and I open it in master, will  that work?
<ak_dev> I don't think so, but still asking for confirmation
<kjackal___> you would need it open on the workers as well, right?
<ak_dev> worker i require only 8080
<ak_dev> kubernetes-master : 8080
<ak_dev>       ovn : 6641, 6642
<ak_dev> kubernetes-worker : 8080
<ak_dev>       ovn : 6641, 6642
<ak_dev> kjackal___: I think this is what should work
<ak_dev> i just modified to have open ports in all three charms according to above, lets see if that helps
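The "command not found" above is consistent with kjackal___'s hunch: open-port is a hook tool, so it only exists inside a hook context, which `juju run --application` (or `--unit`) sets up and `juju run --machine` does not. A dry-run sketch of the port checks discussed here (run() only echoes):

```shell
# Dry-run sketch: exercising open-port from the CLI via a hook context,
# then asking juju which ports it believes are open.
run() { echo "+ $*"; }
run juju run --application kubernetes-master 'open-port 8080/tcp'
run juju run --application kubernetes-worker 'open-port 8080/tcp'
# opened-ports is the companion hook tool for inspecting the result
run juju run --application kubernetes-master 'opened-ports'
```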
<jwd> anyone know what a status: unknown means?
<ak_dev> jwd: I think that means that the charm did not update its status, but kjackal___ might know better
<ak_dev> kjackal__: that did not work
<jwd> i do a wild test anyway atm. using lxd on debian to run juju :-)
<kjackal__> anything interesting in the logs? Did you see the ports opening?
<kjackal__> are you deploying kubernetes on lxd?
<ak_dev> nope, its on GCE
<ak_dev> should i check /var/log/syslog?
<kjackal__> /var/log/syslog and /var/log/juju/unit-?????
<kjackal__> jwd: you mean you got an lxd container and inside there you deploy juju?
<jwd> no i use juju to create lxd containers for me
<kjackal__> jwd: you might find deploying juju through snaps an easier way if you want to move away from ubuntu
<jwd> i used snaps
<kjackal__> nice!
<jwd> running it on debian stretch
<jwd> just wondered why the machine states always ends in unknown
<jwd> even tho all is working
<kjackal__> jwd: the machine state is set by the charm. All of the charms you deployed ended on an unknown state? Or was it only one?
<jwd> all
<kjackal__> that is strange
<kjackal__> you would better file a bug
<jwd> https://pastebin.com/jfRED3Wd
<jwd> just a few things i tested
<kjackal__> I think filing a bug and including all the logs is the right way to go
<jwd> kk
<ak_dev> kjackal__: okay so nothing about opening ports on syslog
<ak_dev> and neither see it in juju log
<ak_dev> just checked the code, it has the open_port function call
<ak_dev> 8080 port opened on both though, verified in the GCE firewall rules
<kjackal__> ak_dev: this does not make sense, my deployment on aws did open ports properly
<ak_dev> I might be doing something really silly then, I will recheck everything
<ak_dev> did kubernetes run on your deployment?
<kjackal__> ak_dev: let me see, there must be an open_ports (with an s) call
<kjackal__> ak_dev: no it did not work because it did not have easyrsa i think on the bundle
<ak_dev> kjackal__: ah no, I have implemented it inside the kubernetes-master charm
<ak_dev> if you still have it, could you try "sudo kubectl get pods" on master?
<kjackal__> I got a "hook failed: "cni-relation-joined" for ovn:cni"
<kjackal__> on the ovn subordinate of the master
<ak_dev> oh
<ak_dev> let me check
<kjackal__> ak_dev: and I have this error in the logs: http://pastebin.ubuntu.com/25124335/
<ak_dev> kjackal__: what is the gateway-physical-interface you are using?
<kjackal__> ak_dev: I did not set one. It is what the smart default  is
<ak_dev> looks like I did not use the default right
<ak_dev> kjackal__: thanks for pointing out that error though! Seems like the errors dont ever end on this one!
<kjackal__> ak_dev: remind me again how to get the proper gateway
<ak_dev> ip route | grep default
<ak_dev> this should do
<ak_dev> kjackal__: ^
<kjackal__> ip route | grep default
<kjackal__> default via 172.31.0.1 dev breth0
<kjackal__> And it is the breth0, right ak_dev
<kjackal__> ?
<ak_dev> kjackal__: did you run it on the node where this charm ran?
<ak_dev> cause it creates a new interface with 'br' as prefix
<kjackal__> yes
<ak_dev> kjackal__: so i guess your interface should be eth0
<ak_dev> kjackal__: but its strange that it created the interface, since it should not after you got that error
<ak_dev> I too am confused now
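The interface confusion above can be untangled mechanically: parse the device out of the `ip route | grep default` output, and if the charm has already replaced it with its bridge, strip the leading "br" to recover the physical interface. A sketch using the sample route line from the chat (the br-prefix convention is an assumption based on ak_dev's description of the charm):

```shell
# Sketch: derive the gateway-physical-interface from a default-route line.
route_line="default via 172.31.0.1 dev breth0"   # sample from the chat
# pull out the word after "dev"
dev=$(echo "$route_line" | awk '{for(i=1;i<NF;i++) if($i=="dev") print $(i+1)}')
phys=${dev#br}    # breth0 -> eth0 once the charm's bridge exists
echo "$dev -> $phys"
```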
<BlackDex> Hello there, i currently have a HA Juju env, but i need to remove two of those nodes and i'm currently not able to create a new node for more HA
<BlackDex> What is the best way to undo the HA of juju, and keep just one running?
<rick_h> BlackDex: what cloud is this?
<BlackDex> what cloud?
<rick_h> BlackDex: what provider? is this on AWS, GCE, an openstack?
<BlackDex> ah
<BlackDex> maas/openstack :)
<BlackDex> i think i have it working btw
<BlackDex> :)
<rick_h> BlackDex: oh ok awesome
<BlackDex> needed to do some manual mongodb stuff
<tvansteenburgh> jacekn: mthaddon: you guys want/need to meet? sorry, I got hung up in a sprint session but i'm available now if you want to hangout
<tvansteenburgh> or we can just wait til next week, either way is fine
<ak_dev> hello, any reason why kube-proxy can fail on kubernetes-worker ?
<ak_dev> what I mean by fail is fail to start
<ak_dev> https://www.irccloud.com/pastebin/yWnQ1FoP/
<ak_dev> that is what i get
<vds> Hi all, suggestion on how to debug a reactive charm that is not registering a nagios hook? here's the reactive module https://paste.ubuntu.com/25126369/ apparently the relations are added correctly but the nagios check is not registered
<Budgie^Smore> o/ juju world
<rick_h> Reminder: juju show in 10 minutes
<rick_h> Getting stuff setup to chat
<rick_h> https://www.youtube.com/watch?v=3lcl51SVX2E for watching and https://hangouts.google.com/hangouts/_/gccrypbjbbbcrniklqvt2gkjcue for chatting live!
<ak_dev> hello
<ak_dev> one more question guys
<ak_dev> does a subordinate charm receive events from the principal charm?
<ak_dev> thedac: ^^
<thedac> ak_dev: it can if there is a relationship defined and data is passed across that relationship, but that has to be coded.
<ak_dev> oh, I have a cni relationship
<ak_dev> basically, its kubernetes-cni b/w my charm and kubernetes-master
<ak_dev> thedac: ^
<ak_dev> but it has to be coded in the relationship you mean?
<narindergupta> thedac, can u show any example if there is any?
<thedac> ak_dev: yes, the charm has to set something on the relationship and the other side needs to react
<narindergupta> ak_dev, in that case you can use the existing event in kubernetes-master and kubernetes-worker
<ak_dev> thedac: thanks :-)
<thedac> no problem
#juju 2017-07-20
<digv> charm-proof fails with "E: Unknown root metadata field (resources)"... but resources is KNOWN_METADATA_KEYS
<digv> never mind.. resolved
<pranav_> Hello #juju. Need help on uploading charm to the store
<pranav_> We need to rename our previous charm and update the source.
<pranav_> How do we go about it?
<kjackal> hi pranav_ you can revoke permissions of a charm (charm revoke) and that would render it essentially unreachable
<kjackal> pranav_: then you can push a new charm under a new name
<pranav_> kjackal: Thanks. Will do that.
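kjackal's rename-by-republish route as a dry-run sketch (run() only echoes; the charm names are hypothetical, and the exact revoke flags should be checked against `charm help revoke`):

```shell
# Dry-run sketch: there is no in-place rename in the charm store, so
# hide the old name and push/release under the new one.
run() { echo "+ $*"; }
run charm revoke cs:~myteam/oldname --acl read everyone
run charm push ./mycharm cs:~myteam/newname
run charm release cs:~myteam/newname-0
```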
<cnf> morning
<Budgie^Smore> o/ juju world, so excited and nervous for next week
<rick_h> Budgie^Smore: all good :)
<Budgie^Smore> rick_h of course, life wouldn't be as fun if those opposites didn't exist together ;-)
<catbus> Hi, the link to submit charm for review at the bottom of the https://jujucharms.com/docs/stable/developer-getting-started is for reviewers, is there one for submitter?
<vasey_> hey folks, is there a way (using the Landscape/Autopilot deployment for Openstack) to specify for juju to use Zesty rather than Xenial for the charms it deploys?
<catbus> I believe this is the one I am looking for: https://review.jujucharms.com/reviews/new
<catbus> vasey_: I don't think it can be done with Landscape/Autopilot.
<catbus> vasey_: are you open to use juju directly to deploy instead?
<vasey_> catbus: probably not, unfortunately
<vasey_> catbus: do you know where the charms' directory is for autopilot? might try and replace all the xenials to zesty in the charm files
<catbus> vasey_: the autopilot deploys clouds that are validated and supported by Canonical, I don't think zesty is enabled there yet.
<catbus> vasey_: I do not know.
<catbus> vasey_: Have you looked at conjure-up?
<vasey_> catbus: i haven't
<catbus> vasey_: https://conjure-up.io/, and you can find the bundle each spell is using at for example openstack-base: https://github.com/conjure-up/spells/blob/master/openstack-base/metadata.yaml, it's pulling 'openstack-base' bundle from the charm store. There may be a way to change it to something local, I don't know, but it might be just a lot easier to download the openstack-base bundle, modify it and deploy it using juju.
<catbus> vasey_: I don't see zesty available for 'ubuntu' or 'keystone' charm. So even if you specify zesty, it will report errors when trying to download the charm for the zesty series.
<stokachu> vasey_: you can download the bundle and then clone our spells directory to deploy from local
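The download-and-modify route catbus and stokachu suggest, as a dry-run sketch (run() only echoes; the sed edit assumes the bundle declares `series: xenial`, and per catbus's warning a zesty series may simply not exist in the store for these charms):

```shell
# Dry-run sketch: fetch the openstack-base bundle, retarget the series,
# and deploy the local copy with juju directly.
run() { echo "+ $*"; }
run charm pull openstack-base
run sed -i 's/series: xenial/series: zesty/' openstack-base/bundle.yaml
run juju deploy ./openstack-base/bundle.yaml
```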
#juju 2017-07-21
<pranav_> Hello #juju. Need help publishing our charm for review.
<pranav_> juju review page is showing error
<pranav_> https://pastebin.ubuntu.com/25138715/
<pranav_> The charm pull URL here is incorrect "charm pull ~vtas-hyperscale-ci/xenial/hyperscale-controller-0"
<pranav_> it should be ~vtas-hyperscale-ci/hyperscale-controller-0
<kjackal> hi pranav_ I was able to pull the charm with 'charm pull cs:~vtas-hyperscale-ci/hyperscale-controller-0' after agreeing to the terms
<kjackal> pranav_: isn't this what you expected?
<gdupont> hi all, anyone know the prerequisite to connect juju to an openstack? (not to deploy openstack with juju)
<gdupont> it seems it's expecting some default things such as a network and flavors I guess
<pranav_> kjackal: yes. But the review page is showing error and due to that the source is not visible on the review request
<pranav_> i am trying to push new changes to refresh the review
<pranav_> https://review.jujucharms.com/reviews/124
<kjackal> pranav_: we are in the process of re-structuring the review process. tvansteenburgh (off-line now) may have more on the subject. It might be worth asking on the list since I do not know much on the subject
<pranav_> kjackal: sure will try and follow up. Surprisingly our other reviews were raised just fine.
<pranav_> kjackal: Thanks for the pointer though. I will follow up :)
<Budgie^Smore> o/ juju world
<vasey_> stokachu: where do i download the bundle from?
<hml> vasey_: there are openstack bundles in the charm store, check the openstack-base bundle
<ak_dev> hello, I am trying to set values in etcd which is installed via etcd charm
<ak_dev> I am using etcdctl to do that
<ak_dev> any reason why it would give connection refused?
<ak_dev> etcdctl --endpoint 'https://10.142.0.3:2379' --cert-file /etc/ssl/ovn/client-cert.pem --key-file /etc/ssl/ovn/client-key.pem --ca-file /etc/ssl/ovn/client-ca.pem set /ovn/old_interface 'ens4'
<ak_dev> ^^ the command
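A "connection refused" on 2379 usually means nothing is listening at that address/port (a TLS or cert problem would fail differently, after the TCP connect). A dry-run sketch of first checks (run() only echoes; the unit name is an assumption):

```shell
# Dry-run sketch: triage "connection refused" from etcdctl.
run() { echo "+ $*"; }
# is etcd actually listening on the client port on that unit?
run juju run --unit etcd/0 'ss -ltn'
# is the charm healthy, and is 10.142.0.3 really one of its addresses?
run juju status etcd
```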
#juju 2017-07-22
<tvansteenburgh> ybaumy: fyi, the canonical-kubernetes-elastic bundle was updated
<ddellav> wow I'm still in here... lol ok, well so long, and thanks for all the fish, much love to you all :)
#juju 2017-07-23
<ak_dev> Hello all, could you guys please deploy this bundle https://github.com/AakashKT/ovn-kubernetes-charm/blob/master/bundles/kubernetes-ovn/bundle.yaml, to test it?
<ak_dev> This is Kubernetes with OVN, I need to test it on different cloud providers, I have tested it on Google Cloud
<ak_dev> thedac, tvansteenburgh, rick_h, kjackal ^^
<ak_dev> also, do try to run some pod to test it :-) thank you all
<ak_dev> do ping here if you have deployed, for now we have to run 4 commands manually, since I still haven't figured out where to put them, any suggestions welcome :-)
#juju 2018-07-16
<anastasiamac> rick_h_ or anyone keen on a simple review: https://github.com/juju/juju/pull/8932 (renames human readable uuid to differentiate btw controller and model)
<babbageclunk> anastasiamac: approved!
<anastasiamac> babbageclunk: \o/
<veebers> babbageclunk: git query, If I've branched of tip of develop, but I wanted to branch of a commit or 2 back from tip, can I rebase that branch onto that commit or something like that?
<veebers> babbageclunk: reason, something landed that breaks things for me in caas, and I wanted a quick way to keep moving forward while not having to worry right now about the issue
<babbageclunk> veebers: sorry, back now!
<babbageclunk> Yes, you can use git rebase --onto
<babbageclunk> (assuming this is still a problem) so something like `git rebase --onto upstream/develop~2 upstream/develop branch`
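babbageclunk's recipe can be tried safely in a throwaway repo. A toy, runnable demo (all branch and file names here are for the demo, not the juju repo): `git rebase --onto A B branch` replays the commits in `B..branch` onto `A`, here moving a feature commit from develop's tip back onto develop~2.

```shell
# Toy demo of `git rebase --onto`: re-seat 'feature' two commits back.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
git checkout -qb develop
for n in 1 2 3; do echo "$n" > f; git add f; git commit -qm "develop-$n"; done
git checkout -qb feature
echo feat > g; git add g; git commit -qm "feature-1"
# feature currently sits on develop-3; replay develop..feature onto develop-1
git rebase -q --onto develop~2 develop feature
git log --format=%s    # feature-1, then develop-1
```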
<manadart> stickupkid: Care to take a look at https://github.com/juju/juju/pull/8933 for me?
<stickupkid> manadart: sure
<stickupkid> manadart: LGTM
<manadart> stickupkid: Ta.
<stickupkid> I'm finding the schema for describing credentials a bit of a pain, I want it to be strict, but I need optional to make it work for different scenarios :|
<stickupkid> has anyone else seen this test failure, it's in cmd/juju/machine.TestPackage - issue is there is no stack trace or report
<stickupkid> http://ci.jujucharms.com/job/github-check-merge-juju/2361/
<stickupkid> guess it must be in a tear down?
<stickupkid> manadart: CR for finalizing creds  https://github.com/juju/juju/pull/8938
<jam> stickupkid: Tim seems to remember that you had some desires around how Trust password, etc work with LXD and clusters. Do you remember what you wanted to ask for?
<jam> We're sitting here next to Stephan so its a good time to ask for changes.
<jam> manadart: ^^
<veebers> Morning all
<veebers> cheers babbageclunk, wallyworld fixed the issue while I slept so I don't have to think about it now ^_^
#juju 2018-07-17
<kelvin> anastasiamac, would you mind to take a look this tiny PR plz https://github.com/juju/juju/pull/8941/files  thanks
<anastasiamac> sure
<kelvin> thanks : )
<vino_> hi kelvinliu: quick question regarding ur PR
<vino_> 8941
<vino_> why the change in allfacades.go for v2 in Dev mode is required ?
<vino_> sorry i am missing the background
<vino_> kelvinliu: i just looked into ur PR. i am getting why it's there.
<veebers> Is there a nice way to see if this commit landed in develope too ? https://github.com/juju/juju/pull/8928
<veebers> oh nvm it's not a complete answer yet
<anastasiamac> veebers: but yes :) if u need it in future, i tend to follow the actual commit that was landed by the bot... here it'd be https://github.com/juju/juju/commit/117c2b8274202693ef28b710dddd51c97a9b71f8
<anastasiamac> veebers: at the botom of the description, it'll show what branches/tags it's on
<veebers> anastasiamac: ah so in this case just 2.3?
<anastasiamac> yes :)
<anastasiamac> veebers: and this is how a commit would look like if it has been added in mulitple tags/branches - https://github.com/juju/juju/commit/5d14a9151104a53ebca82d3c148f66d000708282
<veebers> anastasiamac: awesome, thanks for that!
<anastasiamac> veebers: unusual to have a commit in 2.3 only tho... it was committed only 4 days ago... mayb 2.3 has not been merged into 2.4 since...
<anastasiamac> veebers: anytime \o/
<veebers> anastasiamac: perhaps jam is chasing a bug and is looking for some logging details
<anastasiamac> veebers: it actually looks like a bug fix.. so i'd expect it'd be forward-ported
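The GitHub trick anastasiamac shows (the branches/tags listed at the bottom of a commit page) has a local equivalent: ask git directly which branches and tags contain the landed commit. Dry-run sketch (run() only echoes; the sha is the one linked above):

```shell
# Dry-run sketch: check where a landed commit has propagated.
run() { echo "+ $*"; }
sha=117c2b8274202693ef28b710dddd51c97a9b71f8
run git branch -r --contains "$sha"
run git tag --contains "$sha"
```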
<kelvin> anastasiamac, would you help to take a look this one https://github.com/juju/bundlechanges/pull/42 ? thanks
<anastasiamac> kelvin: i might not make it today. is tomorrow k? m looking at ur juju PR at the mo :)
<kelvin> anastasiamac, sure, we can do it tmr thx
<kelvin> anastasiamac, I will need to update bundlechanges dep in juju after the above PR landed.
<anastasiamac> kelvin: ack
<anastasiamac> kelvin: done with juju one ;)
<kelvin> anastasiamac, thanks?!
<kelvin> thanks!
<anastasiamac> kelvin: replied to ur replies... mayb worthwhile talking thru it in person?
<anastasiamac> kelvin: unfortunately, i need to do family time, so mayb tomorrow?
<kelvin> anastasiamac, yup, HO?
<kelvin> anastasiamac, sure
<kelvin> anastasiamac, let's have a chat tmr after standup. thanks
<anastasiamac> kelvin: m planning to be available later on tonight but might b too late for u.. around 10/11pm bne time?
<anastasiamac> kelvin: k
<kelvin> anastasiamac, awesome, thanks, have a good night! : )
<anastasiamac> o/
 * vino_ internet issues. will be fixed soon.
<manadart> jam: I think the trust password works sufficiently well for us. If the cluster is using one, we just need that and the server URL.
<manadart> jam: We generate and upload a new client cert/key with the password, then once auth'd, we get the server cert and add it to the credential.
<manadart> stickupkid may have other particulars...
<stickupkid> and the client cert, we need that, but it seems pretty easy once you align everything
<stickupkid> manadart: did you get time to look at this yet https://github.com/juju/juju/pull/8938?
<manadart> stickupkid: Will do so now.
<stickupkid> manadart: i've got auto detection of existing lxc servers working, I'm just ironing out a few things, but I should be able to push this up as well later...
<stickupkid> manadart: also can you have a look over this - I'm unsure if the effort is worth it, https://github.com/SimonRichardson/juju/pull/1
<manadart> stickupkid: Yep. Reviewed the other one.
<stickupkid> manadart: although it feels like magic, it works really nicely in practice
<stickupkid> manadart: OTTOMH <-- that took longer to work out than it should from your PR - hahaha
<stickupkid> manadart: let me know when you've got 5 minutes to discuss the lxd credential types
<manadart> stickupkid: Yep, just need a few mins.
<manadart> stickupkid: standup HO?
<stickupkid> manadart: sure
<jam> stickupkid: manadart: we've got a bit of time if you're around
<jam> stickupkid: manadart: Tim's feedback is that if it isn't hard to backport cluster support to 2.4.* then he encourages you to do so.
<jam> I thought most of 2.4 got the new LXD codebase, didn't it?
<manadart> jam: Not so much of the provider.
<manadart> jam: But nearly all of container/LXD should be common.
<jam> manadart: do you feel it is high risk to backport?
<hml> guild: 10 min.  :-)
<manadart> jam: No. But I just recently reinstated tools/lxdclient tests there to backport constraints.
<manadart> manadart: Little bit divergent now, but the new code is more cogent and efficient.
<jam> manadart: why is that?
<manadart> jam: A bunch of tests were commented while we were strangling out the old LXD client and bringing in the new.
<jam> manadart: would it be reasonable for you to target a 2.4.2 backporting the 2.5 code?
<jam> I don't think we'd backport to 2.3, so we don't get LTS support of clustering, but I think that would probably be a lot more of a stretch/risk.
<manadart> manadart: I think so. If I do a revert of my changes as a commit, we should be able to cherry pick everything backwards.
<manadart> Oops tagged myself. Then we can work off 2.4 and roll into dev as normal.
<jam> hml: is rick still gone?
<manadart> About the fork in branches - The 2.4 release freeze came midway through the work, then it proceeded in dev and the old stuff was all thrown out.
<jam> We were wanting to check in about where 2.4.1 would be at
<manadart> Rick is still out.
<jam> manadart: understood. Yeah, its one of those "how much are we freezing the interfaces for 2.4". I know we talked about doing a faster 2.5 release based on Bionic support.
<jam> and targeting something like 18.04.1 as Bionic default.
<jam> manadart: but chatting with Tim it seems that he's thinking 2.5 close to Cosmic, and that means we want to have Cluster support released significantly before that.
<jam> so 2.4.2
<jam> hml: manadart: is anyone else looking at shepherding a 2.4.1 release?
<jam> There was a discussion that the Openstack multi-network stuff was supposed to be in 2.4.1 ?
<hml> jam: not to my knowledge
<hml> jam: not sure who is shepherding it
<jam> hml: can we ask you guys to think about it at standup today?
<hml> jam: sure
<hml> jam: are we waiting for anything else with 2.4.1?
<jam> hml: offhand the only thing we feel is waiting is the Openstack fixes, which I thought I saw patches for.
<jam> we discussed whether we wanted to have my patch around pruning size, but we want to land that earlier in a cycle and have some experience with it.
<jam> the other bugs on https://bugs.launchpad.net/juju/+milestone/2.4.1 don't look to be blocking, and could be rolled to 2.4.2
<jam> manadart: isn't your patch for https://bugs.launchpad.net/juju/+bug/1780274 ? It doesn't show in progress or have a link to your PR
<mup> Bug #1780274: [2.4] add model produces error code when overlapping subnets exist <cpe-onsite> <juju:Triaged by manadart> <https://launchpad.net/bugs/1780274>
<manadart> jam: One sec.
<manadart> https://github.com/juju/juju/pull/8937
<manadart> jam: Are you looking to land either of your PR is the "review" lane for 2.4.1?
<jam> manadart: bug #1776673 landed in 2.3, I think. I think we can bring that forward to 2.4. I don't know if I would block on it, though it probably would be nice to get into 2.4
<mup> Bug #1776673: txn suddenly went missing in a txn queue <juju:In Progress by jameinel> <juju 2.3:In Progress by jameinel> <juju 2.4:In Progress by jameinel> <https://launchpad.net/bugs/1776673>
<magicaltrout> cory_fu_: which reactive charms follow the endpoint relation pattern we can look at, i'm trying to talk rmcd through relation building and we're sat in different countries
<magicaltrout> so we've been looking at the mysql relation but the charm itself isn't reactive
<magicaltrout> or stub or anyone else who knows stuff about endpoint relations
<jam> manadart: putting up a PR now
<jam> manadart: https://github.com/juju/juju/pull/8942/files brings 2.3 into 2.4
<jam> manadart: bug #1771906 is the one we want to wait until 2.4.1 goes out and then land early in 2.4.2
<mup> Bug #1771906: txnpruner failing to prune on large controller <mongodb> <pruning> <juju:In Progress by jameinel> <juju 2.4:In Progress by jameinel> <https://launchpad.net/bugs/1771906>
<manadart> jam: OK.
<hml> jam: we have a card for the 2.4.1 with an assignee
 * jam away for next meeting
<jam> hml: great.
<jam> hml: is your self-signed openstack cert patch landing/landed/
<jam> ?
<jam> brb
<hml> jam: no - not near ready - having issues with the openstack
<cory_fu_> magicaltrout: It might be overkill, but you could look at https://github.com/juju-solutions/interface-gcp-integration and the corresponding charm code: https://github.com/juju-solutions/charm-gcp-integrator/blob/master/reactive/gcp.py#L52 and https://github.com/kubernetes/kubernetes/blob/master/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py#L1595
<cory_fu_> magicaltrout: You could also look at the PR for converting the http interface which keeps it backwards compatible: https://github.com/juju-solutions/interface-http/pull/10
<cory_fu_> magicaltrout: stub also has https://git.launchpad.net/interface-cassandra/tree/requires.py
<manadart> stickupkid: https://github.com/juju/juju/pull/8943
<jam> hml: k. Tim was thinking that should go into 2.4.1. I'll bring it up to him
<magicaltrout> thanks cory_fu_ much appreciated
<stickupkid> manadart: done - looks nice and clean
<thumper> hey team
<jam> manadart: your backport of 2.4 for tools/lxd_client messes up merging 2.4 into 2.5 again... :(
<jam> hm. about 10 conflicts of 2.4 into 2.5 after I just merged it a couple days ago.
<jam> everyone's been busy. :)
<manadart> jam: As far as that one goes, you can turf everything 2.4 and take 2.5.
<jam> hi thumper
<stickupkid> hey thumper
<jam> externalreality: Tim and I agree we shouldn't block 2.4.1 for hml's openstack cert if it is going to take a while to validate.
<manadart> jam: Commented on PR 8937. Net effect is really not to discover subnets when there is no single network resolution. hml has approved. Please look when you have a mo' and we can land for 2.4.1.
<stickupkid> manadart: https://github.com/juju/juju/pull/8945
<stickupkid> this ensures that we have the right API version
<hml> externalreality: the refactor to common is here: https://github.com/juju/juju/pull/8947
<hml> babbageclunk: ping
<domc> Hi everyone, I'm very new to Juju charms, just followed instructions to install everything and create my fist charm with "charm create vanilla" but getting a Python error: TypeError: a bytes-like object is required, not 'str'. Anyone run into this?
<veebers> looks like I'm starting from scratch for this resource test, NewResourceState takes a *State, whats a good test to crib off where I don't need to use connsuite?
#juju 2018-07-18
<babbageclunk> veebers: do we need to be able to build on go 1.8 any more? Is it safe for me to use a type alias?
<babbageclunk> I don't absolutely need it, would just simplify some tests I'm writing that need to pass around a lot of funcs taking map[corelease.Key]corelease.Info.
<anastasiamac> babbageclunk: m not veebers, but I do not think so... should 1.10 be backward compatible anyway? i would have thought we only have 1 mayb 2 releases in 1.9 like 2.1 and mayb 2.2 ...
<babbageclunk> anastasiamac: well, I'm trying to use syntax that was introduced in 1.10, and I *vaguely* remember someone needing to remove a type alias for some reason... not sure whether it's still needed.
<anastasiamac> babbageclunk: ah.. ur memory is better than mine :)
<babbageclunk> oh sorry, introduced in 1.9
<anastasiamac> babbageclunk: my first feeling is "no, don't worry about prev go vesions" :) we r on 1.10, developing for 1.10 and ci uses 1.10 as well...
<veebers> babbageclunk: sorry I'm AFK for the arvo. Um yeah I agree with anastasiamac go with 1.10, especially on develop. I can't think of a reason to need 1.8 compat at this stage
<babbageclunk> veebers: oh right, forgot sorry! Thanks both of you - I'll use it then. Worst case, I can remove it if needed later.
<stickupkid> manadart: when you've got a sec https://github.com/juju/juju/pull/8945
<manadart> stickupkid: Commented.
<stickupkid> manadart: so rick noted that he wanted to see this, but having done it, I'm with you, I just think it was a safety net that said if you wanted clustering, then we can make sure that we're talking the same thing
<manadart> stickupkid: If your server is clustered, then its API supports clustering (> 3.0).
<manadart> I'd put it on ice and talk to Rick when he's better.
<stickupkid> manadart: i think this is superfluous in the end; by its very nature the underlying check will return false for everything < 3.0
<manadart> stickupkid: Ja.
<stub> cory_fu: I'm not sure if this is the same issue I saw before?  https://pastebin.ubuntu.com/p/XrByccMWyd/ . I think here the relation is gone (it is during teardown), and the endpoint lookup returns None, and it falls over
<stub> Actually, the failure is in the install hook so the relation certainly doesn't exist at that point
<stub> cory_fu: https://pastebin.ubuntu.com/p/fR8kjtvJQT/ does work, and  is better anyway. But it would be nice if we could handle this better.
<stub> Would be ugly though.
<stub> some special case rule 'if the handler accepts a single, required parameter and the relation used for RelationBase style handler invokation is None, skip calling the handler'
<stub> Or fail better if silently skipping isn't an option.
<stub> Maybe better for endpoint_from_state to never to return None if a suitable relation is defined, even if it hasn't been joined yet.
<stickupkid> Dmitrii-Sh: your PR failed with ProxyUpdaterSuite.TestSnapProxyConfig
<stickupkid> link http://ci.jujucharms.com/job/github-check-merge-juju/2412/consoleFull
<Dmitrii-Sh> stickupkid: ok, I will check the test out and modify it
<Dmitrii-Sh> stickupkid: should be easy enough
<jam> externalreality: manadart: now that we have a hash, you're building the snap into the 2.4 or latest candidate channel?
<manadart> jam: I believe it will go to candidate channel in the 2.4 track.
<domc> Hello, I've installed charm 2.3.0 and charm-tools 2.2.4 and I'm trying to create a simple charm using command "charm create vanilla", is that supposed to still work? I am unable to get it to work, I keep getting "TypeError: a bytes-like object is required, not 'str'" in /snap/charm/158/lib/python3.5/site-packages/charmtools/templates/reactive_python/template.py
<Dmitrii-Sh> stickupkid: just modified the PR
<stickupkid> Juju cloud instructions for setting up a LXD Cluster can be found here: https://discourse.jujucharms.com/t/setting-up-lxd-cluster-as-a-juju-cloud/73
<rick_h_> stickupkid:  woot!
<stickupkid> rick_h_: manadart spent some time cleaning it up and verifying it for me - so thanks to him :D
<rick_h_> bdx: magicaltrout cory_fu ^
<rick_h_> zeestrat: ^
<rick_h_> Ty too manadart. Solid team effort
<rick_h_> So awesome
<cory_fu> Nice
<rick_h_> kwmonroe: as well ^
<rick_h_> Any chance to bang on it, feedback welcome
<Dmitrii-Sh> stickupkid: could you trigger another build? :^)
<stickupkid> Dmitrii-Sh: sure
<Dmitrii-Sh> thx
<Dmitrii-Sh> stickupkid: looks like the test went well, thanks again
<bdx> stickupkid, mandart: that is just awesome! thanks!
<babbageclunk> hey hml, did you see my answer about the featureflag worker?
<hml> babbageclunk: i didâ¦
<hml> babbageclunk: have to circle back to see what can be done for a non controller machine agent
<babbageclunk> hml: you signed out before I could say, I don't think it would be *too* hard to change it to work via the api
<hml> babbageclunk: hmm
<babbageclunk> hml: the main difficulty is just the mismatch between the state.NotifyWatcher and watcher.NotifyWatcher (I think)
<hml> babbageclunk: would it require a new featuretag manifold type?  or just modifications to the current?
<babbageclunk> I think you could paper over the differences in the manifold with only minor changes to the ConfigSource interface. The manifold would need to switch between different input sources - either the APICaller or State, and wrap them differently to be ConfigSources.
<babbageclunk> hml: want to have a hangout to chat about it?
<hml> babbageclunk: can we do it tomorrow?  my brain is on other topics right now.  :-)
<babbageclunk> sure sure
<hml> babbageclunk: ty
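[Editor's note: babbageclunk's suggestion of papering over the mismatch between the two watcher flavours in the manifold can be sketched roughly as follows. This is a minimal illustration, not juju's actual API; `ConfigSource`, `WatchFlags`, and both source types are hypothetical names, and the real state/api watchers differ in their `Stop()` vs `Kill()`/`Wait()` lifecycles as shown.]

```go
package main

import "fmt"

// ConfigSource is the narrow interface a manifold could consume,
// regardless of whether feature flags come from State or over the API.
type ConfigSource interface {
	WatchFlags() (changes <-chan struct{}, stop func() error)
}

// stateSource mimics a state-backed watcher with a Stop() error lifecycle.
type stateSource struct{ ch chan struct{} }

func (s *stateSource) Stop() error { close(s.ch); return nil }

func (s *stateSource) WatchFlags() (<-chan struct{}, func() error) {
	return s.ch, s.Stop
}

// apiSource mimics an API-backed watcher with a Kill()/Wait() lifecycle,
// wrapped so it presents the same ConfigSource shape to the manifold.
type apiSource struct{ ch chan struct{} }

func (a *apiSource) Kill()       { close(a.ch) }
func (a *apiSource) Wait() error { return nil }

func (a *apiSource) WatchFlags() (<-chan struct{}, func() error) {
	return a.ch, func() error { a.Kill(); return a.Wait() }
}

// describe exercises a source the way a manifold worker might.
func describe(src ConfigSource) string {
	_, stop := src.WatchFlags()
	if err := stop(); err != nil {
		return "stop failed"
	}
	return "stopped cleanly"
}

func main() {
	fmt.Println(describe(&stateSource{ch: make(chan struct{})}))
	fmt.Println(describe(&apiSource{ch: make(chan struct{})}))
}
```

The manifold then only switches on which concrete source to construct; everything downstream sees one interface.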
#juju 2018-07-19
<babbageclunk> gah, all my tests pass but only when I put in the magic number 3, and I can't work out why!
<veebers> babbageclunk: where is the magic number put?
<babbageclunk> veebers and anastasiamac: In the call to clock.WaitAdvance - it's the number of things that should be waiting for the clock before we move time forward. But I think it should be 2, and the computer insists it's 3!
<veebers> babbageclunk: you're not off by one? zero base vs 1 base
<anastasiamac> babbageclunk: the last time i've encountered something like that was with channels and loops...
<babbageclunk> maybe you're off by one!
<babbageclunk> Sorry, that was uncalled for.
<babbageclunk> anastasiamac: yeah, I've definitely got those.
<veebers> hah ^_^
<anastasiamac> babbageclunk: in my case i had to send my changes twice even tho i only wanted one change... turned out the initialisation was an issue in workers...
<babbageclunk> anastasiamac: yeah, it's bound to be something like that - I'm about to hack the clock to store tracebacks so I can see where all the waiters are coming from.
<anastasiamac> babbageclunk: sounds good... keep us posted if u find the reason or if u need to talk at someone
<stickupkid> does anyone know what the `cmd/plugins/juju-metadata` is for?
<Dmitrii-Sh> Hi, if anybody has time to triage pad.lv/1782236 and review the PR for it https://github.com/juju/juju/pull/8949 I'd really appreciate it
<anastasiamac> stickupkid: yes, it's for images
<manadart> Dmitrii-Sh: I will take a look today.
<Dmitrii-Sh> manadart: thx
<manadart> stickupkid: Need a review. Shame I didn't get this in for 2.4.1: https://github.com/juju/juju/pull/8952
<stickupkid> manadart: just needs a code comment to be addressed
<manadart> stickupkid: Done.
<stickupkid> manadart: done
<stickupkid> LGTM
<manadart> stickupkid: Pushed another commit. Lock doesn't need a method wrapper.
<stickupkid> manadart: i quite liked that, but up to you
 * stickupkid gets auto credentials working without all the hacks - yessss!
<manadart> stickupkid: Was also thinking re: the Discourse page. Would it not be simpler to have instructions for the credential use the interactive CLI rather than the file?
<manadart> Seems to me it would be simpler - don't have to copy certs into another file.
<stickupkid> Potentially, I don't mind, we can look into that... there are two methods, both work...
<stickupkid> we could just make an amendment tbh
<manadart> I thought to do that...
<stickupkid> that way you can also improve your discourse karma :p
<manadart> Dmitrii-Sh: Reviewed/triaged. Just add the test and I'll get it landed.
<Dmitrii-Sh> manadart: thanks, will
<Dmitrii-Sh> will do*
<bdx> https://imgur.com/a/JndWKxu
<bdx> the outdated and now entirely removed docs still show up when users search for juju* it seems
<bdx> I have new users telling me this often
<veebers> yay, with the testing streams generation it sets all the 'updated' entries to now as it regenerates, which means I can't easily expire old entries
#juju 2018-07-20
<stickupkid> manadart: did you look into the multiple container IPs issue yesterday...?
<manadart> stickupkid: Which issue is that?
<stickupkid> where you sometimes get a container IP that's not valid, or at least you can end up with multiple addresses
<manadart> You mean what I was getting from a new MAAS cluster? No.
<stickupkid> yeah, I never tracked it down, but it was annoying, retrying seemed to solve it though
<manadart> I am going to set up the cluster again.
<stickupkid> let me know if you need any help debugging
<stickupkid> manadart: if you've got a second https://github.com/juju/juju/pull/8946
<manadart> stickupkid: Just have to duck out for 30 mins or so, then will review.
<stickupkid> manadart: take your time, I'm hunting down the test that keeps failing
<stickupkid> it's in "github.com/juju/juju/cmd/juju/machine", but jenkins doesn't actually tell you which one fails :sigh:
<stickupkid> so it must be in a tear down
<stickupkid> manadart: updated CR for trust-password only lxd sign on https://github.com/juju/juju/pull/8955
<rick_h_> stickupkid: can you make sure to update the discourse post as new bits like that hit edge and make sure to ping pmatulis  when you do please?
<stickupkid> rick_h_: sure will, nothing landed just yet though :)
<rick_h_> stickupkid: yep just thinking one step ahead
<stickupkid> rick_h_: :D
<rick_h_> stickupkid: I'm trying to think of how that might turn into the starter docs work for pmatulis basically
<stickupkid> yeah, that would be good
#juju 2018-07-22
<dmdooyy> hi,
<dmdooyy> anybody get these errors when bootstrapping
<dmdooyy> ERROR authentication failed.  Please ensure the credentials are correct. A common mistake is to specify the wrong tenant. Use the OpenStack "project" name for tenant-name in your model configuration.
<babbageclunk> veebers: looks like that github https issue might be better - at least, rebuilding my PR's gotten past downloading deps from github.
<veebers> babbageclunk: excellent, I'm not sure if I remembered to CC in the team for the RT as IS were looking into it as well
#juju 2019-07-15
<anastasiamac> a review plz https://github.com/juju/juju/pull/10439 :D
<hpidcock> anastasiamac: looking
<anastasiamac> hpidcock: \o/ tyvm
<hpidcock> anastasiamac: LGTM just added a few comments https://github.com/juju/juju/pull/10439
<anastasiamac> hpidcock: thnx, i've addressed these minor comments. could u plz actually +1 the PR?
<anastasiamac> hpidcock: LGTM on irc but not on the PR is not enough :D
<hpidcock> anastasiamac: LGTM for real this time
<anastasiamac> hpidcock: haha ! thnx ... :)
<hpidcock> anyone know how I can get access to the model type/cloud type from the uniter? Is CloudSpec on the uniter facade the only way?
<stickupkid> hpidcock, are you wanting to know exactly the type/cloud, or do you not care?
<hpidcock> just want to know if it's running in CAAS or IAAS
<stickupkid> hpidcock, look at the remotestate package in uniter, you may have to thread an API call to get the information you want
<hpidcock> stickupkid: thank-you
<stickupkid> jam, https://github.com/juju/python-libjuju/pull/321 - can you look over this as i've re-written code generation in pylibjuju
#juju 2019-07-16
<timClicks> hpidcock: hey I might get your advice on some go codegen stuff if you have a minute?
<hpidcock> sure, want to HO?
<timClicks> cool
<timClicks> the file I'm looking at is https://github.com/juju/juju/blob/a5ab92ec9b7f5da3678d9ac603fe52d45af24412/provider/vsphere/internal/vsphereclient/ovf_ubuntu.go
<timClicks> I'm concerned that we're hard coding xenial in there
<timClicks> it's built from https://github.com/juju/juju/blob/a5ab92ec9b7f5da3678d9ac603fe52d45af24412/provider/vsphere/internal/vsphereclient/ubuntu.ovf
<timClicks> hpidcock: the VM we're trying to create has the same MAC address as the tmp VM that creates it "
<timClicks>     The static MAC address (00:50:56:04:71:0c) of juju-4e9d86-0 conflicts with MAC assigned to juju-4e9d86-0.tmp
<timClicks> "
<hpidcock> Is that just a side effect of not deleting the tmp vm?
<timClicks> trying to figure that out
<hpidcock> Ideally the provider should: "Download the OVA", "Upload the OVA to the datastore", "Create a VM from the OVA", "Update tuneables like mem/cpu", "Extend vmdk", "Start"
<hpidcock> possibly the reason it mutates the ovf before creating the vm, is that vsphere might have some bin-packing algorithm maybe to find a suitable esxi host
<timClicks> I think the VM cloning operation also clones the MAC address as it's been provided in "hardcoded" form by Juju
<manadart> hml: That WiP: https://github.com/juju/juju/pull/10442. Still a few packages to get green; diff will be > 100 files I think.
<hml> manadart: ta
#juju 2019-07-17
<achilleasa> jam: can you take a look at https://github.com/juju/charm/pull/284? It's a bit long but I think the majority of the lines are comments/tests
<hml> manadart:  reviewing pr: the intention is for the new default space to be independent of the maas default space?
<manadart> hml: I think we need to vet that idea, but that's what I am thinking. Our default space abstracts away any default/undefined/etc concept from providers.
<hml> manadart: what will the juju default space do in a maas config then?
<manadart> hml: Not sure what you mean. Example?
<hml> manadart: ho?
#juju 2019-07-18
<anastasiamac> a trivial review plz - https://github.com/juju/juju/pull/10445
<manadart> Need a review for adding space ID to juju/description: https://github.com/juju/description/pull/56
<hpidcock> anastasiamac: looking
<hpidcock> anastasiamac: LGTM just two small changes please.
<hpidcock> manadart: LGTM one non-critical comment.
<anastasiamac> hpidcock: thnx i'll look but it was literally just moving existing code into callable funcs
<hpidcock> the comments were on new code from what I could see in the diff.
<hpidcock> maybe changes from another PR, since we have such a huge backlog
<anastasiamac> hpidcock: no, all good.. english is important and u commented on my msg modification and var name.. good calls both
#juju 2019-07-19
<anastasiamac> kelvinliu_: could u PTAL https://github.com/juju/juju/pull/10447
<anastasiamac> double plz :)
<kelvinliu_> yep looking
<anastasiamac> kelvinliu_: \o/
<anastasiamac> kelvinliu_: thnx for review. I've modified upload slightly - if there is no remote cloud, we guide users to add it instead of trying to upload and err... happy with the modification? :D
<anastasiamac> kelvinliu_: nm
<manadart> Thanks hpidcock.
<hpidcock> Np
<anastasiamac> manadart: the funny thing about .maasrc is that maas team don't have quick reference to its format either :) m waiting for them to find it... :D
<manadart> anastasiamac: They dev in LP too, which doesn't have the search convenience of GitHub. Could always pull it down I guess...
<kelvinliu_> anastasiamac: sorry, just back from lunch. looks good to me. thx
<anastasiamac> \o/
<anastasiamac> a simple, trivial review plz :D https://github.com/juju/juju/pull/10448
<hpidcock> anastasiamac: looking
<hpidcock> anastasiamac: looks great. weird it's formatted as yaml since I'd expect it to be formatted nicer for help output, but it's not json so there is that.
<hpidcock> I like that it's there though, nice to know what the config I can specify actually is haha
<stickupkid> hml, Approved https://github.com/juju/juju/pull/10444#pullrequestreview-264229467
<hml> stickupkid: ta
#juju 2020-07-13
<wallyworld> kelvinliu: trvial PR - we've had an outage on 2.7 due to  a nil pointer https://github.com/juju/juju/pull/11823
<kelvinliu> wallyworld:  just went to get some food, looking now
<wallyworld> kelvinliu: trivial forward port of the previous 2.7 fix https://github.com/juju/juju/pull/11825
<kelvinliu> approved
<wallyworld> ty
<kelvinliu> np
<manadart_> achilleasa stickupkid: need a tick on https://github.com/juju/juju/pull/11820
<stickupkid> fyi: I never think empty is good for an unknown value https://github.com/juju/juju/pull/11820/files#diff-78b22e8095fbb037f2aea902e906cfb5R41
<achilleasa> stickupkid: it's probably the same if you are using iota. I've also seen the empty value for enum-like definitions being unexported and only used in the package for sanity checking
<stickupkid> I just think it should be a value
<stickupkid> error if it's not
<achilleasa> unknown is a value ;-)
<stickupkid> "" isn't ;p
<achilleasa> technically, it's a typed empty value :D
<stickupkid> ha
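[Editor's note: the two conventions debated here can be sketched in Go. A minimal illustration with hypothetical names (`Value`, `ParseValue`, `Alpha`, `Beta`): an explicit "unknown" member (stickupkid's preference) combined with a parser that errors on unrecognised input, so the typed empty string `""` never leaks through silently.]

```go
package main

import "fmt"

// Value is a string-backed enum.
type Value string

const (
	Unknown Value = "unknown" // explicit sentinel instead of the empty string
	Alpha   Value = "alpha"
	Beta    Value = "beta"
)

// ParseValue errors on anything it does not recognise, including "",
// so an unset value can never masquerade as a valid member.
func ParseValue(s string) (Value, error) {
	switch v := Value(s); v {
	case Alpha, Beta:
		return v, nil
	default:
		return Unknown, fmt.Errorf("unrecognised value %q", s)
	}
}

func main() {
	v, err := ParseValue("alpha")
	fmt.Println(v, err)
	_, err = ParseValue("")
	fmt.Println(err)
}
```

With iota-based integer enums the same trade-off applies: the zero value is either named `Unknown` or treated as invalid by the parser.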
<stickupkid> achilleasa, this fixes the 2.8 ci integration failures https://github.com/juju/juju/pull/11826
<achilleasa> stickupkid: looking
<achilleasa> Can I get a CR and QA on the manual cleanup script backport? https://github.com/juju/juju/pull/11827?
<achilleasa> and a cherry-pick for 2.8 https://github.com/juju/juju/pull/11828
<achilleasa> manadart_: I think I found a (hacky) workaround for getting the info we need for the subnets on bionic; as the controller instance will be already up, we can query its state (that api works on bionic) and extrapolate the subnet data. I think we won't be able to figure out the parent bridge though which is probably fine I guess
<petevg> hml: I'm working on QA for https://github.com/juju/juju/pull/11822, btw.
<hml> petevg: ty
<petevg> np!
#juju 2020-07-14
<timClicks> Is there a minimum Kubernetes version required by Juju (not charms)?
<tlm> timClicks: we don't have one defined, but it would be based on our API usage
<tlm> would have to go through the code and work it out
<timClicks> I can probably do that myself
<kelvinliu> wallyworld: got this pr for adding startupProbe https://github.com/juju/juju/pull/11830  +1 plz
<SpecialK|Canon> What's the "opposite" of `juju deploy` please? I'm trying to clean up after `juju deploy cs:~juju/mariadb-k8s` and, well, it's not `destroy` or `rm` or `undeploy`...
<stickupkid> juju remove-application
<SpecialK|Canon> stickupkid: thanks!
<achilleasa> manadart_: can you take a look at https://github.com/juju/juju/pull/11831? Try to see if you can break it while QAing
<manadart_> achilleasa: Yep.
<stickupkid> manadart_, added tests https://github.com/juju/python-libjuju/pull/430
<manadart_> stickupkid: Thanks. It's already approved.
<manadart_> achilleasa: https://github.com/juju/juju/pull/11832
<achilleasa> manadart_: since subnets are stored per model, the upgrade step for the ports collection would have to do N space reloads, followed by a per-machine operation... that seems wasteful for large controllers
<manadart_> achilleasa: Yes, and we want to change it to avoid per-model repetition. One of many things we'll get to one day.
<manadart_> Not sure what we can do here though. Need to think about it.
#juju 2020-07-15
<manadart_> achilleasa: Some more link-layer mechanics: https://github.com/juju/juju/pull/11833
<achilleasa> manadart_: got some questions on ^^^
<stickupkid> hml, charmhub url model-config https://github.com/juju/juju/pull/11834
<manadart_> achilleasa: I pushed changes that I think address the feedback, but GH is not reflecting the new commits yet...
<achilleasa> manadart_: I will take a look once the commits appear and approve it
<achilleasa> manadart_: can you try to push again? Still don't see the new commits
<stickupkid> hml, charmhub cli formatting options https://github.com/juju/juju/pull/11835
<hml> stickupkid: ack - iâll look at both this afternoon
<hml> :-)
<stickupkid> no rush
<stickupkid> back on model charm origin tomorrow
#juju 2020-07-16
<achilleasa> any ideas why the multiwatcher (core/multiwatcher/types.go) seems to track both Ports and PortRanges? Is it safe to drop Ports and emulate a PortRange -> []params.Port operation for legacy API calls?
<stickupkid> achilleasa, what happens if you have holes in the PortRange, is that what Ports can do differently?
<achilleasa> stickupkid: we don't seem to be storing ports ATM and everything ports-related has deprecation warnings (see core/network/ports.go)
<achilleasa> I am considering introducing extra functionality (in a separate PR though) to do port slicing, e.g. open 100-200/tcp, close 150/tcp; result: 100-149/tcp, 151-200/tcp
<stickupkid> achilleasa, seems logical
<achilleasa> plus all the allwatcher bits completely ignore subnets which is a pain
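[Editor's note: the port-slicing operation achilleasa describes (open 100-200/tcp, close 150/tcp, leaving 100-149/tcp and 151-200/tcp) can be sketched as below. `PortRange` and `subtract` are illustrative names, not juju's actual types.]

```go
package main

import "fmt"

// PortRange is an inclusive from-to range on one protocol.
type PortRange struct {
	From, To int
	Protocol string
}

// subtract removes the closed range from r, returning the 0, 1 or 2
// ranges that remain after the slice.
func subtract(r, closed PortRange) []PortRange {
	if r.Protocol != closed.Protocol || closed.To < r.From || closed.From > r.To {
		return []PortRange{r} // no overlap, nothing to remove
	}
	var out []PortRange
	if closed.From > r.From {
		out = append(out, PortRange{r.From, closed.From - 1, r.Protocol})
	}
	if closed.To < r.To {
		out = append(out, PortRange{closed.To + 1, r.To, r.Protocol})
	}
	return out
}

func main() {
	open := PortRange{100, 200, "tcp"}
	fmt.Println(subtract(open, PortRange{150, 150, "tcp"}))
}
```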
<stickupkid> manadart_, achilleasa should we be using API results for the CLI, we have done in the past, wondering if we should in the future?
<stickupkid> I'm on about this stuff https://github.com/juju/juju/blob/develop/api/charmhub/data.go#L107-L158
<stickupkid> the issue I have with it, is that we have to encode yaml tags on to types for view only scenarios
<manadart_> stickupkid: I guess it's less worse if the lib is ours, but technically we should have display types.
<stickupkid> manadart_, exactly my thoughts, but no where in juju does this
<manadart_> stickupkid: Isn't there generation/branch and space stuff that has display types?
<stickupkid> let me check
<stickupkid> if so that's precedent enough for me
<stickupkid> then we have vertical boundaries and nice code separation
<stickupkid> manadart_, pointer?
<manadart_> See core/model/generation.go. Those are actually transformed prematurely and returned by the API client, but they are for display.
<stickupkid> manadart_, quick ho?
<manadart_> And cmd/juju/space/show.go
<gnuoy> I'm looking at a charm which provides a service to multiple applications in multiple models via cross model relations. Would I be right in saying there is no way for the charm to say which relations come from the same remote model ?
<stickupkid> hml, charmhub url model-config is ready for review https://github.com/juju/juju/pull/11834
#juju 2020-07-17
<wallyworld> kelvinliu: if you have any time this arvo, otherwise can wait til next week https://github.com/juju/juju/pull/11838
<kelvinliu> wallyworld: yep, will have a look later
<kelvinliu> wallyworld: so will existing dev path entries be replaced with UUID one?
<wallyworld> kelvinliu: yeah
<wallyworld> if the uuid is known
<kelvinliu> wallyworld: lgtm, if tested on existing applications with storage.
<wallyworld> kelvinliu: ty
<kelvinliu> np
<manadart_> stickupkid or achilleasa: https://github.com/juju/juju/pull/11839
<stickupkid> manadart_, you need to rebase to trigger the github actions
<manadart_> stickupkid: Done.
<manadart_> Need a forward merge review: https://github.com/juju/juju/pull/11841
<stickupkid> hml, I don't think we should update the error message for the invalid endpoint, because we're not using live data yet. i.e. do we get a 404 for `juju info invalid`
<stickupkid> hml, if that's the case, then we'll need to do something else to validate the url
<hml> stickupkid: it was from google.com
<hml> so passes url validity checks, but… can't do anything useful
<stickupkid> hml, quick ho/
<stickupkid> ?
<hml> stickupkid: sure
#juju 2020-07-18
<mirek186> hi, can someone help me out with vault hacluster?
<mirek186> It looks like it is in the cluster but there is no pacemaker resource defined, and I can't access it via the VIP either. I don't know much about vault HA, so it's hard to say whether it should have any hacluster extra resources assigned, or haproxy or something
<mirek186> has anyone deployed openstack with vault using hacluster?
