#juju 2012-06-25
<lifeless> https://juju.ubuntu.com/docs/faq.html#is-it-possible-to-deploy-multiple-services-per-machine might need updates
<lifeless> links at the bottom of https://juju.ubuntu.com/docs/faq.html are broken, obsolete too - https://www.jujucharms.com/ and https://juju.ubuntu.com/kanban/dublin.html
<lifeless> is there an elasticsearch charm ?
<imbrandon> yea
<imbrandon> its not in the store yet
<imbrandon> but its in the review queue
<imbrandon> not tried it myself
<imbrandon> but i think jorge may have
<imbrandon> he was talking about it
<imbrandon> there are elb and rds ones as well
<imbrandon> both need a tiny bit of polish but function
<imbrandon> if you can handle rough edges :)
<imbrandon> lifeless: ohh i read that as elasticache
<imbrandon> not elasticsearch
<imbrandon> no i dont think so
<imbrandon> not that i have seen, tho i haven't looked specifically
<lifeless> imbrandon: I would love ones for elasticsearch and opentsdb; it may be time for me to get into it :.
<imbrandon> btw i found what looks to be all the info i need for the osx stuff earlier , no time to go over it yet but it looked completish
<imbrandon> so ty
<imbrandon> hehe yea, elasticsearch would likely be a good starter charm
<imbrandon> as its probably going to be simple. most external services that are charmed are
<imbrandon> you could see my newrelic charm or clint's elb charm as examples
<imbrandon> of two and they are both pretty simple
<imbrandon> well mine is VERY simple , like 10 lines of code
<imbrandon> but his is not much more but still relates etc
<imbrandon> but yea, for your first one thats actually a good idea; to a total beginner i might not say that, but you have a solid pragmatic head on you and are semi familiar with it already ( via just interaction is all i mean )
<imbrandon> so i think you should be able to seriously pick it up and even have it done in an hour's time or so
 * imbrandon afk
<lifeless> btw
<lifeless> https://juju.ubuntu.com/docs/user-tutorial.html
<lifeless> fails to talk the user through environments.yaml config
<lifeless> first time I was looking into this stuff, it was -painful- as a result. Fear and confusion all around.
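For reference, the kind of minimal ~/.juju/environments.yaml the tutorial skips over looks roughly like this (a sketch only: the field names match the py-juju EC2 provider of the era, all values are placeholders):

    environments:
      sample:
        type: ec2
        access-key: YOUR-AWS-ACCESS-KEY
        secret-key: YOUR-AWS-SECRET-KEY
        control-bucket: juju-some-globally-unique-s3-bucket
        admin-secret: some-long-random-string
        default-series: precise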
<lifeless> https://juju.ubuntu.com/Charms is a little incoherent on 'starting a new charm' - it says to push before describing how to init-repo etc :> {I figured it out, others may be more puzzled}
<lifeless> hazmat: your branch generates configs like: Acquire::HTTP::Proxy "{'apt-proxy': 'http://192.168.1.102:8080/'}";
<lifeless> hazmat: this isn't quite what you intended, I think :)
<lifeless> hazmat: not sure if its the cloud-init support, or the juju code yet, looking into it.
<lifeless> metadata service claims - 'apt_proxy: {apt-proxy: 'http://192.168.1.102:8080/'}'
<lifeless> so I think juju
<lifeless> definitely: (Pdb) print machine_data
<lifeless> {'apt-proxy': {'apt-proxy': 'http://192.168.1.102:8080/'}, 'machine-id': '0', 'constraints': <juju.machine.constraints.Constraints object at 0x348c190>}
<lifeless> hazmat: also, it looks like you override it if not set, which breaks images with that preconfigured. I will see about fixing that too
<lifeless> hazmat: and done - https://code.launchpad.net/~lifeless/juju/apt-proxy-support/+merge/111763 - for merging into your WIP branch
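For contrast, the intended end state once the nesting bug above is fixed: the generated apt config should carry just the URL (the output path below is illustrative, not necessarily what the branch writes):

    # /etc/apt/apt.conf.d/02juju-proxy -- hypothetical output file
    Acquire::HTTP::Proxy "http://192.168.1.102:8080/";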
<imbrandon> lifeless: coool ( re: the env.y thing, i'll see if i can't get some clarification there and be a bit more verbose about some of the things that yea it looks to barely touch, if at all, even for getting started )
<imbrandon> ty
<imbrandon> if not i'll minimally add a trello task for it so someone will ( hopefully heh )
<lifeless> :)
<lifeless> imbrandon: I haven't gotten to actually doing the charm yet, more yak shaving happened.
<lifeless> imbrandon: is there a howto for charms? e.g. step-by-step including deploying from local tree ?
<imbrandon> 100% complete? i'm not sure or optimistic
<imbrandon> but i do bet that there is a recorded charm school
<imbrandon> that does tho
 * imbrandon looks for where they are stored
<lifeless> thanks
<imbrandon> yea , there are a few listed there it looks like
<imbrandon> https://juju.ubuntu.com/CharmSchool
<imbrandon> in the webinars
<imbrandon> ( skip to the end of the first one to get a list
<imbrandon> of all the current and upcoming ones )
<imbrandon> after a once over they look to be fairly complete
<lifeless> this stuff -really- needs to be in the main docs
<lifeless> since we want folk doing it easily.
<imbrandon> yea , i am working on that , so is jorge and mims
<lifeless> cool
<lifeless> I realise folk are working hard
<lifeless> I guess I'm ensuring that this is on the radar
<imbrandon> trying to move EVERYTHING to the docs
<imbrandon> yop yup
<imbrandon> many big docs improvements are on most of our lists
<imbrandon> i know mine and jorge's for sure, but clint and mims and kapil i know all spend copious amounts of time on them too at times
<imbrandon> heh
<imbrandon> i think this last thursday we had clint, me, kapil, mims, jorge and there was one more, or juan
<imbrandon> all working on them at the same time
<imbrandon> for a good solid 4 to 6 hours
<imbrandon> lots done, still lots to do tho :)
<imbrandon> hehe
<imbrandon> definitely was an interesting time to see that many all at once on a docs setup like we have
<imbrandon> :)
<imbrandon> jujucharms.com/docs is much more updated thanks to that day too compared to the normal docs
<imbrandon> we have an IS ticket in to fix the packages on the one with the main docs but its not completed yet, so they are a week or so outdated, but that week saw a metric ton of updates
<imbrandon> so hopefully someone can fix it up tomorrow ( just needs -backports enabled and then packages updated that are already installed ) [ if you could add a lil weight to get er done quicker that would be awesome and I could fix more faster :P heheh ]
<lifeless> whats the RT # ?
<imbrandon> umm i need to look in the irc back log, clint filed it
<imbrandon> but he told us in the chan about ~18 hours ago
<imbrandon> or so
 * imbrandon looks quickly
<imbrandon> lastlog rt
<imbrandon> bah
<lifeless> aiee
<lifeless> imbrandon: http://jujucharms.com/docs/ is a 403
<lifeless> 'forbidden'
<imbrandon> nice
 * imbrandon pulls the branch to see if its building 
<imbrandon> not sure where the output of the cron build for juju.ubuntu.com/docs is either
<imbrandon> if there is one :)
<imbrandon> but i know its supposed to build them every 15 min on cron and did for a long time till we broke it; we're on sphinx 0.6.4 and need 1.0+ ( 1.1.3 is current )
 * imbrandon checks the build currently uploaded
<imbrandon> thursday we considered "dogfooding" it like mmims brought up ( a good idea i think personally ) to charm juju.ubuntu.com ( and this cleanup too ) and then just redirect the page to it on ec2/hpcloud etc
<imbrandon> heh
<imbrandon> not sure how far it would "actually" fly if it was attempted
<imbrandon> but its worth a thought later :)
<imbrandon> lifeless: http://jujucharms.com/docs/ fixed up
<imbrandon> made a typo in my last commit :) but that is what the official site will also look like as soon as builds resume on a newer sphinx than the Lucid default ships
<imbrandon> :)
<imbrandon> and hopefully shuffling some of the content from juju.ubuntu.com itself to the docs also so its all central in one place
<imbrandon> ( as well as sexy looking too I think, but i'm just a tad biased; definitely easier to read/navigate )
<imbrandon> :P
<lifeless> did you file the rt #?
<imbrandon> clint did
<lifeless> s/file/find/
<imbrandon> not yet
<imbrandon> i am actively looking tho
<lifeless> heh
<imbrandon> :)
<lifeless> ok http://jujucharms.com/docs/write-charm.html is *much* better
<lifeless> kudos
<imbrandon> :) ty
<lifeless> http://jujucharms.com/docs/write-charm.html - uses oneiric
<imbrandon> yea there are still many oneiric refs
<imbrandon> i am afraid to sed them out
<imbrandon> but mostly that can be done
<imbrandon> providers is my next section to hit up hard i think
<imbrandon> lots of pain areas there esp with env.y config options etc
<lifeless> http://jujucharms.com/docs/write-charm.html doesn't know that revision is a separate file yet either.
<imbrandon> nice, so VERY outdated
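For anyone following along, the skeleton that page should be describing looks roughly like this (a hedged sketch: the charm name, package, and hook body are placeholders; the point is metadata.yaml plus the now-separate revision file):

    mkdir -p mycharm/hooks
    cat > mycharm/metadata.yaml <<'EOF'
    name: mycharm
    summary: one-line summary of the service
    description: longer description of what the charm deploys
    EOF
    echo 1 > mycharm/revision          # the separate revision file noted above
    cat > mycharm/hooks/install <<'EOF'
    #!/bin/sh
    set -e
    apt-get -y install some-package    # whatever the charm actually deploys
    EOF
    chmod +x mycharm/hooks/install
    charm proof mycharm                # lint it before deploying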
<bkerensa> SpamapS: I think I borked your merge
<imbrandon> fixing that one now
<imbrandon> bkerensa: nice going slick heh
<imbrandon> bkerensa: j/k , something i can help sort ?
<imbrandon> you're golden, i once-overed the commit and it looks correct, i'll look closer when i make my next docs commit here in a few myself too
<imbrandon> but i'm fairly certain you're solid :)
<sidnei> SpamapS, added a comment on #920454, seems like precise might be missing some patches on libvirt
<_mup_> Bug #920454: juju bootstrap hangs for local environment under precise on vmware <local> <juju:Confirmed> < https://launchpad.net/bugs/920454 >
<SpamapS> sidnei: ahh! good sleuthing.. maybe its already reported in precise?
<sidnei> SpamapS, couldn't find a !juju bug for precise no
<imbrandon> SpamapS: do you know what these correspond to for HPcloud
<imbrandon>     project-id:
<imbrandon>     nova-uri:
<imbrandon>     auth-mode:
<imbrandon> ?
<SpamapS> imbrandon: no, I don't even know what that is. :)
<imbrandon> openstack hpcloud ( not openstack_s3 ) gonna try pure first ( i hope )
<imbrandon> env.y settings
<SpamapS> imbrandon: ohhh
<imbrandon> i pulled them from conf
<imbrandon> but those are the only ones i couldn't match up
<imbrandon> to something i knew
<SpamapS> sidnei: Well I do think thats a libvirt problem, and it looks like there might be a patch
<SpamapS> imbrandon: I'd suspect auth-mode is sort of an enum
<imbrandon> it said proj id is an int, wonder if its the tenant id
<imbrandon> that's an int
<SpamapS> imbrandon: perhaps HPCLOUD will give you those when you get your account?
<SpamapS> I've still never signed up
<imbrandon> yea i got a whole screen of credentials
<SpamapS> tho with the new ostack provider maybe I will :)
<imbrandon> but not sure what they map to, as the names are slightly off, so i was hoping to at least get bootstrap working so i could document them
<imbrandon> heh
<imbrandon> yea i'm trying the ostack provider now
<imbrandon> when you do let me know and i'll pastebin my whole env.y and give ya a head start
<imbrandon> err if
<SpamapS> it shouldn't be this hard
<SpamapS> perhaps the provider needs better docs
<imbrandon> yea
<SpamapS> or HPcloud needs a good slapping if they aren't using the standard terms
<imbrandon> there is -0- now, i'm reading code
<SpamapS> oh wtf?
<SpamapS> nothing in docs?
<imbrandon> well i think they do, i think the provider is off
<imbrandon> its a bit diff than ec2 as well
<imbrandon> but yea
<imbrandon> -0- docs
<sidnei> SpamapS, yes, i agree. so mark the bug as affecting libvirt, invalid in juju? :)
<imbrandon> i'm reading source to figure this stuff out
<SpamapS> sidnei: I'm not 100% sure its invalid in juju, so I'm waiting to do that
<SpamapS> sidnei: there may be a workaround we could apply after all
<sidnei> oki
<imbrandon> SpamapS: http://cl.ly/HdVH
<imbrandon> for your ref too, so you have an idea should you need it in a pinch
<SpamapS> imbrandon: I bet tenant-id is project-id
<imbrandon> you'll likely have to click it to zoom to read anything and it should be safe to not have to worry about hoarding ( got bits blurred )
<imbrandon> kk
<SpamapS> I remember hearing that the name in the API was up for consideration to be renamed
<imbrandon> heh
<imbrandon> that page should stay there indefinitely too if you wanna bookmark that
<imbrandon> for ref later or something
<SpamapS> No I think I'll just sign up :)
<imbrandon> its actually my upload account
<imbrandon> kk
<imbrandon> kool, cuz those other two are optional
<imbrandon> so with that i *should* have a full stanza
<imbrandon> of pure OSAPI on hpcloud
<imbrandon> *crosses fingers and prepares to fail*
<SpamapS> Have they at least moved up from Diablo to Essex yet?
<imbrandon> no idea
<imbrandon> i'm pretty sure its essex
<imbrandon> but dunno how to tell
<imbrandon> i'm pretty ignorant on openstack
<imbrandon> only have the absolute minimum in my head for it so far
<SpamapS> I don't know how to tell either
<imbrandon> really not even  that
<imbrandon> heh
<imbrandon> thats one reason i'm using hp and not rak, zomg their rest api is the suxors
<SpamapS> well, its OSTACK now IIRC
<imbrandon> 8 rest calls to create one mysql db on the new mysql service
<imbrandon> at rak
<imbrandon> nah
<imbrandon> not all of it
<SpamapS> Oh they're not using red dwarf?
<imbrandon> they have joey on half and half , mine only has db access
<imbrandon> then i got a toy acct that only has new access
<imbrandon> yea no, its all helter skelter in their control panel right now, feels like aws but with 2 competing back ends
<imbrandon> heh
<imbrandon> personally i would avoid rak for like another month id say and let them settle some dust
<imbrandon> heh
<imbrandon> btw i do those "full page screenshots + annotations on save" with a BA chrome extension "Awesome Screenshot" heh its perfect to get a whole page when it's like that
<imbrandon> with one click
<imbrandon> nova_project_id == tenant name
<imbrandon> woot
<imbrandon> bah, these forums are such a pita to use, then i grokked some of the source to see why, its d7 heh non-custom d7 at that , heh no wonder
<imbrandon> :)
<imbrandon> these forums == help docs at hpcloud , d7 == drupal 7
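Putting the findings together, the stanza being assembled here would look something like this (a sketch under heavy assumptions: the keys are the ones pulled from the provider source above, the values are placeholders off the HP credentials page, and the provider was still in review so any of it could shift):

    environments:
      hpcloud:
        type: openstack
        nova-uri: <nova endpoint URL from the credentials page>
        project-id: <tenant name, per the mapping above>
        auth-mode: <one of the enum values guessed at above>
        admin-secret: some-long-random-string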
 * imbrandon gets ready to bootstrap env hpc.1-az.1-regon-a.geo1
<imbrandon> wow
 * robbiew senses a disturbance in the cloud force...velocity and google IO this week.
<imbrandon> heheh
<imbrandon> i always love IO but i always like it like 3 weeks late as videos
<imbrandon> :)
 * m_3 sad to be missing out on both events :(
<twobottux> aujuju: Is juju specific to ubuntu OS on EC2 [closed] <http://askubuntu.com/questions/149952/is-juju-specific-to-ubuntu-os-on-ec2>
<hazmat> morning
<m_3> o/
<jimbaker`> SpamapS, i took a look at the failing test on buildd, juju.lib.tests.test_format.TestPythonFormat.test_dump_load. this test normally takes on the order of 0.001s on my laptop and involves no resources other than a json serializer/deserializer. i just ran it for 10000 loops and i didn't see it explode
<jimbaker`> SpamapS, just looking at the test and the code it exercises, i would not expect resource starvation failures. maybe in similar code in test_invoker, which involves processes. but not here
<SpamapS> /wi/win 7
<SpamapS> arghhh!!
<SpamapS> bursty network.. yargh
<SpamapS> jimbaker`: So perhaps it is a race of some kind?
<jimbaker`> SpamapS, in this specific code, no. twisted can report problems in the wrong place, so that could be the problem
<jimbaker`> in terms of the twisted trial test runner
<jimbaker`> SpamapS, i assume this failing test is stable?
<jimbaker`> as reported on buildd?
<SpamapS> jimbaker`: I'm not sure, I'll retry a few of the builds.
<SpamapS> jimbaker`: it may have been transient. quantal succeeded on retry
<SpamapS> jimbaker`: I'm retrying all the others. If they succeed, we can chalk this up to a buildd problem I think
<jimbaker`> SpamapS, again, this is not a test i would expect to fail transiently, since it's not async. but again, it's completely possible for twisted trial to point the wrong finger from some other transient bug
<jimbaker`> so given that, i think this is the best strategy for now
<sidnei> uhm, is there a 'juju scp' counterpart to 'juju ssh'? if not, it could be handy
<jimbaker`> there was one other error reported, which was a zk closing connection problem
<jimbaker`> sidnei, yes
<jimbaker`> sidnei, just to be clear, juju scp exists, not that would be hypothetically handy ;)
<sidnei> ah, i totally missed it in juju --help
<SpamapS> sidnei: its quite useful for pulling down modified hooks actually. :)
<sidnei> SpamapS, im trying to figure out if i can get rsync to work with that, using 'juju ssh' as the remote shell, but service name has '/' so it gets interpreted as a path instead of a machine name
<SpamapS> sidnei: IMO we need a 'juju get-public-hostname' so you can just do rsync `juju get-public-hostname foo/3`:/path/to/file
<SpamapS> sidnei: I think juju-jitsu might have that actually
<sidnei> ah, that could work too
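Hedged examples of both approaches just discussed (unit names and paths are made up, and the status-scraping is fragile since it depends on the exact YAML layout and grabs the first address it sees):

    # pull a file straight off a unit
    juju scp mysql/0:/var/log/juju/unit-mysql-0.log .

    # poor man's get-public-hostname: scrape the address out of status
    ADDR=$(juju status | awk '/public-address:/ {print $2; exit}')
    rsync -av ubuntu@"$ADDR":/path/to/file .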
<SpamapS> bkerensa: Hey
<SpamapS> bkerensa: that was a pretty MASSIVE merge proposal you merged into lp:juju/docs
<SpamapS> bkerensa: I'd have liked to hear more than just your +1 ;)
<lifeless> SpamapS: hazmat: morning guys; hope my spam overnight wasn't an issue ;)
<imbrandon> heya
<imbrandon> SpamapS: whats the rt# for that docs ticket? lifeless would like to know :)
<lifeless> hey!
<hazmat> lifeless, no that was awesome
<hazmat> lifeless, i'm running around today at velocity conf with meetings, i'm going to try and circle back to your branches/bugs this evening.
<lifeless> cool, thanks!
<lifeless> hazmat: the main one I'd like info on is the use of ip; that could be contentious for reasons other than code.
<hazmat> lifeless, yeah.. that changes the display in status as well.
<hazmat> lifeless, ideal would be to capture both, and use the appropriate one if the other is missing
<lifeless> hazmat: in a good way IMNSHO :)
<lifeless> hazmat: 10.0.0.3 is much more useful than 'server-2345'
<hazmat> lifeless, so do you have a a valid dns name in your context, or is it just ip
<hazmat> hmm.. so you have invalid dns names
<lifeless> hazmat: I assert that no one running openstack outside of rackspace and perhaps hp has valid public DNS names.
<lifeless> hazmat: its not even part of the deploying openstack guidelines yet.
<lifeless> let alone labs etc
<hazmat> lifeless, but everyone running in a public cloud or maas probably does
<hazmat> and for them i would posit its better to have things displayed by name than ip
<lifeless> hazmat: public cloud will, maas *may* (if they route DNS to the maas controller)
<lifeless> hazmat: maas seed clouds though won't, same issue as openstack.
<lifeless> dns will be usable w/in the cloud, but not by machines outside it.
<hazmat> lifeless, i've seen and assisted with several maas demos where dns does work fwiw
<lifeless> hazmat: from a machine that isn't the cloud controller ?
<lifeless> hazmat: because, that machine will be specially configured to use the dnsmasq instance on the controller.
<hazmat> lifeless, yes, its a machine on the same network though
<hazmat> lifeless, probably
<lifeless> I'll match your probably and raise you an almost certainly ;)
<lifeless> anyhow, we can gather data if we care.
<hazmat> lifeless, no takers :-)
<lifeless> I'm not sure that we -care- about the dns name though; what does it offer the user?
<hazmat> a nicer symbolic name
<hazmat> lifeless, think about configuring a vhost in apache
<lifeless> hazmat: have you seen the public names ec2 uses ?
<hazmat> or browsing to an app..
<hazmat> lifeless, true
<lifeless> these are not the same as symbolic names
<lifeless> they are bulk loaded static mappings
<lifeless> per region
<lifeless> such as ec2-184-73-10-99.compute-1.amazonaws.com
<hazmat> agreed realistically, true dns management  for services is already separate than the provider dns entries
<hazmat> er. a separate concern
<hazmat> lifeless, okay i'm convinced
<lifeless> hazmat: cool, thanks for taking the time; chat more when you're done @velocity
<SpamapS> lifeless: I'm with you on this. The DNS causes more problems than it solves.
<negronjl> SpamapS: ping
<negronjl> SpamapS: can you give me the commands that you use to run gource with jitsu ?
<negronjl> SpamapS: I am trying to show the "pretty deploying stuff screen"
<SpamapS> jitsu gource --run-gource
<jimbaker`> the gource integration definitely demos well, i used it as part of my demo for usenix two weeks ago. nothing like seeing some good pictures of what's going on
<SpamapS> Yeah I really think a web app showing a network diagram will play even better.. if we can ever get such a thing :)
<jimbaker`> SpamapS, yes, i'm sure that will be quite nice :)
<jimbaker`> it might be nice to also demo the checking off of expectations of jitsu watch, here's how service orchestration happens
<lifeless> does one need to wait for a deploy to complete (via status) before doing add-relation ?
<lifeless> or will it Just Work if you run add-relation immediately that the deploy returns ?
<jimbaker`> lifeless, you can do juju add-relation as soon as you have deployed the two services
<lifeless> jimbaker`: as soon as the juju cli returns, you mean ?
<jimbaker`> it just works because add-relation records this setup in zookeeper; it's the responsibility of the agents to carry out this policy
<lifeless> jimbaker`: right, but deploy returns before the agents are even running
<jimbaker`> yes, as soon as the juju cli returns
<lifeless> jimbaker`: which is why I'm probing for specifics ;)
<jimbaker`> yes, even in that case
<lifeless> cool
<lifeless> thanks
<jimbaker`> because the agents carry out the policy, as recorded as it is in ZK
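Concretely, the pattern jimbaker` is describing is safe to script with no waiting in between, since each command only records the desired state in ZK and the agents act on it whenever they come up (service names are illustrative):

    juju deploy mysql
    juju deploy wordpress
    # fine to run immediately, even before any agent is running
    juju add-relation wordpress mysql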
<lifeless> how can one recover from a wedged node ?
<lifeless> by which I mean
<jimbaker`> lifeless, basically what you see with the juju cli returning is that the update to zk has been made
<lifeless> when I run 5 or 6 deploys in quick succession, openstack is throwing a tanty and gets an unspecified 'error' on one of the nodes - it gets an instance id and never comes up
<lifeless> How can I tell Juju 'btw that machine, it didn't, so toss it away and start clean'
<jimbaker`> lifeless, sounds like a bug with the provider and possibly the provisioning agent
<jimbaker`> lifeless, so if you can get the provision agent log (on machine 0), that would be helpful. or use juju debug-log
<lifeless> jimbaker`: I'm sure its accentuated for me here and now, local openstack install. *but*, it can happen in e.g. ec2, when service disruption happens, that a reservation request goes into limbo or even fails asynchronously.
<lifeless> jimbaker`: the API call to provision the instance fails.
<lifeless> bah
<lifeless> succeeds*
<lifeless> its a cloud backend failure. Async.
<jimbaker`> lifeless, it should eventually succeed
<jimbaker`> it's a bug if it doesn't
<lifeless> To a degree, I agree. Ideally we could say 'its a bug over there, go fix that'
<jimbaker`> so some sort of occasional failure is expected
<lifeless> but, ^^ that.
<lifeless> how do I recover without wiping the whole juju environment of 10 instances
<jimbaker`> we just tried to engineer the provisioning agent so that it does appropriate retries
<lifeless> jimbaker`: but it doesn't retry if the API call succeeds?
<jimbaker`> lifeless, iirc, it does do dead machine detection for cases like that
<lifeless> how long does it wait? Perhaps I wasn't patient enough
<jimbaker`> lifeless, without logs, i'm afraid i can only speculate :)
<lifeless> ok, well I didn't capture any (didn't konw how)
<lifeless> juju debug-log; how do I get the provision agent log ?
<jimbaker`> lifeless, it should just be there, see https://juju.ubuntu.com/docs/user-tutorial.html#starting-debug-log
<lifeless> jimbaker`: how do I get the whole log though?I mean, this is some time back presumably... I don't notice instantly.
<jimbaker`> lifeless, you can also just grab it from the machine 0 box
<lifeless> jimbaker`: thats what I was asking
<lifeless> where on the box :)
<jimbaker`> lifeless, regrettably i don't have it committed to memory, but i am looking right now
<lifeless> thanks; Perhaps a doc patch :>
<jimbaker`> /var/log/juju/provision-agent.log
<lifeless> cool
<lifeless> I will look there next time it happens and see what I can see
<lifeless> thank you
<jimbaker`> lifeless, sounds good
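One hedged way to capture the whole log for a bug report (this assumes juju ssh passes a trailing command through to ssh; if it doesn't, a plain juju ssh 0 and a manual copy works too):

    juju ssh 0 "sudo cat /var/log/juju/provision-agent.log" > provision-agent.log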
<lifeless> \o/ finally apparently have hdfs + hadoop up, ready to iterate on stuff using it ;>
<lifeless> super not-simple
<jimbaker`> lifeless, i did check juju debug-log; two things, 1) the check against the flag it sets in ZK executes too late for the provisioning agent upon first install; 2) the PA still does not seem to use debug logging even if restarted. so for now, /var/log/juju/provision-agent.log seems to be the way to go
<jimbaker`> also this doesn't mix output, which is probably why i actually use the file when debugging the PA
<lifeless> first install - of a new service, or first install of the environment ?
<jimbaker`> first install of the environment
<jimbaker`> specifically the bootstrap node
<jimbaker`> (aka machine 0)
<lifeless> that seems hard to avoid, as the zk runs on that node
<lifeless> if it fails to come up, ....
<jimbaker`> lifeless, actually i'm going to back out that earlier statement. you are going to miss some changes to the agent log, but there is a watcher in place for such global env settings (currently only debug log). so it appears to be just a fault of the PA's logging
<jimbaker`> lifeless, not certain why, the log setup is fairly generic in terms of adding handlers
<lifeless> oh, lalalalalala
<lifeless> hazmat: SpamapS: you'll love this:
<lifeless> 2012-06-26 10:21:39,616 INFO Connecting to unit hbase-regioncluster-01/0 at server-10.novalocal
<lifeless> ssh: Could not resolve hostname server-10.novalocal: Name or service not known
<lifeless> (juju ssh)
<lifeless> novalocal is the local search domain on the instance, not globally resolvable.
<lifeless> INSTANCE        i-0000000a      ami-00000004    server-10       server-10       running         0               m1.small        2012-06-25T20:36:32.000Z        nova                            monitoring-disabled     10.0.0.7        10.0.0.7                       instance-store
<lifeless> is what the EC2 API returns :>
<lifeless> has anyone here worked with the hbase charm? I want to create a table from a different charm, which requires running the hbase binary (which is only on hbase units...)
<SpamapS> lifeless: double doh on the IP vs hostname
<lifeless> SpamapS: the rabbit hole gets deeper :) To me, this validates my first point :> Also, split view DNS sucks.
<SpamapS> lifeless: split view DNS is just a relic of the way EC2 has done things..
<SpamapS> lifeless: you know, there's an openstack provider in review right now..
<SpamapS> lifeless: have you been able to try it out?
<lifeless> SpamapS: nope
<lifeless> SpamapS: I'm not actually trying to hack on juju at all, just use it ;)
<lifeless> SpamapS: so I'm running precise, etc; just keep hitting yak shave events.
<lifeless> SpamapS: what I *want* to do, is try out opentsdb
<lifeless> and logstash (with elasticsearch)
<SpamapS> oh right
<SpamapS> lifeless: but you're also trying it against a local openstack.....
<SpamapS> which.. complicates matters. :)
<lifeless> SpamapS: a little
<lifeless> but shouldn't break charms as inside the stack dns works
<lifeless> its just the outside thing, exactly like canonistack
<SpamapS> yeah
<lifeless> e.g.
<lifeless> ubuntu@server-8:~$ host server-8.novalocal
<lifeless> server-8.novalocal has address 10.0.0.3
<lifeless> and
<lifeless> ubuntu@server-8:~$ host server-9.novalocal
<lifeless> server-9.novalocal has address 10.0.0.4
<lifeless> so hbase's tools decided they couldn't find the master.
<SpamapS> lifeless: I assume the relationship sets up the necessary configs for that?
<lifeless> http://jujucharms.com/charms/precise/hbase
<lifeless> I followed the bouncing ball there
<lifeless> juju deploy hbase hbase-master
<lifeless> juju deploy hbase hbase-regioncluster-01
<lifeless> juju deploy zookeeper hbase-zookeeper
<lifeless> juju add-relation hbase-master hbase-zookeeper
<lifeless> juju add-relation hbase-regioncluster-01 hbase-zookeeper
<lifeless> juju deploy --config example-hadoop.yaml hadoop hdfs-namenode
<lifeless> juju deploy --config example-hadoop.yaml hadoop hdfs-datacluster-01
<lifeless> juju add-relation hdfs-namenode:namenode hdfs-datacluster-01:datanode
<lifeless> juju add-relation hdfs-namenode:namenode hbase-master:namenode
<lifeless> juju add-relation hdfs-namenode:namenode hbase-regioncluster-01:namenode
<lifeless> juju add-relation hbase-master:master hbase-regioncluster-01:regionserver
<lifeless> ^ is exactly what I ran
<lifeless> with a minute or so between deploys
<lifeless> and nothing between relation adding
<lifeless> (sorry for the Spam)
<SpamapS> why minutes between deploys?
<SpamapS> You should have been able to run all of the deploys all at once
<SpamapS> and all the relations
<SpamapS> and then wait for everything to finish. :)
<lifeless> SpamapS: see my chat with jimbaker` above about vagaries of local stack deploys with one compute node
<lifeless> something whinges when I allocate 32GB of ram etc all in one batch
<SpamapS> lifeless: I see. Thats a bit disappointing given how "scalable" openstack is supposed to be...
<lifeless> well
<lifeless> I mean, I have one node here with 16GB ram
<lifeless> but yes, there is some glitch in there, and I've already shaved enough yaks on this exercise.
<lifeless> the local install is v useful considering latency to canonistack, f'instance
<SpamapS> indeed
<SpamapS> lifeless: I think a native OpenStack provider, and more people banging on OpenStack's with juju, will help this go a lot more smoothly.
<lifeless> well, I'm beyond that bit, workarounds R us.
<lifeless> SpamapS: whats a good charm that knows how to create db's on demand ?
<lifeless> I presume the wordpress <-> mysql pair does that ?
<jimbaker`> lifeless, that pairing would be a good choice
<lifeless> jimbaker`: my suspicion/concern is that it does that using the network protocol
<lifeless> hmmm
 * lifeless needs to know more hbase ops
<jimbaker`> lifeless, i'm not certain what you mean by that re "network protocol".  i will say that it does an exchange of relation settings to accomplish the desired service orchestration
<lifeless> hah
<lifeless> so schema details
<lifeless> I'd be surprised if schema migrations were passed via zk
<lifeless> rather than directly.
<lifeless> where directly == connect to the mysql network port.
<jimbaker`> lifeless, the theory of juju, if there is one ;), is that this is outside of the scope of the charm, unless orchestration is required
<jimbaker`> not trying to reinvent how mysql coordinates, just solve certain issues
<lifeless> jimbaker`: sure
 * hazmat pauses from juju talk to catchup
<lifeless> its orchestration though
<lifeless> schema application + upgrades is a massive orchestration issue
<jimbaker`> lifeless, then i can see that under the scope of the charm
<lifeless> including the trivial-enough first-bringup case.
<lifeless> the charm needs to facilitate it
<lifeless> which is what I'm trying to figure out for hbase :>
<jimbaker`> so if the services need to coordinate on add-relation/remove-relation or add-unit/remove-unit, then sure, use juju there
<lifeless> I think we're talking past each other
<SpamapS> lifeless: the charm gets you credentials and a place to connect to. But no doubt, if a relationship is more specific, schema details and dependencies across multiple charms could be orchestrated very easily.
<SpamapS> lifeless: such a thing just has yet to surface yet.
<jimbaker`> lifeless, in particular, this can be done through an advanced form of service orchestration, which orchestrates with respect to relations. adam_g has been doing this for the openstack charms fwiw
<jimbaker`> lifeless, you might need to do this. i guess we just need to settle on something concrete to say one way or the other :)
 * hazmat gets back to the audience questions
<zirpu> velocity talk went well. woot
<m_3> whoohoo!
#juju 2012-06-26
<SpamapS> zirpu: awesome
<_mup_> Bug #1017792 was filed: running relation-get with no client id gives misleading error <juju:New> < https://launchpad.net/bugs/1017792 >
<lifeless> is this expected? :  charm proof .
<lifeless> E: could not find metadata file for .
<lifeless> E: revision file in root of charm is required
<lifeless> but
<lifeless> charm proof $(pwd)
<lifeless> works fine
<_mup_> Bug #1017801 was filed: juju tries to delete in-use security groups <juju:New> < https://launchpad.net/bugs/1017801 >
<snihalani> Hi guys
<snihalani> I have a question
<snihalani> Where do I get layman definition of what juju does for me?
<lifeless> it deploys software for you, into cloud environments like amazon ec2, rackspace, and so on.
<snihalani> so, applications from my dev environment into EC2, cloudfoundry etc?
<lifeless> juju manages all the connections between different services, like getting your webserver behind haproxy, or setting up new hbase cluster nodes.
<lifeless> yes, that sort of thing
<lifeless> there is in fact a charm to deploy cloudfoundry itself.
<snihalani> define charm? Sorry, I am a noob
<lifeless> a charm is the rules for deploying a specific piece of software
<MarkDude> imbrandon, pingy
<lifeless> foine.
<imbrandon> MarkDude: heya man
 * MarkDude needs those links again
<imbrandon> about to crash :) cool one sec
<MarkDude> making the Fedora juju page now
<MarkDude> Have some folks willing to test
<imbrandon> btw i plan on making some new ones in the next day or so ( juju is about to do a minor release )
<imbrandon> and i'll refresh them at the same time
<imbrandon> kk links, brb
<MarkDude> we have a few Fedora Ambassadors that are also Ubuntu Members
<imbrandon> nice
<imbrandon> let em know they are more than welcome to hang out
<imbrandon> and if we get tooo  off topic we'll make a #juju-fedora chan
<imbrandon> :)
<imbrandon> or #juju-ports , actually not a bad idea
<imbrandon> hrm
<imbrandon> ok
<imbrandon> http://jujutools.github.com/rpm-juju
<imbrandon> btw i made a page about them for the official docs
<imbrandon> but the docs have a build issue atm, soon as thats worked out ( today i hope ) then there will be links to it in the official docs too
<imbrandon> etc
<imbrandon> MarkDude: ^^
<imbrandon> docs are linked at http://juju.ubuntu.com btw, pretty sure that was obvious tho
<imbrandon> but just in case
<imbrandon> but yea, sometime in the next 24hrs i'd say maybe 48 tops there will be a minor version refresh
<imbrandon> so it will be perfect timing for their feedback and i might be able to slip stuff in if its not a core bug
<imbrandon> etc
<imbrandon> right away in those cases
<MarkDude> Sorry, but the page you were trying to view does not exist.
<imbrandon> oh and you saw my "Download for Ubuntu" JS/CSS button thing ? I made one for Fedora and OSX too, might for SuSE as well if i get bored
<imbrandon> i'll link ya up
<imbrandon> bah
<imbrandon> let me go look
<imbrandon> i did that from memory
<imbrandon> heh
<imbrandon> https://github.com/jujutools/rpm-juju
<imbrandon> there we go
<imbrandon> sorry bout that
<imbrandon> anyhow like i said i've got a little bit of docs and such creeping in todayish so that will help them too
<imbrandon> okies man, i'm about to crash, been a long day, hit me up anytime at imbrandon@ubuntu.com too if you dont catch me on IRC
<imbrandon> MarkDude: and there is a bug tracker linked on that RPM page too, "Issues", if anyone wants to report something
<MarkDude> Cool- I will include your email on the page too
<MarkDude> thx imbrandon
<imbrandon> i'll do the leg work and figure out if its an rpm specific issue or a core one and file them in LP on their behalf if its core so they wont have an excuse not to report it :)
<imbrandon> cool cool, where ya posting this btw ?
<MarkDude> https://fedoraproject.org/wiki/Juju
<imbrandon> rockin, ok i'm off to bed man, take it easy
<imbrandon> sorry to run on ya :)
<jml> I'm working on my charm again. Getting rid of the sqlite kludge so I can aim it at postgresql. Exciting times.
<jamespage> jml: enjoyed your blog posts BTW
<jml> jamespage: thanks :)
<jamespage> agree that its quiet here in the mornings....
<jml> yeah
<jml> juju is the first command-line tool I've used in a long time that makes me think "this should have a GUI"
<jml> uhh, how do you get the port from the postgres charm?
<jml> is it correct to just assume the standard port?
<jml> I'm writing a script to call juju for my users. Is there a way to get the public address of the only deployed unit? (Would prefer the exposed service URL, tbh.)
<jml> why are stacks any more difficult than, say, a .dot file?
<jml> what I want is something that looks at my local charm, detects whether there are any changes between it and the last deployed charm, and if there is, add --upgrade to my deploy call.
<jml> what I want is something that looks at my local charm, detects whether there are any changes between it and the last deployed charm, and if there is, add --upgrade to my deploy call.
<jml> i.e. I want 'make'
<jml> (sorry for double entry. keyboard error.)
<jamespage> jml, hmm
<jamespage> jml, the postgresql does assume default port
<jamespage> charm that is
<jamespage> jml, the only way I know to get it to automatically update charms is to bump the version number in the revision file manually before doing a deploy
<jamespage> but I often forget to do that - and have to use --upgrade anyway
<jml> well, the real question is how do I detect that the version number needs bumping
<jml> the environment has a copy of the charm stored somewhere, presumably
<jml> if I could get that, it would be easy to ask "do your copy and my copy differ? if so, bump my version. if not, great."
<jamespage> jml, it does
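A rough shape of the 'make'-like wrapper jml is after, assuming you keep your own pristine copy of whatever was last deployed (fetching the environment's stored copy is the missing piece; names, paths, and the --upgrade flag usage are illustrative, per the discussion above):

    LAST=.last-deployed/mycharm
    if ! diff -rq mycharm "$LAST" >/dev/null 2>&1; then
        rev=$(cat mycharm/revision)
        echo $((rev + 1)) > mycharm/revision   # bump, as jamespage suggests
        juju deploy --upgrade --repository . local:mycharm
        mkdir -p .last-deployed && rm -rf "$LAST" && cp -a mycharm "$LAST"
    fi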
<surgemcgee> Quick question, one of my web sites uses a node.js server. It is supposed to open port 8080 which does not happen. What is the best way to open this port, including it in the charm?
<marcoceppi> surgemcgee: You can include it in the charm if the charm requires the node.js server etc. If you need to just ad-hoc open it you can do that with juju-jitsu
<marcoceppi> Ideally if the charm uses it/needs it, it should expose it
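For the record, the standard mechanism marcoceppi is pointing at: the charm opens the port from one of its hooks with the open-port tool, and the admin then exposes the service (names are illustrative):

    # inside a hook, e.g. hooks/start
    open-port 8080/tcp

    # then, from the client
    juju expose mynodeapp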
<jaustinpage> I'm trying to debug some problems I am having with the swift charms. anyone else working on these, or using them?
<SpamapS> adam_g: ^^ jaustinpage needs help w/ swift
<SpamapS> jaustinpage: adam_g wrote them
<jaustinpage> SpamapS: thanks for the info
<SpamapS> japage: curious, what issues are you having w/ swift?
<cheez0r> japage: I'm working with the swift charms, what are you seeing?
<cheez0r> juju folks; I've got 11 nodes in my MaaS cluster, and 11 machines created, with 11 total services on them. When I attempt to add a unit to one of the services, it adds a 12th machine in state pending, assigns the unit to that machine, and nothing ever happens. Is there a way to make juju use an existing machine to host the unit, or what?
<cheez0r> So far the behavior seems to be one service unit per machine.
<japage> cheez0r - I seem to be seeing some sort of error when i try to create the connection between swift-storage and swift-proxy
<japage> It is like the swift-relation-changed function is not able to add the storage nodes for some reason
<japage> cheezor: i dont think you can push more than 1 service out to a node
<cheez0r> that's awfully strange.
<cheez0r> I should be able to, for instance, run nova-compute and swift-storage on the same node.
<japage> cheez0r: I agree
<japage> cheez0r: have you been able to deploy swift to 4 or more machines, and get the relationship between swift-proxy and swift-storage working/
<cheez0r> no.
<japage> *?
<cheez0r> I have not pursued the relationship at all
<japage> hmm, ok
<cheez0r> I'm right now working on getting swift-storage on multiple nodes- but I'm out of nodes
<japage> im cheating, im using vm's to test, so I can create more nodes as needed.....
<cheez0r> nice.
<cheez0r> I've been trying to deploy a whole MaaS/Juju/OpenStack cluster on HP Blade hardware.
<cheez0r> for about a month.
<SpamapS> cheez0r: everybody has asked for the ability to run two things on one machine
<japage> a month? the rest of the openstack charms seem to be working well, at least for me.
<SpamapS> cheez0r: but as yet, that feature is not implemented in juju
<SpamapS> cheez0r: part of the reason being that juju was envisioned first as "apt-get for the cloud" not "openstack deployment tool"
<cheez0r> japage: I've had many issues that have roadblocked me
<SpamapS> cheez0r: so, with the cloud, you just size your VMs right
<cheez0r> SpamapS: yeah, si comprende
<cheez0r> The more I work with juju + openstack the less I think they're a good pairing
<SpamapS> cheez0r: but w/ real hardware, you get what you get
<cheez0r> but perhaps that's going to improve as juju evolves
<SpamapS> cheez0r: Its the #1 feature request
<SpamapS> cheez0r: I think it will land as the first feature after the go port is done.
<japage> I've been "accidently" using juju/maas exactly as intended...
<jamespage> SpamapS, hey - you appear to be chair for our team meeting today!
<jamespage> still OK for that or do you want to slide to me?
<cheez0r> SpamapS: thanks for the info man, very helpful.
<negronjl> 'morning all
<SpamapS> negronjl: are you stalking the halls of Velocity as well?
<negronjl> SpamapS: I may go back there tomorrow but, not today
<negronjl> SpamapS: Did enough stalking yesterday :)
<hazmat> SpamapS, trying to upload a branch for co-location, but network here is choppy, going to hit up the speaker lounge and hardline it post plenaries
<hazmat> re jitsu
<SpamapS> hazmat: sweet
<koolhead17> hello all
<lifeless> morning y'all
<SpamapS> lifeless: howdy
<lifeless> SpamapS: o/
<lifeless> hey, so did you see my query about proof?
<lifeless> 15:48 < lifeless> is this expected? :  charm proof .
<lifeless> 15:48 < lifeless> E: could not find metadata file for .
<lifeless> 15:48 < lifeless> E: revision file in root of charm is required
<lifeless> 15:48 < lifeless> but
<lifeless> 15:48 < lifeless> charm proof $(pwd)
<lifeless> 15:48 < lifeless> works fine
<SpamapS> lifeless: that does not make much sense.
<_mup_> Bug #1018059 was filed: Disable fsync on zk to speed tests. <juju:In Progress by hazmat> < https://launchpad.net/bugs/1018059 >
<lifeless> indeed.
<_mup_> Bug #1018061 was filed: Disable fsync on zk to speed tests. <juju:In Progress by hazmat> < https://launchpad.net/bugs/1018061 >
<lifeless> would you like a bug, and if so where
<SpamapS> lifeless: if args.charm_name: charm_name = args.charm_name
<SpamapS> else: charm_name = os.getcwd()
<_mup_> Bug #1018062 was filed: teminate-machine should provide and option to '--force' <juju:New> < https://launchpad.net/bugs/1018062 >
<lifeless>  charm proof
<lifeless> usage: proof [ charm_name | path/to/charm ]
<SpamapS> lifeless: charm proof . works for me in all my charms
<SpamapS> as does no argument, which assumes .
<lifeless> http://paste.ubuntu.com/1061284/
<lifeless> SpamapS: I'm running precise.
<lifeless> SpamapS: does that make a difference?
<SpamapS> no, I am too
<SpamapS> tho I might have a more current charm-tools
<SpamapS> the one in precise is pretty old by now
<SpamapS> lifeless: I think that bug is fixed in trunk basically.
<lifeless> hokay
 * lifeless suggests more SRU is in order
<hazmat> lifeless, http://paste.ubuntu.com/1061301/
<hazmat> should fix it
 * hazmat submits a branch
<hazmat> actually this is easier to cowboy ... SpamapS, m_3, jimbaker` any objections to the one liner above
<jimbaker`> hazmat, +1
<SpamapS> hazmat: if you think that corrects the problem (a problem I don't have or see) then yes, just commit and push
<SpamapS> hazmat: please run 'make check' too
<SpamapS> hazmat: charm-tools has tests
<SpamapS> hazmat: consider maybe adding a test for this :)
<cheez0r> hey juju guys, I just want to confirm that I understand this: Juju does not support specifying which node a charm is to be deployed to, right? It just pulls a node from what's available?
<cheez0r> All of the reading I've done on this topic has left me with that understanding; I'm just trying to confirm I'm right.
<SpamapS> cheez0r: yes thats correct
<cheez0r> Thanks SpamapS!
<hazmat> cheez0r, yes at the moment, re jitsu extensions that capability is coming soon
<jimbaker`> hazmat, my only objection to this code is that charm_name and charm_path are rather conflated here
<jimbaker`> but that's the existing case
<hazmat> jimbaker`, that's the entire script
<jimbaker`> indeed
<hazmat> jimbaker`, feel free to rewrite
<jimbaker`> hazmat, i'm sure there will be some opportunity :)
<SpamapS> I believe we have a lot of refactoring to do in charm proof
<SpamapS> I'd even be open to changing the name to something else, since proof was a play on 'principia mathematica'
<hazmat> SpamapS, is trunk open?
<SpamapS> hazmat: tonight
<hazmat> SpamapS, ack, i just closed out the galapagos milestone
<SpamapS> hazmat: actually, screw that
<SpamapS> closed?
<hazmat> SpamapS, if you have a separate branch, why are we gating trunk
<SpamapS> I mean to create an actual release
<SpamapS> I have no branch
<SpamapS> Its a psychological freeze
<hazmat> SpamapS, the distro branch?
<SpamapS> meant to get you guys to *test* it
<SpamapS> hazmat: right, the distro package is a quilt stream. ;)
<SpamapS> hazmat: anyway, lets just open trunk, I'm going to tag 0.5.1 as trunk
<hazmat> SpamapS, ack
<SpamapS> hazmat: I meant to create a release from the galapagos milestone
<SpamapS> err, I'm going to tag r544 as trunk
<hazmat> SpamapS, i just did, its just a label though no tarball attached
<hazmat> https://launchpad.net/juju/+milestone/galapagos
<SpamapS> I'll do tarballs if somebody requests it
<SpamapS> but I don't really see a need
<SpamapS> tag is fine
<SpamapS> hazmat: I was going to bump the version in setup.py too
<hazmat> SpamapS, sounds good
<SpamapS> hmm wait, natty and oneiric still FTBFS
<SpamapS> hazmat: can you hold just a bit longer on lp:juju ? I want to make sure natty/oneiric build
<SpamapS> pretty sure this is buildds being slow, not a real problem
<SpamapS> hazmat: heh, tho your fsync change might be a solution for that
<hazmat> SpamapS, sure, just want to greenlight the sub port change and the fsync stuff in
<hazmat> SpamapS, perhaps.. some of the status setup tests need exorcism
<hazmat> they setup the world and examine a fraction
<hazmat> and worse they get used by other tests as a base
<SpamapS> hazmat: I'm tempted to just make a branch for 0.5, and do any fixes in there
<hazmat> SpamapS, hmm
<SpamapS> which is probably what we should do, but I liked the idea of taking a week to actually use/test the release
<hazmat> SpamapS, branch for stable or branch for features, either sounds reasonable
<hazmat> SpamapS, afaik the next major incompatible feature is upgrade support
<SpamapS> hazmat: I was thinking we should start honolulu by adding a feature which makes arbitrary PPA selection possible so we can add PPA's and keep ppa:juju/pkgs stable
<hazmat> SpamapS, effectively the upgrade stuff has support for things ... like that
<SpamapS> I was hoping you'd say that
<hazmat> ie. arbitrary release url support
<hazmat> you can point it to any directory of releases
<SpamapS> https://code.launchpad.net/~juju/+archive/pkgs/+build/3600404 .. once that builds.. I'll commit the 0.5 -> 0.5.1 change and release that as 0.5.1...
<SpamapS> Then I was thinking I'd alter the PPA build recipe to pull from a stable series, and create a new PPA for trunk
 * hazmat needs an osd notify for website changes
<hazmat> SpamapS, +1
<SpamapS> hazmat: what would you say to shortening honolulu to just 3 weeks? Basically just merge everything that is approved already, and wrap up the zk acl work?
<hazmat> SpamapS, why?
<SpamapS> hazmat: so we can get the zk acl work out faster. :)
<hazmat> SpamapS, you're setting what feels like arbitrary deadlines
<hazmat> doesn't make anything go faster
<SpamapS> hazmat: no, this would get us back on the 6 week cadence.
<SpamapS> since we're 3+ weeks late
<hazmat> SpamapS, but why are we late?
<SpamapS> Because nobody cares about releases except me
<SpamapS> and I've been busy/distracted/etc.
<hazmat> SpamapS, i think its more about the num of developers assigned to py juju atm
<SpamapS> well we didn't delay it for want of features/bug fixes
<SpamapS> just.. time to organize the release
<SpamapS> the whole point of having a cadence is to just ship what we have when the day arrives.
<SpamapS> that way people don't feel rushed to push things that aren't ready, because they know they will see a release in 6 weeks
<SpamapS> but yeah
<SpamapS> w/o developers this is a bit moot :)
<hazmat> jimbaker`, hows the sec group rework coming?
<SpamapS> https://launchpadlibrarian.net/108722045/buildlog_ubuntu-natty-i386.juju_0.5%2Bbzr544-1juju5~natty1_FAILEDTOBUILD.txt.gz
<hazmat> jimbaker`, ^
<hazmat> new format bug
<SpamapS> that has failed 3 times in a row
<SpamapS> works fine on precise and quantal
<SpamapS> hazmat: I'm tempted to just merge in your no fsync change and see if that solves it
<jimbaker`> hazmat, i'm currently sick, so not so much progress yet
<hazmat> m_3, you're in au?
<hazmat> jimbaker`, ack, thanks for the update
<hazmat> SpamapS,  that failure is odd if its rel specific, its the same py version, and same major libyaml
<hazmat> SpamapS, do we even care about pre oneiric?
<SpamapS> hazmat: not much no, but oneiric fails too
<SpamapS> hazmat: its more that I want to know why, not that I want to care about oneiric/natty
<SpamapS> ok appt time
<SpamapS> hazmat: bbiab.. I'd be fine w/ pushing the fsync change into trunk.. if that solves it, we'll ship it with 0.5.1. :)
<hazmat> bcsaller, do you have time to debug ^
<bcsaller> hazmat: you mean the libzk deadlock thing still?
<bcsaller> I have looked at it but didn't finish it yet, I can poke at it again today
<hazmat> bcsaller, no the test failure on oneiric
<bcsaller> looks like natty
<hazmat> bcsaller, the libzk deadlock with security isn't critical, the test failure (see build failure link above ) is the problem
<hazmat> bcsaller, SpamapS mentioned it also exhibits under oneiric
<bcsaller> hazmat: it looks like the default json module would have to fail for this to break, but I'm spinning up an instance
<jml> what I think of when I think of juju & puppet: http://tinyurl.com/yfn5wn9
<sidnei> jml, why?
<jml> sidnei: puppet. juju. think about it. (I guess it might not translate well.)
<lifeless> jml: bad man :P
<SpamapS> jml: Hahahah thats fantastic
<imbrandon> ahhh finally .... it lives !! http://bholtsclaw.github.com/showdown/
<JoseeAntonioR> imbrandon: ping, question, do you have a mailman charm?
<imbrandon> i do not, there MIGHT be one
<imbrandon> but i would need to look, hadn't seen one around yet
<JoseeAntonioR> that was what I was asking, not you, in general :P
<JoseeAntonioR> if you see one, let me know
<imbrandon> you can check on the store
<imbrandon> it shows all of them
<imbrandon> and the ones in-porogresss
<imbrandon> as well
<imbrandon> progress*
<imbrandon> one sec i'll get you the exact link
<JoseeAntonioR> it's not there, so I assume there's no mailman charm
<imbrandon> yea if there isnt one there and you dont see it in the ~queue the other place to look is a bug against juju charms
<imbrandon> but i think those show in the incoming queue
<imbrandon> ( maybe not without a branch , not sure )
<imbrandon> but yea
<imbrandon> if its not there then nope and its fair game, if it is there but no one worked on it in 30 days
<imbrandon> then it is also fair game but still probably nice to ping the other party if its practical to do so
<imbrandon> but thats just "best" really
<imbrandon> SpamapS: i guess no word from RT ? unfortunate :(
<JoseeAntonioR> I'll see, I may write one if I have time
<imbrandon> even if you're doing it selfishly and then push what you have to your own branch on LP like ~lpid/charms/precise/mailman
<imbrandon> then someone else has a start on it when they go to finish etc
<JoseeAntonioR> great
<JoseeAntonioR> mental note taken
<imbrandon> so even partial charms are nice to keep on LP in your name too even if not ready for the store for whatever reason
<imbrandon> and if you push to a branch named like that me or anyone can install your version of the charm too like
<imbrandon> juju deploy cs:~lpid/precise/charmname
<imbrandon> :)
<JoseeAntonioR> that's great, JoseeAntonioR didn't know that
<imbrandon> yup yup. the docs are actually pretty good on the basics
<imbrandon> some things and adv stuff a little dated
<imbrandon> but for most projects i've been on, ours are fantastic compared
<imbrandon> even though we're actually trying to make them even better still
<imbrandon> :)
<imbrandon> http://www.jujucharms.com/docs
<imbrandon> anyone that is in charm-contributors can commit or do a merge proposal too so a lot of ppl can help, like a wiki but cleaner
<imbrandon> and easier to manage :)
 * imbrandon gets back to his charm for now, havent actually done any juju stuff at all today yet
<JoseeAntonioR> :P
<JoseeAntonioR> go for it
<imbrandon> SpamapS / m_3 ( and everyone ) I almost forgot to mention it since i've not been in #juju most of today, there is now a #juju-ports ( with pointers to here if all are afk ) for OSX and Fedora Peeps ( and others as more come ) to collab and get support and not clutter here if not needed
<imbrandon> its on the irc teams radar too as part of the official juju- namespace etc, and MarkDude put the word out over there and we already have a few idlers
<imbrandon> ( like 3 )
<imbrandon> just kinda FYI
#juju 2012-06-27
<hazmat> bcsaller, any progress resolving that?
<bcsaller> hazmat: I've been running into other issues on both EC2 and local trying to reproduce it
<hazmat> bcsaller, ?
<hazmat> what issues?
<bcsaller> things like the oneiric instance isn't getting a new enough python-zookeeper so the managed classes were not there
<bcsaller> on the local provider the UA wasn't coming up when the series was old
<bcsaller> the json module seemed fine however which was all thats involved in that test afaics
<hazmat> bcsaller, juju-origin: ppa should solve the first (txzk version)
<hazmat> bcsaller, pls file a bug on the latter
<hazmat> SpamapS, trunk open?
<hazmat> are we there yet ;-)
<imbrandon> hazmat: you got openstack to work at all yet ? i dont care if i got to use s3 or not, i just cant seem to get my env.y happy
<imbrandon> its always something else once i get one fixed
<imbrandon> heh
<hazmat> imbrandon, i haven't had a chance to get rolling with it. though i did just get my invite for the rackspace openstack beta squared away. i'm hoping to get back into it post my velocity talk tomorrow
<imbrandon> rocking, if you dont mind ping me if i seem to be around
<imbrandon> i'll try to play at the same timeish, that way we can bounce fixes/problems with less effort both ends
<imbrandon> :)
<imbrandon> least till successful bootstrap or i'll slit wrists, whatever seems like it will work hahahah , totally joking before someone calls the loony bin on me
<imbrandon> ok , gonna see if i cant migrate websitedevops.com to jekyll+bootstrap tonight ( havent started yet only been reading up on it ) hehe
<imbrandon> afk-ish
<imbrandon> hazmat: would you be cool if i changed the links on juju.ubuntu.com that point to the docs over to jujucharms, only long enough till we get the RT ticket fixed or get the ok to charm it up
<imbrandon> i know it might mean $ so thus asking
<imbrandon> before i did it, and i really just mean those two links in the wiki header nav
<imbrandon> temporarly
<imbrandon> hrm or actually we already point some pages to jujutools.github.com and multi of us herer on the core team have "owner" on that repo, i could just upload it to /docs there via git
<imbrandon> and no cost to anyone and be on an semi official , or at least fully contoled via official ppl
 * imbrandon preps that, not like it would take 30 sec to switch the link back
<cjohnston> I'm a little confused.. I have a charm that's been deployed by someone else.. I added the environments.yaml I was given to my .juju directory, but I'm still not able to run juju status, or anything else..  Any ideas what I may be missing?
<JoseeAntonioR> cjohnston: are you running it locally?
<cjohnston> JoseeAntonioR: no, ec2
<JoseeAntonioR> cjohnston: verify all the data in the environments.yaml file is correct, then juju bootstrap, juju deploy [charmname]
<cjohnston> its already deployed
<cjohnston> and now I'm trying to connect to i
<cjohnston> it
<JoseeAntonioR> juju status should give you a public IP address for it
<JoseeAntonioR> unless it hasn't finished downloading the master image and configuring yet
<cjohnston> its been up for a few months... I get "juju environment not found: is the environment bootstrapped?"
<cjohnston> I can ssh into one of the servers, I can view the site.. I just cant get to all the servers because I dont know all the ip's
<JoseeAntonioR> hmm, maybe juju destroy-environment and juju bootstrap again
<cjohnston> noo... thats bad juju
<JoseeAntonioR> :P
<JoseeAntonioR> if there are things you can't lose, then don't run it
<cjohnston> ya.. not running
<JoseeAntonioR> you should try "juju bootstrap", if the environment is not bootstrapped it will do it, but if it is already bootstrapped no changes will be applied
<JoseeAntonioR> you won't lose anything at the end
<cjohnston> should I have to run bootstrap in order to connect to something that was setup by someone else?
<JoseeAntonioR> no, you should run bootstrap to bootstrap the environment
<JoseeAntonioR> after the bootstrap, status
<JoseeAntonioR> may work
<cjohnston> I got asked about a fingerprint.. I guess thats a good thing
<cjohnston> hrm.. its only showing me one agent and no services tho
<JoseeAntonioR> cjohnston: then no charms were deployed
<JoseeAntonioR> you're starting from scratch
<cjohnston> then something is still broken.. I'm ssh'ed into one of the servers... I know it exists
<JoseeAntonioR> and no juju+charms in that server, then
<cjohnston> There should be 4 servers showing in status, a front end, a db, a memcache, and the server that the code lives on.. I'm only getting 1..
<JoseeAntonioR> that seems weird
<cjohnston> tell me about it
<JoseeAntonioR> cjohnston: I think it's only showing the server you're connected to
<cjohnston> I sshed into the server it shows, and I don't recognize any of the processes running on the server
<JoseeAntonioR> what the...
<sidnei> jml, ah, it does.
<m_3> dang... cjohnston left
<m_3> JoseeAntonioR: he's not using the right environment in his commands
<imbrandon> he is in some of the same chans i am
<JoseeAntonioR> m_3: that explains everything, I'll let him know
<m_3> i.e., 'juju status -esummit'
<m_3> yeah, his keys were injected as part of the 'authorized-keys:' for the environment
<JoseeAntonioR> still online, so catch him if you want to
<m_3> pm-ing... thanks
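The fix m_3 is pointing at, spelled out: with several environments defined in ~/.juju/environments.yaml, each juju command needs -e to target the right one. A sketch, with 'summit' standing in for the shared environment's name (the 'default:' key is from memory of the era's config format):

    juju status -e summit      # query that environment, not the default one
    juju ssh -e summit 0       # shell into its bootstrap node

    # or set a default at the top level of ~/.juju/environments.yaml:
    # default: summit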
<m_3> hazmat: otw... we leave in another hour or so
<m_3> I'll be working from utc+10 for a couple of weeks
<negronjl> m_3:  utc+10 !!! that's a change
<imbrandon> m_3: nice , you have to put up with more of me then , hahaha
<cjohnston> back
<m_3> negronjl: yup... actually sitting in SFO atm :)
<m_3> negronjl: taylor's got a conference in sydney
<negronjl> m_3: ahh ...  enjoy
<m_3> negronjl: yup... _crazylong_ flight first tho
<cjohnston> how long m_3
<imbrandon> thats wh........
<m_3> cjohnston: til the 9th
<cjohnston> flight.. how longs the flight
<lifeless> jcastro: still handing out tshirts like lollies?
<imbrandon> heya lifeless you ever get to look into that box running the wiki/docs by chance ?
<burnbrighter> Can nodes be shut down from within juju so as to power them down?  Is that juju terminate?  I've found my ssh key is only working on my first bootstrap node.  All others did not get the ssh key and apparently did not bootstrap correctly for juju.  What forces the subsequent nodes to re-bootstrap with juju and get the right ssh key?
<burnbrighter> Trying to avoid a complete re-do with destroy-environment
<lifeless> burnbrighter: you should be able to use your cloud providers API to get those additional nodes boot logs
<burnbrighter> this is maas
<lifeless> ah, not sure if it provides that
<lifeless> anyhow, Juju attempts to reprovision nodes that don't come up and register with the master node after a reasonable time
<burnbrighter> can I simply powerdown nodes without doing damage to the juju environment
<burnbrighter> ?
<lifeless> so you should be able to use MAAS to terminate the node and return it to the pool.
<lifeless> I don't know whether the machine agent will run on reboot; if I did I could answer that question :)
<burnbrighter> I've had bad experiences with removing the node from maas without removing from juju
<lifeless> imbrandon: the sphinx docs q? I have the ticket to look at now, I haven't had time to look into the complexity and see about poking someone
<burnbrighter> ok, I will try just shutting it down.  thanks
<imbrandon> cool, yea me and SpamapS both recreated and tested the proposed solution of enabling -backports and upgrading sphinx; it worked perfect, but yea i understand there might be other services on the box to check
 * imbrandon hopes not tho
<imbrandon> heh
<imbrandon> lifeless: let me know if there is anything i can do to make it easier on ya when you do get that far (if i'm around etc)
<lifeless> imbrandon: sure, thanks
<burnbrighter> Are there any suggestions on how to approach the ssh keyless maas nodes?  Maybe diagnosing a possible problem in the provider code?
<burnbrighter> I can't get access to the nodes, so I can't see the logs
<SpamapS> imbrandon: wtf? Another channel?
<SpamapS> imbrandon: #juju-dev would have been the place for them to go, IMO
<twobottux> aujuju: Security groups creation in EC2 <http://askubuntu.com/questions/156715/security-groups-creation-in-ec2>
<hazmat> imbrandon, sure re docs, though do you know who confirmed the rt for the docs update
<zirpu> other than looking at $HOME/.juju/environments.yaml, is there a way to list your working/available environments?
<SpamapS> hazmat: ok, at this point, 544 is a failure. I'm releasing 543 as 0.5.1, and trunk is open for dev
<SpamapS> jimbaker`: ^^ please fix natty/oneiric
<SpamapS> wtf
<SpamapS> why can't I call "galapagos" milestone "0.5.1" ?
<SpamapS> ah because version was galapagos
<SpamapS> hrm or not
<SpamapS> weird
<hazmat> SpamapS, cool, fsync test stuff merged to trunk
<james_w> where's the codebase that generates the review queue wiki page?
<negronjl> 'morning all
<hazmat> james_w, there's a copy in charm-tools used for the cli version
<james_w> err, why did I say wiki?
<hazmat> bzr branch lp:charm-tools && less charm-tools/scripts/review-queue
<james_w> thanks
<james_w> hazmat, the web version uses that code too?
<hazmat> james_w, pretty much, it just takes the output and dumps it into a mongodb, tosses a simple html template on it
<hazmat> it does the data collection as a cron job to avoid interacting with lp inline to the web request
<james_w> hazmat, so the web page is live backed by mongo?
<zirpu> sort of.
<hazmat> james_w, yes nginx w caching, pyramid, and mongodb
<james_w> hazmat, ok, thanks
<james_w> hazmat, OOI why not just write out static html in the cron job?
<hazmat> james_w, i've been asking myself that same question recently about the entire jujucharms.com site.. ie render static and push to cloud front
<hazmat> the only dynamic bit atm is the search
<james_w> yeah
<hazmat> which according to analytics isn't used that much
<james_w> jujucharms I can understand more, as there are lots of pages
<james_w> eventually static html becomes a poor tradeoff with that IME
<james_w> hazmat, so the review-queue is done like that because the rest of the site is?
<hazmat> james_w, mostly its just ease of changing out styles and templates, and the caching makes it fairly fast regardless, even live (sans cache) its fairly fast
<hazmat> james_w, yup
<james_w> ok, thanks
<james_w> for example, status.ubuntu.com is static html
<zirpu> if you can do it statically it's generally better.
<james_w> and takes >1hr to generate now, which means the original design decision isn't such a good idea any more
<zirpu> especially if it doesn't change often.
<james_w> given the likely access pattern
<hazmat> well, part of the future goals is to add a bit more dynamism
<hazmat> here's the bug list for the site.. fwiw. https://bugs.launchpad.net/charmworld
<hazmat> er.. feature requests
<james_w> thanks
<james_w> we're just wanting a review queue for a "distribution" on LP as well, but don't need a browser at this point
<james_w> so mongo+nginx+pyramid seems a bit overkill
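The static approach james_w is advocating amounts to rendering in the cron job instead of at request time and letting the web server serve flat files. A rough sketch of the crontab entry, where render-review-queue is a hypothetical script doing the same data collection as scripts/review-queue but emitting HTML:

    # rebuild every 15 minutes; write-then-rename keeps the swap atomic
    */15 * * * * render-review-queue > /srv/www/rq.html.new && mv /srv/www/rq.html.new /srv/www/review-queue.html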
<zirpu> hm. haproxy charm hooks/stop  stops mysql instead of haproxy.
<zirpu> well i guess i need to learn how to make patches and submit bugs now.
<SpamapS> zirpu: bzr branch lp:charms/haproxy , edit, bzr commit, bzr push lp:~your-lp-username/charms/precise/haproxy/fix-stop-hook, bzr lp-propose
<zirpu> cool. thanks. was digging through the docs.
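SpamapS's workflow, spelled out as a sketch (the branch name and LP username are placeholders):

    bzr branch lp:charms/haproxy
    cd haproxy
    # edit hooks/stop so it stops haproxy rather than mysql
    bzr commit -m 'stop hook: stop haproxy, not mysql'
    bzr push lp:~your-lp-username/charms/precise/haproxy/fix-stop-hook
    bzr lp-propose   # files the merge proposal against the charm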
<SpamapS> zirpu: note that the stop hook isn't actually called anywhere yet, but it should be at some point in the near future.
<zirpu> ok.  this was a good 1st for me.
<SpamapS> zirpu: ^5
<zirpu> i'll be going to the charm school session at velocity at 13:00 today (PDT).
<SpamapS> zirpu: *awesome*
<juju-velocitycon> hi guys :-)
<SpamapS> hah
<SpamapS> notice
<SpamapS> 10:21 ... join!#juju -> juju-velocitycon(~subway@67.111.115.235.ptr.us.xo.net)
<SpamapS> subway :)
<marcoceppi> nice
<hazmat> negronjl, what's the admin web ui for hadoop?
<hazmat> er. where
<negronjl> hazmat:  hadoop-master:50030
<negronjl> hazmat:  or .. http://<hadoop-master>:50030
<hazmat> negronjl, thanks
<negronjl> hazmat: np
<lifeless> jcastro: ola
<SpamapS> wth boto.. shipping your own certs and ignoring those in /etc/ssl/certs?!
<SpamapS> badform
<SpamapS> hazmat: something wicked in the fsync change me thinks
<SpamapS>  * Duration: 2 hours 50 minutes
<SpamapS>     test_resolved_stopped ...                                              [OK]
<SpamapS> hung there before the buildd killed it
 * SpamapS bootstraps using the proposed openstack provider
<SpamapS>     agent-state: running
<SpamapS> sahweeet
<zirpu> sweet!
<zirpu> anyone written a provider that uses vagrant? or virtualbox?
<SpamapS> zirpu: no, just the local/lxc provider
<SpamapS> zirpu: its certainly been wondered about a lot
<burnbrighter> Anyone know what this error is about:
<burnbrighter> ProviderInteractionError: Unexpected Error interacting with provider: 409 CONFLICT
<burnbrighter> 2012-06-27 20:19:07,255 provision:maas: juju.agents.provision INFO: Starting machine id:10 ...
<burnbrighter> 2012-06-27 20:19:07,471 provision:maas: juju.agents.provision ERROR: Cannot process machine 10
<burnbrighter> I see lots of emails on it, but nothing that definitively says why I'm receiving this error
<SpamapS> burnbrighter: you may have forgotten to set port in your environment.. or the constraints you have are not resolvable
<burnbrighter> Thanks - which configuration file handles this?
<burnbrighter> (this is for maas btw)
<SpamapS> burnbrighter: yeah that only happens with MaaS .. it depends on what the problem is
<SpamapS> burnbrighter: if its the port problem, its ~/.juju/environments.yaml
<burnbrighter> trying to run openstack and none of the nodes will start
<SpamapS> burnbrighter: if it is a constraints problem, you can resolve it with juju set-constraints
<burnbrighter> you are right, I have no port set
<burnbrighter> I am using the example from the documentation
<burnbrighter> ah, no wait
<SpamapS> burnbrighter: there's a fixed version of juju in precise-proposed I think
<SpamapS> burnbrighter: it assumes 80 now ;)
<burnbrighter> yeah, that's there:
<burnbrighter> environments:
<burnbrighter>   maas:
<burnbrighter>     type: maas
<burnbrighter>     maas-server: 'http://172.16.100.11:80/MAAS'
<burnbrighter>     maas-oauth: 'XXXXXXX'
<burnbrighter>     admin-secret: 'nothing'
<burnbrighter>     default-series: precise
<burnbrighter> does that look right? or should I be using the latest version of juju?
<SpamapS> burnbrighter: 0.5+bzr531 should be fine
<SpamapS> burnbrighter: dpkg -l juju to figure out the version
<burnbrighter> so the environments.yaml looks ok?
<burnbrighter> the nodes did their bootstrapping and cloud-inits fine. i could ssh to them and everything
<SpamapS> burnbrighter: right, so when do you get that 409 ?
<burnbrighter> just after I finished deploying all of the openstack stuff
<burnbrighter> and exposing it
<SpamapS> you may want to run 'juju -v whatever-causes-it' and possibly read /var/log/juju/provisioning-agent.log on machine 0 to figure out why
<SpamapS> burnbrighter: my guess is not all of your services were able to deploy because of constraint problems
<SpamapS> but I am not a MaaS expert :p
 * SpamapS heads to lunch
<SpamapS> burnbrighter: try #maas too
<burnbrighter> thnx
<juju-velocity-de> hi guys from velocity live
<juju-velocity-de> -)
 * zirpu .
<zirpu> svg status graph done via graphviz?
<zirpu> cs == charm store?
<juju-velocity-de> zirpu:yes and yes
<burnbrighter> Problem above identified - maas/juju REQUIRES 10 reachable nodes, I only had 6
<SpamapS> burnbrighter: there is a hack you can do to consolidate a few onto node 0
<SpamapS> burnbrighter: basically you can force mysql+rabbitmq+nova-cloud-controller+something-else onto node 0
<SpamapS> imbrandon: hey did you get the openstack provider to work w/ hpcloud?
<SpamapS> signed up today.. want to see how it goes
<SpamapS> yes
<SpamapS> hah doh
<SpamapS> ssh fail
 * SpamapS goes on a mission to fix the mediawiki charm
<imbrandon> SpamapS: no
<imbrandon> SpamapS: you get it working on hp ?
<SpamapS> imbrandon: have not tried yet
<SpamapS> imbrandon: just using the internal canonistack so far
<imbrandon> hrm, sneak some hp time in; i'd love to have someone else fail too so i can get mine going
<imbrandon> but beating my head on it alone , i just gave up
<lifeless> jcastro: around ?
<jcastro> hi
<lifeless> jcastro: hey
<lifeless> jcastro: so, are you still handing out tshirts for charms ?
<jcastro> yeah
<lifeless> cool, I can has?
<jcastro> Yeah, mail me your shipping info
<lifeless> \o/
<jcastro> You get a handy dandy ubuntu mug as well
<lifeless> https://code.launchpad.net/~lifeless/charms/precise/opentsdb/trunk btw
<jcastro> k, just be responsive when a review comes in
<imbrandon> those mugs are nice, i use mine daily :)
<jcastro> the shirt is for when your charm is in the store itself
<jcastro> but it's ok, I have this amazing feeling that SpamapS will drive that queue to zero today
 * jcastro snickers
<imbrandon> i was gonna do some reviews tonight for doc's merges so i may get some charms done too
<imbrandon> jcastro: hows velocity
<jcastro> pretty high end
<jcastro> watching how they built Pinterest
<imbrandon> jcastro: i got your email too, but for what i do i would have to move to ireland, no thanks
<imbrandon> heh
<sidnei> duct tape and paper clips?
<imbrandon> yea pinterest and instagram are both kick ass on the infra
 * sidnei j/k 
<SpamapS> jcastro: tomorrow is my day isn't it?
<SpamapS> jcastro: damn, I was planning on doing it tomorrow, silly me
<zirpu> i've decided i need to upgrade from my old lenovo t61 laptop to something new and better.  anyone have suggestions?  or laptops that you're using and like?
<zirpu> i've only got a measly 4gb memory. :-(
 * zirpu really wants 1TB just to feel snug.
<thomi> zirpu: The system76 laptops are nice, especially the new ones, you no longer need to carry around a massive power brick
<zirpu> i have one on my desk at work. i stopped carrying it around. it's just a friking desktop. huge.  it has 8g memory. :-)
<SpamapS> hmm.. I wonder if charmers should have an official backports PPA
#juju 2012-06-28
<imbrandon> SpamapS:  that makes alot of sense
<imbrandon> zirpu: i've always used whatever the current 15inch MBP is and been very happy the last 4 or so years
<zirpu> hm. i'm not a mac fan. but i have had mbp's at various companies.
<zirpu> i still hate case-insensitive filesystems with the flaming irrational passion of a sloth.
<SpamapS> weird, why doesn't hp cloud recommend using python-novaclient ? :-P
 * SpamapS reluctantly installs fog
<zirpu> MiGhTilY i say!
<imbrandon> SpamapS: you can
<imbrandon> SpamapS: i use it
<SpamapS> imbrandon: it wants a username
<imbrandon> SpamapS: its in the docs to setup either, but they are ruby fans
<imbrandon> SpamapS: yea one sec, its like ur email:tenant-id
<imbrandon> or something
<imbrandon> one sec
<SpamapS> export OS_TENANT_NAME="clint.byrum@canonical.com-tenant1"
<SpamapS> that?
<burnbrighter> SpamapS: it appeared the admin node was checking for all those nodes.  Is there not a hard requirement for at least 10 nodes?  And can you point me to reference for that hack please?
<imbrandon> no
<SpamapS> burnbrighter: there's no reference to that hack
<SpamapS> burnbrighter: because it is.. the suck ;)
<SpamapS> burnbrighter: but for testing w/ < 10 boxes...
<burnbrighter> gotcha
<SpamapS> burnbrighter: basically you edit ~/.juju/environments.yaml and add 'placement: local' to the environment ..
<SpamapS> burnbrighter: but, this sends *EVERYTHING* to node 0
<burnbrighter> interesting
<SpamapS> burnbrighter: and it will install things in parallel, which will break things (apt-get can't run in parallel for instance)
<SpamapS> burnbrighter: so you kind of have to set it, install anything you want on node 0 one at a time, and then un-set it.
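The hack SpamapS describes, sketched against the maas environments.yaml shown earlier (same placeholder values):

    environments:
      maas:
        type: maas
        maas-server: 'http://172.16.100.11:80/MAAS'
        maas-oauth: 'XXXXXXX'
        admin-secret: 'nothing'
        default-series: precise
        placement: local   # send every new unit to machine 0

Deploy one service at a time while placement is set, then remove the line to go back to the one-node-per-unit behaviour.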
<SpamapS> anyway, I'm late.. gotta run
<burnbrighter> but, how would you do, say, just 5 nodes for example?
<burnbrighter> anyways, thnx
<SpamapS> burnbrighter: you can squeeze almost everything onto that one node
<burnbrighter> ok, thanks
<burnbrighter> I'm running this all on esxi / vsphere server, so adding nodes and cloning them wasn't a problem anyways :)
<jcharette> anyone around to support MAAS with juju?
<imbrandon> m_3: ping
<imbrandon> m_3: i was just thinking ( not looked at the code yet so you might already do this ) but the juju charm
<imbrandon> would it maybe be a good idea to have it install by "hand" into a dedicated python virtualenv
<imbrandon> so it cant mess up the "host" juju or mess with the versions etc if the
<imbrandon> charms are frozen etc
<imbrandon> e.g. a separation of the juju charm installation and the juju on the system already, if its a subordinate of like an existing webserver/appserver services grp
<imbrandon> just a thought ... i'll peek into the code and see if u are already doing that or not
<zirpu> you could create a virtual env from a pip freeze and a cache dir easily w/o network access being needed.
<zirpu> i actually like that idea.
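A sketch of what zirpu describes, assuming a requirements file and a package cache were prepared while the network was still available (pip's --download option is the era's way to fill such a cache):

    # with network: record versions and cache the source packages
    pip freeze > requirements.txt
    pip install --download ~/pip-cache -r requirements.txt

    # later, offline: build an isolated env from the cache only
    virtualenv /srv/juju-venv
    /srv/juju-venv/bin/pip install --no-index --find-links ~/pip-cache -r requirements.txt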
<m_3> imbrandon: yeah that's planned
<imbrandon> rockin , just crossed my mind ( btw i do have it as a sub on my webservers so i can call "juju" from hooks :)
<m_3> imbrandon: maybe even use that to provide the tmux session you leave up and drive the env with
<imbrandon> shhhh
<imbrandon> yea good idea
<m_3> imbrandon: that's sort of turning into a best practice for groups to manage an env.... two environments... <myenv> and <myenv-control>
<imbrandon> but please think of us screen users too, tmux normally gets apt-get purged from me :)
<m_3> easier than talking each member of the team through setup on frozen client branch
<imbrandon> yea good call
<m_3> imbrandon: ha!
<m_3> yeah, was a screen die-hard for years... but I've totally drunk the kool-aid
<imbrandon> man i cant get it to work like screen
<m_3> (helps to have the same bindings)
<m_3> really?
<imbrandon> if i could i probably would, but there is a few things that i cant make work
<zirpu> i just took my bindings with me.
<imbrandon> not the bindings
<imbrandon> more about the window setup
<zirpu> i'm all about full screen. :-)
<imbrandon> and making the shared screens work "right"
<m_3> multiuser acts pretty differently, but imo tmux works better for that
<imbrandon> well mine are full but i keep 5 or so windows open
<zirpu> i haven't used shared screens (yet).
<m_3> i.e., all users looking at same screen as I switch around
<imbrandon> m_3: yea thats what i cant get to work
<imbrandon> on screen works perfect
<imbrandon> tmux wont do it
<m_3> there's a way to get it to behave like screen tho
<zirpu> alias tmux=screen
<imbrandon> no no i dont think you get it
<zirpu> so it's said. :-)
<m_3> where each user can be looking at diff screens
<imbrandon> my screen does what you say is easy in tmux
<imbrandon> but i cant even get tmux to do it
<imbrandon> i WANT that in tmux
<zirpu> so client screens should switch when the master screen switches?
<imbrandon> no
<m_3> zirpu: that's tmux default, but opposite of screen default
<imbrandon> i flip around to whatever i want
<imbrandon> and so can anyoone else
<zirpu> ah
<imbrandon> or we can all goto screen 0
<imbrandon> and share
<m_3> imbrandon: right... I'd have to dig for that... it's possible
<imbrandon> yea if you can help me figure that out i'd be a happy tmux user
<zirpu> hm. so attach -r   puts client in read only mode.
<imbrandon> like i have byobu set 4 windows on login, and if you and me both ssh in we can see each other type if on the same window, but i can flip to others and so can you
<zirpu> but it can't switch screens. weird.
<imbrandon> mysetup^^
<zirpu> read-only mode is kind of useless.
<imbrandon> nah
<imbrandon> great for classroom
<zirpu> well, ok. there's that.
<imbrandon> or logs reads
<imbrandon> for a dedicated log window
 * zirpu does not know how all these kids live in their gui worlds. :-)
<m_3> yeah, I use one-way shared screens more than I ever thought I would
<imbrandon> but yea m_3, everyone keeps saying that's WHY they switched to tmux, cuz that was hard in screen, but to me that's default how screen works, not hard, and tmux i cant make do it
<zirpu> so readonly switches screens w/ the master screen. i see what you meant now.
<imbrandon> and i like multi bars at the bottom of screens in byobu but i think i can make tmux do that too
<m_3> imbrandon: http://paste.ubuntu.com/1063589/ is my conf... I'll get back to you on users watching separate windows in one session
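For what it's worth, the screen-like behaviour imbrandon wants (shared windows, but each attached user free to view a different one) maps to tmux grouped sessions; a sketch:

    # first user creates the real session
    tmux new-session -s main

    # each additional user joins a new session in the same group:
    # windows are shared, but the current window is per-session
    tmux new-session -t main -s brandon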
<imbrandon> m_3: rockin
<imbrandon> ty
<m_3> totally does the bars too
<imbrandon> btw, whats the format for .byobu/windows.tmux ? i would need to convert my .byobu/windows file ( screen based )
<imbrandon> for the default windows on startup
<m_3> way different scripting mechanism I think... there're a couple of different schools on that... I use tmuxinator
<imbrandon> m_3: http://paste.ubuntu.com/1063593/  thats my .byobu/windows file on my OS X box
<m_3> 'mux blog' or 'mux charmtester'.... can start with 'mux copy juju mynewcharm; mux mynewcharm'
<m_3> tmuxinator is a gem so I assume it's good on osx
<imbrandon> likely , i use a TON of ruby on OSX
<imbrandon> as much as most ubuntu users use python :)
<imbrandon> heh
<imbrandon> trying to bring some of it with me to juju/ubuntu , still havent got a good juju-hook capfile balance yet
<imbrandon> but i'll get there
<m_3> imbrandon: equiv is http://paste.ubuntu.com/1063597/
<imbrandon> i keep wanting to get at cloud-init tho, juju should really provide a way for us to do that, i dont care that it got shot down on the ML
<m_3> that'd be 'mux play'
<imbrandon> its that or i just use hax and write a wrapper in jitsu
<imbrandon> heh
 * m_3 off in tweakage-land :)
<imbrandon> hahaha yea i tweak my setups CONSTANTLY
<imbrandon> like i'm never done
<imbrandon> ever
<m_3> jitsu is all things unloved and evil... but working :)
<m_3> so anything goes
<imbrandon> yea , perfect home for it
<imbrandon> if i have to resort to it
<imbrandon> i'd rather not tho cuz it will likely take a patch to juju to accomplish cleanly
<imbrandon> and that makes me mad to have to resort to
<m_3> ah, right
<imbrandon> but i certainly will to show the utility of such
<imbrandon> :)
<m_3> crap, sorry forgot to reset my away msgs... hope you're not getting too much noise back
<imbrandon> nope
<imbrandon> well i dunno actually
<m_3> cool
<imbrandon> i have msgs like that off
<imbrandon> hehe
<imbrandon> joins parts quits aways etc etc
<imbrandon> all prety much off
<m_3> right
<imbrandon> too many "dead" channels i idle in that would fill my disk with logs
<imbrandon> if i dident :)
<imbrandon> m_3: see my newest widget ? http://bholtsclaw.github.com/showdown/
<imbrandon> think i'm gonna make a charm out of the backend stuff i need to host those , so i can have them all central and easy to put all on one page etc etc
<m_3> cool
<imbrandon> cuz its like a mix of nodejs and mustache templates and php ( on my.phpcloud.com ) git and github pages
<m_3> you should add other keys in there so we can have multiple peeps manage it
<imbrandon> heh in otherwords i got em all spread out
<m_3> and freeze the juju branch
<imbrandon> hmmm yea
<imbrandon> good idea
<imbrandon> great idea actually
<m_3> etc etc all the production best practices we recommend.... (nother test case)
 * imbrandon puts that on the very short-term todo's
<imbrandon> yea , seriously , good call
<m_3> perhaps even use lp:charms/juju in the "control" environment... where it gets left open with a tmux session any of the team can attach/drive
<m_3> (the latter's still a maybe... not nec recommended practice yet)
<m_3> just really needed it for scale testing
<imbrandon> ok i need a little food and mt dew then to get started on that, i think that will be a good project tonight ... ( i didn't wake up till 6pm local so i'll be up all night heh )
<m_3> and sort of need it after the fact for plumber's summit
<imbrandon> ahh right
<m_3> cool later man
<imbrandon> yea i'll be back in ~20ish min, just need a lil snack, but i'll be round all night
<imbrandon> and gonna work on just that, got me kinda fired up about the idea
<imbrandon> ;)
<m_3> cool.. I'll be in and out
<imbrandon> kk
<imbrandon> yea i like the idea of a tmux session with charms/juju installed on the bootstrap node , then the "team" can just ssh into the bootstrap and drive from a juju window there
<imbrandon> and not need anything but their ssh key on that bootstrap
<imbrandon> no local juju or env.y etc
<imbrandon> ...
<m_3> imbrandon: I'm thinking separate envs...
<imbrandon> yea ok fooood then that is gonna become a reality with the widgets and button as a testcase
<m_3> summit and summit-control
<imbrandon> ahhh
<imbrandon> yea
<burnbrighter> Anyone know about troubleshooting why the mysql charm instance didn't come up in juju/maas?  I can reach the node via regular ssh, but agent shows the node is down.
<imbrandon> yea i like that too
<m_3> both would have 'authorized-keys:' in the env with everybody's keys
<imbrandon> right
<burnbrighter> this is for openstack-dashboard btw
<imbrandon> burnbrighter: hrm nothing in juju debug-log?
<m_3> burnbrighter: not really no... I've seen machines with services running great but the juju agents report it down
<burnbrighter> nothing extraordinary I can see
<m_3> rebooting (those ec2 instances) worked to reset the agents
<burnbrighter> this is maas
<m_3> right
<m_3> I've only seen that prob in ec2.... and pretty rare
<imbrandon> restart the agent ? ( or reboot the node )
<imbrandon> bbiab
<burnbrighter> how does one restart the agent (me=noob)
<m_3> yeah, first try taking a look in the provisioning agent's log to see that the service was started up correctly
<burnbrighter> I did restart
<burnbrighter> machines:
<burnbrighter>   1:
<burnbrighter>     agent-state: not-started
<burnbrighter>     dns-name: maas07
<burnbrighter>     instance-id: /MAAS/api/1.0/nodes/node-4ddba634-bfdb-11e1-8909-005056a44c88/
<burnbrighter>     instance-state: unknown
<burnbrighter> services:
<burnbrighter>   mysql:
<burnbrighter>     charm: cs:precise/mysql-2
<burnbrighter>     relations:
<burnbrighter>       shared-db:
<burnbrighter>       - glance
<burnbrighter>       - keystone
<burnbrighter>       - nova-cloud-controller
<burnbrighter>       - nova-compute
<burnbrighter>       - nova-volume
<burnbrighter>     units:
<burnbrighter>       mysql/1:
<burnbrighter>         agent-state: pending
<burnbrighter>         machine: 1
<burnbrighter>         public-address: null
<m_3> burnbrighter: did it ever say 'started'?
<burnbrighter> no
<burnbrighter> looking in provision log now..
<m_3> hmmmm... yeah, dig for the maas version of the provisioning-agent log
<burnbrighter> 2012-06-27 23:01:35,045:901(0x7f45fbcc2700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7f45f8fce6b0 sessionId=0 sessionPasswd=<null> context=0x3352b20 flags=0
<m_3> and then the machine-agent log on that box
<burnbrighter> 2012-06-27 23:01:35,045:901(0x7f45f74ca700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
<burnbrighter> 2012-06-27 23:01:38,381:901(0x7f45f74ca700):ZOO_INFO@check_events@1585: initiated connection to server [127.0.0.1:2181]
<burnbrighter> 2012-06-27 23:01:38,403:901(0x7f45f74ca700):ZOO_INFO@check_events@1632: session establishment complete on server [127.0.0.1:2181], sessionId=0x1383109683e0001, negotiated timeout=10000
<burnbrighter> 2012-06-27 23:11:55,399: juju.agents.provision@INFO: Stopping provisioning agent
<m_3> then unit-agent there too
<m_3> refused...hmmm
<burnbrighter> 2012-06-27 23:13:31,767:886(0x7fa12a2f8700):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=localhost:2181 sessionTimeout=10000 watcher=0x7fa1271b56b0 sessionId=0 sessionPasswd=<null> context=0x1db9090 flags=0
<burnbrighter> 2012-06-27 23:13:31,768:886(0x7fa125b00700):ZOO_ERROR@handle_socket_error_msg@1579: Socket [127.0.0.1:2181] zk retcode=-4, errno=111(Connection refused): server refused to accept the client
<burnbrighter> 2012-06-27 23:13:35,105:886(0x7fa125b00700):ZOO_INFO@check_events@1585: initiated connection to server [127.0.0.1:2181]
<burnbrighter> 2012-06-27 23:13:35,133:886(0x7fa125b00700):ZOO_INFO@check_events@1632: session establishment complete on server [127.0.0.1:2181], sessionId=0x138311458a10000, negotiated timeout=10000
<burnbrighter> 2012-06-27 23:13:35,216: juju.agents.machine@INFO: Machine agent started id:0
<m_3> zk's down maybe?
<burnbrighter> how can I check please?
<m_3> ps awux | grep zoo
<burnbrighter> on that node?
<m_3> netstat -lnp | less
<m_3> on the maas server
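A quicker check than ps/netstat: zookeeper answers the built-in four-letter 'ruok' command on its client port when it is serving:

    echo ruok | nc 127.0.0.1 2181   # a healthy server replies 'imok'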
<burnbrighter> ubuntu@maas07:/var/log/juju$ ps awux | grep zoo
<burnbrighter> 107        925  0.2  0.8 1835488 33088 ?       Ssl  23:13   0:01 /usr/bin/java -cp /etc/zookeeper/conf:/usr/share/java/jline.jar:/usr/share/java/log4j-1.2.jar:/usr/share/java/xercesImpl.jar:/usr/share/java/xmlParserAPIs.jar:/usr/share/java/zookeeper.jar -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=false -Dzookeeper.log.dir=/var/log/zookeeper -Dzookeeper.root.logger=INFO,ROLLINGFILE org.apache.zookee
<burnbrighter> erver.quorum.QuorumPeerMain /etc/zookeeper/conf/zoo.cfg
<burnbrighter> ubuntu    1205  0.0  0.0   9376   932 pts/0    S+   23:22   0:00 grep --color=auto zoo
<burnbrighter> it must be up, right? because other agents are running fine
<m_3> burnbrighter: this is where I'm out of my depth wrt maas.... sorry, I'm not sure exactly
<m_3> I'll assume that the maas server is set up as the bootstrap node and runs the provistioning agent and zookeeper
<burnbrighter> m_3: thanks anyways.  this is tricky
<m_3> (note the provisioning agent is attempting to connect to localhost:zk)
<burnbrighter> eh, localhost on its own zookeeper?
<m_3> and getting refused for some reason
<burnbrighter> um yeah, I see that - connection to localhost 2181
<m_3> right... in std juju, the provisioning agent and zk live together on the bootstrap node
<m_3> provisioning agent should be able to subscribe to zk changes and spin up machines accordingly afaik
<burnbrighter> I think in maas, and I may be wrong - the bootstrap node becomes the first node bootstrapped
<m_3> ah, yeah, that's the question... does the maas server do this or does it actually still use some _other_ bootstrap node per environment
<burnbrighter> I know it pulls the ephemeral image from the primary maas node, but where does it get that info?
<burnbrighter> from the first bootstrapped node, or the primary maas node?
<burnbrighter> ugh
<m_3> um... dunno
<m_3> I don't know if a maas env is treated as a single juju environment
<m_3> lemme dig a bit
<burnbrighter> are you on the canonical team?
<m_3> this is known stuff though... might try #ubuntu-server or #juju again tomorrow
<m_3> yeah, I'm on the server team, but haven't worked with maas yet
<burnbrighter> bigjools may know too - but I think he draws the line when it gets deeper in to the juju side
<m_3> I think there's a #maas here too... looking
<burnbrighter> yeah - that's my primary channel
<m_3> ah, see you're already there :)
<burnbrighter> but now that I've got the maas side stable, I've been focusing on trying to get the openstack stuff working
<m_3> burnbrighter: yeah... that's probably our most complex stack to date on juju
<burnbrighter> and the only one I REALLY want working :)
<burnbrighter> lol
<m_3> the 'not-started' sounds like a lower-level problem than the charm though
<burnbrighter> this has been a 3 week endeavor for me
<m_3> wow
<burnbrighter> both cheez0r and I have both been trying to get it working
<m_3> adam_g: you up still?
<m_3> burnbrighter: they've got this going regularly in a hw lab
<burnbrighter> but I'm wondering if there was special tweaking for the mysql stuff
<burnbrighter> or if there are open issues
<m_3> dunno
<burnbrighter> yeah
<burnbrighter> admittedly, I'm not up on the juju stuff yet enough to be dangerous with troubleshooting
<m_3> try to bring it up to a 'started' state before adding any relations
<burnbrighter> how do you bring up individual nodes in started state?
<m_3> I know I've heard of ordering deps in openstack charm relations
 * imbrandon returns
<m_3> you do the deploys
<m_3> no add-relation calls until you see them in a 'started' state
<imbrandon> m_3: i thought about that at one time, me and marcoceppi brefly talked about it at UDS etc
<imbrandon> but i decided that hiabu was a better way
<imbrandon> ( pending rename )
<m_3> (there's a jitsu tool to script all of this... 'apt-get install juju-jitsu; jitsu watch -h'... eventually... but for now just do it in your scripts)
<burnbrighter> ok, so destroy the individual services, but don't add the relation calls?
<burnbrighter> eh, re-add after destroying then wait till they individually come up?
<burnbrighter> then add relations?
<m_3> burnbrighter: yeah, so at this point mysql hasn't been "started"
<m_3> so you should be able to remove-relation's
<m_3> then destroy that service
<m_3> then terminate that machine
<m_3> then deploy the service
<m_3> now, wait until it's 'started'
<burnbrighter> so first try to remove relations, destroy mysql service - terminate machine is new for me
<m_3> then go in and 'add-relation'
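As a script, m_3's sequence looks roughly like this; the grep loop is a crude stand-in for the 'jitsu watch' tool mentioned above, and the relation and machine ids match this particular deployment:

    juju remove-relation glance mysql      # ...and each other related service
    juju destroy-service mysql
    juju terminate-machine 1               # the machine mysql was on
    juju deploy mysql

    # crude wait: greps the whole status output, so only trustworthy
    # when mysql is the unit you're watching come up
    until juju status | grep -q 'agent-state: started'; do sleep 30; done
    juju add-relation glance mysql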
<burnbrighter> I'll try that - remove all mysql related relations I take it?
<m_3> burnbrighter: so I would totally recommend restarting from scratch... but you don't _really_ have to do that since the service has never been in a started state
 * imbrandon just thought about something from the backlog ... is tmux in ruby ? if so i'd drop screen a moment and deal with "issue" just for that alone
<m_3> so the relation hooks on either side have never really fired
<m_3> imbrandon: dunno
<burnbrighter> does destroying the service basically start things from scratch too?
<m_3> burnbrighter: destroying the service _and_ terminating that machine will start that service from scratch
<burnbrighter> ie. without the rm -rf that is "destroy-environment"
<burnbrighter> ok
<m_3> burnbrighter: but won't require you to bring your whole stack down
<m_3> right
<burnbrighter> cool, I can try that
<burnbrighter> thank you
<m_3> I'd recommend that just for sanity's sake as the 'reboot' option... but that's more of a pain with some providers than others
<m_3> and if I thought any relation hooks got fired, I'd say... destroy-environment
<m_3> but it doesn't look like that here
<m_3> you can tell by sshing to that machine and running 'mysql -uroot -e "show databases"' or something like that
<m_3> should be empty
<burnbrighter>  yeah, I didn't even see mysql coming up on that node, so that will be a good test
<m_3> yup
<m_3> not sure exactly how to terminate-machine in maas... that's equiv to a re-install
<burnbrighter> yeah - I learned this the hard way - you have to terminate the juju stack before trying to do anything on the maas side
<m_3> 'juju terminate-machine <machine_id>' on most providers totally kills the instance... so you're starting with a fresh ubuntu server next time
<burnbrighter> your maas nodes end up pulling the ephemeral image, but the maas nodes are based on the precise image, I believe
 * m_3 _really_ needs to learn maas... gotta bump that up higher in the queue
<burnbrighter> its so very cool.  Its just raw
<burnbrighter> a little baking and it will make a really nice pie
<m_3> cool
<burnbrighter> ok, let me try your suggestions
<m_3> ok, I'm on UTC+10 this week so I'll be in and out for several hours still
<burnbrighter> still not starting...
<burnbrighter> http://pastebin.ubuntu.com/1063654/
<m_3> burnbrighter: wow.... hmmmm
<burnbrighter> any of that look interesting?
<m_3> not really... that's during an expose
<burnbrighter> I'm not exposing, just deploying
<m_3> expose is a separate step... much like 'add-relation'
<burnbrighter> right
<burnbrighter> I didn't expose, I just added the service
<burnbrighter> just ran "juju deploy mysql
<burnbrighter> "
<m_3> it's marked as exposed somehow... but it's not surprising that that's barfing if the machine's not coming up
<m_3> ok, 'juju deploy mysql' is perfect
<burnbrighter> here is exactly what I did: 1. remove relations:
<m_3> so now what did you do to throw that machine away?
<burnbrighter> juju remove-relation keystone mysql
<burnbrighter> juju remove-relation nova-cloud-controller mysql
<burnbrighter> juju remove-relation nova-volume mysql
<burnbrighter> juju remove-relation nova-compute mysql
<burnbrighter> juju remove-relation glance mysql
<burnbrighter> then I removed the service ie. juju destroy-service mysql
<m_3> then do a status to see the service sitting all alone (no relations)
<burnbrighter> did that, yes
<m_3> then destroy-service... sounds good
<m_3> now for terminate-machine
<burnbrighter> then I terminated the node it was running on
<burnbrighter> then I went straight in to deploy -
<m_3> does that just put the existing server into WOL sleep?
<m_3> or does it _rebuild_ it?
<burnbrighter> well, they are VMs ;)
<m_3> oops, sorry not following
<m_3> ok, so you're deploying maas on a bunch of vms
<burnbrighter> yes
<burnbrighter> which to this point and not without a lot of headaches was working :)
<m_3> ok, so when that machine is terminated, does it destroy that instance _and_ image?
<burnbrighter> the headaches were all procedural though
<m_3> or does it put it to sleep or just stop it?
<burnbrighter> it appears to remove the instance
<burnbrighter> from juju
<burnbrighter> let me check if the machine is actually down
<burnbrighter> it is not
<m_3> ah, but it really needs to kill the instance and image and reprovision a new machine
<burnbrighter> so my guess is if I "add" it back it will be back in the line up
<burnbrighter> but why do I need to do that?
<m_3> burnbrighter: so I think that whatever is causing this to show up in a not-started state is written into the image
<m_3> it wouldn't be in a pristine copy of the image
<m_3> but it's in that actual one
<burnbrighter> so, check this - when I asked for mysql to be redeployed - it went to a whole different node - my node 12 and got the same results
<m_3> (total guess... but that's what I'm trying to determine by doing a 'terminate-machine')
<m_3> oh wow
<m_3> ok, so scratch that....
<burnbrighter>   mysql:
<burnbrighter>     charm: cs:precise/mysql-2
<burnbrighter>     relations: {}
<burnbrighter>     units:
<burnbrighter>       mysql/2:
<burnbrighter>         agent-state: pending
<burnbrighter>         machine: 12
<burnbrighter>         public-address: null
<burnbrighter>   12:
<burnbrighter>     agent-state: not-started
<burnbrighter>     dns-name: maas07
<burnbrighter>     instance-id: /MAAS/api/1.0/nodes/node-4ddba634-bfdb-11e1-8909-005056a44c88/
<burnbrighter>     instance-state: unknown
<m_3> burnbrighter: is the uuid different?
<m_3> dunno how it reuses instances
<m_3> oh actually it's the same
<m_3> maybe defining the same instance as a new machine id
<m_3> or maybe it's a new instance from the same image
<m_3> sorry for the typing... high latency hotel connection
<burnbrighter> I know how that goes :)
<burnbrighter> I just restarted the old node that I previously terminated and its re-bootstrapping and adding itself back to juju
<m_3> yeah, looks like it didn't kill that instance... just re-used it
<burnbrighter> seems like it migrated it to another node?
<m_3> so can you add new nodes into maas without terminating the whole juju environment?
<burnbrighter> sure
<m_3> they had the same instance-id though
<m_3> (from earlier in the channel)
<m_3> just different machine_id
<burnbrighter> what does that mean though - the instance ID is the same?
<m_3> which seems strange, but I don't know
<m_3> perhaps maas is trying to intelligently re-use instances
<m_3> but if there's a catastrophic config error in one... there's gotta be a way to tell it to wipe it and start over from a fresh install
<burnbrighter> it appears to me, juju runs a layer above maas
<burnbrighter> so juju AFAICT has little knowledge of maas
<m_3> yes, but 'juju terminate-machine 12' in all other providers gives you a completely fresh ubuntu server instance
<m_3> but there's _definitely_ problems re-using instances
<m_3> i.e., if the charm has an error... does something stupid with config
<m_3> there're plenty of situations where that's unrecoverable on _that_ instance
<burnbrighter> ah freakin weird
<m_3> and a need to re-provision that from scratch
<burnbrighter> mysql just came up
<m_3> ha
<m_3> so don't relate anything yet
<m_3> let's figure out what's up
<burnbrighter>   mysql:
<burnbrighter>     charm: cs:precise/mysql-2
<burnbrighter>     relations: {}
<burnbrighter>     units:
<burnbrighter>       mysql/2:
<burnbrighter>         agent-state: started
<burnbrighter>         machine: 12
<burnbrighter>         public-address: maas07.localdomain
<m_3> strange... take a look in the various logs
<burnbrighter> looking
<m_3> on the instance, there should be something like /var/lib/juju/units/mysql-xxx/charm.log
<m_3> might be in /var/log/juju... different on different providers (#$$%#@!)
<burnbrighter> only a machine-log
<burnbrighter> http://pastebin.ubuntu.com/1063669/
<m_3> there should be a charm log there _somewhere)
<burnbrighter> but here is the debug log from the main node, more interesting
<burnbrighter> http://pastebin.ubuntu.com/1063670/
<burnbrighter> I think that's the charm log
<burnbrighter> right?
<m_3> wow
<m_3> there're a couple of strange things going on here
<burnbrighter> what are you seeing?
<m_3> 1.) debconf is trying to use a Dialog frontend... should be noninteractive
<m_3> 2.) mysql revision is just whack
<m_3> I'm testing a charmstore deploy here
<m_3> while I do that, can you please grab a local copy of the charm?  'charm get mysql' or 'bzr branch lp:charms/mysql'
<burnbrighter> ohhhh wait - I saw somewhere something about needing to declare a terminal!
<burnbrighter> like Xvfb or something like that...
<m_3> juju should be setting DEBIAN_FRONTEND=noninteractive
<burnbrighter> no, I was mistaken - that was in the openstack.cfg
<burnbrighter> nice find -
<burnbrighter> does that need to be fixed in the build?
<m_3> it's in juju bzr544 /usr/share/pyshared/juju/hooks/invoker.py line 219
<burnbrighter> you are speaking greek to me :)
<burnbrighter> should I destroy msql-service?
<burnbrighter> uh destroy-service mysql?
<m_3> sorry... just checking out the versioning problem
<m_3> ok, so you said the openstack notes said something about needing a specific terminal?
<burnbrighter> no worries.  I'm happy to test out whatever you have...
<burnbrighter> no, forget that
<m_3> ok
<burnbrighter> this is what I'm looking at
<burnbrighter> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<burnbrighter> look under Deploying Ubuntu Cloud Infrastructure with Juju
<burnbrighter> but that's unrelated to what we are doing
<burnbrighter> that's specific for the nova-volume charm
<burnbrighter> oh but looky there
<burnbrighter> right under that
<m_3> ok, so the debconf errors are a red herring... np
<burnbrighter> step 2
<m_3> http://paste.ubuntu.com/1063695/ is a successful ec2 mysql spinup... we can compare
<m_3> your log looks ok as far as I can see
<m_3> so maybe it was just timing that was giving problems
<m_3> you can either kill it and see if it spins up again with relations
<burnbrighter> that could be... but did you look at that step 2?
<m_3> or just add the relations from here to see if they come up
<burnbrighter> they deploy from the local instance
<burnbrighter> so, how do I then tell afterward if the relations were added correctly?
<m_3> burnbrighter: no, that one was deployed from the charm store
<burnbrighter> mine was you mean, right?
<m_3> burnbrighter: any failed relations will have an 'error' state
<burnbrighter> my mysql charm was deployed from the charm store
<burnbrighter> vs the way they say I should be doing it which is by downloading the charm then installing locally, correct?
<m_3> burnbrighter: that's what I was testing out... to see if I got any differences
<burnbrighter> ah, ok - see anything?
<m_3> burnbrighter: in general, yes you want to deploy from a local charm repository... mostly just to freeze the charms you're working with
<m_3> burnbrighter: but for what you're doing... just getting things up and running... the charm store should be fine
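The local-repository workflow m_3 is recommending, as a sketch (the directory layout is the convention juju expects: <repo>/<series>/<charm>):

    mkdir -p ~/charms/precise
    bzr branch lp:charms/mysql ~/charms/precise/mysql   # freeze a copy
    juju deploy --repository ~/charms local:precise/mysql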
<burnbrighter> what about "DEBIAN_FRONTEND=noninteractive" ?
<burnbrighter> another question I have, is now that I terminated that instance, my instance numbering is off
<burnbrighter> ie. 0,2,3,4,5,6,7,8,9,10,11
<imbrandon> thats normal. they only rise
<burnbrighter> notice "1", the node I terminated before is now missing and was put back in as 12
<imbrandon> e.g i have an env with 3 machines and they are 0 5 and 11
<burnbrighter> ok, no way to go backwards for linearity?
<burnbrighter> ok, low on the priority list I guess.  no big deal
<imbrandon> nope, but its highly irrelevant if you are thinking in juju terms of service mgmt, forget the machine is there :)
<burnbrighter> right...
<burnbrighter> hmmm, yeah, that puked
<m_3> burnbrighter: only way to rewind ids is destroy-environment
<m_3> that's just zookeeper sequence ids
<m_3> no way
<m_3> the relations barfed it.... or it barfed on its own?
<burnbrighter> glance puked mostly
<m_3> oh
<m_3> :(
<burnbrighter> also saw this a lot
<burnbrighter> 2012-06-28 05:05:17,750 unit:glance/1: hook.output INFO: db_changed: DB_HOST || DB_PASSWORD not yet set. Exit 0 and retry
<m_3> burnbrighter: that's normal
<burnbrighter> and this
<burnbrighter> 2012-06-28 05:05:14,025 unit:mysql/2: hook.output INFO: DATABASE||REMOTE_HOST||DB_USER not set. Peer not ready?
<m_3> burnbrighter: that too
<burnbrighter> maybe its a timing thing too
<burnbrighter> I should let all of the relations finish one at a time? instead of just pasting them all in?
<m_3> those messages are part of the normal relation exchange between the two sides
<burnbrighter> I will put it in to pastebin and let you see what I am seeing ;)
<m_3> burnbrighter: no,all at once _should_ be fine
<m_3> I remember seeing something with relation weights to order them though
<m_3> what openstack scripts are you working from?
<burnbrighter> http://pastebin.ubuntu.com/1063712/
<burnbrighter> just basically following the procedure I posted above, but minus the local repository creation thing
<burnbrighter> I should follow it more closely
<burnbrighter> there is likely a reason it's set up the way it is
<m_3> damn... that sure seems like the relation barfed
<m_3> could be residual state from the earlier mysql instance fail
<m_3> i.e., worth trying a total destroy-environment again
<burnbrighter> well, this looks good:
<burnbrighter>  mysql:
<burnbrighter>     charm: cs:precise/mysql-2
<burnbrighter>     relations:
<burnbrighter>       shared-db:
<burnbrighter>       - glance
<imbrandon> resolved --retry ?
<burnbrighter>       - keystone
<burnbrighter>       - nova-cloud-controller
<burnbrighter>       - nova-compute
<burnbrighter>       - nova-volume
<burnbrighter>     units:
<burnbrighter>       mysql/2:
<burnbrighter>         agent-state: started
<burnbrighter>         machine: 12
<burnbrighter>         public-address: maas07.localdomain
<m_3> but I'm really concerned that destroy-environment and then re-bootstrapping is re-using instances
<m_3> yes, that looks like it's in a good state
<m_3> lemme read back through the log to see if it fixed itself
<burnbrighter> still not authenticating though...
<imbrandon> m_3: iirc it is, and intentionally iirc, unless you also terminate machine
<imbrandon> i ran into that issue myself
<burnbrighter> imbrandon: is that directed to me? you think auth won't work off the bat?
<imbrandon> no no
<imbrandon> m3
<burnbrighter> ah, ok, sorry
<imbrandon> np
<imbrandon> :)
<m_3> imbrandon: not sure what terminate-machine is doing in maas
<imbrandon> it was re: bootstraping reusing instances
<m_3> it appeared to not be a real flush
<imbrandon> ahh
<imbrandon> yea last time this came up i was told it was that way due to the time it takes ec2 to spin up a new instance
<imbrandon> so it reused it unless you explicitly say destroy it
<imbrandon> iirc
<imbrandon> i'm not greatly familiar with that tho so i very well could be thinking of something else or just flat mistaken
<imbrandon> :)
 * imbrandon goes back to making the PoC juju-control charm
<m_3> imbrandon: no, that's the issue... but re-using hardware saves time for maas.. but can also intro config problems from the instance's previous life
<m_3> burnbrighter: ok, so based on the log content, I'd expect that the good status you're seeing is a lie
<burnbrighter> yeah, what I kind of thought
<m_3> ok, so where are we...
<burnbrighter> I do see mysql appears to be up and running, but the errors don't paint a rosy picture
<m_3> at this point, I'm thinking a destroy-environment is called for
<burnbrighter> I guess its time to blow everything away - can I do that short of destroying the whole environment? It's kind of a pain to re-bootstrap everything
<burnbrighter> ie. destroy-service?
<burnbrighter> or is the only sure way to start from scratch?
<m_3> yeah, you're right... re-boostrapping is a pain
<m_3> perhaps do the destroy-service dance again (removing relations first)
<adam_g> m_3: i am now
<m_3> but this time, make sure there's no way that maas can re-use that instance
<m_3> after a destroy-service
<burnbrighter> ah, but it worked the second time around
<burnbrighter> oh...
<burnbrighter> I see what you are saying...
<m_3> do a terminate-machine and add the manual --with-vengeance option on there :)
<m_3> I don't know what that needs to look like in maas
<burnbrighter> I think the only way to guarantee is to destroy-env right?
<m_3> other than perhaps virsh destroy or even undefine
<burnbrighter> this is esxi :)
<m_3> no, I don't know that destroy-environment really flushes the instances with maas
<burnbrighter> ok, I'll try destroy-service first
<m_3> ok
<burnbrighter> follow the RTFM print
<m_3> :)
<m_3> I'll dig a bit later today and see what the code does with a terminate-instance
<burnbrighter> if you are around, I'll let you know how it goes
<burnbrighter> thnx heaps for your help
<m_3> I'll bet they're erring on the "more re-use" side so they come back up quicker
<m_3> burnbrighter: happy to... we'll figure it out
<adam_g> burnbrighter: the traceback in http://pastebin.ubuntu.com/1063712/ looks like a dns problem
<burnbrighter> dns?
<SpamapS> m_3: g'day
<m_3> SpamapS: yo
<adam_g> burnbrighter: i suspect glance is trying to reach a mysql host at maas07.localdomain, , but that hostname is not resolvable from that machine
<adam_g> burnbrighter: if you still have that up, can you confirm that? ssh to the machine and see if you can reach that host
<burnbrighter> adam_g: where is it getting the .localdomain - is that added by default?
<burnbrighter> yes, but I have it on the primary node's host file
<adam_g> burnbrighter: the .localdomain is the default, yeah. is MAAS handling your DNS or is that handled somewhere else?
<burnbrighter> dnsmasq runs on the primary node
<burnbrighter> but yeah, you're right, its not working...
<burnbrighter> ubuntu@maas07:~$ nslookup maas02.localdomain
<burnbrighter> Server:		172.16.100.11
<burnbrighter> Address:	172.16.100.11#53
<burnbrighter> ** server can't find maas02.localdomain: NXDOMAIN
<burnbrighter> ok, it is going to the right node though...
<adam_g> i dont follow
<burnbrighter> uh sorry
<burnbrighter> 172.16.100.11 is my maas01 - my primary maas node
<burnbrighter> and that's where dnsmasq is
<adam_g> burnbrighter: can the maas managed nodes reach each other via their maas00*.localdomain hostname?
<burnbrighter> not localdomain, but short name because its in the host file
<burnbrighter> I guess if I update the domain in the host file it will work
<burnbrighter> let me try that
<adam_g> burnbrighter: you'll need to ensure they can reach each other via their FQDN
<burnbrighter> yeah - is that .localdomain configurable? where does that come from?
<adam_g> burnbrighter: i personally dont know how to go about getting that right in MAAS, but it looks like thats the main issue
<burnbrighter> I think I know how to fix it from maas, but I would rather fix it on the juju side
<m_3> wonder if that could be why the service got stuck in a 'not-started' state to begin with
<burnbrighter> that would make sense
<m_3> seems like nothing would work
<m_3> though
<adam_g> burnbrighter: i believe you can edit the nodes' hostnames via the web interface
<burnbrighter> eh...which web interface??
<burnbrighter> maas?
<adam_g> burnbrighter: MAAS
<adam_g> in the node list, you have the option of editing nodes, and you can update hostnames there
<burnbrighter> I have done that - and its set to maas.local - but that is not getting to juju's configuration
<burnbrighter> somehow the nodes are getting ... hmmm
<burnbrighter> thinking
<burnbrighter> you know, my nodes aren't set
<burnbrighter> ok
<burnbrighter> nope, they aren't
<burnbrighter> so my choice goes back to setting it in the host files
<adam_g> burnbrighter: localdomain is the default domain name used when none is specified, unrelated to juju, maas, etc.
<burnbrighter> yup, that's making sense
<burnbrighter> I don't have a domain specified in maas
<burnbrighter> its only in the host file
<burnbrighter> so that does make sense
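The stopgap burnbrighter settles on: give every node a resolvable FQDN via the hosts file on the node serving DNS (dnsmasq serves /etc/hosts entries by default). A sketch using the maas02 address that shows up in the nslookup below:

    # /etc/hosts on maas01 (172.16.100.11), feeding dnsmasq
    172.16.100.102  maas02.localdomain  maas02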
<m_3> adam_g: thanks for catching the dns... missed that
<adam_g> yeah, ive seen similar issues. im sure someone else knows how to fix that properly in MAAS
<burnbrighter> From what bigjools said, they are getting rid of the dns configuration in the next release
<burnbrighter> not sure about the dependencies though...
<burnbrighter> maybe putting that back on the user
<adam_g> that'd make things easier for sure. :)
<burnbrighter> ok, now its working...
<burnbrighter> ubuntu@maas07:~$ nslookup maas02.localdomain
<burnbrighter>  Server:		172.16.100.11
<burnbrighter> Address:	172.16.100.11#53
<burnbrighter> Name:	maas02.localdomain
<burnbrighter> Address: 172.16.100.102
<burnbrighter> ubuntu@maas07:~$  ping maas02.localdomain
<burnbrighter> PING maas02.localdomain (172.16.100.102) 56(84) bytes of data.
<burnbrighter> 64 bytes from maas02.localdomain (172.16.100.102): icmp_req=1 ttl=64 time=0.362 ms
<burnbrighter> 64 bytes from maas02.localdomain (172.16.100.102): icmp_req=2 ttl=64 time=0.228 ms
 * SpamapS shakes head
<SpamapS> DNS is such a CF
<adam_g> burnbrighter: what is the state of things now? where did you leave it?
<burnbrighter> Now do you think its a matter of rebuilding relations, or destroying the service and starting from scratch with the service stack?
<adam_g> i think its probably safe to remove relations and re-add, without tearing down.
<burnbrighter> ok, I will try that...
<adam_g> burnbrighter: to confirm though, remove all relations and add the glance <-> mysql one, see that it works
<adam_g> burnbrighter: this might be helpful: http://paste.ubuntu.com/1063760/
<burnbrighter> http://pastebin.ubuntu.com/1063763/
<burnbrighter> ah that's way different than what I was following
<burnbrighter> I was just doing this...
<burnbrighter> juju add-relation keystone mysql
<burnbrighter> juju add-relation nova-cloud-controller mysql
<burnbrighter> juju add-relation nova-volume mysql
<burnbrighter> juju add-relation nova-compute mysql
<burnbrighter> juju add-relation glance mysql
<adam_g> burnbrighter: can you just confirm glance <-> db is okay, 'sudo glance-manage db_version' on the glance node?
<burnbrighter> sure, hang on
<burnbrighter> ubuntu@maas09:~$ sudo glance-manage db_version
<burnbrighter> 13
<adam_g> burnbrighter: cool, looks good
<burnbrighter> is there a default u/p for openstack-dashboard?
<burnbrighter> or is it admin and whatever you configure in to openstack.cfg?
<adam_g> burnbrighter: yeah, its the admin username and password configured in the deployment config
<burnbrighter> but when that's not working, db issues are suspect
<burnbrighter> I'm not even sure I see anything hitting the db
<adam_g> burnbrighter: logging into the dashboard? no, thats probably not db related
<burnbrighter> interesting, ok
<adam_g> burnbrighter: can you paste bin `juju status`?
<burnbrighter> sure
<adam_g> (wife is home, need to run in a min)
<burnbrighter> adam_g :  http://pastebin.ubuntu.com/1063775/
<imbrandon> m_3: use juju to deploy -> MAAS using -> EC2 instsnces as Nodes -> to build an Openstack Cloud -> that you use juju to bootrap a juju-control environment -> juju-control installs juju and sets up tmux charm-tools jitsu environments.yaml and ssh-import-keys to bootstrap a child env -> juju deploys byobu-classroom to child env -> byobu-classroom creates a LXC based public bootstrap and ajaxterm node for all to marvel -> Internets break from all the 
<adam_g> burnbrighter: i do not know why, but it shows keystone as 'pending' with no machine. perhaps its just a slow machine? wait for that to come up and 'started' before trying anything. they dont call that 'keystone' for nothing :)
<burnbrighter> question for you ...
<adam_g> sorry, it has a machine but it is not started
<burnbrighter> that relation list you sent me is vastly different than the one from the maas documentation
<burnbrighter> is that what I should be using?
<burnbrighter> I'm not sure I have all of those things deployed
<adam_g> burnbrighter: which are you referencing?
<burnbrighter> http://paste.ubuntu.com/1063760/
<adam_g> burnbrighter: i meant, which maas documentation
<burnbrighter> oh..
<burnbrighter> hold on
<burnbrighter> https://help.ubuntu.com/community/UbuntuCloudInfrastructure
<adam_g> burnbrighter:
<adam_g> just compared, they are basically the same. the order is a bit different, and in the one i pasted it is specifying the interface for each relation (eg, :identity-service) which is not needed
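[ed: a hedged illustration of adam_g's point — when only one interface can match, the bare form and the endpoint-qualified form from his paste set up the same relation:]
    juju add-relation keystone nova-cloud-controller
    # equivalent here; naming the endpoint only matters when the two
    # services could relate over more than one interface:
    juju add-relation keystone:identity-service nova-cloud-controller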
<adam_g> anyway, i have to run
<burnbrighter> ok
<burnbrighter> thnx
<adam_g> burnbrighter: now that your DNS is sorted, id suggest trying again from scratch (sucks, i know)
<adam_g> ping me tomorrow if youre still having problems, see ya
<alo21> Can I ask here about juju building problems?
<burnbrighter> adam_g: thanks - what timezone / times are you usually around?
<m_3> burnbrighter: he's us-west-coast
<m_3> imbrandon: sounds just pathologically like inception
 * koolhead11 pokes m_3 
<m_3> we should start an inception-count or something... inception depth
<m_3> hey koolhead11!
<SpamapS> alo21: yes you can and should ask about those. :)
<m_3> gonna go grab food...bbiab
<alo21> SpamapS: I have a problem building juju from source
<alo21> can someone help me?
<SpamapS> alo21: its python..
<SpamapS> alo21: what source are you trying to build?
<SpamapS> or are you trying to play with the go port?
<alo21> SpamapS: I downloaded juju from apt-get source
<SpamapS> alo21: ah
<alo21> SpamapS: and then I run sudo pbuilder build *.dsc
<SpamapS> why does everybody use pbuilder? sbuild is so much better.. :p
<imbrandon> m_3: hahah +1 on inception-count or similar :)
<SpamapS> imbrandon: sooo close
<imbrandon> ?
<SpamapS> imbrandon: hpcloud
<alo21> SpamapS: I do not know what sbuild is
<imbrandon> oh nice, i am getting to know the nova api better
<SpamapS> imbrandon: I think there's something wonky in the swift+nova
<imbrandon> :(
<alo21> SpamapS: could you provide me a useful link about it?
<imbrandon> SpamapS: re pbuilder: almost ALL the docs dating back centuries recommend pbuilder, is why
<imbrandon> :)
<SpamapS> yeah
<SpamapS> alo21: I'm about to pass out, so sorry, no. :-/
<imbrandon> but pbuilder + some custom scripting and lvm snapshots can be beautiful :)
<imbrandon> heh
<imbrandon> alo21: what are you looking for ?
<SpamapS> imbrandon: all built in to sbuild
<SpamapS> it even has btrfs support now
<imbrandon> SpamapS: yea i'm mostly remembering back to my MOTU days when pbuilder barely existed, let alone all the cool stuff now
<alo21> imbrandon: I am trying to build juju from source
<imbrandon> cowbuilder
<imbrandon> heh
<imbrandon> alo21: ok . where ya hitting a snag ?
<SpamapS> ahh, there's a messed up assumption in the openstack provider...
<SpamapS> it assumes all regions will be the same
<imbrandon> SpamapS: nice . something i can hack out ?
<SpamapS> but object-store is in region-1.geo-1 while compute is az-3.region-a.geo-1
<alo21> imbrandon: what "ya" means?
<SpamapS> imbrandon: I'm about done with it actually
<imbrandon> ya == you
<imbrandon> where are you having trouble
<imbrandon> SpamapS: nice , kk
<alo21> imbrandon: http://pastebin.ubuntu.com/1063813/
<imbrandon> if you are on Ubuntu alo21 copy and paste this into terminal and it should build from source, you can customize it from there ... "mkdir ~/juju-build && cd ~/juju-build && bzr branch lp:juju && cd juju && python setup.py build"
 * imbrandon looks at pastebin
<alo21> imbrandon: done, then?
<imbrandon> yea no idea about your pastebin; for one it needs a little more context, but it looks to be a pbuilder issue not a juju build issue, and the above command should help you there. as for pbuilder, #ubuntu-motu should be able to help sort that
<imbrandon> done then what ?
<alo21> imbrandon: I am wondering if I have to run other commands?
<imbrandon> oh no, just copy what i had in quotes and paste it as one line
<imbrandon> without the quotes
<imbrandon> and it will run it all but stop if there is an error
<imbrandon> due to the && vs ;
<imbrandon> but yea thats it, it should tell you where it built to, but i think `pwd`/build is the default iirc
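[ed: a two-line shell illustration of the && vs ; point above:]
    false && echo "with &&, this never prints -- the chain stops at the first failure"
    false ;  echo "with ;, this prints anyway -- each command runs regardless"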
<alo21> imbrandon: I did not receive any errors
<imbrandon> then you are good, you just compiled juju
<alo21> imbrandon: ok, thanks
<imbrandon> if you want to hack on it a little, there is a saner way that won't mess up your system accidentally
<imbrandon> there is a section in the docs
<imbrandon> for the developer install that will guide you
<imbrandon> alo21: http://jujucharms.com/docs/drafts/developer-install.html
<SpamapS> imbrandon: I think most of the probs are due to hpcloud being diablo
<SpamapS> 2012-06-28 00:00:53,347 DEBUG openstack: 201 '201 Created\n\n\n\n   '
<SpamapS> 2012-06-28 00:00:53,348 INFO 'bootstrap' command finished successfully
 * SpamapS crosses fingers
<imbrandon> sweet
<imbrandon> yea i've been going over their documentation , they have a lot of "extensions to the openstack api" but also "notable differences" and "stuff we don't implement" as well :(
<imbrandon> i mean its cool but more like a frankenstack not openstack
 * imbrandon ponders if canonistack is similar
<SpamapS> imbrandon: yeah, they're really big on "nobody will ever run vanilla openstack"
<SpamapS> which means "openstack will go nowhere"
<imbrandon> yea , thats a sure fire way to solidify that
<imbrandon> who is they in this context tho, canonical ?
<SpamapS> hrm, ssh not coming up
<SpamapS> imbrandon: no, HP
<imbrandon> oh
<imbrandon> right
<SpamapS> <-- says canonical as "we"
<imbrandon> right, figured that, but i thought maybe the IS team was the "they" in that sentence :)
<imbrandon> e.g assumed canonistack admins
<SpamapS> no its more the providers who think they must be able to differentiate outside the normal process
<SpamapS> Canonistack might be the only semi-public vanilla openstack ever ;)
<SpamapS> Oh wow.. still apt-getting stuff
<SpamapS> key gen seemed to take a long time
<imbrandon> yea seriously , thats one reason i loved linode for so long, and i think RACK is just a big linode from what i have seen: provide powerful stuff at a good price and don't cripple it cuz someone could take advantage, just fire the bad apples as customers
<SpamapS> I suspect we don't have the same pseudo-randomness hack on their cloud as we do on EC2
<imbrandon> i dunno its official ami's from canonical
<SpamapS> Yeah that would be unfortunate if it weren't
<imbrandon> a few new ones even popped up this week for older versions
<SpamapS> Cpu(s):  0.0%us,  0.2%sy,  0.0%ni, 49.9%id, 49.9%wa,  0.0%hi,  0.0%si,  0.0%st
<SpamapS> disk seems SLOW
<imbrandon> yea , disk io is horrid
<imbrandon> BUT
<imbrandon> cdn and nic speed is AWESOME
<imbrandon> :)
<SpamapS> good tactic to get people to just buy big mem-tastic instances
<imbrandon> their network is much faster to me than aws , but their cpu and disk io blows
<imbrandon> thus i create like lots of xsmall instances to do the job as a cluster :)
<imbrandon> heh
<SpamapS> ugh yeah wowowow is I/O slow
<SpamapS> imbrandon: anyway, let me push up a branch that works
<imbrandon> their xsmall like rackspaces isnt crippled either
<imbrandon> sweet, if you could pastebin a config too sanitised
<SpamapS> thats one area I'm still a little bit unsure of
<imbrandon> and i'll buy u a crown and coke next meetup :)
<SpamapS> because I think the env vars are used and I may have polluted it
<SpamapS> mmcrown
 * SpamapS could use some whiskey about now
<imbrandon> heh
<SpamapS> forgot to do my SRU's and juju patch pilot today.. er.. yesterday
<SpamapS> seriously dpkg is just SUCKING ASS
<SpamapS> unpacking java took almost 2 minutes
<imbrandon> yea my env on my mac ( what i'm booted into atm ) is very polluted , heh its survived me hacking the hello out of /etc/profile and then in-place upgrades from 10.4 to 10.8 preview 4 currently
<imbrandon> heh
<imbrandon> SpamapS: yea once i got mine all setup with "base install" i used the nova api to create a personal base image
<imbrandon> that i use to fire new ones up with, so i'm hoping we can provide a ami-id to the hp provider
<imbrandon> heh
<SpamapS> imbrandon: I think we should build that into juju, all providers have a way to turn a running instance into an image
<imbrandon> that would fall into "we shouldn't care about the env" so i gave up after being shut down 2 times
<imbrandon> wasn't gonna battle mr juju over it
<imbrandon> but i totally agree its essential in the long run
<jcastro> SpamapS: don't be sad, you always go on friday remember?
<imbrandon> i mean we can either not care about the env and let something like puppet provision for us , or we can care and provide juju ways to do it
<imbrandon> but it can't be "no you can't touch the metal" and expect users to take us seriously
<imbrandon> and not hack around it
<SpamapS> jcastro: true. I'm out Friday tho, so tomorrow is it. :)
<jcastro> heh
<SpamapS> jcastro: btw, are your slides still shared on U1?
<imbrandon> heya jcastro :)
<SpamapS> jcastro: I need new ones
<jcastro> SpamapS: for this weekend we went with dropbox
<jcastro> err, week.
<SpamapS> OpenStack LA will have 50 people tomorrow night
<imbrandon> ohhhh i got a new set of slide templates i made that are SLICK AS HELL
<jcastro> I can send you my whole deck.js folder
<SpamapS> and I am going to show them this openstack provider...
<imbrandon> i need to dig em out for you guys
<SpamapS> I hope for them to all show up in #juju and DEMAND that it be merged *IMMEDIATELY*
<SpamapS> because honestly, even with the problems it has
<SpamapS> its amazing
<SpamapS> *amazing*
<imbrandon> ??
<SpamapS> tho hpcloud is embarassing itself right now
<imbrandon> sarcastic >
<imbrandon> ?
<SpamapS> jcastro: I am bootstrapping on hpcloud right now
<imbrandon> SpamapS: will this work on RACK and DREAM openstacks too ?
<imbrandon> i will be as soon as he pushes the button
<imbrandon> :)
<SpamapS> in theory
<SpamapS> is DREAM open for beta?
<SpamapS> I should get an account, since they're sponsoring tomorrow night ;)
<SpamapS> lp:~clint-fewbar/juju/openstack_provider_fixes
<imbrandon> jcastro: yea whats up with those accounts
<jcastro> working
<SpamapS> cloud-init boot finished at Thu, 28 Jun 2012 07:19:12 +0000. Up 1077.37 seconds
<SpamapS> *pathetic*
<jcastro> imbrandon: big company, slow gears
<jcastro> slow gears, but fast IO!
<jcastro> SpamapS: I'll tarball up our slides and send them over
<lifeless> SpamapS: 'lo, so, you're reviewing charms atm ?
<SpamapS> lifeless: tomorrow morning.. about to sign off after I see mysql deploy on hpcloud
<SpamapS> lifeless: but I will definitely take a look at your thing. :) 12 items in the queue, but half of them are adding maintainer
<lifeless> \o/
 * SpamapS hopes HPCloud will address their IO problems soon
<SpamapS> 2012-06-28 07:36:52,593 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds
<SpamapS> le sigh
<SpamapS> cloud-init boot finished at Thu, 28 Jun 2012 07:44:48 +0000. Up 251.99 seconds
<SpamapS> Wow, night and day there
<SpamapS> more diablo lameness..
<SpamapS> secgroups aren't automatically setup to be able to talk freely amongst their members
<SpamapS> and in fact, have no way to do that. :-P
 * SpamapS heads to bed
<imbrandon> yea there is a way, i had to do it when i created a database sec group
<imbrandon> but no way on the console or their cli tool
<imbrandon> need to use the os cli nova tool
<imbrandon> and not the one in the repos
<imbrandon> imagine that they extended that too
<imbrandon> :(
<imbrandon> anyhow i'll shoot u a message with where i found the info
<imbrandon> gnight
<imbrandon> jcastro: found it, putting a Ubuntu logo on it and uploading, its "almost" ready for wide use, i was gonna unleash it in a few days after a little more markup cleanup
<imbrandon> but if you like it , use it now, nothing major changing , only cleanup and some new images to replace the pixelated logos
<imbrandon> :)
 * imbrandon gets the url
<imbrandon> jcastro: http://api.websitedevops.com/slides/template/
<imbrandon> then watch that on your ipad, then iphone, then droid phone , then kindle , then laptop  :)
<imbrandon> bad ass, telling you
<imbrandon> all kinda features :)
<ejat> imbrandon : c00l
<imbrandon> just need to finish it :)
<imbrandon> ejat: ?
<imbrandon> heh
 * ejat is that a template u guys use to present juju 
<ejat> or just a general ubuntu template..
<imbrandon> not yet. when i finish it in the next few days hopefully :)
<ejat> \0/
<imbrandon> neither atm, its about 98% done :)
<twobottux> aujuju: "init: juju-..." errors in syslog after uninstalling juju <http://askubuntu.com/questions/157093/init-juju-errors-in-syslog-after-uninstalling-juju>
<jml> is it possible for 'juju ssh' to somehow bypass the host authenticity check?
<jml> I mean, I really have no idea what the fingerprint of this thing is, and I don't know how I'd find out using a trusted channel.
<imbrandon> jml: yea , but you do it with ssh configs not juju specific
<imbrandon> one sec i'll get you the snippet i use
<imbrandon> http://paste.ubuntu.com/1064070/
<jml> ...
<jml> imbrandon: so, I'm not crazy enough to want to completely disable strict host key checking.
<imbrandon> now i'm not advocating that its safe to ignore those warnings, but i feel you, and i personally do, but i verify my machines in other ways
<imbrandon> well you can narrow that to ec2 as well
<imbrandon> with the host *
<imbrandon> host *.ec2.amazonaws.com
<imbrandon> or something
<imbrandon> anyhow, thats one solution, at least gets ya in the right direction for what would need to be done
<imbrandon> btw not all of that may be needed
<imbrandon> i just grabbed that from the user i have setup to control my juju instances
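[ed: a minimal sketch of the kind of ~/.ssh/config stanza being discussed, narrowed to ec2 as imbrandon suggests; as he says, this trades away MITM protection for those hosts:]
    cat >> ~/.ssh/config <<'EOF'
    Host *.ec2.amazonaws.com
        StrictHostKeyChecking no
        UserKnownHostsFile /dev/null
    EOF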
<jml> imbrandon: what I meant was, why doesn't juju do this: https://code.launchpad.net/~jml/juju/no-host-key-check/+merge/112540
<imbrandon> not sure, imho thats a good fix , one note, you can also specify a temp known_hosts file when using that flag as well
<imbrandon> like /dev/null
<imbrandon> so the remark in your comment becomes moot
<imbrandon> but either way, yours or mine, does the exact same thing, just looks scarier in my ssh config :)
<imbrandon> another solution too would be to add that feature into jitsu , then when you wrap juju in jitsu as per normal it can have an alias for ssh with that config set
<imbrandon> imho that would be the way to get the ball rolling anyhow, SpamapS has said in the past that its a good testing ground before getting things into core
<jml> meh.
<imbrandon> :)
<jml> 'juju status' ... look for the IP somewhere. try it in the web browser. Internal server error. 'juju status', this time look for the instance number. 'juju ssh instance/num'. manually type 'sudo cat /var/log/apache2/error.log'.
<twobottux> aujuju: Can I specify tighter security group controls in EC2? <http://askubuntu.com/questions/156715/can-i-specify-tighter-security-group-controls-in-ec2>
<SpamapS> jml: sorry to be so harsh ;)
<SpamapS> jml: btw, the trusted channel you can use for ec2 is euca-get-console-output
<SpamapS> jml: but it lags by 2 - 5 minutes
<jml> SpamapS: and lxc?
<SpamapS> jml: eventually juju will record the fingerprint in ZK as soon as the machine agent starts up
<SpamapS> jml: for lxc you have the file on disk ;)
<SpamapS> cat the console log
<jml> SpamapS: hang on, walk me through this
<jml> I type 'juju ssh instance/N'
<jml> I'm asked "is this *really* the host you want?"
<jcharette> anyone around to help a maas / juju noob
<jml> and then, for an lxc instance, how do I actually verify that?
<SpamapS> jcharette: we can try. Note there is also a #maas channel
<jcharette> SpamapS:  thanks, i'll let you finish with jml and then go from there
<jcharette> i've asked for help on maas as well
<SpamapS> jml: the container's root filesystem is accessible to you (via sudo) .. so you can just add the public key directly to known_hosts...
<SpamapS> jml: it will be at /var/lib/lxc/juju-envname-machineid/rootfs/etc/ssh/id_rsa.pub or something like that
<jml> SpamapS: and what assurance does that actually grant me?
<SpamapS> jml: that your container wasn't mitm'd on your own box ;)
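[ed: a sketch of the rootfs approach SpamapS outlines; the key filename varies, as he notes, and 192.168.122.X stands in for the container's address:]
    KEY=$(sudo cat /var/lib/lxc/juju-envname-machineid/rootfs/etc/ssh/ssh_host_rsa_key.pub)
    echo "192.168.122.X $KEY" >> ~/.ssh/known_hosts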
<SpamapS> jml: another better way is to just turn off strict host key checking on 192.168.122.*
<SpamapS> jml: you can also direct known_hosts to /dev/null
<SpamapS> so you don't get warned about changed keys
<jml> SpamapS: ok, I'll do that.
 * jml subverts things
<jcharette> SpamapS:  I'm going to read some more documentation on maas, may be back soon with q's
<SpamapS> jml: I do think the known_hosts file needs to be rethought for the cloud. It was created at a time where creating/destroying "servers" was unheard of.
<SpamapS> jml: seems like that bit of the cloud is ripe for an API.
<jml> SpamapS: I don't see how that could work.
<SpamapS> In fact that would be a cool feature to add to openstack. Let the guests inject fingerprints when they boot up.. then have it available as part of the instance info available via the API
<jml> SpamapS: Because the API would also be subject to MITM
<SpamapS> API is HTTPS
<SpamapS> so it at least has the CA system protecting it
<jml> hmm.
<SpamapS> jml: the API must be trusted, as it is standing in for the sysadmin of old, racking boxes and laying down images. :)
<jml> and then things like 'juju ssh' would inspect that API & write to a tmp known_hosts and ?
<jml> SpamapS: "must be trusted" as in "you have no alternative but to trust it" or "it must be trustworthy"?
<SpamapS> jml: it must be trustworthy
<SpamapS> jml: well hopefully we could add something to ssh that stands in for known_hosts, not a file. Something like --host-key="...content of public key"
<SpamapS> jml: but with what we have today, yes, just a temporary known hosts of 1
<jml> SpamapS: I guess I'm going to hold out for the future then.
<james_w> it's trusted, so you better hope it is trustworthy
<SpamapS> jml: for the present, ~/.ssh/config works great
<jml> SpamapS: is there an easy way to get the IP address of my just-deployed service?
<jml> SpamapS: I get it that juju is supposed to work for a jillion instances, but I still want to be able to open a web browser for the thing I just deployed
<SpamapS> jml: there's a few jitsu commands to help
<SpamapS> jml: jitsu get-unit-info
<SpamapS> jml: I do think there is a need for charms to be able to talk back to the admins more
<james_w> jml, juju status $service | grep public-address | head -n1?
<jml> james_w: oh right, status can be restricted to service
<james_w> yeah
<jml> SpamapS, james_w: thanks
<jml> SpamapS: there seems to be a general pattern of "Q. X is tedious with juju. Any tips?  A. Use jitsu."
<jml> SpamapS: why is jitsu separate to juju?
<jml> james_w: juju status libdep-service 2>/dev/null | grep public-address | head -n1 | cut -d':' -f2 | sed -e 's/ //g'
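[ed: the same extraction with one awk in place of three pipes, assuming a single unit:]
    juju status libdep-service 2>/dev/null | awk '/public-address/ {print $2; exit}'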
<jml> it seems as if my db-relation-changed hook isn't being called when the relation is added... or something?
<james_w> jml, the relation isn't in an error state?
 * jml head desks
<jml> james_w: no, I don't think so. juju status doesn't say 'error' anywhere and the logs don't have any obvious errors
<jml> of course one of the many errors I'm dismissing as false positives might be actual errors
<jml> umm...
<jml> right, so I ran debug-hooks
<jml> and then I hit C-d to quit tmux
<jml> and now I can't do debug-hooks on that instance?
<jml> and now it works?
<jml> *sigh*
<jml> And then I just had debug-hooks running, and then I added the relation (juju add-relation postgresql:db libdep-service) and nothing happened in the debug-hooks session
<james_w> hmm, I don't know what that could be
<james_w> you're not currently executing a hook in tmux?
<james_w> that caught me out once
<james_w> as hooks are serialized
<jml> james_w: no, I wasn't. Perhaps "install" isn't finished... although how the instance got into the 'started' state without that I won't know.
<jml> I've trashed my environment. Perhaps that will help.
<jml> yay, the hook ran this time
<jml> now I'm getting an internal server error, but that's probably my fault.
<sidnei> james_w, around?
<james_w> hi sidnei
<sidnei> james_w, i see you added an upstart script to the txstatsd charm, im considering adding it to the txstatsd packaging instead, with a few changes.
<james_w> sidnei, sounds ok
<sidnei> james_w, so generally, the way we're running txstatsd and carbon-cache for u1 is to run multiple processes on a single machine, to take advantage of all cores. got any handy examples of spawning multiple daemons from an upstart script, controlled by say /etc/default/$daemon having 'NPROCS = X'?
<sidnei> then i guess we'd control that by doing relation-set nprocs=X
<james_w> sidnei, I don't have an example
<sidnei> ok, thanks!
<sidnei> james_w, http://upstart.ubuntu.com/cookbook/#instance fyi
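[ed: a hypothetical sketch of the cookbook's instance pattern applied to sidnei's question — the job names, the txstatsd flag, and the NPROCS plumbing are all invented for illustration:]
    # /etc/init/txstatsd.conf -- one job definition, many instances keyed by N
    description "txstatsd worker"
    instance $N
    respawn
    exec /usr/bin/txstatsd --port=$((8125 + N))

    # /etc/init/txstatsd-all.conf -- task that fans the workers out
    description "spawn one txstatsd per configured proc"
    start on runlevel [2345]
    task
    script
        [ -r /etc/default/txstatsd ] && . /etc/default/txstatsd
        for i in $(seq 1 ${NPROCS:-1}); do start txstatsd N=$i; done
    end script
[ed: relation-set nprocs=X, as sidnei suggests, would then just rewrite /etc/default/txstatsd and restart the fan-out task.]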
<james_w> cool
<SpamapS> jml: I've had the exact experience you just did. I'd bet money you had a hook stuck in a command, so the next hook could not run.
<SpamapS> jml: and to answer your earlier question about why jitsu is not part of juju, because juju has stringent requirements for code review and design, so it takes time for fresh new ideas to trickle into juju
<SpamapS> jml: whereas jitsu allows rapid iteration and scratching of itches
 * SpamapS starts reviewing charms
<zirpu> does the ppa juju package have the fix for the lxc not using sudo to bootstrap?
<zirpu> 0.5+bzr531-0ubuntu1  version doesn't work for lxc for me unless i'm root. and i have pwdless sudo.
 * zirpu should probably just live on the edge and run from trunk.
<SpamapS> zirpu: I wasn't aware any fixes were done around that
<SpamapS> zirpu: the PPA is trunk
<SpamapS> something I'd actually like to change
<SpamapS> but.. so many ideas, so little time
<imbrando1> yup, sometimes i wish there was 3 of me
<imbrando1> maybe 4
<imbrando1> :)
<koolhead17> SpamapS, planning to try juju for google cloud?
<zirpu> i'm pretty sure my clone and i would attempt to kill each other. :-)
<SpamapS> koolhead17: looks like it is a different API, so no
<koolhead17> SpamapS, k
<dk1> hi, anyone feel like debugging a juju on osx install problem?
<dk1> totally clean 10.7/homebrew, brew install gets almost to the end and then:
<dk1> Finished processing dependencies for juju==0.5
<dk1> build.rb:49: in 'initialize': Bad file descriptor
<SpamapS> imbrandon: around?
<SpamapS> dk1: imbrandon wrote that recipe, he should be able to help
<SpamapS> imbrandon: ^^
<imbrandon> sorry
<imbrandon> like right in the middle of something
<imbrandon> but yea i can fix you up in a lil bit
<imbrandon> lemme get off the phone and such
<imbrandon> dk1: ^^
<dk1> hey
<dk1> ok
<imbrandon> dk1: between now and then this will fix you up ( you made it far enough in the process ) ... open term and run "mkdir /tmp/juju && cd /tmp/juju && bzr branch lp:juju && cd juju && sudo python setup.py install && cd ~" that _should_ fix you up from that point and i'll look deeper into the issue here in a few
<dk1> sec
<imbrandon> just copy/paste that
<dk1> running now
<dk1> ok, so after that running juju gives me:
<dk1> ImportError: No module named zookeeper
<imbrandon> brew uninstall zookeeper && brew install zookeeper --python
<dk1> imbrandon: ***
<dk1> No such file or directory
<dk1> .. and
<dk1> already installed
<dk1> when tried running separately
<imbrandon> you typo'd
<dk1> nope
<dk1> Uninstalling /usr/local/Cellar/zookeeper/3.4.3...
<dk1> Error: No such file or directory
<imbrandon> ahh your ZK barfed on install the first time
<dk1> seems likely
<imbrandon> brew unlink zookeeper
<dk1> same error
<imbrandon> cd /usr/local/Cellar
<imbrandon> rm -rf zookeeper
<dk1> ok
<dk1> ok
<imbrandon> cd ~ && try installing again
<imbrandon> with --python
<dk1> bombs out
<dk1> at..
<imbrandon> ok, there is a problem with the zk formula then
<imbrandon> and i/we dont maintain that BUT you can try ...
<dk1> ld: duplicate symbol _hashtable_iterator_key
<imbrandon> wait do you have xcode and the cli tools etc
<imbrandon> installed
<dk1> yep
<lifeless> morning 'all
<imbrandon> k yea its likely a zk formula problem; the only other thing you can try atm is this ( without me getting on my osx box and digging in more heh )
<imbrandon> heya lifeless
<imbrandon> try "brew install zookeeper --HEAD --python"
<imbrandon> rm -rf like before if needed
<imbrandon> did i mention i HATE zookeeper
<imbrandon> and i see no reason that should be on the client anyhow /me grumbles
<imbrandon> brb , lil boys room callin my name
<dk1> bombs out
<dk1> with a brand new error
<dk1> autoreconf -if
<imbrandon> frak, ok pastebin the output of "brew doctor" and i'll be back in a few
<imbrandon> oh yea
<imbrandon> you need that installed
<imbrandon> should have told ya
<dk1> it grabbed all of that for me
<imbrandon> not sure what the autoreconf pkg is called but its in brew
<dk1> just bombs with:
<dk1> configure.ac:37: warning: macro 'AM_PATH_CPPUNIT' not found in library
<imbrandon> oh ok , yea zk is being a bitch, you said clean 10.7 ? no python from brew or gcc from brew or nothing ?
<dk1> there's a /usr/bin/python and /usr/bin/gcc, but neither in the Cellar
<imbrandon> toss "brew doctor" output in paste.ubuntu.com and i'll brb, really got to head for a sec
<imbrandon> k thats fine
<dk1> paste.ubuntu.com/1065093/
<imbrandon> open /etc/paths and move the line with /usr/local/bin to the top, save exit term
<imbrandon> reopen term, then try the zk install with --python , then with --HEAD
<imbrandon> and afk again
<imbrandon> brb
<dk1> sec
<imbrandon> if that dont work, screw brew and http://paste.ubuntu.com/1065123/
 * imbrandon is going to write a custom installer for the 0.5.1.x release
<imbrandon> damn, afk AGAIN
<SpamapS> .x ?
<SpamapS> imbrandon: oh right you want .bzrrev
<SpamapS> what ever would we do if it were git tho? ;-)
<SpamapS> lifeless: reviewing opentsdb now
<SpamapS> lifeless: thats one hell of a README :)
<SpamapS> lifeless: review posted
<imbrandon> SpamapS: hahah , easy
<imbrandon> SpamapS: JUJU_BUILD_REV=`git log -n 1 --pretty=format:"%h"` :)
 * imbrandon uses that for client website builds that use git or HG to print in the footer , normally a CI server is pushing it out so its very very nice
<SpamapS> %h is ?
<imbrandon> bholtsclaw@ares:/usr/local/Library$ git log -n 1 --pretty=format:"%h"
<imbrandon> 2c9d87e
<imbrandon> short hash
<SpamapS> a hash that will always be higher than previous hashes?
<imbrandon> unique
<SpamapS> need higher
<imbrandon> not sure about higher but thats solved with the
<imbrandon> rev number
<imbrandon> if its like our workflow anyhow the rev num would work from github etc
<imbrandon> you just can't depend on it across diff repos
<imbrandon> but if you build from the same one always its fine :)
<SpamapS> yeah
<SpamapS> same restriction for bzr really
<SpamapS> crap, 3 hours till presentation and I literally have 1 slide
<imbrandon> and really to be "the best" would be a combo, like
<negronjl> only one charm in the review queue ... nice
<imbrandon> 0.5.1.N-%h
<imbrandon> or something , that way devs could easily tie it to a hash locally
<imbrandon> and N will always be higher
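[ed: a sketch of the "0.5.1.N-hash" combo being floated, using only stock git:]
    N=$(git rev-list --count HEAD)           # monotonically increasing along one branch
    H=$(git log -n 1 --pretty=format:"%h")   # short hash: unique, but unordered
    echo "0.5.1.${N}-${H}"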
<SpamapS> negronjl: thats either very good (we're on top of things) or very bad (we're sinking!)
<imbrandon> hah
<negronjl> scratch that .... no charms in the queue .... nice/scary (depending on your point of view)
<imbrandon> that reminds me i was gonna do some docs merges and i got one more build workaround to maybe make 0.6.4 work
<negronjl> I better get to write more charms STAT so we have some stuff there :)
<imbrandon> HOLY CRAP, i just hit the jackpot with the OSX build, i def got to write a native installer but thats easy
<imbrandon> http://pypi.python.org/pypi/zc-zookeeper-static
<imbrandon> ^^ statically compiled zkpython 3.4.3 for osx
<imbrandon> YAY!
<imbrandon> bout time something went right
<lifeless> SpamapS: thanks
<imbrandon> oh that is SOOO much easier ...
<SpamapS> imbrandon: sweet
<imbrandon> yea , this lib is gonna make the osx installer solid and not a thorn in my side
<imbrandon> heh
<imbrandon> now does the client REALLY need zookeeper local or just the bindings, remember no lxc here
<SpamapS> just bindings
<imbrandon> ROCK!
<imbrandon> i think i just squealed like a lil girl
<imbrandon> oh hell yea then no brew needed at all since bzr has a native osx .dmg
<imbrandon> hells yea
<imbrandon> that makes my day
<imbrandon> hell week
<imbrandon> in fact ... let me ponder on this a few moments ...
 * imbrandon opens lp:juju/setup.py
<imbrandon> crap still need brew for gcc
<imbrandon> damn
<imbrandon> or xtools
<imbrandon> oh no there is a cli dmg install of that too
 * imbrandon goes to look
<dk1> imbrandon: no luck
<imbrandon> haha, i found some good news
<dk1> ?
<imbrandon> sudo easy_install zc-zookeeper-static
<imbrandon> let that finish
<dk1> ah
<imbrandon> then ping me
<imbrandon> builds fast i tested it here
<dk1> running
<dk1> this is one of the new retina mbps, fairly fast
<dk1> done
<imbrandon> nice
<imbrandon> sweet
<imbrandon> ok type "python"
<imbrandon> get a prompt
<dk1> yep
<imbrandon> then "import zookeeper"
<imbrandon> enter
<dk1> ok
<imbrandon> error ? or another prompt
<dk1> prompt
<dk1> looks like it imported
<imbrandon> sweet, ctl D
<dk1> ok
<imbrandon> then try "juju bootstrap"
<dk1> no envs configured
<imbrandon> ROCKIN
<dk1> better than before
<dk1> definitely
<imbrandon> ok go read the docs and enjoy juju
<imbrandon> you're good
<dk1> haha
<imbrandon> :)
<dk1> awesome
<imbrandon> i'll fixup the installer for next time but yea
<imbrandon> remember there is no lxc on osx
<imbrandon> so everything is ec2 or other remote env's
<imbrandon> :)
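[ed: recapping the working osx path from this exchange, for anyone landing here with the same broken brew zookeeper:]
    sudo easy_install zc-zookeeper-static   # statically-built zkpython bindings, no brew zookeeper needed
    python -c "import zookeeper"            # silence means the bindings load
    juju bootstrap                          # "no environments configured" means juju itself now runs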
<dk1> this is minute 5 of juju for me
<imbrandon> trust me you dont want lxc anyhow , vbox would be a better local provider imho
<imbrandon> :)
<dk1> do i need a local linux env for anything
<dk1> if i'm just writing scripts and pushing them out to test
<imbrandon> yea just sign up for an amazon account and go to town tho, you're a normal install at this point
<imbrandon> nope , only need linux if you want lxc ( chroots ) local
<dk1> cool
<lifeless> imbrandon: closer to jails ;)
<dk1> and what are people using for deploying big rails apps
<imbrandon> and like i said personally i dont use/like them anyhow
<dk1> upstart, runit?
<imbrandon> forever , god
<imbrandon> i see god used a lot
<imbrandon> or upstart
<dk1> in production?
<imbrandon> dunno any one that dont run nginx+passenger for ruby in prod :)
<SpamapS> dk1: if you want the offline experience, simplest to just fire up an ubuntu server VM and use the local provider inside that
<dk1> not usually offline, only rationale would be if the testing loop was massively faster
<imbrandon> not really once you have a good env setup
<imbrandon> if you're destroying and creating over and over then yea
<imbrandon> maybe
<imbrandon> but once the base is down pat, and just charm upgrades
<imbrandon> etc
<imbrandon> then it's much easier the "normal" way imho
<dk1> we're probably going to jruby+trinidad soon anyway
<imbrandon> but we're talking 10 min diff here
<imbrandon> not hours
<SpamapS> dk1: its only faster if you are on a really fast SSD.. otherwise its faster to just terminate/start EC2 m1.smalls while developing
<SpamapS> dk1: and even if you're on a fast SSD.. its still sometimes slower because many m1.smalls are going to still be more scalable than your one 4-core i5 ;)
<dk1> think this question will probably answer itself by tomorrow
<dk1> thanks imbrandon & guys
<imbrandon> i'll put it like this, i have only fired up a lxc container ONE time ever and didn't even let it finish
<imbrandon> dk1: np
<imbrandon> pop in anytime ya need a hand, normally someone around
<imbrandon> and i'm normally the token mac fella
<imbrandon> :)
<imbrandon> but really once you're at this point its all the same
<imbrandon> you're past the mac specifics
<lifeless> SpamapS: dunno how fast you think ec2 is ;)
<lifeless> SpamapS: but, local openstack is about 30 times faster for me
<imbrandon> local openstack sure, not 2 or 3 containers on your laptop
<imbrandon> :)
<lifeless> imbrandon: you haven't met my laptop :)
<imbrandon> heh
<lifeless> imbrandon: 8GB ram, intel SSD, i7 4-hardware threads.
<imbrandon> same
<dk1> sounds heavy
<imbrandon> well 16gb
<imbrandon> but yea
<imbrandon> same
<imbrandon> is my daily machine
<lifeless> imbrandon: desktop is 16GB, 8 hardware threads; raid 10, its where my local openstack is.
<lifeless> dk1: its about 1kg
<imbrandon> :)
<lifeless> dk1: x201s
<imbrandon> eww but sooo ugly , i'll stick to my mbp :)
<imbrandon> you can toss em cross the room tho
<dk1> ah, didn't know you could get an s with a qc
<dk1> 16gb retinas aren't shipping yet anyway, they just gave me a loaner 8gb
<imbrandon> with sublime text open x2 adobe ps 6 + adobe flashbuilder 5 + xcode + sourcetree + god knows how many term tabs + screen windows + chrome w/ 100 tabs combined + firefox with 4 or 5 tabs + Espresso + Mail.app + cyberduck + gradient + itunes + Proseql + adium , and then whatever random app i might have temporarily open
<imbrandon> thats what eats my 16gb ram quickly ^^
<imbrandon> thats actually what i have open now, and only been on the osx partition an hour
<imbrandon> heh
<dk1> imbrandon: one more thing, don't know what Formulas are yet, but the jujutools.github.com site links to a directory with text that implies there should be more than one juju.rb file in it
<imbrandon> there will be when i put charm-tools
<imbrandon> and a few other goodies you'll soon want to have
<imbrandon> :)
<imbrandon> just not got one of those roundtuits' yet, maybe a good project this weekend
<imbrandon> SpamapS: import subprocess
<imbrandon> _proc = subprocess.Popen('git log --no-color -n 1 --date=iso',
<imbrandon>                          shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
<imbrandon> try:
<imbrandon>     GIT_REVISION_DATE = [x.split('Date:')[1].split('+')[0].strip() for x in
<imbrandon>                          _proc.communicate()[0].splitlines() if x.startswith('Date:')][0]
<imbrandon> except IndexError:
<imbrandon>     GIT_REVISION_DATE = 'unknown'
<imbrandon> :)
#juju 2012-06-29
<burnbrighter> can you guys tell me if there is stored/cached information besides .juju/cache that gets stored?  I'm finding that my originally bootstrapped node is always the first node that gets chosen for bootstrapping.  So that is telling me the information must be stored somewhere?
<burnbrighter> ie. after I do juju bootstrap
<SpamapS> burnbrighter: no, its likely just maas choosing using predictable rules
<burnbrighter> maas, really?
<SpamapS> are you not using maas?
<burnbrighter> I sure am.  But I thought this problem was on the juju side
<burnbrighter> ok, maybe someone knows something over there...
<SpamapS> burnbrighter: or do you have placement: local maybe?
<burnbrighter> eh, placement: local?
<burnbrighter> oh, you mean the charms?
<m_3> I doubt it's ordering by mac, but it's possible... most likely something like dnsmasq or maas itself
<imbrandon> zomg SpamapS look what i fixed
<m_3> `ps awux | grep dnsmasq` to see where it's caching
<imbrandon> docs working again ( still 0.6.4 )
<imbrandon> http://juju.ubuntu.com/docs
<imbrandon> looks like your meta stuff at the bottom isnt tho, but i'll check that in a bit
<burnbrighter> my dnsmasq is fairly standard - it only sets static IPs for the given mac addr, and sets the domain to localdomain
<imbrandon> oh now THAT is nice
<imbrandon> first chrome app worth something , a full working native ssh client
<imbrandon> in a tab
<imbrandon> even draws things correctly
<imbrandon> https://chrome.google.com/webstore/detail/pnhechapfaindjhompbnflcldabbghjo/related
<m_3> imbrandon: wow
<imbrandon> yea you can even bookmark the server
<imbrandon> once logged in
<imbrandon> and its made by the chromium devs
<imbrandon> ( intended for chromebooks but i have it open on osx and it works fine here )
<imbrandon> works great actually
<imbrandon> even irssi and byobu hotkeys work
<m_3> imbrandon: have to play with it... wonder if you can pass args through the url
<m_3> great for classroom stuff if you could
<imbrandon> wow and inspecting element on the page, its normal html output but realtime
<imbrandon> thats why it looks so nice
<m_3> cool
<imbrandon> and yea you can it looks like
<imbrandon> looking at the bookmark
<imbrandon> chrome-extension://pnhechapfaindjhompbnflcldabbghjo/html/nassh.html#root@15.185.101.99:22
<m_3> omg
<m_3> when did you start having to sign in to add extensions
<imbrandon> :)
<m_3> wow
<imbrandon> umm since forever
<m_3> no way
<imbrandon> yea, iirc its always been like that
<m_3> there were several extensions I've installed without having to sign in to my google acct
<imbrandon> cuz it follows ya profile to other computers
<imbrandon> how do you run chrome at all without signing in ?
<imbrandon> mine nags to sign into the BROWSER
<m_3> i've got like 6 different profiles
<imbrandon> :)
<imbrandon> yea i use 3
<m_3> one is signed in for gmail... the others aren't
<imbrandon> gonna make a special one just for this
<imbrandon> this rocks
<m_3> right
<imbrandon> and you could easily short-url that i think, dunno, but definitely a href= link it
<imbrandon> i cant get over just how much it is like a term and not like a JS rendition of a term
<imbrandon> heh
<imbrandon> and things like CTL+a backspace work
<imbrandon> as expected etc
<imbrandon> <embed style="position: absolute; height: 0px; " src="../plugin/ssh_client.nmf" type="application/x-nacl">
<imbrandon> i'ma have to look into these native client apps, man
<m_3> imbrandon: ah, only _some_ extensions require this
<imbrandon> ahh
<m_3> prob all ones after a certain date or something
<imbrandon> yes
<imbrandon> i bet
<burnbrighter> if anyone has any ideas, I am seeing this...
<burnbrighter> ituser@maas01:~$ juju -v status
<burnbrighter> 2012-06-29 00:46:10,105 DEBUG Initializing juju status runtime
<burnbrighter> 2012-06-29 00:46:10,114 INFO Connecting to environment...
<burnbrighter> 2012-06-29 00:46:10,213 DEBUG Connecting to environment using maas03...
<burnbrighter> 2012-06-29 00:46:10,213 DEBUG Spawning SSH process with remote_user="ubuntu" remote_host="maas03" remote_port="2181" local_port="54681".
<burnbrighter> 2012-06-29 00:46:13,223 ERROR SSH forwarding error: ssh: connect to host maas03 port 22: No route to host
<burnbrighter> that must be cached info from somewhere?
<m_3> burnbrighter: once you've bootstrapped, yes the bootstrap nodes' info is cached for all subsequent commands to know where to route to
<m_3> burnbrighter: but that should be destroyed on a `destroy-environment`
<burnbrighter> but it always ends up going to the same node - maas03
<m_3> burnbrighter: the initial bootstrap is handed a machine from the provider (maas in this case)... I'd expect that maas makes the decision of which machine is the bootstrap node
<imbrandon> then the choosing alg could be the same
<burnbrighter> even after destroying things and deleting the cache directory
<m_3> so that's possibly cached in maas, or at a lower level in dnsmasq
<imbrandon> its maas choosing not juju
<imbrandon> i dont think juju COULD choose if it wanted to
<burnbrighter> ok ok, you guys win :)
<m_3> imbrandon: there're constraints for juju to choose, but I don't think that's implemented in the maas provider (or maas itself) yet
<imbrandon> right but even then they cant say i want the node with this mac
<imbrandon> they just say i want the one that has a quad proc
<imbrandon> or something
<m_3> imbrandon: similar to 'juju deploy --constraints="instance-type=m1.large" ...'
<imbrandon> sure but even then you could get a diff m1
<imbrandon> every time
<imbrandon> juju cant guarantee it would be the same one
<m_3> imbrandon: there were specs for '--constraint="tagged-with=junk"'... but dunno what'll come out of it
<imbrandon> yea now that could
<imbrandon> and that would rock
<m_3> (we need that for rack-locality in openstack)
<imbrandon> yea we need access to a lot of the lower stuff, maybe not needed for every charm etc
<imbrandon> but we cant not care about it
<m_3> need to start pushing for tag constraints in ec2 too
<imbrandon> else we'll have to have another provisioning agent
<imbrandon> between us and the provider to do it
<imbrandon> us = juju
<m_3> yup
<imbrandon> but as long as juju must provision then we must have access to configure the bare metal if needed , sure abstract it 99% of the time
<imbrandon> but the 1% will make it 100% unusable
 * imbrandon preachin to the choir
<m_3> I do see the value of pushing the abstraction... but yes, the tool should support the corners as well... we can push the abstraction socially with examples
<imbrandon> ohhhhhhh i havent tried my new hpc config yet ....... /me does that
<imbrandon> m_3: exactly
<m_3> i.e., it really is a totally new way of approaching most problems
<imbrandon> yup
<m_3> especially when you start looking at maas as using "ephemeral" instances
<imbrandon> i mean its very very simple to give us EVERYTHING , all in one swoop, i've already got it figured out
<m_3> gotta balance the paradigm shift with just outright saying "no" all the time
<imbrandon> its like 25 loc + tests
<m_3> everything wrt what?
<imbrandon> anything below juju we may need
<m_3> more coffee brb
<imbrandon> kk
<imbrandon> all we need is the ability to pass in one string ( an https url ) to a user script that gets appended to cloud-init like the other scripts and settings juju does
<imbrandon> this bypasses the 16kb limit on size and the script can then be in bootstrap at cloud-init just like juju and do whatever it needs
<imbrandon> and pass info etc etc , basically it would be a custom juju plugin or equivalent, as there would not be a juju api to use
<imbrandon> but you're loading in initramfs at that point
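[ed: a hypothetical sketch of the one-string idea — cloud-init's #include user-data format pulls the real script from a url, so the payload handed to the provider stays tiny; the url is made up:]
    cat > user-data.txt <<'EOF'
    #include
    https://example.com/extra-bootstrap.sh
    EOF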
<m_3> hmmm
<m_3> dunno
<imbrandon> and thats all the code it took, the hard work was laid out by the cloud init team
<imbrandon> you've seen that you can run puppet and chef and
<imbrandon> all kinda stuff directly from cloud-init
<imbrandon> native
<m_3> best way to communicate about it is a combo of jitsu and charm-tools branches that implement it
<imbrandon> yup, already on it :)
<m_3> rockin
<imbrandon> got it so far; decided i did not know node well enough so i started a new design doc and am going with something i know .... not php
<imbrandon> heh
<twobottux> aujuju: Unable to fully remove Juju <http://askubuntu.com/questions/157093/unable-to-fully-remove-juju>
<imbrandon> ruby :)
<imbrandon> well probably as i know it the 2nd best
<imbrandon> and i dont want to be learning node AND pulling some coool hacks
<imbrandon> :)
<imbrandon> sides i can take advantage of cloud-inits capistrano client then too :)
<imbrandon> m_3 / SpamapS / jcastro : http://bholtsclaw.github.com/html5slides/templates/ubuntu/
<imbrandon> last slide tells how to dl the template , i'm gonna make a slight variation for juju specific too
<imbrandon> ( plus looks great on an ipad and iphone for posting online after the talk too, as well as when ya give it :P )
<imbrandon> suggestions welcome :) but that gives the team a consistent look and feel for the charm school slides and stuff
<m_3> imbrandon: nice
<m_3> thanks!
<imbrandon> yw :)
<imbrandon> bah i goofed the link on the last page
<imbrandon> it has to be https:// to work; fixing now, but if you already loaded it you need to refresh
<imbrandon> or add https://
<imbrandon> to get the downloadable index.html
<imbrandon> ( rest of the assets load remotely from the one JS call at the top )
<imbrandon> fixed :)
<imbrandon> https://raw.github.com/bholtsclaw/html5slides/gh-pages/templates/ubuntu/index.html
<imbrandon> is the raw
<imbrandon> jcastro: new docs are LIVE!
<m_3> imbrandon: and here I was all excited about tpp
<imbrandon> hehe :)
<imbrandon> i think i can embed EVERYTHING too, into data uri's and put it at the bottom, ( the css and js and logo images )
<imbrandon> then it would be truly just one file
<imbrandon> firefox, safari, chrome and ie8+ would all be fine
<imbrandon> dunno if i care about ie7 enough to let it stop me
<imbrandon> :)
<imbrandon> whats tpp ?
 * imbrandon hugs his avatar http://bholtsclaw.wpengine.netdna-cdn.com/wp-content/uploads/brandon-penguin.png
<m_3> imbrandon: apt-cache show tpp
<imbrandon> oh nice :)
<m_3> ncurses slides... they're fun, lets you do stuff like http://paste.ubuntu.com/1065310/
<imbrandon> nice
<m_3> depends totally on the crowd... but it works if you're already in a tmux session on a shared ajaxterm doing a demo
<imbrandon> hrm but with mine you can put a whole ssh term into the iframe of a slide :)
<imbrandon> hehe
<m_3> and since _tmux_ lets you control everybody's screen that they're looking at... :)
<m_3> ha yes
<m_3> exactly inverted
<m_3> nice
<imbrandon> :)
<m_3> that rocks
<imbrandon> that is pretty slick tho
<imbrandon> i'ma have to check it out
<imbrandon> more of the sleep movie
<imbrandon> yo dawg i herd u liked ssh so we put some ssh into your slides that have some slides in their ssh
<lifeless> SpamapS: I replied fwiw
<negronjl> hazmat: you still around ?
<negronjl> flame on
<negronjl> ... and with that ..... I call it a night ... later all
<jml> tee hee: https://juju.ubuntu.com/docs/write-formula.html
<m_3> jml: doh!
<jml> I want to fetch some branches in my charm. They require authentication (username, ssh key) to get. What do I do?
<hazmat> jml, service config
<sidnei> hazmat, is it safe now though? i remember there was some discussion about it being publicly accessible from zk
<hazmat> sidnei, zookeeper isn't encrypted or acl'd, a zk client can discover information yes, that's not a permanent scenario though
<SpamapS> jml: is there any reason you're not just embedding the branches in your charm btw?
<SpamapS> hazmat: IMO we have to remove all of the data from ZK and only use it for structure
<jml> SpamapS: Canonical Webops deploy to production from config-manager branches that are kept private
<jml> SpamapS: I want to re-use as much of their deployment chain as possible
<SpamapS> hazmat: config settings should go directly to units, and on add-unit, the settings should be sent peer to peer. Same with relation data. It terrifies me that ZK has all of the usernames/passwords for the entire environment.
<SpamapS> jml: ah interesting
<jml> currently, my branch uses a public fork of that config-manager config.
<hazmat> SpamapS, then any config management system should terrify you, because they all have it centrally
<hazmat> they do a better job of protecting the kitty
<hazmat> and if i can nail this libzk issue, we can haz some acls
<hazmat> although perhaps that will get last minute veto'd too
<jml> SpamapS: also, that config-manager branch fetches other branches -- the project itself and some of its dependencies. I don't want to embed them in the charm. That would be putting the cart before the horse.
<hazmat> jml, service config is the mechanism for this
<jml> hazmat: `juju set libdep-service launchpad-private-key-file=~/.ssh/id_rsa`?
<hazmat> jml, contents of the key
<jml> hmm, no, because it wouldn't be able to access it.
<jml> right.
<jml> juju set libdep-service launchpad-private-key-file="$(cat ~/.ssh/id_rsa)"
<jml> s/-file//
<hazmat> yup
<jml> well, except presumably the decrypted version
<hazmat> jml, right, no password on the key since there is no user agent on the remote sys, but i assumed that already applied
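[ed: a sketch of the service-config mechanism hazmat points at; the option name, key path, and target user are hypothetical:]
    # client side: stuff the (passwordless) deploy key into service config
    juju set libdep-service launchpad-private-key="$(cat ~/.ssh/deploy_key)"
    # hook side: write it back out so bzr can authenticate
    mkdir -p /root/.ssh && chmod 700 /root/.ssh
    config-get launchpad-private-key > /root/.ssh/id_rsa
    chmod 600 /root/.ssh/id_rsa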
<jml> I'm sure the other developers won't mind sending their private ssh keys in the clear
<hazmat> jml, ? why not create a separate user in the lp group for this
<hazmat> ie a deployment user
<hazmat> or are you doing this not from group owned branches
<jml> no, they're all owned from groups. we could do that.
<jml> it's a bit annoying that the deployment user would have write access.
<jml> but that's LP's problem.
<hazmat> jml, that's an lp issue
<hazmat> SpamapS, re p2p rels via zeromq.. its interesting and definitely has merit imo.. but i don't see it happening without some advocacy to juju-dev and the core team.
<sidnei> jml, one alternative i've considered is packing all the branches into a 'dumb' package, putting into a private ppa.
<sidnei> jml, effectively, just running the build step that triggers the config-manager bits and throw that into a .deb.
<_mup_> Bug #1019294 was filed: a charm publisher instance should handle the same charm being uploaded twice  <juju:New> < https://launchpad.net/bugs/1019294 >
<negronjl> 'morning all
<negronjl> hazmat: lp:~hazmat/juju-jitsu/deploy-to is empty. ??
<sidnei> SpamapS, hazmat: know about the status of #993034? seems like it should be resolved, but i just upgraded to the ppa version (was running a version before the fix) and it doesn't seem to have fixed the issue?
<_mup_> Bug #993034: lxc deployed units don't support https APT repositories <verification-needed> <juju:Fix Released by davidpbritton> <juju (Ubuntu):Fix Released> <juju (Ubuntu Precise):Fix Committed> < https://launchpad.net/bugs/993034 >
<hazmat> sidnei, that should be fixed
<hazmat> sidnei, pls comment on the bug if you have a reproducible case for it
<sidnei> maybe i need to juju bootstrap again or something (after apt-get upgrade juju)?
<hazmat> negronjl, it should be there
 * sidnei checks what the actual fix is
<negronjl> hazmat: It's empty ... forgot to push maybe ?
<negronjl> hazmat:  I reviewed the code from the other webapp ( codereview .... )
<hazmat> No new revisions or tags to push.
<negronjl> hazmat:  What do you see when you check this page: http://bazaar.launchpad.net/~hazmat/juju-jitsu/deploy-to/files
<hazmat> negronjl, oops
<sidnei> hazmat, i think the lxc template needs to be refreshed after upgrading, a destroy-service/deploy combo still has the old version of /etc/apt/apt.conf.d/02juju-apt-proxy, even though i confirmed that /usr/share/pyshared/juju/lib/lxc/data/juju-create has the fix
<hazmat> sidnei, its only cleaned out if the env is destroyed
<hazmat> sidnei, each env has a template container with customization thats cloned for units
<sidnei> hazmat, as suspected. thanks!
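[ed: the practical upshot of the template caching hazmat describes — a minimal sketch:]
    juju destroy-environment   # drops the env's cached lxc template container
    juju bootstrap             # the next deploy clones a fresh template with the fixed juju-create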
<negronjl> hazmat: can you merge it into jitsu ?  I approved your MP already but, can't get to your branch
<hazmat> negronjl, i'm waiting for the last sparks to fly
 * negronjl nods
<hazmat> ec2-east down again
<hazmat> i find it odd that jujucharms has managed to survive all the recent ec2-east outages
<hazmat> not doing enough 'real' work i guess
<sidnei> survival of the weakest, that's how the cloud rolls *wink*
<negronjl> hazmat:  dont jinx it :)
<jcastro> negronjl: SpamapS uhhh wow. The queue has _2_ items in it.
<jcastro> did you guys do a massive sprint or something?
<hazmat> i should remove those
<negronjl> jcastro:  check again ... nothing on the queue :)
<jcastro> <3
<bkerensa> jcastro: is uber list live yet?
<jcastro> uberlist?
<bkerensa> jcastro: for HP Cloud
<jcastro> hey we demoed subway live at velocity
<jcastro> nope, unfortunately we are still waiting on them
<jcastro> I've asked for a status report this week already
<sidnei> is there anything in charm-tools for 'give me the public address of this unit'? i saw some charms using hostname -f but i guess that doesn't generally cut it
<SpamapS> sidnei: get-unit-info in juju-jitsu has stuff for that I think
<twobottux> aujuju: juju: ERROR Unexpected Error interacting with provider: 409 CONFLICT <http://askubuntu.com/questions/157785/juju-error-unexpected-error-interacting-with-provider-409-conflict>
<sidnei> SpamapS, uhm, i guess i need to be more specific. i need the external address for exporting the hostname for the http interface
<sidnei> SpamapS, eg, haproxy's charm website-relation-joined has "relation-set port=80 hostname=`hostname -f`"
<sidnei> SpamapS, but running local with the lxc provider i got 'localhost' out of 'hostname -f'
<SpamapS> sidnei: haproxy should have `unit-get private-address` not hostname -f
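[ed: the one-line fix SpamapS describes, in haproxy's website-relation-joined hook:]
    relation-set port=80 hostname=$(unit-get private-address)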
<sidnei> SpamapS, awesome, that's the one i was looking for. thanks!
<sidnei> (also, will send a MP for fixing that if no one beats me to it)
<SpamapS> haproxy's charm needs a lot of work
#juju 2012-06-30
<imbrandon> well that totally blows, website's down, emailed support, no response in 30 minutes, called them and they "refreshed" the page and just saw the ticket; when asked about a resolution i was told it would be a while, and if i wanted to point my dns to AWS or something they would let me know when it was ok to point it back
<imbrandon> keep in mind this is a dedicated Wordpress hosting company, its all they do ...
<imbrandon> frackin tards ... if i take the time to move DNS I'm not moving it back!
<imbrandon> ( re: brandonholtsclaw.com @ wpengine.com )
<imbrandon> bkerensa: ^^
 * imbrandon opens sublime text and gets ready to put brandonholtsclaw.com on juju charms, screw them ... i had other crap to do this weekend
<bkerensa> imbrandon: my site is up :)
<bkerensa> imbrandon: and yeah I noticed your site was hiccuping earlier
<bkerensa> I thought it was me though :D
<JoseeAntonioR> your sites are down
<imbrandon> JoseeAntonioR: yup , waay ahead of ya
<imbrandon> JoseeAntonioR: https://plus.google.com/u/0/112508636642046792855/posts/eHEwphLF1no
<SpamapS> speaking of wordpress ...
<imbrandon> heya SpamapS
<SpamapS> marcoceppi: tap tap ... perhaps you want to hand off the rewrite of the wordpress charm? ;)
<SpamapS> i demo'd the crappy one last night
<SpamapS> and it felt dumb to be like "Use juju! Collaborate! Some day, maybe we'll even do that!"
<imbrandon> btw SpamapS you cant destroy-env with the os fixes
<imbrandon> nice python stacktrace and had to go to the console and term the machines :)
<SpamapS> imbrandon: yeah I know
<imbrandon> :)
<imbrandon> looks to work other than that, trying out a few deploys right now, well about to, waiting on charm getall to finish
<SpamapS> imbrandon: you have RAX beta access right?
<imbrandon> yup
<SpamapS> imbrandon: work?
<imbrandon> hrm not tried
<SpamapS> because hpcloud is old diablo junk
<imbrandon> hadn't crossed my mind yet, will here in just a few
<SpamapS> what are you trying against?
<imbrandon> hpc
<imbrandon> region az3 in hpc
<imbrandon> got _my_ stuff in az1 heh, "just in case"
<SpamapS> :)
<imbrandon> i could just see destroy-env wiping them all
<imbrandon> lol
<SpamapS> imbrandon: thus far, not impressed w/ hpc
<imbrandon> yea its good if you use their rules
<imbrandon> and ignore disk io
<imbrandon> heh
<SpamapS> which are... "never use the disk" ?
<imbrandon> well their floating ip crap is funny too
<imbrandon> and shared ip is fubar
<imbrandon> and
<imbrandon> i could go on
<SpamapS> juju is going to have to grow floating ip support
<imbrandon> well i guess it does kinda cuz on hpc
<imbrandon> its allocating me a new one and assigning it to the instance every time
<SpamapS> anyway, time to sleep
<imbrandon> so i end up with 2 public ips on each box
<SpamapS> yeah I noticed that
<imbrandon> heh
<SpamapS> with the last one being the outgoing IP
<imbrandon> yea
<SpamapS> anyway, sleep..
 * SpamapS disappears
<imbrandon> l8tr
<imbrandon> i need to as well ... but wont for a bit
<marcoceppi> SpamapS: I'll push up what I have
<hazmat> melting
<hazmat> sidnei, unit-get public-address should also work
 * hazmat has a house of barking dogs and babies, refugees from power outage during a heat wave
<m_3> hazmat: yikes
<hazmat> m_3, same outage that took out ec2-east
<m_3> hazmat: ah, yeah we caught a little bit of hipstergeddon from over here
<m_3> so... are you back up or running on UPS w/uninterrupted cable/phone?
<hazmat> m_3, my power has been fine, just everyone outside of dc that's having problems
<hazmat> hence we've become a refuge camp
<m_3> yeah, just been reading... orig thought it was just another aws power snafu...didn't realize it was... um... bigger
<hazmat> m_3, that new jitsu import/export was so you can ditch your shell scripts :-)
#juju 2012-07-01
<lifeless> hazmat: dc thing?
<imbrandon> power out all over
<hazmat> lifeless, just a thunderstorm with strong winds, dc == washington, dc
<hazmat> not data center :-)
<lifeless> hazmat: I figured that, but its taken out the whole city grid?
<hazmat> lifeless, quite a bit of the suburbs, like 3.5 million were without power across several states
<hazmat> within the city itself only about 65k
<lifeless> thats a fair number; does the US have a fragile power grid?
<imbrandon> heh,  whats causing the ec2 east too
<hazmat> but there's a tech center an hr west of the city that has data centers for most of the tech industry, and the storm took out the local grid there
<hazmat> for ec2 i imagine a backup generator didn't do its job in one of the azs
<imbrandon> lifeless: not really , just bad storm
<imbrandon> hazmat: yea likely
<hazmat> lifeless, debatable :-) but it's an unusual event, hurricane-force winds of 80mph in a storm
<hazmat> afaik this is the largest non-hurricane outage for this area
<hazmat> larger than some of the severe snow storms we had a few years back, as far as widespread outages go
<imbrandon> wow
<hazmat> lots of trees down all over, i saw like 10 down, and 2 cars crushed walking my dog this morning
<imbrandon> pics!! jk
<hazmat> imbrandon, http://www.cnn.com/2012/06/30/us/dc-storm-damage/index.html
<hazmat> imbrandon, the internet has your back :-)
<imbrandon> kinda funny how instagram and a few others are catching hell for not having geo-redundant stuff
<imbrandon> heh
<imbrandon> oh wow
<lifeless> meep
<imbrandon> In neighboring Virginia and Maryland, seven people were killed by fallen trees, authorities said.
<imbrandon> wow
<imbrandon> says new storms may head in tonight too
<imbrandon> you good where you are hazmat ?
 * imbrandon hopes marcoceppi is cool too, he is in that area somewhere
<hazmat> imbrandon, yup we're good, its not that bad for most
<imbrandon> cool
<hazmat> just finished adding a few new goodies to juju-jitsu env import/export to a json file, and service machine placement when deploying
 * hazmat switches into review mode
<imbrandon> sweet, was getting tired of adding placement:local to the env.y and remembering to remove it
<imbrandon> brb afk a few, i'll likely be on most of the evening if you need a guinea pig tho
<hazmat> imbrandon, they've both been tested by others but feedback welcome
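A rough sketch of the jitsu additions hazmat describes; the exact subcommand names and flags here are assumptions read off the chat, so check `jitsu help`:
    jitsu export > staging.json     # dump the environment's services/units to JSON
    jitsu import staging.json       # replay the saved environment
    jitsu deploy-to 0 mysql         # place a service on an existing machine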
<nathwill> upgrade-charm looks at revision number to determine "newer"? trying to test upgrade-charm hook and it's not picking up my local charm as newer...
<marcoceppi> Lost power for a bit, but we're okay here
<_mup_> juju/trunk r547 committed by kapil@canonical.com
<_mup_> [trivial] adjust readme to point to external docs
<hokka> i would like to have multiple local environments. i understand i can specify different directories, and probably need to hack juju code to allow for a custom parameter -- to specify the name of the network, 3rd octet for the network. any other catches?
<lifeless> hokka: the network name shouldn't matter AIUI,
<lifeless> hokka: unless you want really detailed control
<hokka> lifeless: i want to be able to deploy services attached to different networks. then I will be able to control exposure of these networks to the outside world. this is because "expose" is not particularly fine grained.
<hokka> lifeless: i thought libvirt uses network name to distinguish between bridges to connect a container to. maybe i'm wrong, need to check.
<hokka> i have serious doubts the name of the network has no meaning, it is used extensively throughout the code http://bazaar.launchpad.net/~juju/juju/trunk/view/head:/juju/providers/local/network.py
<hokka> could anybody please give me a hint how IP address allocation is managed in juju with local provider? do I need to take care that as well if I want multiple networks?
<lifeless> hokka: I'm pretty sure its done via dnsmasq
<lifeless> hokka: I'm not sure what expose does locally, I would have expected it to let just traffic to the one port through.
<hokka> lifeless: thanks, seems to be so
<hokka> anyway, I think it would be nice to have different networks, will try to implement it
<lifeless> we should really have a true cloud API that happens to drive LXC.
<lifeless> would be more consistent, reusable, etc.
<hokka> lifeless: what do you mean by 'true cloud API'?
<hokka> i've implemented optional 'net-name' and 'subnet' config options and they are used to create a new libvirt network just fine. but i'm stuck at the stage of injecting the new network name into lxc container config file. can't figure out how to access the config of the current environment from within UnitContainerDeployment._get_master_template(). any clue?
<lifeless> hokka: I mean a JSON HTTP based API that will bring up containers, destroy them, provide blob storage and support image uploads.
<lifeless> it would take care of restarting containers on host reboot
<lifeless> and firewall / network rules etc
<lifeless> probably be best built on top of libvirt.
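For hokka's experiment, a sketch of carving out an extra libvirt network for a second local environment; the network name, bridge, and subnet are illustrative assumptions:
    cat > juju-net2.xml <<'EOF'
    <network>
      <name>juju-net2</name>
      <bridge name='virbr2'/>
      <ip address='192.168.152.1' netmask='255.255.255.0'>
        <dhcp><range start='192.168.152.2' end='192.168.152.254'/></dhcp>
      </ip>
    </network>
    EOF
    virsh net-define juju-net2.xml     # register the network with libvirt
    virsh net-start juju-net2          # bring up the bridge and dnsmasq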
#juju 2013-06-24
<adam_g> jamespage, think this is good to merge by now? https://code.launchpad.net/~openstack-charmers/charms/precise/rabbitmq-server/ha-support/+merge/165060
<jamespage> adam_g, yes
<luca__> Hi all, was my wireframe attached to my email/
<luca__> ?
#juju 2013-06-25
<mectors> Is Juju supposed to set the $hooksdir variable?
<hazmat> mectors, CHARM_DIR
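In shell terms, a one-line sketch of hazmat's answer: juju doesn't set a $hooksdir variable, but it exports CHARM_DIR to every hook, so a hook can derive the path itself:
    hooksdir="$CHARM_DIR/hooks"    # the equivalent of what mectors asked about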
<rick_h> marcoceppi: ping, heads up. The categories on your ~marcoceppi/charms/precise/mysql-server are off. They need a trailing s to be plural.
#juju 2013-06-26
<m_3> sidnei: and really if there are multiple people deploying an infrastructure from multiple different local repositories... they've got more problems than just 'revision' file mismatch!
<jcastro> juju charm weekly meeting in 20 minutes!
<marcoceppi> rick_h: I need to delete that branch, it was used during an example
<rick_h> marcoceppi: cool, it's showing in the 'new' charms and the icon is borked so figured I'd push you on it :)
<marcoceppi> rick_h: thanks, forgot to remove it after last charm school
<jcastro> https://plus.google.com/hangouts/_/a3309c5f8bca62ce3cdc6326001c7294f51ab806?authuser=0&hl=en
<jcastro> is where the meeting will be: m_3 evilnickveitch
<jcastro> anyone is free to hangout in the meeting
<jcastro> or you can follow along on ubuntuonair.com
<arosales> jcastro, apologies for missing this week :-/
<arosales> evilnickveitch, apologies for my video still being tardy
<arosales> super :-(
 * wedgwood watches stream, doesn't join in
<wedgwood> evilnickveitch: \o/
<evilnickveitch> wedgwood, :)
<marcoceppi> evilnickveitch: do you need redirects for tomorrow's staging?
<evilnickveitch> marcoceppi, it would help if you had them already done, but we will muddle through if not
<marcoceppi> evilnickveitch: I have them, I just need to stick them somewhere and do one last pass. I'll merge them in later this evening
<evilnickveitch> by "muddle" I obviously mean "be excellent"
<marcoceppi> m_3: it should be removed from proof too, I'll open a bug to remind myself
<m_3> marcoceppi: ack
<evilnickveitch> marcoceppi, if you have them, just mail me the file and I will sort it
<m_3> marcoceppi: dude... more sunscreen, less youtube
<marcoceppi> evilnickveitch: even better
<m_3> :)
<marcoceppi> m_3: got a nice base now, not much sunscreen needed anymore :D
<m_3> nice
<jcastro> <-- lunch
<wenjianhn> Hello. need help. juju 0.7+bzr628+bzr631~precise
<wenjianhn> In my lxc instance's /var/log/cloud-init.log:
<wenjianhn> ubuntu-lxc-postgresql-0 [CLOUDINIT] __init__.py[WARNING]: get_data of DataSourceNoCloudNet raised [Errno 13] Permission denied: '/var/lib/cloud/seed/nocloud-net/user-data'
<wenjianhn> $ ls -l /var/lib/cloud/seed/nocloud-net/user-data
<wenjianhn> -rw------- 1 root root 1535 Jun 26 14:18 /var/lib/cloud/seed/nocloud-net/user-data
<wenjianhn> Leads to infinite pending agent state.
<wenjianhn> ignore the above msg, my fault
<mgz_> wenjianhn: it'd be nice to know what you did to fix it, just for the record :)
<wenjianhn> mgz_, i executed 'cloud-init start' manually without sudo
<sarnold> wenjianhn: I'm surprised you were in a position to do that on a juju-controlled lxc instance..
<wenjianhn> sarnold, no /etc/init/juju-postgresql-0.conf in my lxc instance, so i tried to create one by myself
<sarnold> wenjianhn: aha :)
<m_3> damn, did hp reduce default quotas?
<b4sher> exit
<nextrevision> ls
#juju 2013-06-27
<wenjianhn> Need help
<wenjianhn> Where is the code that generates /var/lib/cloud/instances/nocloud/datasource in a lxc instance?
<wenjianhn> which file, which line in juju 0.7
<jpds> Guys, what happened to https://juju.ubuntu.com/docs/config-openstack.html ?
<pavelpachkovskij> looks like they released all new documentation and previous is available only in google cache :)
<FunnyLookinHat> marcoceppi, ping ?
<jcastro> jpds: old links shouldn't be broken
<jcastro> let me check it out
<jcastro> jpds: https://juju.ubuntu.com/get-started/openstack/ for now please!
<jcastro> I'll work a page on it now
<jcastro> evilnickveitch: added an openstack section, MP submitted
<evilnickveitch> jcastro, cool, will look at it right away.
<evilnickveitch> jcastro - hmmm, don't see that one - where is it?
<jcastro> https://code.launchpad.net/~jorge/juju/add-openstack
<jcastro> arosales: http://jujucharms.com/review-queue
<jcastro> confirmed, docs submissions end up in the queue
<arosales> jcastro, perfect
<arosales> and that is just targeting the bug at the doc series correct jcastro ?
<evilnickveitch> jcastro - you named the branch wrong
<evilnickveitch> should MP into lp:juju-core/docs
<jcastro> ah crap
<jcastro> right, I can fix that
<evilnickveitch> do it!
<jcastro> https://code.launchpad.net/~jorge/juju-core/add-openstack/+merge/171874
<jcastro> sorry about that!
<evilnickveitch> jcastro, it's okay, just was a bit puzzling. I have also updated the navigation to reflect the new page
<jcastro> evilnickveitch: I'm just used to submitting to juju and not juju-core
<jcastro> bad habit. :)
<evilnickveitch> not the only one I'm sure :)
<jpds> jcastro: Well, it's referred from the main doc page...
<jpds> jcastro: Though I see that's been updated.
 * jpds scrolls up.
<jcastro> yeah busted cron, all fixed jpds
<carif> jcastro, just got some email about lp:juju/docs, did you stop using rst and format in html5 directly?
<carif> just curious
<carif> nm, just looked at the sources in lp, dumb question
<matsubara> hi there, is there an alternative in juju-core for juju debug-hooks?
<jcastro> carif: yep, all html5
<jcastro> carif: nick will announce it on the list tomorrow.
<jcastro> carif: we've got inlined videos and stuff too now, it's nice.
<jcastro> no more building docs, etc.
<carif> jcastro, interesting, looks like lots of .rst files in bzr, did I look in the wrong place?
<carif> jcastro, so do you author in .html directly?
<jcastro> it's in juju-core/docs, not the old place
<jcastro> yeah so nick made templates in html, we just use normal html
<carif> jcastro, vg, i looked in the old place
<jcastro> https://juju.ubuntu.com/docs/contributing.html
<carif> jcastro, interesting, ty
<FunnyLookinHat> marcoceppi, ping ?
#juju 2013-06-28
<_mup_> Bug #1195537 was filed: "type: local" doesn't work in host with kernel option 'ds=nocloud' <juju:New> <https://launchpad.net/bugs/1195537>
<Daviey> Anyone else seen this with juju-core? http://pb.daviey.com/WKch/
<Daviey> evilnickveitch: Hey, I see the docs are updated.. but i noticed https://juju.ubuntu.com/resources/ the menu on the left doesn't work
<evilnickveitch> Daviey -ok, will look at that
<AskUbuntu> MAAS. Adding nodes | http://askubuntu.com/q/313790
<Akira1> good morning and happy friday all, anyone around?
<Akira1> hmm perhaps not, I've got an issue, trying to update/upgrade a charm I have deployed due to additional config elements i've added to a config.yaml for that charm
<Akira1> turns out the /var/lib/juju/units/<service unit>/charm contents are not being updated
<Akira1> still crickets?
<paraglade> Akira1: did you bump the value in the <charm_home>/revision file?
<Akira1> hmm actually no!
<Akira1> however, every time I attempt an upgrade, it is incrementing the internal revision
<Akira1> i.e. this was earlier: 2013-06-28 08:01:54,406 unit:cdh4-master/0: charm.upgrade DEBUG: Replacing charm local:precise/cdh4-17 with local:precise/cdh4-18.
<Akira1> 2013-06-28 08:01:54,415 unit:cdh4-master/0: charm.upgrade DEBUG: Charm has been upgraded to local:precise/cdh4-18.
<Akira1> i'm honestly not entirely sure what is in the charm cache at this point, the hook scripts are missing things that should have been there in every revision
<Akira1> is there a way I can workaround this and force update the hooks on each deployed service unit manually?
<paraglade> well I am not sure.  Might be a good idea to wait for one of the more seasoned folks to jump in. :)
<Akira1> fair enough ;)
<Akira1> everything had been working swimmingly until last night =/
<ehg> Akira1: my (go) juju upgrade-charm has a --force option, not sure what it actually does though :)
<ehg> i bump the number in the revision file sometimes to get it to update stuff
<ehg> (locally)
<Akira1> well my understanding was they'll hash out most things
<Akira1> basically what is happening here is we keep on needing to add config params
<Akira1> so we keep updating config.yaml, and updating the hook to deal with the new params
<Akira1> i've done it about 6 times now
<ehg> we had to use juju set --config=<file> <service> to get the new config values in
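ehg's workaround spelled out as a sketch; the option name is an illustrative assumption, the service name follows Akira1's logs:
    cat > new-config.yaml <<'EOF'
    cdh4-master:
        some-new-option: some-value
    EOF
    juju set --config=new-config.yaml cdh4-master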
<Akira1> but the params i added last night aren't getting picked up
<Akira1> I inherited all this stuff from someone else so I didn't do this from scratch
<Akira1> the hook script is missing a config-changed hook =/
<ehg> Akira1: the unit isn't in an error state is it?
<Akira1> it sure is now
<ehg> i think juju queues up hook runs until you resolve it
<ehg> so juju resolved <unit> may resume upgrades
<ehg> not completely sure :)
<Akira1> im giving that a shot now too
<Akira1> i was doing resolved --retry all night
<Akira1> heh
<Akira1> so that does make things run again
<Akira1> and fail
<Akira1> it is so weird, I'm actually on the service unit itself, blowing up the charm dir
<Akira1> forcing it to redownload it
<Akira1> and it redownloads the wrong one every time
<ehg> bit you increase the revision number in the revision file?
<ehg> s/bit/did
<Akira1> you know
<Akira1> you might be on to something here
<Akira1> my revision # in the charm was out of sync with what juju thought it had
<Akira1> when you do a juju upgrade-charm it will increment the number no matter what
<Akira1> so I was on local:precise/cdh4-20 even though my revision file said 10
<Akira1> i just bumped up the value in revision to 30
<Akira1> and the juju dry-run says it will upgrade: 2013-06-28 08:31:29,783 INFO Service would be upgraded from charm 'local:precise/cdh4-20' to 'local:precise/cdh4-30'
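The sequence that worked here, reconstructed as a sketch (the local repository path is an assumption):
    cat ~/charms/precise/cdh4/revision         # said 10
    echo 30 > ~/charms/precise/cdh4/revision   # beat juju's local:precise/cdh4-20
    juju upgrade-charm --repository ~/charms --dry-run cdh4-master
    juju upgrade-charm --repository ~/charms cdh4-master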
<ehg> we have problems because they are multiple people on different machines doing stuff with the local juju repo
<ehg> *there
<Akira1> similar type of stuff?
<Akira1> well lets cross our fingers
<Akira1> well you guys don't have to, but I sure as hell am
<ehg> yeah :)
<Akira1> omg omg
<Akira1> i think this worked!
<Akira1> that sure is messed up
<ehg> woot
<ehg> so in the juju docs somewhere it mentions local repos causing weirdness if you have multiple people using them
<Akira1> im going to have to dig that up
<ehg> imho it's something that juju should just support
<Akira1> juju site got updated last night
<Akira1> so a bunch of google'd links are 404s now
<ehg> although i was wondering if we could run our own juju repository
<ehg> ooh
<Akira1> well anytime you have something like that, you could try and fake out the remote repo
<Akira1> or maybe use launchpad?
<Akira1> *shrug*
<Akira1> it'd be pretty neat to be able to pull from git or something too
<Akira1> since we obviously have our own git repo, svn, etc
<ehg> yeah, no bzr pls :)
<Akira1> :)
<Akira1> you guys know if anyone is doing any consulting in this space?
<Akira1> i've got 6 days left with this team and I'm afraid juju is going to die with me when i leave
<ehg> maybe canonical?
<Akira1> yeah im thinking of calling them on monday
<Akira1> as with any services org, I'm sure they will SAY they can do something
<Akira1> and then send someone out who never saw it before
<Akira1> I saw that soooo many times with IBM
<Akira1> "oh sure we're sending you an expert!, only $250/hr!"
<ehg> haha, well i think juju is pretty important to them
<ehg> so hopefully they can help
<Akira1> yeah, we shall see
<Akira1> while folks are feeling chatty... what platform are you running against, have you tried this stuff against MaaS?
<Akira1> i'm using maas and things are going sort of swimmingly, but I get occasional http 401 errors from juju commands due to oauth failures
<ehg> ec2 for us
<ehg> was MaaS easy to set up?
<Akira1> actually, yes
<Akira1> I ran into a couple weird things
<jcastro> hey so we should have the doc 404 links fixed soon
<Akira1> awesome :)
<Akira1> with maas you can theoretically install it right from the OS boot media
<Akira1> but it doesn't work
<ehg> using PXE?
<Akira1> yeah
<Akira1> but if you install that way you run into some chicken/egg scenarios with services coming up in the right order
<Akira1> so I blew up the machine, just installed 13.04 on it
<Akira1> THEN installed maas and it worked fantastically
<jcastro> was the maas in 13.04 different from the one you used before?
<Akira1> this was setting up a fresh cluster jcastro
<jcastro> also in today's news, Frank has an OSX build
<ehg> nice
<Akira1> some folks had submitted a bug on what i saw, and the bug was supposedly fixed
<Akira1> but alas, such was not the case
<ehg> goju compiles fine on OS X - pretty cool
<jcastro> I think all we need to do now is find out where to put it
<ehg> my OS X using colleagues would love it if they could go "brew install juju-client" or something similar
<jcastro> yep, that's what we should do
<jcastro> has homebrew pretty much taken over the package manager space on osx?
<ehg> i'm not the best person to ask that, but it seems homebrew is preferred to macports in our office at least
<jcastro> these homebrew formulas look pretty simple
<paraglade> has anyone used Tsuru - http://www.tsuru.io/ ?  "juju" is the default provisioner used by Tsuru, and it claims that "It's an extended version of Juju, supporting Amazon's Virtual Private Cloud (VPC) and Elastic Load Balancing (ELB)."
<jcastro> hey EvilBill
<jcastro> I mean evilnickveitch
<jcastro> https://juju.ubuntu.com/docs/write-charms.html
<jcastro> do you think we could toss a 404 page up in the meantime?
<jcastro> something like "We've redone our docs, click here to get to the main page"
<evilnickveitch> jcastro, sure. well, we should have a redirects file
<jcastro> Yeah but I don't think Marco gets back to work until like Monday right?
<evilnickveitch> jcastro, yeah, I can put a dummy page in there to do a quick redirect and get it in before the next sync, that should hold us off until monday.
<evilnickveitch> jcastro - are there any others you can think of?
<Akira1> sounds like someone needs to write some redirect rules!
<Akira1> btw new layout looks nice, good work guys :)
<evilnickveitch> Akira1, thanks! We have the rules, but they are in someone's draft folder until they get back from holidays
<evilnickveitch> jcastro - done that one, btw
 * jcastro nods
<jcastro> evilnickveitch: ok, so what am I to do with getting-started?
<jcastro> you wanted that to redirect to the doc page right?
<evilnickveitch> jcastro - yeah, that would be best
<evilnickveitch> and the other getting started too
<jcastro> what other getting started?
<evilnickveitch> I thought there were 2 in different places on j.u.c
<evilnickveitch> ?
<evilnickveitch> or maybe I am mad
<jcastro> https://juju.ubuntu.com/get-started/
<jcastro> you're probably thinking of the one on normal ubuntu.com
<evilnickveitch> ah, ok
<evilnickveitch> jcastro - can you change that link?
<jcastro> juju charm school in 5 minutes!
<jcastro> follow along on http://ubuntuonair.com
<niemeyer> o/
<jcastro> PLease post questions in this channel once we get started with the walkthrough
<jcastro> one sec
<arosales> jcastro, looks to be back now
<robbiew> charms *are* code...so document them well :)
<jcastro> the guy with the stable net connection ends up being the worst one
<negronjl> I guess I'm scaring the crowd :)
<jcastro> stand by please!
<henninge> where are the docs for the juju-tools? (unit-get etc.)
<robbiew> jcastro: it's fine for me
<arosales> jcastro, back now
<negronjl> http://jujucharms.com/charms/precise/mongodb/hooks/hooks.py
<niemeyer> Talking about docs, two quick plugs:
<henninge> jcastro: It's me ;-) Hi! https://launchpad.net/~henninge
<niemeyer> 1) The technical documentation for juju was *just* updated, and looks so much better: http://juju.ubuntu.com/docs
<henninge> niemeyer: Hi! Yeah, but I am missing stuff or not finding it.
<niemeyer> 2) I've you've missed it, last week I've published a long due high-level blog post about what juju is: http://blog.labix.org/2013/06/25/the-heart-of-juju
<niemeyer> s/I've you've/If you've/
<henninge> The hooks docs seem to be gone as well.
<niemeyer> henninge: Fair point, we'll look into this and see if the new documentation has dropped content
<jcastro> Yeah I think that page is missing
<jcastro> I can't find it anywhere
<arosales> fyi, old docs are still available at
<arosales> http://jujucharms.com/docs/
<niemeyer> The writing a charm document seems pretty light
<henninge> arosales: Great, thanks!
<arosales> so if you find a missing page please file a bug  via https://bugs.launchpad.net/juju-core/+filebug
<jcastro> yeah it's still under construction
<jcastro> the write charms part
<arosales> jcastro, but all content should be in the new docs, ported over from the old docs
<arosales> specifically the old docs and new docs should have the same content
<arosales> if something is missing that is a valid bug
<arosales> henninge, thanks for trying out our docs :-)
<niemeyer> There we go again
<niemeyer> Oh, it's back
<henninge> There are also pages like this that now have dead links:
<henninge> http://askubuntu.com/questions/82683/what-juju-charm-hooks-are-available-and-what-does-each-one-do
<jcastro> the rewrites haven't been deployed yet
<jcastro> those will be fixed on monday
<arosales> redirects that is
<robbiew> jcastro: I'm seeing  negronjl
<negronjl> robbiew, fixed ?
<jcastro> fixed
<henninge> Ah, good to hear ;-) Great work
<henninge> not fixed for me
<robbiew> nope
<robbiew> bingo!
<henninge> yeah
<henninge> No, it's fine now
<niemeyer> jcastro: Do we have a fallback deployment of the docs?
<niemeyer> jcastro: Having no docs whatsoever about a lot of those topics is a step backwards :(
<arosales> niemeyer, yes the old docs are still at http://jujucharms.com/docs
<niemeyer> arosales: Sweet, thanks
<niemeyer> arosales: We need a link from the new docs there
<arosales> niemeyer, please file any bugs you find for missing content
<niemeyer> arosales: Well, that sounds backwards
<niemeyer> arosales: The content is there.. whoever is rewriting it should make sure that things are being dropped on the floor
<niemeyer> are not
<arosales> niemeyer, yup we'll update jujucharms.com when the docs have some more run time on juju.u.c
<henninge> I can read
<arosales> niemeyer, agreed
<arosales> niemeyer, I am saying we have tried to port all content over
<arosales> however if you notice missing content please let us know via a bug report
<jcastro> I don't think we ever documented the juju tools did we?
<arosales> niemeyer, the task before going live with the new docs was to make sure all "old" content was available in the "new" docs
<arosales> so all content ~should~ be there, but I am sure we will find some issues here and there as we use the new docs :-)
<henninge> Where does private-address get set? Is that automatic?
<niemeyer> arosales: Sorry, but that's not realistic
<niemeyer> arosales: Just look at https://juju.ubuntu.com/docs/authors-charm-writing.html
<niemeyer> arosales: and compare that with jujucharms.com/docs/charm.html
<niemeyer> arosales: A *ton* of important information was dropped
<hazmat> and formatting there is a bit off as well
<niemeyer> Yeah
<hazmat> niemeyer, that new doc is a straight copy of.. http://jujucharms.com/docs/write-charm.html
<arosales> so we are still polishing :-)
<jcastro> yeah I don't think Nick considers the Writing a Charm section complete
<arosales> but please file a bug even if it is saying content is missing from this page and we'll track it down
<arosales>  https://bugs.launchpad.net/juju-core/+filebug
<jcastro> I think this first drop was to concentrate on the end user bits
<arosales> formatting, or any isses you see :-)
<jcastro> the docs are html5, if someone wants to just mass copy and paste and then mark up the old content and submit a MP that would be awesome
<henninge> What was the hook you were editing?
<hazmat> negronjl, i'm here huckleberry
<henninge> The one on the left
<henninge> -joined
<niemeyer> hazmat: Yeah, but it has been repositioned to be the only content about charms
<henninge> Thanks, I got it ;-)
<niemeyer> arosales: Sure, but that approach has some serious issues. Important content just went away
<niemeyer> arosales: and external links are now broken
<jcastro> the links will be fixed on monday
<arosales> niemeyer, yes redirects land on monday
<arosales> niemeyer, we'll review our process, and the feedback is much appreciated
<niemeyer> arosales, jcastro: That doesn't solve the problem if the content is different!
<arosales> niemeyer, what I am saying now is if you notice something let us know via a bug and we'll work to make it better
<niemeyer> arosales: For example, have a look at http://askubuntu.com/questions/82683/what-juju-charm-hooks-are-available-and-what-does-each-one-do and try to imagine how little sense will that response make with the new content that the links will point to
<jcastro> niemeyer: nick is working fulltime on the issues you're talking about
<jcastro> and unfortunately the guy who wrote the redirects went on vacation, so the links will be busted for 2 days, tops
<jcastro> let's discuss this after the charm school
<henninge> So, I just did something like that today but I put the lib (or rather a python package) inside the hooks dir.
<henninge> Since the hook is executing in that directory, it will find it.
<henninge> w/o messing with pythonpath
<niemeyer> jcastro: I don't really have much else to say.. as I responded on juju@, I appreciate the new work, but suggest rolling back the new docs until they cover at least the most relevant portions of the old documentation
<niemeyer> henninge: The hook executes on the charm directory
<hazmat> henninge, CHARM_DIR is available to hooks
<henninge> oh, ok
<niemeyer> henninge: The documentation is misleading.. I sent a note about that to the list a couple of days ago
<hazmat> henninge, for adding libraries.. ie PYTHONPATH=$CHARM_DIR:$PYTHONPATH
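A sketch of the hook-side pattern hazmat describes; the package name and entry point are illustrative assumptions:
    #!/bin/bash
    # hooks/install: make a python package shipped at the charm root
    # importable, then hand off to the real logic
    export PYTHONPATH="$CHARM_DIR:$PYTHONPATH"
    python -m mycharmlib.install    # hypothetical package in $CHARM_DIR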
<hazmat> niemeyer, yeah.. that section part was wrong
<henninge> hazmat: Ah good, I'll try that
<henninge> I guess that's what I remembered
<robbiew> yep
<henninge> yes
<hazmat> jcastro, yes
<Akira1> we've expanded the hadoop charm over here to work with the cloudera cdh4 distribution, btw
<jcastro> wait, what!
<jcastro> Akira1: dude, that is awesome!
<Akira1> i haven't really looked at the OG one yet
<Akira1> but i've been expanding the configs and stuff like that to make it work
<Akira1> somehow the charm is missing the config changed hook
<Akira1> it is pretty slick and works great
<Akira1> one thing we can't figure out is how to deploy a secondary name node on another instance, right now the secondary namenode stuff runs on the same box as the primary namenode which is kind of a no-no
<Akira1> yeah that'd be slick wouldn't it...
<Akira1> sadly im not sure we did it that way...
<Akira1> big paste inc:
<Akira1> configure_sources () {
<Akira1>     source=`config-get source`
<Akira1>     juju-log "Configuring hadoop using Cloudera repo..."
<Akira1>     wget -q                                                                    \
<Akira1>     https://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/cloudera.list   \
<Akira1>     -P /etc/apt/sources.list.d
<Akira1>     wget -O -                                                                  \
<Akira1>     https://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key     \
<Akira1>     | apt-key add -
<Akira1>     apt-get update
<Akira1> }
<Akira1> sweetaction
<Akira1> you guys going to run out of time before you talk about the monitoring interface
<Akira1> ?
<m_3> Akira1: yes
<m_3> :)
<Akira1> bah, that is one thing i actually want to hear about as i've been struggling with it :)
<Akira1> may i suggest that as a topic for another one of these screencast thingies?
<m_3> yes, absolutely!!
<Akira1> :)
<Akira1> yeah, do that
<Akira1> :-P
<Akira1> i've got nagios, nrpe, etc
<m_3> lp:charm-helpers
<henninge> Thanks guys!
<Akira1> yeah, epic presentation, thanks
<jcastro> \o/
<jcastro> m_3: can you pass me along the youtube url for the recording?
<m_3> hey
<Akira1> i've inherited all this juju stuff from a predecessor, he was working on sweet stuff
<henninge> Are these sessions available for watching afterwards? I'll be on holiday in two weeks.
<Akira1> integration with saltstack for one
<m_3> Akira1: cool!
<jcastro> henninge: yep, I keep a list here: https://juju.ubuntu.com/resources/videos/
<jcastro> Akira1: that is awesome man
<m_3> henninge: yes... we're working on an actual youtube channel, but they're all available from juju.ubuntu.com/resources/videos
<m_3> ah, jinx
<Akira1> like i said WAY earlier, i've been left with a mess heh
<jcastro> well hopefully we can help!
<Akira1> i started touching this stuff 4 weeks ago
<Akira1> and now I'm moving on so i'm trying to leave the team with as much solidity as possible
<jcastro> m_3: I need the youtube url for the videos page please
<henninge> jcastro, m_3: thanks
<m_3> henninge: np... thanks for hanging!
<m_3> jcastro: dude, I'm looking for it
<jcastro> http://www.youtube.com/watch?feature=player_embedded&v=08dOs3eO04M
<jcastro> aha
<jcastro> it's linked from ubuntuonair of course
<henninge> I have a basic question: Is this all done using the new juju (juju-core)? I am still using 0.7 because I had some problems.
<Akira1> i still use 0.6.1
<Akira1> so, *shrug*?
<jcastro> henninge: the one thing we don't have in -core is the local lxc provider
<jcastro> for everything else, you want to use juju-core
<m_3> henninge: mixed actually... depends on which series you're on
<jcastro> Akira1: you'll want to upgrade to .7
<m_3> on precise, I run 0.7... on raring, I run 1.10.0
<m_3> Akira1: yes, definitely upgrade from 0.6.1 to 0.7!
<henninge> ok, my workstation is raring, the server is precise.
<m_3> henninge: haha, yeah... that's pretty common
<henninge> but am i currently missing something using 0.7? Will it still get updates?
<m_3> henninge: it's EOL'd, so we're trying to switch everybody to 1.x
<jcastro> all new development is happening in 1.x
<Akira1> hmmmmmmmmmmm
<Akira1> i'll have to check that out
<henninge> what I expected ;)
<Akira1> i'll go look at a changelog
<m_3> henninge: really it depends on what you're doing honestly... most things work fine in 1.x
<m_3> no lxc support is the biggest gap atm
<m_3> note that we now have a nice mac client for 1.x series!
<Akira1> is 1.x still python or is that go?
<m_3> 1.x is go
<Akira1> cool
<Akira1> that coming along ok?
<m_3> yup... it's moving nicely
<Akira1> sweet
<henninge>  m_3: will 1.x get lxc support eventually?
<m_3> biggest problem atm is packaging... otherwise the juju bits are working fine
<Akira1> i've been giving a few presentations internally about juju, we're a kind of small company, only about 30k employees...
<jcastro> henninge: like this next month we hope
<m_3> henninge: yup, it's on the nearterm roadmap... with us going "you done yet?" every day!
<henninge> :-D
<m_3> haha
<henninge> You guys rock!
<m_3> Akira1: let us know if we can help with presentation material or content
<Akira1> too late! :-P
<henninge> I really like juju and it's exactly what I need here. So, thanks.
<m_3> we can go over the monitoring stuff outside of regular charmschools too
<Akira1> I gave my last presentation to mumbai this morning
<m_3> henninge: awesome!
<Akira1> I had a question way way way way earlier in the day, prolly before you guys were up
<m_3> Akira1: dang, well we should totally hook you up
<Akira1> which of you work for canonical?
<m_3> well yes :)  I'm on US pacific time atm
<Akira1> I'm on monster energy drink time
<m_3> several in the channel... I do as well as jcastro
<Akira1> since my cdh4 charm blew up on me last night but i fixed it this am ;)
<Akira1> are any of you affiliated with the consulting/services arm?
<m_3> Akira1: I'm wondering what you did with the jdk version...
<Akira1> I submitted an inquiry out to them via their shoddy web portal, we might be looking for some paid assistance
<Akira1> right now everything is working with openjdk even if it isn't advised
<m_3> Akira1: we're on the products side of the house... negronjl is canonical consulting though
<m_3> Akira1: we can certainly put you in touch with the right folks
<Akira1> in fact, my juju deployed cluster is actually humming along smoother/faster than our older cluster that was manually deployed
<Akira1> that being said, i've made a bunch of hacks to get stuff working
<m_3> Akira1: great news!  I've been really dreading integrating the oracle versions
<Akira1> tons of things in the charm aren't necessarily best practices for config
<Akira1> so my config.yaml is now... bigger...
<m_3> yeah, that comes along
<Akira1> and i have some issues due to maas too
<Akira1> im sure i'll figure some of it out eventually
<Akira1> sometimes this hadoop stuff causes you to do things that aren't very... shall we say, service orchestration-y
<m_3> ha!  was just going to ask you which provider you were running on
<Akira1> yeah i inherited about 170 legacy physical servers
<jcastro> hey so I don't know who you'd talk to wrt. consulting
<m_3> Akira1: let us know what you had to do outside-of-paradigm... we can really cover a lot with config and constraints
<jcastro> negronjl: whom should Akira1 contact?
<m_3> I think we made negronjl late to another meeting... sounded like he had to run
<Akira1> m_3: the maas template doesn't mount all the local storage, so I do use maas tags to constrain to a few known nodes
<Akira1> since my hadoop nodes have additional local storage
<Akira1> but I actually mount the storage after provisioning
<Akira1> it all works but it isn't 100% automagical
<m_3> right... gotcha
<Akira1> and the big thing that could use fixing is the secondary namenode thing
<m_3> there're some specs in place for more control of local storage in the 1.x go versions
<Akira1> thing is im not sure what relation would define that
<m_3> dunno where it landed in priority though
<jcastro> Akira1: pm me your contact info and I''ll have someone contact you asap
<henninge> bye! have a nice weekend (eventually)
<m_3> that sounds like more of a pure constraint thing...
<Akira1> well
<Akira1> the way the charm works, as you guys talked about
<m_3> could maybe specify mounts after-the-fact via relations or config... but the constraint is the key to getting the service positioned for that
<Akira1> oh yeah the mounts thing is definitely outside of the box
<Akira1> and i just have to make the right tags
<Akira1> maas-tags
<m_3> yup
<Akira1> i got that working ok via hacky ways
<Akira1> knowing machine serial numbers =/
<m_3> oh, sorry... so secondary namenode
<Akira1> yeah
<m_3> _should_ be able to do with relation "roles"
<Akira1> ok
<m_3> but we haven't tried b/c the main hadoop charm's using an older hadoop
<Akira1> ah
<m_3> it's worth trying to add multiple hadoop-master units and fixing the relations to work
<Akira1> it is actually in here...
<Akira1> but i haven't figured out how to get it to fire
<m_3> but you could also just do something like... juju deploy hadoop hadoop-master; juju deploy hadoop hadoop-secondary-masters -n2
<m_3> ah, ok
<m_3> cool, well it'd be cool if you could push the cdh changes into the current charm
<Akira1> it is actually in the regular charm
<m_3> that'd totally rock
<Akira1> https://bazaar.launchpad.net/~charmers/charms/precise/hadoop/trunk/view/head:/hooks/hadoop-common
<Akira1> er
<Akira1> not the cdh4 stuff, the secondary namenode stuff
<Akira1> i'll try and cook up a way, ours really looks almost exactly like the regular hadoop one
<Akira1> but it isn't configurable to use base hadoop
<m_3> oh, sweet!
<m_3> jamespage rocks!
<Akira1> he's got the secondary namenode stuff in there
<Akira1> i'll see about cooking up a way to use one or the other, shouldn't be too hard
<Akira1> you get neat issues with that and maas too
<m_3> secondary namenode right there... so I don't know if that was working or if that was just planning for the future, but cool
<Akira1> yup, i see it in there, dunno wtf can trigger that relation
<Akira1> i mean
<m_3> yeah... um... I bet you do
<Akira1> i should make the charm reconfigure the squid proxy for maas heh
<m_3> ouch
<m_3> ok, I'm gonna go grab some food... later gang
<Akira1> adios
<Akira1> thx again for the presentation earlier
<m_3> sure... thanks for hanging!
<m_3> Akira1: jorge set you up with canonical contacts huh?
<jcastro> Akira1: I'm going to have a guy mail you directly
<Akira1> sounds good
<Akira1> i can start a dialog with them about what we need and we can see if anyone can help :)
<jamespage> m_3, Akira1: secondary namenode stuff did work last time I tried it
<arosales> negronjl, m_3, jcastro thanks for the charm school!
<Akira1> hmm really james?
<Akira1> what was the relation you added to do it?
<Akira1> we have a running secondary namenode process, it just runs on the namenode for some odd reason
<matsubara> hi there, does anyone know what's the alternative for juju debug-hooks on juju-core?
<sidnei> matsubara: there's none yet
<matsubara> :-(
#juju 2013-06-30
<fossterer> Hi ! just installed juju.. getting error.. need help
<fossterer> Hello... anyone here?
<fossterer> help....
<JoseeAntonioR> marcoceppi: around?
#juju 2014-06-23
<mwhudson> hm
<mwhudson> when using the manual provider, can you tell the added machines apart somehow?
<sparkiegeek> tvansteenburgh: hi could you take a look at https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041 it fixes a bug in installation and adds a bunch of unit tests
<jamespage> niedbalski, around ? looking at the ceph-osd merge proposal you made
<galebba> I am trying to colocate a few openstack services into a single maas node with lxc containers. however when i deploy them from juju, they always show up as agent pending in juju status. I can see all the containers are up and running. Any idea how to troubleshoot ?
<lazypower> galebba: is there anything showing up in machine-0.log that provides a hint?
<automatemecolema> can anyone tell me how I can upgrade my juju environment? I've tried juju upgrade-juju but it remains at 1.18.1
<automatemecolema> I've added the juju/stable ppa and have done an apt-get update
<jcastro> achiang, hey, I found this wrt. our last conversation: http://askubuntu.com/questions/262833/how-do-i-force-juju-to-deploy-a-fresh-charm-not-a-cached-one
<automatemecolema> nevermind... didn't realize I can just update it from apt-get
<automatemecolema> So I'm getting an error creating a blank charm here it is: https://gist.github.com/anonymous/fe34ed482cc45508de23 Not sure what to make of it
<lazypower> automatemecolema: thats a known regression as of Friday in charm-tools. There's some new charm templating code to blame there.
<automatemecolema> So what should I do about it?
<automatemecolema> Not sure how I can roll back to an older version
<marcoceppi> automatemecolema: there's a new release that fixes this coming out today
<automatemecolema> fantastic
<automatemecolema> :)
<lazypower> automatemecolema: https://bugs.launchpad.net/charm-tools/+bug/1332664 - looks like it was fixed in trunk, its just not released.
<lazypower> oh
<lazypower> looks like marco's on it :)
<automatemecolema> Until the fix is released can someone point me to where I can pull the bits from a previous working version?
<automatemecolema> looks like I can download source files from launchpad, but only for 1.2.9 or 1.3.0, looks like tons of bugs in 1.3.0
<tvansteenburgh> 1.2.9 should be fine
<automatemecolema> Yep I just installed 1.2.9 from source
<tvansteenburgh> automatemecolema: note that prior to 1.3 'charm create' can only create bash charms
<automatemecolema> So what can charm create do in 1.3+
<tvansteenburgh> python charms
<automatemecolema> And that's why 1.3.1 is broken because it's trying to create symbolic links for some python hooks?
<tvansteenburgh> and it adds a plugin system for charm templates so we can add more templates, e.g. ansible-based charms
<tvansteenburgh> there were several bugs, in your instance that's not the problem
<jamespage> marcoceppi, around? wanted to chat about mysql and how we might make it more multi-network aware
<marcoceppi> jamespage: I am around, but I'm a bit tied up at the moment. Give me an hour?
<jamespage> marcoceppi, works for me
<marcoceppi> tvansteenburgh: I've built a prelim release of 1.3.2, want to test it?
<tvansteenburgh> yes
<marcoceppi> it looks good on my system
<tvansteenburgh> marcoceppi: how do i install it?
<marcoceppi> tvansteenburgh: uploading the deb to my server, one sec
<tvansteenburgh> k
<marcoceppi> tvansteenburgh: http://marcanonical.com/charm-tools_1.3.2-0ubuntu1_all.deb
<tvansteenburgh> thanks, testing
<tvansteenburgh> marcoceppi: it's good
<marcoceppi> tvansteenburgh: sweet, pressing now
<marcoceppi> automatemecolema: new tools released, should be available in apt in 15-20 mins
<automatemecolema> marcoceppi: thanks watching some of the charm school vids right now :/
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: mceppi & jcastro || News and stuff: http://reddit.com/r/juju
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://goo.gl/9yBZuv || Unanswered Questions: http://goo.gl/dNj8CP || Weekly Reviewers: marcoceppi & jcastro || News and stuff: http://reddit.com/r/juju
<marcoceppi> automatemecolema: cool, keep in mind if you want the same charm create template that was generated during those charm schools you'll need to use `charm create -t bash`
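For reference, both skeletons side by side (the charm name is illustrative):
    charm create -t bash my-service   # pre-1.3 style bash skeleton
    charm create my-service           # 1.3+ default: python skeleton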
<automatemecolema> So moving forward python is the wave of the future of hooks in a charm?
<automatemecolema> are there some major lingering differences in 1.3+ that will require some new charm school vids to point out?
<automatemecolema> Also, will the charm school live streaming be a normal scheduled event in the future, or is this just something Canonical/Ubuntu is doing to get awareness out there about juju and charms?
<marcoceppi> automatemecolema: we do it about every two weeks
<marcoceppi> next one is this Friday, IIRC
<sparkiegeek> marcoceppi: jcastro: hi guys! could you please give some review love to https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041? It fixes a bug in installation which is a blocker (and also adds some lovely unit tests)
<jcastro> ooooooh
<marcoceppi> sparkiegeek: will do, thanks for the heads up!
<sparkiegeek> marcoceppi: cheers!
<dpb1> niedbalski: hey, did you see my review comments?  thoughts?  I'd like to get this merged up, if possible.
<dpb1> btw: https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
<niedbalski> dpb1, answered, just minor observations.
<automatemecolema> juju pulled down the bits for 1.3.2. The template charm looks quite a bit different from the 1.2 versions
<lazypower> automatemecolema: by default it now specs out a python charm
<lazypower> automatemecolema: if you wish, you can pass -t bash and it will generate a bash charm template. We recommend python based charms as it's easier to unit test and typically yields higher quality charms.
<automatemecolema> lazypower thanks for the explanation. We are planning on using puppet to build charms with, and wonder if using bash for now is the best route
<lazypower> automatemecolema: Bash is a good option. There's a precise charm for puppetmaster that may have the template structure for you - i'm not real familiar with the charm
<automatemecolema> Does anyone have a charm that I can use as a template to see how others have codified their charms with config management toolsets like chef, puppet, ansible, etc etc.
<lazypower> but if you find it to be of help/use - Let us know. We'd love to build a template for puppet charms as well.
<automatemecolema> Yea we are wanting to leverage puppet enterprise, so it seems we might have to write new charms to work with PE
<marcoceppi> automatemecolema: you need to use `charm create -t bash`, as I mentioned earlier, to follow along in the video
<marcoceppi> why is my buffer not refreshing until I send a message
<marcoceppi> stupid IRC client
<lazypower> automatemecolema: a good chef example is the rails charm, puppet master for puppet - theres a branch for Elastic Search using ansible here: https://code.launchpad.net/~michael.nelson/charms/precise/elasticsearch/trunk
<automatemecolema> lazypower thanks reviewing the code on launchpad now
<automatemecolema> Is there documentation around the charm-helpers.yaml file? Not sure what all options are available to configure it with
<automatemecolema> lazypower: looks like that example is using ansible? Same difference when using it as a reference for how others are building using config management
<lazypower> automatemecolema: ah, i just pulled up puppet-master and its in bash. Not puppet - so yeah. Following the rails / elastic search charm pattern will help
<lazypower> both of which call the external config management framework from bash hooks.
<marcoceppi> lazypower automatemecolema there is a puppet charm somewhere, I just can't seem to find it
<marcoceppi> mhall119: can  you comment on this when you get a chance? https://bugs.launchpad.net/charms/+bug/1317281
<_mup_> Bug #1317281: New Charm: go-pronto <Juju Charms Collection:New> <https://launchpad.net/bugs/1317281>
<lazypower> i remember seeing one too, but had the same results.
<Guest97229> Hello. I have created containers for different OpenStack services on one of the nodes. However I don't know the credentials to ssh into those lxc containers
<dpb1> thanks niedbalski, if you agree, could you mark your review and LGTM?
<dpb1> niedbalski: (I just replied, might take a sec to show up)
<automatemecolema> Guest97229 juju ssh {machine #}
<automatemecolema> Any good docs around configuring security for your juju environment?
<automatemecolema> best practice for a prod deployment maybe?
<automatemecolema> going to the exposed juju-gui service using the password found in the .jenv files seems fairly insecure
<marcoceppi> automatemecolema: what is insecure about it to you?
<mhall119> marcoceppi: commented
<X-warrior> How could I add a config option that is the file path to a file?
<danharibo> I'm not sure if I'm missing something here; how do I deploy a charm that uses git over ssh to retrieve the repository, I don't see how I would set up the SSH keys for this
<automatemecolema> marcoceppi: 1. Password is in clear text in a configuration file. 2. No way to delegate access rights to the juju environment (all or nothing scenario)
<tvansteenburgh> automatemecolema: re: charm-helpers.yaml, this may be helpful -> https://pythonhosted.org/charmhelpers/getting-started.html#updating-charmhelpers-packages
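A minimal charm-helpers.yaml along the lines of that guide; the include list is an assumption, trim it to whatever your hooks actually import:
    cat > charm-helpers.yaml <<'EOF'
    destination: hooks/charmhelpers
    branch: lp:charm-helpers
    include:
        - core
        - fetch
    EOF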
<wrale_> must all juju charm running nodes have direct access to the public internet, so that they may install from the charm store?  i'm trying to deploy juju-gui, but i'm on a private network... bootstrap worked when i did sync-tools.. now what?
<tvansteenburgh> wrale_: most charms will try to install packages from apt, at a minimum
<tvansteenburgh> wrale_: this may be useful: https://juju.ubuntu.com/docs/howto-proxies.html
<wrale_> tvansteenburgh: thank you.. i'm using maas.. i was hoping its deb squid would take care of this for me.. thanks for the link.. will look
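The shape of what that howto describes, as a sketch to run after bootstrap; the squid host and port are assumptions:
    juju set-env http-proxy=http://squid.internal:3128
    juju set-env apt-http-proxy=http://squid.internal:3128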
<danharibo> I keep running into "/etc/ansible/host_vars/localhost: Expecting property name: line 1 column 1 (char 1)" during the install of python-django
<danharibo> is this charm broken or what?
<automatemecolema> tvansteenburgh: thanks lots of good stuff on that site
<tvansteenburgh> music to my ears :)
<schegi> hi playing around with maas/juju for a couple of days. want to deploy openstack/ceph ha cluster. is there a way to customize network configuration of the ceph charm? like to have two networks public/cluster different from the public network used by juju itself. Additionally i'd like to have customized osds (journal device / data device)
<schegi> osds should be easy. just deploy naked and add osds later. but changing the networks in a deployed ceph environment would mean to replace the monitors and i am not sure if this would reflect back to juju-relations
<tvansteenburgh> danharibo: does it break in the install hook? can you pastebin the contents of /etc/ansible/host_vars/localhost ?
<danharibo> https://gist.github.com/danharibo/0262671fa222564b1663
<danharibo> yes, 'hook failed: "install"'
<tvansteenburgh> danharibo: my guess is that charmhelpers for python-django need to be upgraded
<tvansteenburgh> i recall a bug fix relevant to this issue
<danharibo> this one? https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/charmhelpers/+merge/216351
<danharibo> that was one of things I found while searching but I wasn't sure how it related
<tvansteenburgh> danharibo: no, let me see if i can find it
<tvansteenburgh> danharibo: https://bugs.launchpad.net/charm-helpers/+bug/1318036
<_mup_> Bug #1318036: ansible thinks /etc/ansible/host_vars/localhost format is json <Charm Helpers:Fix Committed> <https://launchpad.net/bugs/1318036>
<tvansteenburgh> so this was fixed in charmhelpers but python-django doesn't have it yet
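A stopgap sketch until the store syncs: deploy the fixed branch from a local repository (the local path is an assumption; the branch URL is the MP danharibo found):
    mkdir -p ~/charms/precise
    bzr branch lp:~patrick-hetu/charms/precise/python-django/charmhelpers \
        ~/charms/precise/python-django
    juju deploy --repository ~/charms local:precise/python-django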
<automatemecolema> I notice when building things in my juju environment bootstrapped to Amazon, it throws tons of security groups out there. Is there any way of making sense of what security groups match what, and any way to tidy this up a bit?
<danharibo> tvansteenburgh: according to the merge, commit 62 fixes the issue
<tvansteenburgh> danharibo: link?
<danharibo> https://code.launchpad.net/~patrick-hetu/charms/precise/python-django/charmhelpers/+merge/216351
<tvansteenburgh> danharibo: ah, theory busted :(
<tvansteenburgh> oh wait
<tvansteenburgh> danharibo: i'm not sure these changes have made it into the charm store yet
<danharibo> yeah that might be the issue
<Delair_> how can i download the icehouse package for juju
<automatemecolema> Delair_: you mean the icehouse charms?
<danharibo> tvansteenburgh: you were right.
<schegi> Delair you should go for the trusty packages as far as i know (juju charm get lp:trusty/XXX )
<schegi> https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes/OpenStackCharms
<schegi> or set your series to trusty in the environments
<schegi> but correct me if i am wrong also kind of new to juju and charms
<Delair> SORRY I LOST CONNECTION.. I meant to say that when i deploy openstack using juju it deploys grizzly.. I know the dashboard of icehouse is different
<sarnold> Delair: < schegi> or set your series to trusty in the environments < schegi> but correct me if i am wrong also kind of new to juju and charms
<schegi> https://wiki.ubuntu.com/TrustyTahr/ReleaseNotes/OpenStackCharms
<schegi> someone able to help a litte with the ceph charm? wondering if there is a way to customize network settings.
<sarnold> Delair_: http://paste.ubuntu.com/7691406/
<jamespage> schegi, what are you looking todo?
<schegi> ok got juju bootstrapped and it uses, let's say, eth0, a 1GBit interface. now i'd like to deploy ceph to three nodes and i want ceph to use two other interfaces as its public/cluster network.
<schegi> in a manual deployment you could pass the public network ="interface" and cluster network="interface" parameters to your ceph.conf. But the charm does not have any options to do this. Changing it after the deployment would mean to replace monitors and i have no idea how this affects juju and relations made to the ceph charm. just adding these parameters to the template in the charm breaks the deployment.
<Delair> schegi: sure makes sense.. Let me do the change from precise to trusty and test
<schegi> jamespage found this http://askubuntu.com/questions/469013/how-can-i-configure-the-public-and-cluster-network-with-the-juju-ceph-charm  similar setting. doing it post deploy does not make sense. was wondering what to tweak in the pulled charm sources.
<schegi> Delair under precise adding openstack-origin: cloud:precise-icehouse to the config.yaml and juju deploy -c config.yaml should do the trick. take a look at the link
<jamespage> schegi, I've started work on making the ceph charms and their clients network aware - anything with a /network-splits suffix under https://code.launchpad.net/~james-page/ is relevant
<schegi> jamespage looks like that would do the trick. ty, i will give it a try.
<jamespage> schegi, configuring the actual network interfaces themselves is outside the scope of the charm - but if configured, ceph will use them
<schegi> that's enough, networks are already configured. as long as it uses the given networks (by ip) and later relations are working, i am perfectly fine.
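For contrast, the manual-deployment version of what schegi wants; the CIDRs are illustrative assumptions, and the settings belong in the existing [global] section rather than a blind append:
    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
    public network = 10.10.0.0/24
    cluster network = 10.20.0.0/24
    EOF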
<danharibo> hmm, I issued remove-service and now lxc is just chewing up CPU
<kirkland> can someone tell me why my icon.svg isn't working?  https://jujucharms.com/~kirkland/precise/yo-3/?text=yo
<marcoceppi> kirkland: personal branches don't have icons shown
<marcoceppi> kirkland: ask all about it in #juju-gui
<kirkland> marcoceppi: ah, okay, makes sense
<kirkland> marcoceppi: I suppose you could put some rather interesting images in there ;-)
<marcoceppi> kirkland: right, that's the idea for protecting the store
<marcoceppi> kirkland: there might be a flag you can put, #juju-gui would know
<kirkland> marcoceppi: makes sense
<benji> k
 * benji moves the mouse cursor slightly
<automatemecolema> have sort of a problem... I deployed a local charm, but it's not showing up on the GUI. If I do a juju status everything shows ok
<jose> marcoceppi: hey, is there a possibility that the Chamilo charm can be on the store before or on Wed? upstream is having an expo and they would mention Juju on the expo
<marcoceppi> jose: not sure if it can be in the store, but it'll be reviewed by then
<jose> well, we're hoping it works :)
<Tug> hey guys, I'm thinking about writing an adapter for capistrano. What would you recommend, a juju plugin which parses `juju status`?
<marcoceppi> Tug: that's one way
<marcoceppi> Tug: you could use python-jujuclient which interacts with the websocket api directly
<Tug> marcoceppi, ok let's have a look
<Tug> marcoceppi, do you know if the default public key is always added when bootstrapping an environment or should I try to get the juju-client-key somehow?
<Tug> is 'ca-private-key' in .juju/environments/{current_env}.jenv the ssh private key I can use to connect to the machines ?
<marcoceppi> Tug: it's your .ssh/id_{rsa,dsa} key
<Tug> marcoceppi, yeah I saw my public key was added to the list of authorized keys, just not sure that was always the case
<Tug> marcoceppi, thank you
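The parse-juju-status route Tug first mentioned, sketched; jq here is an assumption, any JSON tool works:
    # list the machine addresses capistrano would target
    juju status --format=json | jq -r '.machines[]."dns-name"'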
<jose> hazmat: ping
#juju 2014-06-24
<hazmat> jose, pong
<jose> hazmat: hey, you wrote that juju docean plugin, right?
<hazmat> jose, yes
<hazmat> jose, anything in particular?
<hazmat> they've got an api v2 in beta atm, haven't looked at it
<jose> hazmat: I have a user who is having troubles when deploying to zone nyc2
<hazmat> jose, could you be more specific?
<jose> and also deploying mysql on precise fails with some yaml error
<jose> sure
<jose> when they try and bootstrap with a constraint to create the machine in nyc2
<hazmat> jose, hmm.. the yaml error is something i should fix.. they respun their existing images to no longer have py-yaml
<jose> it just fails for any reason, may be it not getting an IP, not completing, or anything else
<hazmat> hmm.. odd that's the default region in the plugin
 * hazmat gives it a whirl
<jose> lemme see if the user is around so he can join the discussion
<jose> (and deploying mysql on trusty does good)
<hazmat> jose, or have him file a bug on the github issue tracker
<jose> cool then
<mwhudson> mwhudson@narsil:lava-dispatcher$ juju bootstrap -e manual
<mwhudson> [sudo] password for mwh:
<mwhudson> ERROR initialising SSH storage failed: failed to create storage dir: rc: 255 (Permission denied (publickey).)
<mwhudson> what does that mean?
<marcoceppi> mwhudson: you need to be root or hsve sudo access on the box
<mwhudson> i do
<mwhudson> (have sudo access)
<marcoceppi> mwhudson: actually, it's an ssh login error
<marcoceppi> mwhudson: is your key on the box?
<mwhudson> yes
<mwhudson> oh
<mwhudson> there is already an ubuntu user on the box
<axw> mwhudson: if you're not able to login as "ubuntu", you can set bootstrap-user
<axw> that will be used for the initial login, and the ubuntu user's authorized_keys will be updated
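For reference, bootstrap-user lives in the manual provider's block of environments.yaml; a minimal sketch (the host and user names are illustrative):

    environments:
      manual:
        type: manual
        bootstrap-host: server.example.com
        bootstrap-user: someuser   # used for the very first login only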
<mwhudson> axw: i think that sort of half-happened somehow
<axw> mwhudson: hmm. do you perhaps have an environments.yaml lying around?
<axw> mwhudson: err, .jenv
<axw> mwhudson: the keys should be set up when the .jenv file is created
<mwhudson> ah
<mwhudson> this machine has an unexpectedly hardcore ssh config
<mwhudson> ("AllowGroup sshuser")
<mwhudson> so juju created ubuntu but then couldn't log in
<axw> ah
<axw> we probably should at least add a check after creating the user that it can log in
 * axw files a bug
<axw> mwhudson: if you've got anything else to add, https://bugs.launchpad.net/juju-core/+bug/1333496
<_mup_> Bug #1333496: environs/manual: ubuntu user creation should check login succeeds <manual-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1333496>
<mwhudson> axw: i think that covers it
<mwhudson> axw: thanks
<axw> nps, thanks for digging
<mwhudson> oh er
<mwhudson> how does the manual provider decide whether to use the amd64 or x86 tools?
<mwhudson> i have a x86 userspace but amd64 kernel
<mwhudson> and juju chose wrong
<rbasak> sinzui: please could you take another look at bug 1328958?
<jamespage> gnuoy, is https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor/+merge/224111 actually merged?
<gnuoy> jamespage, if you look at charm-helpers you'll see it was merged in error and reverted
<jamespage> gnuoy, so I still need to action that right?
<jamespage> gnuoy, can you re-propose pls
<gnuoy> jamespage, yes pls
<jamespage> gnuoy, I can't merge it right now - bzr just tells me it's already done :-)
<gnuoy> jamespage, ok, let me create a new MP
<jamespage> gnuoy, you might have to push a different branch - not sure
<jamespage> gnuoy, is this the diff - http://paste.ubuntu.com/7694495/ ?
<gnuoy> jamespage, thats the one
<jamespage> gnuoy, including the bit in utils?
<gnuoy> jamespage, yes. New mp https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267
<jamespage> gnuoy, can I have a unit test with those please :-)
<jamespage> but other than that looks OK
<gnuoy> sure
<mfa298> with juju and maas is there a way to tell juju which zone within maas to either use or not use when deploying nodes?
<jamespage> gnuoy, re determine_packages in neutron-openvswitch - I think that needs to be a list of lists of packages
<jamespage> the reason being that if dkms is a requirement, it must be installed first, otherwise openvswitch-switch might load an ovs module that's not openstack compatible
<jamespage> gnuoy, review line 504 of the openstack context helper
<jamespage>         [ensure_packages(pkgs) for pkgs in self.packages]
<AskUbuntu> Unable to add nodes to ubuntu MAAS server on static vlan | http://askubuntu.com/q/487549
<automatemecolema> found it, a charm done up all spiffy in puppet albeit very old http://bazaar.launchpad.net/~michael.nelson/charms/oneiric/apache-django-wsgi/trunk/files
<rbasak> sinzui: re: bug 1328958, it also affects desktop users. This is a key use case for users of the local provider.
<rbasak> sinzui: why do you restrict the bug to just cloud users?
<sinzui> rbasak, you are the only one reporting that
<sinzui> rbasak, I don't have an ubuntu user on my machine, neither does the developer or the qa staff or the solutions staff
<rbasak> sinzui: so you can't reproduce? Please could you note that in the bug? Then I'll test reproducing on a desktop running VM.
<sinzui> rbasak, the only version of ubuntu that has an ubuntu user is a cloud/server image
<rbasak> sinzui: that's my point. I think you understand me backwards.
<rbasak> If the ubuntu user is not present, then juju bootstrap fails.
<rbasak> I demonstrate this by taking a cloud image and removing the ubuntu user.
<rbasak> I originally hit this on my desktop which also has no ubuntu user.
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/neutron-refactor-again/+merge/224267 now with added unit tests
<sinzui> rbasak, The juju client does not require an ubuntu user. The client requires it on the machines it deploys. The client might be confused if the host machine is a server image and has ubuntu as the user on it too
<rbasak> sinzui: "The juju client does not require an ubuntu user." - that's the bug I'm reporting. It does. It happened on my desktop machine.
<rbasak> If you think it affects only cloud images in a reproducible way, then I'll find you a reproducer for a desktop install I guess.
<jamespage> gnuoy, merged
<gnuoy> thanks
<sinzui> rbasak you are alone in your experience, so I marked the bug medium. If it affected other users and blocked them, I would mark the bug high
<jamespage> gnuoy, hmm - I merged and then thought again
<gnuoy> ?
<jamespage> gnuoy, inserting rel_name='amqp', relation_prefix=None before ssl_dir is probably not a good idea
<jamespage> if code exists which is not using ssl_dir= (i.e. positional parameters) it will break
<gnuoy> jamespage, ok, I'll create a new MP with the args reordered
<jamespage> gnuoy, url?
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/fix-AMQPContext-back-compat/+merge/224294
<jamespage> gnuoy, merged - thanks!
<gnuoy> ta
<natewarr> Is there some undocumented trick to getting juju+vagrant+LXC to work? I'm using precise and keep getting 'error: error executing "lxc-create":' when it tries to deploy juju-gui
<natewarr> ...or documented, but hard to find.
<jamespage> gnuoy, I still see one test failure on https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/neutron-refactor/+merge/224069
<jamespage> gnuoy, and did you see my note on irc re installation order of packages for openvswitch (its above) in the context of the neutron-openvswitch charm?
<gnuoy> jamespage, that test failure is fixed (as of about 10s ago)
<gnuoy> jamespage, I'll take a look at neutron-openvswitch charm comment
<jamespage> gnuoy, nova-compute and quantum-gateway merges - thanks!
<gnuoy> \o/
<dpb1> hi marcoceppi, jcsackett -- could I get some more review love? https://code.launchpad.net/~davidpbritton/charms/precise/apache2/avoid-regen-cert/+merge/221102
<marcoceppi> dpb1: it's on my list
<marcoceppi> my short list
<dpb1> marcoceppi: great! :)
<natewarr> Has anyone had trouble with juju LXC on precise?
<gnuoy> jamespage, determine_packages should return a list of lists which the install hook should iterate over passing each inner list to apt_install in turn ?
<jamespage> gnuoy, yes
<gnuoy> ack, ta
<jamespage> gnuoy, ensures the dkms module gets installed prior to openvswitch-switch trying to load it
<jamespage> required on older kernels
<jamespage> 3.2 for example
<jamespage> otherwise no GRE
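A minimal sketch of the pattern jamespage is describing: determine_packages returns a list of lists, and the install hook installs each inner list in order so the dkms package lands before openvswitch-switch tries to load the module. apt_install/apt_update are the real charmhelpers.fetch helpers; the package lists here are illustrative:

    from charmhelpers.fetch import apt_install, apt_update

    def determine_packages():
        # inner lists install in order: the dkms module first,
        # then the switch that will try to load it
        return [
            ['openvswitch-datapath-dkms'],
            ['openvswitch-switch', 'neutron-plugin-openvswitch-agent'],
        ]

    def install():
        apt_update()
        for pkgs in determine_packages():
            apt_install(pkgs, fatal=True)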
<automatemecolema> I have a bug problem on precise with the local provider https://gist.github.com/anonymous/9ecb23a51844627028b0 Anyone able to point out something here? Here's a bug that may be somewhat related https://bugs.launchpad.net/juju-core/+bug/1330406 ??
<_mup_> Bug #1330406: juju deployed services to lxc containers error executing "lxc-create" with bad template: ubuntu-cloud <bootstrap> <local-provider> <lxc> <juju-core:Incomplete> <https://launchpad.net/bugs/1330406>
<jamespage> gnuoy, just neutron-openvswitch remaining right?
<gnuoy> jamespage, shouldn't neutron-api be in next rather than trunk ?
<jamespage> gnuoy, as it does not have a stable, in-store charm yet trunk is fine for now
<gnuoy> jamespage, but its not compatible with the other trunk charms
<jamespage> gnuoy, but its not in the store so meh
<gnuoy> ok
<jamespage> means it's a straight promulgation at the end of the cycle
<gnuoy> jamespage, neutron-openvswitch updated and tested
<jamespage> gnuoy, +1 pushed
<jamespage> gnuoy, maybe they should be /next
<automatemecolema> Anyone out there who can help figure out why I can't use the local environment on precise because of an lxc-create error? Followed the instructions to a tee.
<jamespage> gnuoy, gah - neutron-api has a bug
<gnuoy> jamespage, I refuse to believe that
<jamespage> the superclass of NeutronCCContext calls "_save_flag_file"
<jamespage> which writes out to /etc/nova
<tvansteenburgh> automatemecolema: you might try asking in #juju-dev
<gnuoy> jamespage, hmm, I see what you mean but I haven't hit that oddly. I'll do some digging
<vladimiroff> Is there a way to just validate index.json and products.json files, without running a custom cloud and stuff?
<jamespage> gnuoy, for some reason my flake8 is ignoring the issues yours throws up on nova-compute-vmware
<gnuoy> jamespage, what flake version do you have?
<jamespage> gnuoy, 2.1.0
<gnuoy> hmm, me too
<jamespage> gnuoy, s'ok - figured it out
<gnuoy> tip top
<jamespage> gnuoy, pip installed pep8
<jamespage> \o/
<jamespage> gnuoy, anyway - I tied
<gnuoy> jamespage, +1 cinder-vmware and nova-compute-vmware. Want me to push them to next?
<jamespage> gnuoy, nah - these go straight to stable
<jamespage> no other changes required
<gnuoy> sounds good
<jamespage> gnuoy, I'll sort them now
<jamespage> coreycb, just a couple of minor things on the keystone MP - other than that happy to merge once you have those fixed.
<coreycb> jamespage, ok thanks!
<loki27_> Hi all
<loki27_> I have a question about charm deployment and juju.. If i deploy the same charm on different machines (let's say mysql and rabbitmq), how will the deployment handle their relation? will the mysql instances be standalone, or are they going to replicate the same data through all instances?
<jamespage> loki27_, the honest answer is that it depends on the charm
<jamespage> loki27_, rabbitmq will configure itself as a native RabbitMQ cluster
<jamespage> loki27_, mysql won't
<automatemecolema> Can you debug-hooks install?
<loki27_> jamespage really i could not find any HA documentation for MAAS/Juju
<jamespage> loki27_, well its not really MAAS/Juju that's doing the HA - its the charm
<jamespage> loki27_, and that depends on the charm author having done the right things
<loki27_> Trying to figure out the best architecture to have HA, but it's hard to spot SPOFs with all the magic going on..
<jamespage> including documented stuff
<jamespage> loki27_, with regards juju itself, the next stable (1.20/2.0) will have HA
<jamespage> you can try this in the 1.19.x interim dev series now
<jamespage> loki27_, MAAS - not 100% sure
<jamespage> loki27_, I know its planned - but not certain on current status
<jamespage> loki27_, re mysql - mysql is not natively active/active HA'able - however it's possible to use it with ceph and the hacluster charm to implement active/passive HA
<jamespage> loki27_,
<jamespage> loki27_, https://jujucharms.com/~openstack-charmers/trusty/percona-cluster-5/?text=percona-cluster
<loki27_> https://wiki.ubuntu.com/ServerTeam/OpenStackHA talking about 28 servers
<jamespage> is in the review queue - it's a drop-in replacement for mysql which is active/active
<loki27_> Really i want to achieve 4 node HA
<loki27_> something that is not that hard to do using Mirantis
<jamespage> loki27_, it's possible to deploy a lot of those in LXC containers now that juju supports that
<jamespage> loki27_, that documentation is a little out-of-date - I have it on my list to refresh
<loki27_> ok , do you know the ETA for 1.20 ?
<jamespage> alexisb, is there an eta for the next juju stable yet?
<alexisb> jamespage, mostly that is dependent on when 1.19.4 (aka 1.20 release candidate) gets pushed out
<alexisb> we are still working through critical bugs
<loki27_> Ok , sounds like "Not that far" to me
<alexisb> the goal is to have a week of exposure on 1.19.4 before we call it good for 1.20
<alexisb> so no earlier than tuesday of next week at this point for 1.20
<loki27_> that's pretty fast :)
<loki27_> How do the machine numbers (#num in juju deploy charm --to #num) relate to the MAAS machine names/IPs?
<lazypower> loki27_: the principle behind that is that the machine is already under juju's control
<lazypower> if you want to specify a specific machine in your MAAS pool that juju hasn't already registered, you'll need to look at machine tagging and using those tags as a constraint.
<loki27_> Ok so considering (I guess) the only node i currently have is the bootstrap node, i need to deploy to the other available nodes from MAAS
<lazypower> loki27_: so, given a scenario where you did juju deploy mysql, and it allocated node3.maas as machine 1 - you would then do juju deploy cs:phpmyadmin --to 1
<lazypower> to colocate with MySQL
<lazypower> loki27_: if you need specific machines, use tagging. Otherwise let juju pick from the 'cloud pool' provided by maas.
<lazypower> loki27_: i use a VMAAS setup using VM's provisioned by MAAS -- here's my setup. http://i.imgur.com/d71Nedd.png  - Node9 is the bootstrap node. Ignore the 3 running at the top as they are part of a manual provider setup.
<lazypower> if i wanted to allocate node5 because its got 2GB of ram assigned - i use constraints to do that, or maas tagging.  (i keep saying this, but i dont really know how to use the maas tagging feature. I should read up on that)
<loki27_> hum
<loki27_> see http://pastebin.com/U6FFAxDu
<loki27_> That's the deployment i was planning
<lazypower> the --to's need to be the machine #'s in the listing
<lazypower> i dont think juju will work if you specify the hostname.
<loki27_> no, it won't
<lazypower> s/juju/--to/
<lazypower> loki27_: so what you can do is juju add-machine
<loki27_> Can i just add the node to juju (assign it to juju) without deploying anything
<lazypower> which will spin up the machine, and allocate it without any services. Then use that machine # in the listing.
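The flow lazypower describes, as a sketch (the machine number juju assigns will vary):

    juju add-machine          # MAAS powers on a node and installs the agent
    juju status               # note the new machine's number, e.g. 4
    juju deploy mysql --to 4  # place the service on that machine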
<loki27_> and so it would get indexed in and assigned from MAAS with its UID..
<lazypower> yep. juju add-machine will tell maas to power on a node and install the agent.
<loki27_> hehe
<loki27_> ok
<loki27_> add-machine sounds like what i need
<lazypower> my poor maas cluster. so abused, it gets no joy of long running instances outside of my manual provider setup
<loki27_> add-machine works just fine
<lazypower> loki27_: so for that script to work, just make sure you've got the add-machine statements above to register X - Y, and sub those node names with machine #'s, and you're good to go
<loki27_> yes.. it's deploying right now :)
<loki27_> ERROR cannot add service "nova-cloud-controller": service already exists
<lazypower> are you trying to scale a service?
<loki27_> well i'm trying to have that whole openstack with fail-over / clustering
<lazypower> i see you have it listed twice, which is why it's complaining. You can only deploy a service name once - if you want 2 then you add-unit -n 1 after the deploy.
<lazypower> or if you want a secondary independent service, give it a new name
<lazypower> juju deploy nova-cloud-controller nova-cloud-failover -- as an example.
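The two options side by side, as a sketch:

    # scale the existing service with one more unit:
    juju add-unit nova-cloud-controller -n 1

    # or run a second, independent copy under a different service name:
    juju deploy nova-cloud-controller nova-cloud-failover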
<loki27_> Will that node get replicated with nova-cloud-controller at node 0 ?
<lazypower> it would not get replicated no.
<lazypower> sounds like you want juju add-unit <service> -n #
<loki27_> Ok so i would deploy the node to 0, and then add-unit to 1 so it gets replicated and works in HA
<lazypower> node0 should be your bootstrap node
<lazypower> is that what you're intending to do?
<marcoceppi> loki27_: what you can do is juju add-machine
<marcoceppi> then target the machine specifically, which correlates to the maas name you want to use
<marcoceppi> loki27_: alternatively, tag each node with a unique tag
<marcoceppi> and use the --constraints="tags=<tag-name>" flag on each deploy command
<marcoceppi> to assign that service to that machine
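A sketch of the tagging approach marcoceppi outlines; the tag name is illustrative, and depending on the juju version the constraint key may be spelled tags rather than tag, so check `juju help constraints`:

    # tag the node in MAAS first, then:
    juju deploy mysql --constraints "tags=db-node"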
<lazypower> aha!
<lazypower> thats what i was looking for when i was talking about tagging. ty marcoceppi
<schegi> can anyone help with the ceph charm? after deploying 3 ceph nodes, ceph only returns the monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication stuff.
<lazypower> but we got him lined out with add-machine
<marcoceppi> loki27_: I keep getting weird buffer lag in quassel, you seeing that too?
<marcoceppi> lazypower: ^^*
<lazypower> marcoceppi: nope. How long has it been since you cleaned up your database?
<lazypower> i drop the logs > 2 months old on the first of every month to keep the database nice and compact.  but i'm also using SQLITE not PGSQL
<marcoceppi> lazypower: I have never cleaned my db
<marcoceppi> I've got like a year of history in here across a bunch of networks
 * marcoceppi considers cleaning
<lazypower> marcoceppi: i'm willing to bet thats why
<lazypower> marcoceppi: maybe take a dump and stick it in your archives so you've got it, and then just start fresh?
 * marcoceppi shrugs, maybe
<lazypower> marcoceppi: have you had any issues with nodes getting stuck in the pending state in maas? they look like they are reaching the end of the provisioner and handing off to juju, but not completing the run
<lazypower> i'm rounding node #5 with this issue today
<lazypower> wait, nvm. it appears it finished. the gui was lagging behind the actual status of the environment
<automatemecolema> Anyone with experience leveraging cloud-init with juju ??? Maybe a concept of how one might use cloud-init
<automatemecolema> I know behind the covers juju uses cloud-init, but was wondering how I could send directives to cloud-init whilst provisioning with juju
<lazypower> automatemecolema: thats more the responsibility of the install / config-changed hook. What are you trying to do with cloud init?
<lazypower> wait, we spoke about this earlier didn't we - it has to do with using a repository mirror right?
<automatemecolema> lazypower: maybe that was another guy, or my headache is ruining my day where I can't remember anything
<lazypower> :( boo
<automatemecolema> I'm working adamantly with another guy building a continuous delivery pipeline that involves puppet enterprise, github, jenkins, and juju tying into multiple cloud providers
<lazypower> Ok so far so good.
<automatemecolema> and we are thinking about rolling our own images for a couple of reasons
<automatemecolema> one being compliance
<automatemecolema> two being able to have a certain feature set of tools readily available at time of provisioning without over complicating the charm
<automatemecolema> Some of our hang up is going to be around node classification, and how we might specify custom facts for each charm we roll
<automatemecolema> two is more of a problem with inexperience with haproxy, and how we can roll new production instances during revision pushes without noticeable downtime
<automatemecolema> scenario one being having two tomcat servers behind a set of haproxy servers, when it's time to push new code to production, we need to provision two more instances, tell haproxy to send traffic to the new instances, and then have juju decommission the old instances
<automatemecolema> it's a pretty fantastical approach, but learning curve seems a little steep up front
<lazypower> yeah i get what you're telling me
<lazypower> you know, i took a different approach and put my tools in a subordinate that i deploy with all of my services on my MAAS cluster
<lazypower> its not ideal if you need the tools up front, so i can see where you're looking at cloud-init
<lazypower> and prebaked images
<lazypower> automatemecolema: so as i understand cloud-init, you can populate these with user-data scripts
<automatemecolema> correct
<lazypower> https://help.ubuntu.com/community/CloudInit - this is the document i reference when talking to people who are new to cloud-init. I'm fairly certain there are more exhaustive resources - but this is a good place to get you started.
<automatemecolema> yep I've read through that
<lazypower> instead of moving to the 'golden image' approach, cloud-init is a great way to get those tools provided at time of cloud boot, without pressing the image and crossing your fingers that you don't need to provide rolling updates across your network.
<lazypower> this is why i caution against running home-rolled images. it's not a problem per se if you're aware of the shortcomings. You mentioned compliance so you may *need* to go that route, in which case there's no problem doing that. I'm fairly certain you can specify AMIs
<lazypower> i'd need to double check for you, but i think it's a constraint you can pass juju
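For flavour, a minimal cloud-config user-data sketch of the kind being discussed - preinstalling a tool set at first boot. The package list is illustrative, and whether juju will pass such user-data through is exactly the open question here:

    #cloud-config
    packages:
      - puppet
      - git
    runcmd:
      - [sh, -c, "echo tools baked in at $(date) >> /var/log/first-boot.log"]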
<automatemecolema> hmmm lazypower: yea I think rolling images is maybe a last resort for compliance
<lazypower> then my suggested thought is to build a user data script. From there I need to see if you can stuff those in constraints / the provider configuration
<lazypower> asking for you now. it may take a bit - it's getting late in the day for core devs
<lazypower> basically middle of the transition of people coming on/off watch :)
<automatemecolema> hmmm let me mull this over with my counterpart and see what his take is
<lazypower> automatemecolema: ok. if your status changes just ping me and i'll adjust the questions as required.
<lazypower> alternatively you can join #juju-dev and follow along.
<automatemecolema> lazypower: sounds like a plan, thanks for your help
<automatemecolema> I'm in the juju-dev chat as well
<lazypower> automatemecolema: looks like i'm remembering our python days - we don't support custom AMIs as of the Go port, unless something has changed and this answer needs updating - http://askubuntu.com/questions/84333/how-do-i-use-a-specific-ami-for-juju-instances
<lazypower> automatemecolema: so it looks like my subordinate to populate my toolchain was the way to go in the current implementation of juju.
<lazypower> automatemecolema: it's not ideal, especially if you depend on the tools being present for the charm's installation. The alternative would be to version that routine out into its own script, and then import/clone that in your charm's installation hooks.
<lazypower> so you make sure they are present when the install hook is done, then do your application heavy lifting in config-changed.
<sarnold> 'course those answers are over a year old. it feels like something that ought to have been done by now, no?
<automatemecolema> Yea...reviewing that kb...curious if anything has changed on that front
<lazypower> sarnold: nate confirmed its unimplemented. i would imagine it caused some headache somewhere?
<sarnold> lazypower: odd.
<automatemecolema> lazypower: so here's a question my counterpart posed about rolling our own images, one being a dev to production issue with having a different image in dev than in prod
<lazypower> sarnold: it may be one of those things, that unless it gets a sizeable amount of community want, it will remain unimplemented. I know we are currently hacking on HA as that's one of the highest requested features.
<lazypower> automatemecolema: yeah i'm not a huge fan of the golden image approach
<automatemecolema> lazypower: if we want the same testing results in dev, test, stage, and then production, how can we guarantee that what worked in dev will work in prod?
<sarnold> lazypower: yeah, and it could just be that simplestreams makes doing something significantly better significantly easier, but I just don't know how to phrase the question in that case :)
<lazypower> automatemecolema: at the end of the day, you have root in charms. so you can do whatever unholiness you need to accomplish the goal. My suggestion would be to build a sub and attach it to your deployed units. You can be responsive to what happens based on the relationship created.
<lazypower> that way you get a consistent output in dev/staging/production
<lazypower> you're not mucking with the base cloud image before juju touches it.
<lazypower> and if you need something as a dependency. it should probably live in the install hook of the charm anyway
<lazypower> but may it never be said that i stifle innovation. If you guys wanted to hack on cloud-init and we gave you those details, by all means - go forth and conquer
<lazypower> sarnold: oh yeah - simplestreams. i forgot all about the fact we have a tutorial as of UOS on how to run your own simplestreams.
 * lazypower makes a note to go back and watch that session
<automatemecolema> lazypower: maybe I'm confused about how you suggest using subordinates. I'm just curious: if today juju provisioned an instance with trusty 14.04.1 and tomorrow it rolls 14.04.2 - dev lives on 14.04.1, but we can't be sure it'll run on 14.04.2 ??
<lazypower> automatemecolema: when you deploy on 14.04.1 its running 14.04.1 unless you run the dist-upgrade on your prod stack.
<lazypower> automatemecolema: so the idea would be to not deploy 14.04.2 in staging if youre' on 14.04.1 in your production env. When that occurs, you run rolling upgrades. In terms of services and how we orchestrate, servers are more like cattle than they are pets.
<automatemecolema> yes, but if I do a juju deploy haproxy and it rolls me out haproxy on 14.04.1, how can I ensure that if tomorrow 14.04.2 is released and I do a juju deploy haproxy, I won't get 14.04.2 ??
<lazypower> hmm. thats a good question. default-series only ensures you're within the 12.x and 14.x series presently
<lazypower> we don't typically push breaking changes like what we're suggesting in an LTS though. that's reserved for the non-LTS releases.
<lazypower> eg: apache2 between precise => trusty - config file structure changed. it takes a series jump to run into that. not a minor revision.
<lazypower> automatemecolema: i dont have a good answer for you about lockstepping minor revisions.
<automatemecolem_> dang thing kicked me  off of freenode...
<lazypower> boo. did you get any of our conversation?
<automatemecolema> awry irc
<automatemecolema> so here's something to throw out there, how would one build a paas solution leveraging juju. Just thinking about concepts etc etc.
<lazypower> automatemecolema: Juju can be wrestled into a paas, but it's intended to orchestrate solutions. We are actively working on charming up Cloud Foundry to provide a PAAS solution
<lazypower> but if you want to do continuous delivery, there are 2 methods i am aware of
<lazypower> 1 is git-based delivery. You can place git checkouts in your config-changed hook and have your CI execute the hook when the build passes, or when juju actions lands - build an action for it.
<lazypower> the other is charm building in CI: increment the revision and charm-upgrade your deployment.
<lazypower> The path is to add a files/ directory in your charm for example, and do local installation of your artifacts by copying them where they need to go if they exist, or checkout from version control / upstream.
<lazypower> I'm doing method A with some rails deployments I have in the wild, and it works reasonably well. I gate in CI, and only master is ever deployed - to staging. Production is gated manually and has a revision flag on the charm that I populate.  (but now that i think about it, i  can just juju-set from CI and achieve the same goal in staging...)
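Sketches of the two methods, assuming the charm exposes a revision-style config option (app_revision here is hypothetical):

    # method A: CI flips a config value once the build gates pass,
    # which triggers the charm's config-changed hook
    juju set myapp app_revision=$GIT_COMMIT

    # method B: CI rebuilds the charm, then upgrades the deployment
    # from the local repository
    juju upgrade-charm myapp --repository ~/charms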
<automatemecolema> given a few things to think about.
<automatemecolema> me*
<lazypower> automatemecolema: feel free to ping if you need anything. I'm going to EOD
<automatemecolema> lazypower: thanks for your time sir
<Delair> Hi All, .. Can somebody please tell me whether quantum-gateway is the same as neutron-gateway when installing using juju
<Delair> how do i know what is the default openstack origin type when installing openstack using juju
#juju 2014-06-25
<AskUbuntu> Getting started with juju | http://askubuntu.com/q/487906
<simhon> hello all
<simhon> need help about juju bootstrap
<simhon> it seems to get stuck on the ssh part
<simhon> the node itself is up and running and i can ssh to it myself using the same command that juju does
<simhon>  ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/simhon/.juju/ssh/juju_id_rsa -i /home/simhon/.ssh/id_rsa ubuntu@10.0.2.152 /bin/bash
<simhon> but somehow juju keeps retrying this command
<simhon> when it times out it shows me the following :
<simhon> 2014-06-25 07:09:10 ERROR juju.provider.common bootstrap.go:123 bootstrap failed: waited for 10m0s without being able to connect: /var/lib/juju/nonce.txt does not exist Stopping instance... 2014-06-25 07:09:10 INFO juju.cmd cmd.go:113 Bootstrap failed, destroying environment 2014-06-25 07:09:10 INFO juju.provider.common destroy.go:14 destroying environment "maas" 2014-06-25 07:09:11 ERROR juju.cmd supercommand.go:300 waited for 10m0s wi
<simhon> any idea ????
<schegi> jamespage, you pointed me to your ceph charms with network split support. where do i put the charm-helpers branch supporting network split so that it gets used?
<jamespage> schegi, from a client use perspective?
<jamespage> not sure I understand your question
<schegi> i am pretty new to juju charms and not sure how they depend on each other. what i did was branch the charms from lp and put them somewhere on my juju node. for deployment i use something like 'juju deploy --repository /custom/trusty ceph' and it seems to deploy the network-split charm from my local repo. but as far as i understand it depends on the charm-helpers, so how do i keep this dependency pointing at the branched charm-helpers
<schegi> jamespage, as far as i understand, the network split versions of the charms need the network split version of the charm-helpers. when deploying from a local repository, how do i ensure usage of the network split version of the charm-helpers?
<jamespage> schegi, ah - I see - what you need is the branches including the updated charm-helper which I've not actually done yet - on that today
<schegi> jamespage, in lp there is also a branch of the charm-helpers with network split support (according to the comments). but i assume that just branching that and putting it somewhere will not be enough.
<jamespage> schegi, no - it needs to be synced into the charms that relate to ceph
<schegi> jamespage ok, and that means? what do i have to do? can you maybe point me to some resource that provides some additional information? the official juju pages are a little bit high-level.
<jamespage> schegi, if you give me 1 hr I can do it
<jamespage> schegi, I just need to respond to some emails first
<schegi> no problem
<gnuoy> jamespage, do you know if neutron_plugin.conf is actually used ?
<jamespage> gnuoy, maybe
<jamespage> gnuoy, I'd have to grep the code to see where tho
<jamespage> schegi, OK - its untested but I've synced the required helpers branch into nova-compute, cinder and glance under my LP account
<jamespage> network-splits suffix
<jamespage> schegi, https://code.launchpad.net/~james-page
<jamespage> schegi, and cinder-ceph if you happen to be using ceph via cinder that way
<schegi> thx a lot i will give it a try
<jamespage> gnuoy, maybe nova-compute - rings a bell
<gnuoy> jamespage, I'm going to override that method to do nothing in the neutron-api version of the class
<jamespage> gnuoy, +1
<bloodearnest> anyone know of any charms that use postgresql and handle new slaves coming online in their hooks?
<bloodearnest> ah, landscape charm has good support, awesome
<jamespage> schegi, fyi those branches are WIP - expect quite a bit of change and potential breakage :-0
<jamespage> gnuoy, do you have a fix for the neutron-api charm yet? I'm using the next branch and I don't like typing mkdir /etc/nova  :-)
<gnuoy> jamespage, I'll do it now
<jamespage> gnuoy, ta
<jamespage> schegi, just applied a fix to the ceph network-splits branch - doubt you will be able to get a cluster up without it
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/fix-flag-bug/+merge/224414
<schegi> jamespage, you're right - currently trying with the unfixed version leads to unreachability between the mon nodes
<jamespage> schegi, yeah - that sounds right
<jamespage> missed a switch from ceph_public_addr to ceph-public-address
<jamespage> :-(
<schegi> ill give the new version a try
<schegi> but i was wondering why i always get the missing keyring error when calling ceph without -k <pathtokeyring> - what am i doing wrong?
<schegi> hm, once started, ceph units seem to be undestroyable - hanging in agent-state: started and life: dying
<schegi> is there a way to force destruction other than destroying the whole environment or removing the machines??
<schegi> jamespage, monmap looks good so far monmap e1: 3 mons at {storage1=10.10.1.21:6789/0,storage2=10.10.1.22:6789/0,storage3=10.10.1.23:6789/0}, election epoch 4, quorum 0,1,2 storage1,storage2,storage3
<schegi> still some issues but should work
<avoine> noodles775: have you an idea how to solve the race condition with multiple ansible charms on the same machine?
<avoine> it just hit me
<avoine> noodles775: https://bugs.launchpad.net/charm-helpers/+bug/1334281
<_mup_> Bug #1334281: cant have multiple ansible charms on the same machine <Charm Helpers:New> <https://launchpad.net/bugs/1334281>
<noodles775> avoine: let me look.
<noodles775> avoine: Nope, but a hosts file per unit would be perfect as you suggested. I'll try to get to it in the next week or two, or if you're keen to submit a patch, even better :)
<noodles775> s/per unit/per service/ (should be enough)
<noodles775> avoine: hrm, if hooks are run serially, how are you actually hitting this? (I mean, the context should be written out to the /etc/ansible/host_vars/localhost when each hook runs?) Or what am I missing?
<automatemecolema> service is stuck in a dying state, but has no errors to resolve. How can I force it to go away?
<automatemecolema> nevermind juju remove-machine --force <#>
<jamespage> schegi, I have it on my list to figure out how to switch a running ceph from just eth0 to split networks
<jamespage> I don't think I can do it non-disruptively - clients will lose access for a bit
<schegi> jamespage, i looked into it a bit. it is not so easy. the recommended way is to replace the running mons with new ones configured with the alternative network. there is a messy way but i didn't try it.
<jamespage> schegi, OK - sounds like it needs a big health warning then
<avoine> noodles775: I'm hitting it on subordinate
<noodles775> avoine: yeah, I think I remember bloodearnest hitting it in a subordinate too. I'd still thought that hooks were serialised there too, but obviously I'm missing something. But +1 to the suggested fix.
<avoine> noodles775: ok, I'll finish with the django charm and I'll work on a fix
<noodles775> avoine: Oh - great. Let me know if you need a hand or don't get to it and I can take a look.
<avoine> ok
<schegi> jamespage, another question: if i'd like specific devices used as journals for individual osds (got a couple of ssds especially for journals), could i just add the [osd.X] sections to the ceph charm's ceph.conf template, or is there anything that speaks against it?
<jamespage> schegi, there is a specific configuration option for osd journal devices for this purpose
<jamespage> schegi, osd-journal
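For reference, it is set like any other charm config value; a sketch (the device path is illustrative, and changing it after the OSDs are initialized probably won't re-journal them):

    juju deploy -n 3 --config ceph.yaml ceph   # with osd-journal in ceph.yaml
    # or on an existing service:
    juju set ceph osd-journal=/dev/sdb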
<cory_fu> How does one re-attach to a debug-hooks session that was disconnected while still in use?  Running `juju debug-hooks` again says it's already being debugged
<jamespage> schegi, it may be a little blunt for your requirements - let me know
<cory_fu> Ah.  `juju ssh` followed by `sudo byobu` seems to have worked
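In short, the reattach recipe cory_fu found (unit name illustrative):

    juju ssh mysql/0
    sudo byobu   # reattaches the tmux session debug-hooks left running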
<schegi> jamespage, the config.yaml of the ceph charm only mentions a parameter osd-journal and says "The device to use as a shared journal drive for all OSD's.". but i'd like ceph to use one particular device per running osd.
<jamespage> schegi, yeah - it's blunt atm
<jamespage> schegi, we need a nice way of hashing the osd's onto the available osd-journal devices automatically - right now it's just a single device
<schegi> so i thought adding [osd.x] sections to the ceph.conf template could help. they won't change anyway
<schegi> it would be fine for me to do the mapping manually just to get it working. but yeah, you are right, especially if you are working on different machines.
<marcoceppi> negronjl: hey, do you have any updated mongodb branches?
<marcoceppi> the one in the store is broken and I've got a demo in 1.5 hours at mongodb world
<negronjl> marcoceppi, I don't ... broken how ?
<negronjl> marcoceppi, I'm already working on the race condition for the mongos relation but, no fix yet
<schegi> jamespage, i always thought when playing around with the charm it would also be a nice idea to have the opportunity to pass a ceph.conf file to the charm on deployment. So that it gets all of its parameters from this conf.
<marcoceppi> negronjl: mongos is my main problem
<marcoceppi> every time I try deploying it and working around the race it still fails
<marcoceppi> also, it only works on local providers
<marcoceppi> all other providers it expects mongodb to be exposed
<negronjl> marcoceppi, the only workaround that I can give you is to deploy manually ( juju deploy ...... ) as opposed to the bundle
<marcoceppi> and fails on hp-cloud
<marcoceppi> negronjl: yeah, that's failing for me too
<automatemecolema> Why is it that sometimes my local charms don't show up in the gui?
<negronjl> marcoceppi, I am still working on it :/
<marcoceppi> negronjl: okay, np
<automatemecolema> marcoceppi: So are you saying the Mongo charm doesn't work in a bundle on any providers except local? We were planning on having a bundle that includes Mongo
<marcoceppi> negronjl: yeah, now configsrv is failing
<negronjl> marcoceppi, on which provider ?
<negronjl> marcoceppi, precise or trusty ?
<marcoceppi> precise
<negronjl> marcoceppi, pastebin the bundle that you are using so I can use it to debug here ... I'm working on that now
<marcoceppi> negronjl: precise, http://paste.ubuntu.com/7695787/ I removed the database -> mongos relation in the bundle
<marcoceppi> so cfgsvr would come up first
<marcoceppi> but now that's getting failed relations
<bloodearnest> noodles775: avoine: it is my understanding that hooks are serialised on a unit by the unit agent, regardless of which charm they are part of
<bloodearnest> noodles775: avoine: but you could get a clash if you set up a cron job that uses ansible, for example
<avoine> bloodearnest: even subordinate?
<bloodearnest> avoine: yes
<negronjl> marcoceppi, deploying now
<marcoceppi> god speed
<marcoceppi> negronjl: when I got to the webadmin, after attaching a shard, it doesn't say anything about replsets
<negronjl> marcoceppi, still deploying ... give me a few to look around
<marcoceppi> wow, it took a long ass time, but all the config servers just failed with replica-set-relation-joined
<marcoceppi> negronjl: I think I've made a little progress
<negronjl> marcoceppi, what have you found ?
<marcoceppi> configsvr mongodbs are failing to start
<marcoceppi> error command line: too many positional options
<marcoceppi> on configsvr
<marcoceppi> upstart job is here
<marcoceppi> http://paste.ubuntu.com/7700869/
<marcoceppi> negronjl: ^
 * negronjl reads
<marcoceppi> negronjl: removing the -- seems to fix it
<marcoceppi> looks like a parameter might not be written to the file correctly
 * marcoceppi attempts to figure out where that is
<negronjl> marcoceppi, that will not really fix the issue ... just hide the replset parameter
<marcoceppi> negronjl: it seems to start fine
<marcoceppi> oh wait
<marcoceppi> jk
<negronjl> marcoceppi, the arguments after the -- ( the one that's by itself ) pass the params to mongod
<marcoceppi> no it doesn't
<marcoceppi> yeah
<marcoceppi> fuck
<negronjl> you should not have a replset at all now
<lukebennett> Hello everybody, I have an issue I can't find any reference to online anywhere - when bootstrapping my environment using MAAS, it's crashing out because the juju-db job is already running. This didn't occur the first time I bootstrapped, only after I destroyed that initial environment. It feels like the MAAS node it deployed to hasn't destroyed itself
<lukebennett> properly. Any ideas?
<lukebennett> I haven't yet tried manually rebooting the node but it feels like that shouldn't be necessary
<lazypower> lukebennett: did your bootstrap node go from allocated to off when you ran destroy-environment?
<automatemecolema> Any takers on whether it makes sense to use juju as a provisioning tool, but allow a config management tool to do all the heavy lifting in regards to relationships? I'm thinking along the lines of hiera in puppet
<lazypower> automatemecolema: when you say heavy lifting - what do you mean?
<automatemecolema> lazypower: well we're looking at using puppet hiera to build relationships between different nodes, and using puppet to deploy apps on the instances.
<lazypower> automatemecolema: sounds feasible - are you initiating the relationships with juju?
<AskUbuntu> MAAS JUJU cloud-init-nonet waiting for network device | http://askubuntu.com/q/488114
<sparkiegeek> lukebennett: sounds like maybe your node isn't set to PXE boot?
<frobware> I was trying to run juju bootstrap on arm64 and it mostly works (http://paste.ubuntu.com/7701639/) but at the very end of the pastebin output it tries to use an amd64 download.  Is there somewhere where I can persuade juju that I want arm64?
<automatemecolema> lazypower: well the thought was attaching facts to relationships and having puppet hiera do the relationship work
<Pa^2> My machine "0" started just fine.  Machine "1" has been pending for almost 3 hours.  Running 14.04 but the Wordpress says "precise".  Am I missing something simple?
<Pa^2> ...this is a local install.
<arosales> Pa^2: is your home dir encrypted?
<Pa^2> arosales: no
<arosales> Pa^2: ok.
<arosales> Pa^2: and since your client is running 14.04 I am guessing you are running juju version ~1.18, correct?
<Pa^2> I assume so.  How can I verify?
<Pa^2> 1.18.4.1
<arosales> juju --version
<Pa^2> 1.18.4-trusty-amd64
<arosales> Pa^2: and what does `dpkg -l | grep juju-local` return?
<lazypower> Pa^2: when you say local install you mean you're working with the local provider?
<Pa^2> ii  juju-local                                  1.18.4-0ubuntu1~14.04.1~juju1         all          dependency package for the Juju local provider
<Pa^2> yes to local provider
<lazypower> run `sudo lxc-ls --fancy` do you see a precise template that is in the STOPPED state?
<Pa^2> Yes, the precise template is in the STOPPED state.
<lazypower> hmm ok, so far so good
<lazypower> can you pastebin your machine-0.log for me?
<lazypower> Pa^2: just fyi, the logpath is ~/.juju/local/logs/machine-0.log
<Pa^2> Haven't used pastebin before, lets see if this works.  http://pastebin.com/SHSEABYJ
<lazypower> ah, and protip for next time, sudo apt-get install pastebinit. you can then call `pastebinit path/to/file` and it gives you the short link
<Pa^2> Thank you, great tip.
<AskUbuntu> MAAS / Juju bootstrap - ubuntu installation stuck at partitioner step | http://askubuntu.com/q/488170
<lazypower> hmm,  nothing really leaps out at me here as a red flag. its pending you say? is your local network perchance in the 10.0.3.x range?
<Pa^2> My system is dual homed, Yeah, I wondered about that... 10.0.0.0 is routed to my WAN gateway.
<lazypower> well, i know that if youre in the 10.0.3.0/24 cidr, your containers may run into ip collision
<lazypower> which will prevent them from starting
<lazypower> if you run `juju destroy-environment local && juju bootstrap && juju deploy wordpress && watch juju status`
<lazypower> you will recreate the local environment. it should literally take seconds to get moving since you have the container templates cached
<lazypower> if you run that, and it still sits in pending longer than a minute, can you re-pastebin the machine-0.log and unit-wordpress-0.log for me?
<lazypower> actually, instead of unit-wordpress-0, give me the all-machines.log
<Pa^2> That would explain it... I will down the WAN interface and see if your suggestion works.
<lazypower> ok, if that doesnt help next step in debugging is to try and recreate it and capture the event thats causing your pinch point.
<lazypower> Pa^2: warning, i have to leave to head to my dentist appt in about 10 minutes - i'll be back in about an hour and can help further troubleshooting
<Pa^2> Thanks so much for taking the time.  Much appreciated.
<lazypower> no problem :) its what i'm here for.
<gQuigs> how does a charm know it's for precise or trusty?   (I'd really like to get trusty versions of nfs, wordpress, and more...)
<arosales> gQuigs: when the charm is reviewed to pass policy it is put into a trusty or precise branch
<gQuigs> arosales: I'm trying to run it locally though... to help make either of them work on trusty
<lazypower> gQuigs: the largest portion of blockers we have for charms making it into trusty are lack of tests.
<arosales> gQuigs: the ~charmers team is currently working to verify charms on trusty and working with charm authors to promote applicable precise charms to trusty
<Pa^2> Still no love... I think I will start with a new clean platform and try it from the ground up.
<lazypower> gQuigs: in your local charm repo, mkdir trusty, and `charm get nfs` then you can deploy with `juju deploy --repository=~/charms local:trusty/nfs` and trial the charm on trusty.
<arosales> gQuigs: ah, just put them in a file path such as ~/charms/trusty/
<gQuigs> interesting.. is precise hardcoded in?   works: juju deploy --repository=/store/git precise/nfs
<lazypower> gQuigs: however if you can lend your hand to writing the tests we'd love to have more involvement from the community on charm promotion from precise => trusty. Tests are the quickest way to make that happen.
<gQuigs> doesn't: juju deploy --repository=/store/git trusty/nfs
<arosales> from ~/ issue "juju deploy --repository=./charms local:trusty/charm-name"
<gQuigs> arosales: lazypower thanks, making the folder trusty worked
<gQuigs> :)
<gQuigs> lazypower:  hmm, what kind of tests are needed?
<gQuigs> doc?
<arosales> gQuigs: juju is looking for something along the lines of "<repository>:<series>/<service>"
<arosales> https://juju.ubuntu.com/docs/charms-deploying.html#deploying-from-a-local-repository
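Putting lazypower's and arosales' advice together, a sketch:

    mkdir -p ~/charms/trusty
    (cd ~/charms/trusty && charm get nfs)   # fetch the charm source
    juju deploy --repository=~/charms local:trusty/nfs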
<lazypower> gQuigs: we *love* unit tests, but amulet tests, as an integration/relationship testing framework, are acceptable in place of unit tests (extra bonus points for both)
<lazypower> gQuigs: just make sure your charm repo looks similar to the following: http://paste.ubuntu.com/7702523/
<lazypower> gQuigs: and wrt docs -  https://juju.ubuntu.com/docs/tools-amulet.html
<lazypower> unit testing is hyper-specific to the language the charm is written in, but amulet is always 100% python
<lazypower> the idea is to model a deployment, and use the sentries to probe / make assertions about the deployment
<lazypower> using wordpress/nfs as an example, you add wordpress and nfs to the deployment topology, configure them, deploy them, then do things like, from the wordpress host, use dd to write an 8MB file to the NFS share, then, using the NFS sentry, probe to ensure 8MB were written across the wire and landed where we expect them to be.
<lazypower> it can be tricky to write the tests, and subjective, but they go a long way to validating claims that the charm is doing what we intend it to do.
<lazypower> s/we/the author/
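A minimal amulet sketch of that wordpress/nfs scenario; the relation endpoints and mount path are assumptions about those charms' interfaces, so treat this as shape rather than something to copy verbatim:

    import amulet

    d = amulet.Deployment(series='precise')
    d.add('wordpress')
    d.add('nfs')
    d.relate('wordpress:nfs', 'nfs:nfs')   # endpoint names assumed
    d.setup(timeout=900)

    wp = d.sentry.unit['wordpress/0']
    # write 8MB across the wire from the wordpress unit
    out, code = wp.run('dd if=/dev/zero of=/mnt/nfs/probe bs=1M count=8')
    if code != 0:
        amulet.raise_status(amulet.FAIL, msg='write over NFS failed')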
<Pa^2> http://paste.ubuntu.com/7702540/
<lazypower> strange, it's not panicking, and it's not giving you errors about anything common
<lazypower> and this is on a trusty host too right PA?
<lazypower> yeah
<lazypower> hmm
<Pa^2> affirmative
<lazypower> Pa^2: lets see your all-machines.log
<Pa^2> http://paste.ubuntu.com/7702550/
<lazypower> oh wow it never even hit the init cycle, i expected *something* additional in all-machines
<gQuigs> lazypower: thanks, will try... they don't have to be comprehensive though?  (nfs can work with anything that can do "mount"  which is pretty open ended..)
<Pa^2> Don't be late for your appointment... this can wait
<lazypower> good point. thanks Pa^2 - my only other thought is to look into using the juju-clean plugin
<lazypower> and starting from complete scratch - as in re-fetching the templates and recreating them (takes ~ 8 minutes on a typical broadband connection)
<Pa^2> Will have a look.  Thanks again.  I will keep you apprised.
<lazypower> if the template creation pooped out in the middle / the end there, and didnt raise an error it can be finicky like this
<lazypower> ok i'm out, see you shortly
<lazypower> gQuigs: one final note before i jam - take a look at the owncloud charm, and the tomcat charm
<lazypower> they have tests, and they exhibit what i would consider decent tests. The openstack charms are another example of high quality tested charms
<lazypower> using them as a guide will set you on the right path
<gQuigs> lazypower: will do, thanks!
<ChrisW1> mwhudson: too?
<mwhudson> ChrisW1: eh?
<ChrisW1> are you on the juju team at canonical?
<mwhudson> ChrisW1: no
<ChrisW1> that's okay then :-)
<mwhudson> but i do arm server stuff and was involved in porting juju
<ChrisW1> was getting a little freaked out at the number of people I appear to know on the juju team...
<mwhudson> ah yeah
<ChrisW1> anyway, long time no speak, how's NZ treating you?
<mwhudson> good!
<mwhudson> although my wife is away for work and my daughter's not been sleeping well so i'm pretty tired :)
<mwhudson> luckily the coffee is good...
<ChrisW1> hahaha
<ChrisW1> yes, coffee is good
<ChrisW1> I have a bad habit of making a pot of espresso and drinking it in a cup when I'm at home...
<mwhudson> and you?  are you using juju for stuff, or just here for social reasons? :)
<ChrisW1> oh, *cough* "social reasons"...
<whit> hey heath
<jose> tvansteenburgh: ping
<jose> niedbalski: ping
<jose> cory_fu: ping
<cory_fu> jose: What's up?
<jose> cory_fu: hey, I was wondering if you could take a look at the Chamilo charm and give it a review
<jose> I was trying to get as many charm-contributors reviews so it can be easy for charmers to approve
<jose> upstream is having an expo and they could mention juju
<niedbalski> jose, sure, i can do it in a bit
<niedbalski> brb
<jose> thank you :)
<AskUbuntu> juju mongodb charm not found | http://askubuntu.com/q/488209
<lazypower> Pa^2: how goes it?
<cory_fu> jose: I ran into one issue when testing that charm.
<cory_fu> Other than that, though, it looked great
<cory_fu> That would be an excellent case-study for converting to the services framework, when it gets merged to charmhelpers.  It also made me realize that we need to enable the services framework to work with non-python charms
<Pa^2> lazypower: I tried deleting all of the .juju and lxc cache then deploying mysql.... just checked, still pending after almost two hours.
<cory_fu> But it would handle all of the logic that the charm currently manages using dot-files, automatically
<lazypower> Pa^2: if you're on a decent connection, if it takes longer than 10 - 12 minutes its failed.
<lazypower> usually once you've got the deployment images cached, it'll take seconds. ~ 5 to 20 seconds and you'll see the container up and running
<jose> cory_fu: what was the issue?
<lazypower> cory_fu: the services framework has been merged as of yesterday. you mean the -t right?
<Pa^2> I will start with a clean bare-metal install of Ubuntu and start from scratch.
<lazypower> Pa^2: ok, ping me i'll be around most of the evening if you need any further assistance.
<lazypower> Pa^2: i can also spin up a VM and follow along at home
<jose> cory_fu: oh, read the comment. will check now
<Pa^2> I won't be able to do so until I am at work tomorrow.  Remote sessions via putty from a Windows laptop are just too cumbersome.
<lazypower> i understand completely. I have an idea for you though Pa^2
<lazypower> we publish a juju vagrant image that may be of some use to you.
<lazypower> vagrant's support on windows is pretty stellar
<jose> cory_fu: the branch has been fixed, if you could try again that'd be awesome :)
<lazypower> Pa^2: https://juju.ubuntu.com/docs/config-vagrant.html
<Pa^2> Good possibility.  I will look into it.  They frown on me loading software on my production laptop.
<Pa^2> One way or another I will keep you apprised of my circumstances.
<Pa^2> Should have brought my Linux laptop instead of this PoC.
<Pa^2> Misunderstood, vagrant on my Linux box at work then...
<Pa^2> Lemme look into it.
<cory_fu> jose: Great, that fixed it
 * jose starts running in circles waving his arms
 * Pa^2 slaps jose on the back... Success is grand.
<cory_fu> jose: Added my +1 :)
<jose> thanks, cory_fu!
<cory_fu> Alright, well, I'm out for the evening
<cory_fu> Have a good night
<jose> you too
<lazypower> jose: did you do something good?
<lazypower> do i get to hi5 you?
#juju 2014-06-26
<lazypower> you realize my hand is getting sore from all the hi5's lately dude. You've been killin it.
<jose> lazypower: apparently the Chamilo charm is working and needs to be promulgated so it can be promoted during an expo
<jose> thanks :)
<lazypower> when is this expo?
<jose> from today until saturday
<jose> started like at noon today
<lazypower> oh ok, zero notice
 * jose told Marco on Monday
<lazypower> i'm checking into the MapR Hadoop distribution charm. Once that's done i can take a look at chamilio again
<jose> thank you!
<lazypower> ah, he's been saddled with my MongoDB work since i fell ill
<lazypower> i'll pick up the torch nbd
<jose> in the meanwhile, I'll organize the Ubuntu on Air! channel for a nicer experience
<lazypower> what
<jose> you'll see later today
 * lazypower eyeballs jose suspiciously
<marcoceppi> jose: promulgated
<jose> marcoceppi: awesome, thanks a bunch! \o/
<yaell> Hi we are new to juju and trying to deploy openstack using it. We encountered some basic problems. We are using 10 machines: one is the maas and juju server and the 9 others are for deployment. The problem is there are 9 services that have to be deployed but only 8 machines left after bootstrap. When trying to deploy 2 services on 1 machine something always fails. Does anyone know if there are services that can not be installed on the same machine?
<yaell> In addition is this the best place to ask questions regarding juju and openstack or is there an additional mailing list?
<axw> yaell: I don't know the answer to your problem, but there's a mailing list (https://lists.ubuntu.com/mailman/listinfo/juju) and http://askubuntu.com/
<axw> this is an appropriate place, but I think most of the people who know the answers will be asleep
<mwhudson> yaell: you can deploy a bunch of the services to separate containers on one node
<mwhudson> i'm fairly sure
<mwhudson> how production-ish is this?
<yaell> Just testing it for now
<yaell> how do I deploy to separate containers on the same node?
<mwhudson> yaell: if you know the machine number you can say juju deploy --to lxc:$machine_number
<mwhudson> yaell: juju help deploy has some more examples
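A sketch of the container placement mwhudson describes (machine number and services are illustrative):

    juju deploy mysql --to lxc:1             # new LXC container on machine 1
    juju deploy rabbitmq-server --to lxc:1   # second container, same node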
<lukebennett> lazypower: Thanks for your reply to me yesterday. No the node stayed as allocated. As suspected, when I powered the node down manually and reran juju bootstrap, it PXE booted and deployed as expected.
<yaell> Yes, I did that. The problem is that when I do, the service does not come up properly, or if I add relations the service fails.
<mivtachyahu> good morning. Anybody ever seen a bug where juju mixes up what services are running on which machines?
<sherman> hey all, how's things?
<mwhudson> yaell: oh
<mwhudson> you need to mutilate networking a bit
<mwhudson> something like this https://github.com/Ubuntu-Solutions-Engineering/cloud-installer/blob/master/tools/cloud-sh/lxc-network.sh
<sherman> hey, dumb q: can I bootstrap to a local container then juju deploy to MaaS? or do I need a MaaS node bootstrapped?
<sherman> like is my machine:0 just a node because maas is set as my default?
<AskUbuntu> MAAS/juju not cleaning up nodes | http://askubuntu.com/q/488396
<schegi> jamespage, out there??
<lazypower> lukebennett: interesting. That's happened to me a few times, things seem to get out of whack once in a great while and things go into a pending state.
<lazypower> sherman: i dont understand the question
<lazypower> You want to place your bootstrap node in an LXC container somewhere, and use that to orchestrate your maas cluster?
<jcastro> hey so I noticed something odd
<jcastro> hey marcoceppi
<jcastro> https://juju.ubuntu.com/docs/tools-amulet.html
<jcastro> in the sidebar
<marcoceppi> jcastro: hey
<jcastro> I think we should just rename it to "Writing Tests" to be more obvious.
<gQuigs> with the local-provider all of my VMs get the same IP/DNS name, what am I doing wrong?
<jcastro> https://juju.ubuntu.com/docs/authors-testing.html
<jcastro> oh, this is the page I wanted anyway
<marcoceppi> jcastro: it's the Amulet reference guide
<marcoceppi> yeah
<jcastro> I'm going to retitle that
<jcastro> "Writing tests for charms"
<lazypower> gQuigs: lxc is attempting to assign the same ip to all of your containers?
<gQuigs> lazypower: afaict it actually does
<lazypower> that.. makes no sense. its given a range in the configuration to choose from
<lazypower> what version of juju/lxc?
<gQuigs> last time, I could ssh to the IP and I would randomly get one of the VMs
<lazypower> gQuigs: if you look in /etc/default/lxc-net you should see a config variable for LXC_DHCP_RANGE
<gQuigs> juju, stable/ppa - 1.18.4-trusty-amd64;   lxc, the stable
<jcastro> huh, how is that even possible
<lazypower> jcastro: this ties in with what bluespeck emailed us about this morning.
<jcastro> surely it would error out if it tried to get the same ip?
<lazypower> and right, the container should sit in pending due to ip collision
<jcastro> lazypower, but they were on azure, so if it affects multiple providers ....
<lazypower> jcastro: well i dont have empirical evidence but this sounds strikingly similar
<jcastro> similar enough to ask somebody
<gQuigs> lazypower: jcastro: nope, it tries really hard to work... http://pastebin.ubuntu.com/7705971/
 * gQuigs checking in ..default/ now
<lazypower> what the
<jcastro> hey so let's open a bug right away
<jcastro> so I can ask people to take a look
<lazypower> gQuigs: do you mind attaching the output of your machine0.log, your juju-status, and filing a bug at launchpad.net/juju-core ?
<lazypower> er juju-local
<gQuigs> lazypower: sure, will do if I can find a cause
<lazypower> gQuigs: we need to poke at this sooner rather than later. this appears to be related (but may not be)
<gQuigs> lazypower: what is your /etc/default/lxc-net supposed to look like?
<lazypower> jcastro: once we have a bug # will you follow up with the bluespeck guys?
<lazypower> gQuigs: http://paste.ubuntu.com/7705989/ is mine
 * gQuigs is identical... I'm going to revert to a non-ppa LXC...
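For reference, the stock trusty /etc/default/lxc-net carries roughly these values (the addresses are the packaged defaults and may differ locally):

    # /etc/default/lxc-net -- stock trusty defaults (illustrative)
    USE_LXC_BRIDGE="true"
    LXC_BRIDGE="lxcbr0"
    LXC_ADDR="10.0.3.1"
    LXC_NETMASK="255.255.255.0"
    LXC_NETWORK="10.0.3.0/24"
    LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
    LXC_DHCP_MAX="253"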
<sparkiegeek> marcoceppi: jcastro: sorry for nagging but would really appreciate a look at https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041. Fixes a bug that prevented installation on stale images and adds a ton of tests
<jcastro> sparkiegeek, yeah reviewers have been swamped
<lazypower> sparkiegeek: this looks good - however your tests are in hooks/
<jcastro> lazypower, you got time to take a break and give that a review today?
<lazypower> there's an idiom to keep tests in $CHARM_DIR so they can be executed via charm test
<sparkiegeek> lazypower: they aren't amulet tests (as discussed before, can't do that due to postgresql charm relation)
<lazypower> ahhh ok so you're aware
<sparkiegeek> lazypower: I consider it Python best practice to put them where they are now (close to the code)
<lazypower> well, you have a make target for them
<lazypower> +1
<sparkiegeek> yup :)
<lazypower> i dont have the same dependencies though, is it possible you could add a requirements.txt or something with the deps and a make setup target that would pip install those for me?
<lazypower> that way i'm not manually chasing down the deps?
<sparkiegeek> lazypower: of course
<jcastro> asanjar, hey elasticsearch ninja-edition is now in the store, please start bundling cool stuff
<wesleymason> marcoceppi: lazypower: I've been told you've been working on the mongo charm, I've been working on getting it to use the storage subordinate for persistent volumes
<jcastro> marcoceppi, how often does the github sync work? elasticsearch seems out of date but it updated in the store like yesterday
<lazypower> wesleymason: we're under an active effort to rewrite it to use currently existing charm-helpers and reduce overall complexity of the charm. We should have something to look at by next week. Do you have your storage-volume branch somewhere we can reference?
<lazypower> i'd be willing to try to fold it into the first cut if it's basically there in the mongodb charm's current form.
<wesleymason> lazypower: I have an MP, with branch referenced: https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
<lazypower> wesleymason: ah, thanks. seems pretty straightforward. I'll add this to our card to have this as a base feature ootb
<wesleymason> lazypower: cheers :)
<lazypower> gQuigs: still banging your head against the desk with ip assignment and LXC?
<gQuigs> lazypower: nuked all the juju/local/lxc packages and trying fresh
<gQuigs> hmm.. now on reinstall I'm getting lxcbr0 not found.. maybe I messed with that at some point...
<lazypower> gQuigs: its a virtual ethernet device bridge. when you purged it probably removed the device. did you install the juju-local package?
<gQuigs> lazypower: yup
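If a purge took the bridge definition with it, something along these lines usually brings lxcbr0 back on trusty (a sketch; lxc-net is the upstart job that creates the bridge):

    sudo apt-get install --reinstall lxc    # restore packaged defaults
    sudo service lxc-net restart            # recreate the lxcbr0 bridge
    ifconfig lxcbr0                         # verify it is back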
<wesleymason> lazypower: fyi I'm also *currently* adding some nagios checks to the mongodb charm on another branch
<lazypower> wesleymason: might want to hold on that pending our re-release of the charm
<wesleymason> lazypower: is it going to add checks? Or just because of the refactor of the charm?
<marcoceppi> lazypower wesleymason adding the nagios relation should be pretty easy to transplant, unless you're adding it in the hooks.py
<wesleymason> marcoceppi: yeah, the relation/hook is about 6 lines of python, most of the checks are in separate nagios plugin script files
<marcoceppi> wesleymason: cool, should be pretty easy to transplant that
<wesleymason> aye, won't be a merge, but not hard to copy over
<wesleymason> as I ended up bumping charmhelpers on my branch too, for the nrpe helper
<wesleymason> marcoceppi: lazypower: got any idea of an ETA on charm refactor, or just "when it's done"?
<marcoceppi> wesleymason: I'm working on it right now, I should have it done by the end of the weekend
<wesleymason> awesome
<gQuigs> yup, now installing lxc doesn't create the lxcbr0 at install time...
<asanjar> lazypower: i am back from doc visit, what's up
 * gQuigs goes to #lxcontainers
<lazypower> couple of things actually
<lazypower> i gated a merge against your hadoop2-devel charm: https://code.launchpad.net/~lazypower/charms/trusty/hadoop2-devel/python_rewrite/+merge/224647
<lazypower> that should close the loop on removing all the class bits out of the hooks and 1:1 with the bash counterpart. so we're good to go for replacement
<lazypower> we can pair program the tests as early as next week.
<lazypower> Did you get a chance to review MapR yet?
<schegi> anyone knows how to name ceph clusters when deployed with ceph charm??
<gQuigs> and now that I have lxcbr0 working again, it's all fine - different IPs for each machine, yay :)
<jcastro> sebas5384, it will take me a bit longer to get a shirt to you
<schegi> jamespage, thx for your help yesterday; ceph cluster up and running with split network. now i have to figure out how to configure the osds to use external journal devices. but this seems to be doable.
<lazypower> sebas5384: hey! i haven't forgot about you. I'll be working on the vagrant plugin this weekend. I should have a PR circa sunday.
<sebas5384> yeah!! lazypower i'm a bit in phantom mode these days hehe
<lazypower> Totally understood :) Thanks for laying out the repository for the work though. I'm going to head back through it tonight and setup the remaining milestones and project plan
<sebas5384> but definitely i'm going to come back to you in these days :D
<lazypower> i started it and never finished the work. I've been sidetracked with 8 billion other objectives
<sebas5384> great!! and i had some ideas that i'm going to register in the issues list :D
<sebas5384> hehehe
<sebas5384> i know that feeling
<sebas5384> right now im in a hackathon of drupal 8
<lazypower> sounds good. I'm heading out to a dr's appt. I'll catch you on the flip sebas5384
<sebas5384> porting some modules
<lazypower> nice
<lazypower> keep kicking butt sebas5384
<sebas5384> hehehe yeah :D
<sebas5384> great to hear from you man
<Pa^2> Probably should have asked this first: Can I run juju on a desktop system or does it require server?
<marcoceppi> Pa^2: you can run juju the client on Ubuntu, Mac OSX or Windows systems
<Pa^2> I am still missing something.  Did apt-get install juju-core, juju-local.  Then did juju init, switch local, bootstrap - Machine "0" : agent-state started.
<g0ldr4k3> hello everyone, I've a question for you, juju-core has to be installed on host machine or on region controller (MaaS) if I want to have an environment based on MaaS (Region Controller + 3 Cluster Controller +3 nodes for each CC)?
<Pa^2> When I attempt to do juju deploy wordpress it appears to try and start machine "1" but just says pending - many minutes now.  Thoughts?
<sarnold> Pa^2: the first deploy downloads a whole cloud image.. that can take a while
<Pa^2> I will wait and get back to you.  Thanks.
<g0ldr4k3> because I've installed it on RC but when I run the command "juju bootstrap -e maas --debug", juju tries to connect to the node but fails to create a connection!!!
<g0ldr4k3> is there someone can help me? thanks
<g0ldr4k3> the string is this "2014-06-26 16:55:29 DEBUG juju.utils.ssh ssh_openssh.go:122 running: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash"
<sarnold> g0ldr4k3: oh, nice; what happens when you run that command by hand?
<g0ldr4k3> sarnold: which is the command I have to run?
<g0ldr4k3> sarnold: and why does it try to connect to only one node and not the others?
<sarnold> g0ldr4k3: ssh -o "StrictHostKeyChecking no" -o "PasswordAuthentication no" -i /home/richardsith/.juju/ssh/juju_id_rsa ubuntu@Ubuntu14.04LtsNode02Cluster01Svr /bin/bash
<g0ldr4k3> sarnold: the result is this "ssh: Could not resolve hostname ubuntu14.04ltsnode02cluster01svr: Name or service not known"
<sarnold> g0ldr4k3: nice.
<sarnold> g0ldr4k3: do you recognize that funny hostname? "Ubuntu14.04LtsNode02Cluster01Svr"
<g0ldr4k3> if I try to run a ssh connection from the host machine it works well
<sarnold> g0ldr4k3: do you recall if that's something that you might have assigned to it in some way? I'm really surprised about the . in the middle of it..
<g0ldr4k3> yes but it's a test in my lab
<sarnold> can you change the name to something that doesn't include a period?
<sarnold> unless you're in a position to run a domain 04LtsNode02Cluster01Svr that has a host named Ubuntu14...
<g0ldr4k3> no, its hostname is different; it's just the node's name added in the MaaS UI
<g0ldr4k3> and that's it
<sarnold> hrm, did MAAS assign that name??
<g0ldr4k3> Anyone can help me?
<g0ldr4k3> I've posted my question before...
<sparkiegeek> g0ldr4k3: please rename the node in MAAS to not have the "." in it as per sarnold above (unless you really have a domain as he suggests)
<g0ldr4k3> I've to check i dont think its name has "."
<g0ldr4k3> Maybe i understand the issue. I've not a domain set on maas
<g0ldr4k3> The connection via ssh from the host machine works only with ip
<g0ldr4k3> Sparkiegeek: thanks a lot for support i'll try to set it on maas
<lukebennett> sparkiegeek: Thanks for your response to my query yesterday. PXE boot seems fine but I'm still having problems - http://askubuntu.com/questions/488396/
<sparkiegeek> lukebennett: are you using virtual machines, or is this real hardware? What power type are you using?
<lukebennett> Real hardware via WOL
<sparkiegeek> lukebennett: any other DHCP servers in the network? Can you share the console log?
<lukebennett> Yes, using external DHCP server but with reserved IP and relevant PXE configuration
<lukebennett> You mean the maas log?
<sparkiegeek> lukebennett: "relevant PXE" scares me... are you not using PXE from MAAS?
<sparkiegeek> no I mean the log of the node that got booted
<lukebennett> Test bed consists of two nodes - one running as MAAS cluster controller and region controller, one as a node... node has reserved IP and DHCP server is configured so it PXE boots from the MAAS server
<lukebennett> PXE booting works fine - when I juju bootstrap, the server powers up
<lukebennett> The environment deploys ok (well it doesn't, I get various errors with some of the charms but that's another story)
<lukebennett> When I destroy it, mongod keeps running and the server stays online so a subsequent bootstrap fails
<lukebennett> Which particular log file are you interested in? Sorry if that's a stupid question, still finding my way around
<sparkiegeek> lukebennett: ok, sorry I went to read up on MAAS power settings
<sparkiegeek> lukebennett: so it turns out that (as I suspected) MAAS can't do power down using WOL
<lukebennett> Ah
<lukebennett> Thanks
<sparkiegeek> lukebennett: the MAAS guys hang out in #maas if you need more help :)
<lukebennett> Thanks, I've been somewhat split between the two :) It feels like regardless of the power down, juju ought to be able to clean up after itself?
<lukebennett> i.e. I shouldn't have to power the node down to redeploy onto it
<sparkiegeek> how are you "stopping" juju?
<sparkiegeek> I mean, destroy-environment or ?
<lukebennett> juju destroy-environment xxxx
<AskUbuntu> Juju - configuring a service by clicking on charm with stand-alone implementation - LXCa | http://askubuntu.com/q/488575
<sparkiegeek> lukebennett: can't say for sure but I suspect Juju expects MAAS to be able to power down machines and so doesn't do it itself
<sparkiegeek> lukebennett: do you have any remote power control for turning the node off?
<sparkiegeek> lukebennett: if so, you could "teach" it to MAAS by editing /etc/maas/templates/power/ether_wake.template
<lukebennett> I think WOL is all we have on this setup but I'll speak to our ops guys about it. May have to try going down the VM route for testing
<lukebennett> I would say that juju ought to be agnostic of what MAAS decides to do with powering down however
<lukebennett> I need to hit the road now but thanks for your help
<sparkiegeek> lukebennett: no worries, sorry I couldn't give you a better overall answer!
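For the record, MAAS 1.x power templates are small shell snippets with {{...}} substitutions; a hedged sketch of adding a power-off branch might look like this (the power_change and mac_address variables follow the MAAS 1.x template style, and the power-off helper is entirely site-specific and hypothetical):

    # /etc/maas/templates/power/ether_wake.template (sketch, not verbatim)
    if [ "{{power_change}}" = "on" ]; then
        # Wake-on-LAN can only power machines up.
        wakeonlan {{mac_address}} || etherwake {{mac_address}}
    elif [ "{{power_change}}" = "off" ]; then
        # WOL has no power-off; hand the MAC to a site-specific helper
        # (hypothetical script).
        /usr/local/bin/site-poweroff {{mac_address}}
    fi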
<lazypower> sparkiegeek: did you get reviewboards make targets updated for me? this MP looks good otherwise.
<sparkiegeek> lazypower: haven't had a chance yet I'm afraid
<lazypower> ping me when that lands and i'll merge this for you.
<sparkiegeek> lazypower: sure, thanks
<arosales> jose:  nice to see tests in your chamilo charm
<Pa^2> Just how big is cs:precise/wordpress?
<Pa^2> BTW: I successfully got trusty/mysql and juju-gui working on my local install.
<jcastro> marcoceppi, here's a small one
<jcastro> https://code.launchpad.net/~chris-gondolin/charms/precise/nrpe-external-master/trunk/+merge/221062
<jcastro> I think chuck sorted the lowest hanging fruit
<jcastro> also, wrt to your mongo work
<jcastro> https://code.launchpad.net/~wesmason/charms/precise/mongodb/add-storage-subordinate-support/+merge/223539
<jcastro> marcoceppi, this one looks straightforward: https://bugs.launchpad.net/charms/+bug/1195736
<_mup_> Bug #1195736: Storm Charm was added. <Juju Charms Collection:Fix Committed by maarten-ectors> <https://launchpad.net/bugs/1195736>
<jcastro> here's another easier one
<jcastro> https://code.launchpad.net/~jacekn/charms/precise/mysql/n-e-m-relation-v2/+merge/218969
<lazypower> jcastro: i have that MP in the card on the board
<jcastro> rock
<Pa^2> Hmm, on a trusty system trusty charms spin right up, precise machines and charms not so much.
<Pa^2> failed to process updated machines: cannot start machine 3: no matching tools available
<Pa^2> Machine 3 is precise, 0,1,2 are trusty
<Pa^2> Is there a way to DL precise "tools"
<sarnold> Pa^2: juju sync-tools? tools-sync?
<Pa^2> Thanks
<lazypower> Pa^2: thats progress!
<lazypower> yesterday you weren't even getting the tools notice
<lazypower> sarnold: its sync-tools
<sarnold> lazypower: nice, alphabetical order :) thanks
<lazypower> np :)
<Pa^2> I have the precise tarball in ~/.juju/local/storage/tools/releases.  Is that where it should draw from?
<Pa^2> Unfortunately there is no precise in ~/.juju/local/tools
<lazypower> Pa^2: http://askubuntu.com/questions/285395/how-can-i-copy-juju-tools-for-use-in-my-deployment
<Pa^2> Thanks.
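The short version, for a 1.18-era client (flags as in `juju help sync-tools`; --all pulls every version rather than just the latest):

    # Copy released tools (all series, including precise) from the official
    # bucket into the local environment's tools storage.
    juju sync-tools -e local --all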
<Pa^2> Well, time for burgers and beer... I will let this go until I get back in the am.  Thanks for all the assistance.
<pmatulis> how do i remove proxy settings from juju?
<pmatulis> i removed it from ~/.juju/environments/local.jenv but no
#juju 2014-06-27
<AskUbuntu> Enterprise Support for Juju | http://askubuntu.com/q/488769
<jamespage> fwereade, this might seem like an odd question but would a change to 'private-address' on a relation persist OK?
<jamespage> i.e. if I wanted to change it to something other than the juju provided private-address?
<Pa^2> Success!
<schegi> jamespage, out there??
<jamespage> schegi, yup
<fwereade> jamespage, yes it would
<fwereade> jamespage, we don't overwrite it
<jamespage> fwereade, awesome
<fwereade> jamespage, and I don't *think* we ever will, I think my last chat with hazmat agreed there
<schegi> thinking about digging a bit into this charm stuff on the weekend in order to modify the ceph/ceph-osd charms so that they support external journal devices. as far as i understood the whole thing up to now, the most relevant things should be the hooks, and the files under upstart provide the configuration, right?
<jamespage> schegi, the specific osd journal device is specified when initializing the OSD device in the charm code under hooks
<jamespage> osdize_device
<jamespage> it calls ceph-disk-* to do that
<jamespage> schegi, its awesome that you want to hack on this!
<jamespage> niedbalski, hmm - I'm not sure that the way we changed rabbitmq to bump the open files ulimit is working:
<jamespage> Max open files            1024                 4096                 files
<schegi> yeah, already seen the call of ceph-disk; what is not completely clear to me is the way the charm iterates over the given devices, especially when doing this on different nodes. lets see what i can do next week.
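Under the hood the charm's osdize path boils down to handing ceph-disk a data device plus an optional journal device, roughly like this (device paths are illustrative; the charm exposes the same idea through its osd-journal config option):

    # Prepare an OSD whose journal lives on a separate (e.g. SSD) device,
    # then activate the resulting data partition.
    ceph-disk prepare /dev/sdb /dev/sdc
    ceph-disk activate /dev/sdb1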
<niedbalski> jamespage, i tested that using pam / ssh / login and it worked fine, did you reboot the instance/server ?
<jamespage> niedbalski, just looking at a completely fresh install - the nfiles limits appear not to be set even before reboot
<frobware> I was trying to deploy wordpress on arm64 but get: "ERROR charm not found: cs:trusty/wordpress". It is available from precise but I get other problems choosing that. Is this simply not supported right now?
<sarnold> frobware: correct; you can see a list of charms available for trusty at http://manage.jujucharms.com/charms/trusty
<frobware> sarnold: thank you.  I tried mysql (which worked) then thought I'd tie that to wordpress... and got caught out.
<sarnold> frobware: you ought to be able to mix trusty mysql with precise wordpress
<frobware> sarnold: the error I get is (from juju status):  "3":
<frobware>     agent-state-info: '(error: index file has no data for cloud {RegionOne http://192.168.1.100:5000/v2.0}
<frobware>       not found)'
<frobware>     instance-id: pending
<frobware>     series: precise
<sarnold> frobware: yikes, that one's beyond me :) sorry
<frobware> sarnold: for ARMv8 I'm somewhat surprised to think you could mix precise and trusty. seems too old...
<sarnold> frobware: is armv8 arm64?
<frobware> sarnold: correct. AKA aarch64
<sarnold> the arch so nice they named it thrice
<sarnold> frobware: yeah, that looks to be true, precise php5 is only available on amd64, armel, armhf, i386, powerpc
<rick_h_> lazypower: around? I need a favor
<rick_h_> marcoceppi: or maybe you know, I'm trying to find a couple of charms that have some papercuts that need fixing.
<rick_h_> marcoceppi: preferably ones with tests to update with it
<frobware> is there a means in the juju-gui to deploy to a machine id when using openstack?  I was trying to avoid spinning up / tearing down instances whilst experimenting. You can do it with "juju deploy --to", just wondered if there's a way to do this in the browser.
<bac> frobware: not yet, it is under development
<frobware> bac, ok, thx.
<rick_h_> frobware: we're working on that feature right now. If you go to the url https://xxxx/:flags:/mv you get a new machine view option
<rick_h_> that allows deploying a service, and then placing it
<rick_h_> frobware: it's not yet complete, but I think it will do what you want in the current state. If this is not a real live/testable environment would love to have you peek at it sometime and any feedback is valuable
<frobware> rick_h_: trying now...
<rick_h_> frobware: note that this is only on the latest release of the charm and that even better would be to use the charm's config to set the release to 'develop' which will pull from the latest trunk codebase.
<rick_h_> juju-gui-source="develop"
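That is, assuming the GUI service is deployed under its default name:

    juju set juju-gui juju-gui-source=develop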
<frobware> rick_h_: using your URL with bits, I see "Machines".  I'm assuming 0 (i.e., 1 machine) is where it would deploy to, unless I "Add Machine".  Correct?
<rick_h_> frobware: correct
<rick_h_> frobware: if you deploy any services using the normal url, they'll show up in your environment there on each machine they're on
<rick_h_> frobware: and those machine numbers should match your 'juju status' output
<frobware> rick_h_: hehe. so I got a little lost. Have a new machine id "6" for mysql service (which is for oneiric)
<rick_h_> frobware: lol, ok. Is that what you wanted to have?
<frobware> rick_h_: Machine id 6 listed. Hardware details not listed. Cannot see a way to remove the machine.
<rick_h_> frobware: if at any point you can reload the browser window and it should reset
<rick_h_> frobware: yea, removing is the task we're working on now :)
<rick_h_> frobware: so you have this deployed to the environment or just in your GUI window?
<frobware> rick_h_: both, trying to remove via the command line...
<frobware> rick_h_: you see, this is why I don't like leaving the church... (terminal)
<frobware> rick_h_, :)
<rick_h_> frobware: hah, I'm with you. Sorry to use you as a test pilot so. As we're working on this I was curious what someone new would do with it right off.
<frobware> rick_h_: right, strike N. Selected mysql (trusty this time). I see it listed under "Unplaced units".
<rick_h_> frobware: I'd be curious how you went from 1 machine to machine 6 if you have time to put it in a pastebin or anything, but if not thanks for checking it out briefly
<rick_h_> frobware: right, you can then drag/drop it or click the arrow icon next to it to place it on a machine
<frobware> rick_h_: is there a log I can cut & paste for you?  I was clicking a bit madly.
<rick_h_> frobware: hmm, not really, don't worry about it.
<frobware> rick_h_: so I dragged it "right" and ended up with 2 machines. Getting closer. ;)
<rick_h_> frobware: woot
<frobware> rick_h_: the UI says 2 machines. juju status doesn't. Is there an "apply" button?
<rick_h_> frobware: and down at the bottom, if you hit the deploy button it should say it's going to create a machine, deploy mysql, add a mysql unit?
<rick_h_> frobware: yep, in that footer
<frobware> rick_h_: so now I have "Deployment summary: 1 unit added, 1 machine added."
<rick_h_> frobware: ok, and then if you confirm there, it should go do it
<rick_h_> frobware: and after a few seconds juju status should show the new stuff going on and in progress
<frobware> rick_h_: what does "root-level" machine imply?
<rick_h_> frobware: not in an lxc or kvm container on that machine
<rick_h_> frobware: but on the base install of it
<frobware> rick_h_: so on the host?  ;)
<rick_h_> there was an email thread going around on that wording. You just confirmed I'm wrong :)
<rick_h_> frobware: yes
<frobware> rick_h_: not obvious. root has too (way too) many connotations.
<rick_h_> understood, that was the argument against
<frobware> rick_h_: so if I confirm now adding 1 machine on the "root" means I will get... 2 machines. or 1?
<rick_h_> frobware: so you will get one new machine, and then have 2 total (since you have the bootstrap node/machine #0 currently)
<rick_h_> frobware: if I'm understanding where you're at correctly
<frobware> rick_h_: yes. but I was hoping/trying to deploy on the machine that is running the UI
<rick_h_> frobware: oh! ok. what cloud is this?
<rick_h_> frobware: ok, so you've got machine 0, which is the bootstrap node and running the juju-gui currently?
<frobware> rick_h_: local deployment of openstack (via devstack) to which I added juju.
<rick_h_> frobware: ok, so if you reload the page, and go back to the start, deploy mysql, get an unplaced unit. Then click on machine 0, and then drag/drop the unplaced unit on the 'add container' header you can deploy mysql in a container on that machine
<rick_h_> frobware: but now we're running closer to the edge of how much work is released in your version of the GUI vs in trunk but not released yet
<frobware> rick_h_: it would be neat if I could drag from the left directly on to the Environment column (and 0 in my case/example)
<rick_h_> frobware: to the container column?
<frobware> rick_h_: given what I see, the "Environment column".  In there I see "Environment" and below that "0" - which I assume is the node running the UI.
<rick_h_> frobware: ok gotcha. Yea, that's the machines. We can't do the 'hulk smash' colocating over the api to juju so we have to walk you through creating a container
<rick_h_> frobware: it sounds like we're hitting the limits of what we've got for you to use at the moment vs what you want to do.
<frobware> rick_h_: sure. right, let me try once more... if you have the patience!
<rick_h_> frobware: so to your original question, we're working on it and getting there, but not quite yet. :)
<rick_h_> frobware: sure thing, I'm taking up your time here.
<frobware> rick_h_: I have my unplaced unit, click on the arrow indicating left->right, move to... (choosing) 0, choose location: choosing 0/bare metal.
<rick_h_> frobware: yea, that'll bomb
<frobware> rick_h_: ah!
<rick_h_> frobware: juju won't let us, we've removed that option this week
<rick_h_> frobware: but not in your version of the GUI codebase
<frobware> rick_h_: my clicking is obsolete!
<rick_h_> lol
<frobware> rick_h_: so can I just drag onto the "0" in the Environment column?
<rick_h_> frobware: so click the machine first
<frobware> rick_h_: which gives me "0/lxc/new0".  what's that? ;)
<rick_h_> frobware: so that's it's going to create a new lxc container on machine 0
<rick_h_> $machine0/lxc/$container0
<rick_h_> kind of thing
<frobware> rick_h_: so can I change the type?
<rick_h_> frobware: yes, kvm is the other type
<rick_h_> frobware: you should have gotten a drop down to select the type?
<frobware> rick_h_: it does drop down, but no choices. I'm just clicking the "0/lxc/new0" bits...
<frobware> rick_h_: incidentally, the trash icon doesn't appear to do anything when I click on it.
<rick_h_> frobware: hmm, ok so yea that's now a drop down in the latest code.
<rick_h_> frobware: yea, the devs are working on the 'remove' right now as we speak
<rick_h_> frobware: you can see the diff in trunk http://comingsoon.jujucharms.com/:flags:/mv/
<rick_h_> frobware: if you first deploy one service, deploy, and confirm, then deploy a second one, switch to machine view, and you create a container it has a choice
<rick_h_> frobware: come back next week? lol
<frobware> rick_h_: you're right. time for beer!
<rick_h_> frobware: thanks for experimenting though. Great feedback
<frobware> rick_h_: one other bit of feedback. Once you click on "deploy" would you (I would??) expect the state to change.  ie., should I be able to click deploy again.
<rick_h_> frobware: which deploy do you mean?
<rick_h_> frobware: the one in the footer where you get the confirmation step?
<frobware> rick_h_: bottom right button.  I click "Deploy", it says "Confirm", I do.  And the state goes back to "Deploy". I wasn't sure, then, whether it had registered my action.
<rick_h_> frobware: ah, yes there's a fix to grey it out and change it since after confirmation you've got no pending changes
<frobware> rick_h_: and if I do click and confirm again, what is it going to do?
<frobware> rick_h_: ^^^ right
<rick_h_> frobware: it 'resets' but it only says "deploy" once you have something to deploy
<frobware> rick_h_: yep, no pending changes, no possible action.
<rick_h_> http://comingsoon.jujucharms.com/:flags:/mv/ note it's greyed out and disabled
<rick_h_> by default
<frobware> rick_h_: thanks for your help and patience...
<rick_h_> frobware: right back at you
<frobware> rick_h_: back to my cli. :)
<rick_h_> enjoy! /me goes back to his mutt window to do email
<lazypower> rick_h_: whats the favor? :)
<rick_h_> lazypower: I'm looking for charms with papercuts that I can give new starters to fix
<rick_h_> lazypower: as part of a 'welcome to juju' program
<lazypower> Oh!
<rick_h_> lazypower: so wonder if you guys have anything in your heads, a charm with tests would be great
<rick_h_> so that they could get the full experience of debugging, fixing, adding a test, and such
<rick_h_> lazypower: but something small, don't want them to spend a week on it
<lazypower> ok that's going to be tricky as we have a handful of charms with tests. let me cross ref launchpad with the audit spreadsheet
<rick_h_> lazypower: ok, well the test thing is icing on the cake if you've got it
<rick_h_> lazypower: I went through the charm bugs but so many look like they might not even apply any more
<rick_h_> lazypower: sent you a link
<lazypower> yeah this is going to be fun, jose has been really good for sniping a lot of the low hanging fruit
<rick_h_> lazypower: ok, well if not no biggie.
<rick_h_> lazypower: don't waste a bunch of time on it, but if you think of anything let me know and I'll get a new person on it for you for free :)
<lazypower> <3
<jose> rick_h_: is there any specific service in which you would like to see tests?
<rick_h_> jose: no, this is more about them just having something low hanging to check out a charm, walk through it, debug, and see how they tick
<rick_h_> jose: nothing in mind in particular
<jose> and that program is going to be like a mentoring program or a show?
<rick_h_> jose: I'm trying something new
<rick_h_> jose: we've got 2 new starters on monday, and want to intro them to juju before we give them bugs in our code to fix
<rick_h_> jose: and looking at and figuring out a charm is part of that intro
<jose> well, if you need a hand with mentoring or something similar let me know, if you want to host hangouts also let me know and I'll get you set up with ubuntuonair
<rick_h_> jose: naw, thanks though
<jose> :)
<Pa^2> Any thoughts on why a Nova-Compute might agent-state-info: 'hook failed: "install"' ?
<Pa^2> Environment: local v.1.18.4.1
<Pa^2> I will put this up if anyone wants to take a look: all-machines.log ( http://paste.ubuntu.com/7712947/ )
<Pa^2> ...now it is Miller time.
<jose> tvansteenburgh: ping
<tvansteenburgh> jose: pong
<jose> tvansteenburgh: hey, I'm having some problems with charm test not running the relation hooks, do you know how I might be able to fix it?
<tvansteenburgh> paste the test?
<tvansteenburgh> i'll have a quick look but fair warning, i'm gonna have to leave in about 5 min
<jose> tvansteenburgh: the test itself?
<jose> sure
<tvansteenburgh> yeah, the test itself
<jose> http://bazaar.launchpad.net/~jose/charms/precise/chamilo/add-tests/view/head:/tests/100-deploy
<tvansteenburgh> so what behavior are you seeing
<jose> tvansteenburgh: there's a db-relation-changed hook on the chamilo charm and it's not being run even though the relation is there
<tvansteenburgh> jose: sorry, i don't see anything obviously long, and i have to leave for an appointment. ping me if you figure it out, otherwise i'll dig deeper when i get home later
<tvansteenburgh> s/long/wrong/
<tvansteenburgh> sorry i couldn't be more help right now, good luck!
<jose> tvansteenburgh: not a problem, just wanted to give the heads up :)
#juju 2014-06-28
<dpb1> marcoceppi: if you are around, I found a bug, and put a fix up for review (only affects self-signed cert path): https://bugs.launchpad.net/charms/+source/apache2/+bug/1335473
<_mup_> Bug #1335473: need pyasn1* python libs during maas deploy <apache2 (Juju Charms Collection):In Progress by davidpbritton> <https://launchpad.net/bugs/1335473>
<sherl0ck> Hello..We just deployed OpenStack environment using the Juju charms. I installed novncproxy and made changes to the nova.conf file. How can I restart the nova-api and other nova services. Tried /etc/init.d/x and service x restart which did not work
<sherl0ck> Should it be restarted using some Juju commands?
#juju 2014-06-29
<schegi> jamespage, got something done for the dedicated osd journals, but still got some questions on the charms code. Are you there?
<sparkiegeek> lazypower: I finally got a chance to work on it and pushed that fix for reviewboard charm (to include test dependencies make target)
<jose> sparkiegeek: mind giving me a link so I can take a look as soon as I've got some time? :)
<schegi> jamespage out there?
<sparkiegeek> jose: https://code.launchpad.net/~adam-collard/charms/precise/reviewboard/trunk/+merge/224041
<jose> cool, thanks
<sparkiegeek> np
<sparkiegeek> sorry, was afk :)
<jose> not a prob
<jaywink> charm documentation says charms should be uploaded to launchpad - what if my charm code is on github, do I still need to push it to launchpad?
<jose> jaywink: afaik, yes, but I'm not 100% sure
<jaywink> ok thanks jose
<jaywink> yay, my first charm submitted for review :P
#juju 2015-06-22
<firl> anyone know a way to specify an AMI for a juju deploy?
<Muntaner> hey guys
<Muntaner> having a problem with a bootstrap
<Muntaner> http://paste.ubuntu.com/11755548/
<Muntaner> what can this be?
<Muntaner> what does it mean, "index file has no data for cloud {etc...}" ?
<Muntaner> nobody?
<bleepbloop> Hey guys, hopefully simple question, I am using juju to deploy openstack. I have it setup with a dual network setup https://insights.ubuntu.com/wp-content/uploads/226b/split.png the problem is one of the services that is deployed in an lxc container should have a public IP but it is getting one on the internal network. How do I setup juju containers to have
<bleepbloop> an ip on the maas public network?
<bleepbloop> I pretty much want the equivalent of a "floating IP" added to this lxc service, since if I actually go in and manually move it to a different network it will make juju unable to access it
<wolverineav> hi
<wolverineav> quick question, I'm adding a relation between quantum-gateway and rabbitmq-server : juju add-relation quantum-gateway rabbitmq-server
<wolverineav> I get the following error: ERROR ambiguous relation: "quantum-gateway rabbitmq-server" could refer to "quantum-gateway:amqp rabbitmq-server:amqp"; "quantum-gateway:amqp-nova rabbitmq-server:amqp"
<wolverineav> does anyone know off the top of their head which one to choose? pretty sure the openstack documents will have that info somewhere deep :)
<wolverineav> oh, context : setting up openstack using juju and maas. using neutron networking instead of nova-networks
<wolverineav> ah, the next step is 'juju add-relation quantum-gateway nova-cloud-controller' so the nova-amqp would be used there.
<wolverineav> went ahead with juju add-relation quantum-gateway:amqp rabbitmq-server:amqp - no errors. hopefully its the right thing. will find out soon
<jackweirdy> Hey folks. I'm trying to get Juju working with an openstack instance, and I have no idea what to set as my `region`. I can't find anything about that in horizon, and juju won't bootstrap without a value set. Anyone got any suggestions? Is there a magical default value that makes it work?
<jackweirdy> (to be clear, that's the value of the `region` property in my environments.yaml file)
<marcoceppi> jackweirdy: it's usually "RegionOne"
<jackweirdy> Thanks marcoceppi!
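For reference, a minimal openstack stanza in environments.yaml from that era looks roughly like this (the endpoint and credentials are placeholders):

    environments:
      my-openstack:
        type: openstack
        auth-url: http://<keystone-host>:5000/v2.0/
        region: RegionOne
        auth-mode: userpass
        tenant-name: demo
        username: myuser
        password: secret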
#juju 2015-06-23
<jackweirdy> Well now I think I'm being bitten by this https://bugs.launchpad.net/juju-core/+bug/1452422 but at least I'm further along ;)
<mup> Bug #1452422: Cannot boostrap from custom image-metadata-url or by specifying metadata-source <sts> <juju-core:In Progress by anastasia-macmood> <https://launchpad.net/bugs/1452422>
<bradm> wallyworld: sigh, this is still busted.
<bradm> wallyworld: ah, wrong channel :)
<stub> marcoceppi: https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/262608 isn't showing up in the review queue and I don't know why
<marcoceppi> stub: check back in 30 mins
<marcoceppi> it should should show up then
<stub> marcoceppi: The MP was created yesterday, so unless you have just poked it...
<marcoceppi> stub: I just poked the review queue
<stub> ta :)
<marcoceppi> stub: it seems to have popped in, we'll have a look in due time!
<danielbe> there was a talk at UDS about developers getting credits for a cloud for testing juju charms. Is there any news concerning this?
<cholcombe> i think the answer to this is no but is it possible to control the kernel version of my instances in juju?  I know the containers can't but maybe with the VM's?
<marcoceppi> danielbe: not yet, it's in progress
<marcoceppi> cholcombe: not really outside of MAAS
<cholcombe> ok i figured
<danielbe> ok, thanks marcoceppi
<lkraider> I have machines stuck in dying state
<lkraider> How do I force to clear them?
<lkraider> running under vagrant (virtualbox) lxc
<lkraider> there is no corresponding machine in /var/lib/juju/containers/
<lkraider> what is the command to clear a machine state?
<marcoceppi> lkraider: is it an error on a unit or a machine provisioning?
<lkraider> I believe in the machine
<lkraider> the network was down at the moment I tried to create a service
<marcoceppi> well, you can `juju retry-provisioning #`
<lkraider>   "11":
<lkraider>     instance-id: pending
<lkraider>     life: dying
<lkraider>     series: trusty
<lkraider> "machine 11 is not in an error state"
<lkraider> the machine basically does not exist, for all I know
<marcoceppi> then, that's that. No there's no way to really "clear" it out afaik
<lkraider> but juju still has some reference to it
<lkraider> but now I cannot provision new machines
<lkraider> juju is stuck
<lkraider> can I force add a new machine?
<lkraider> otherwise it also keeps on pending
<lkraider> can't I edit the juju database or something?
<thumper> lkraider: sorry, late to the story, but which provider?
<lkraider> @thumper lxc local
<thumper> lkraider: and host?
<thumper> also, juju version
<lkraider> mac -> vagrant trusty64 -> juju 1.23.3-trusty-amd64
<thumper> lkraider: I think you can do 'juju remove-machine --force'
<thumper> lkraider: although you might first have to remove the service or unit that is deployed on that machine, I don't recall
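i.e., roughly this sequence (the service name here is whatever was deployed to the stuck machine):

    juju destroy-service my-service      # clear any units referencing it
    juju remove-machine 11 --force       # then force the dying machine out
    juju status                          # machine 11 should drop from the list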
<lkraider> I destroyed the service already
<lkraider> @thumper I tried remove-machine --force --verbose --show-log
<lkraider> but it still shows in juju status
<thumper> hmm...
<thumper> was there an error from remove-machine?
<lkraider> https://gist.github.com/anonymous/28409911b3b95bf8fdad
<lkraider> not
<lkraider> on the output
<lkraider> can I connect to the juju mongodb ?
<lkraider> @thumper I think my bug is this one: https://bugs.launchpad.net/juju-core/+bug/1233457
<mup> Bug #1233457: service with no units stuck in lifecycle dying  <cts-cloud-review> <destroy-service> <state-server> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):Won't Fix> <https://launchpad.net/bugs/1233457>
<lkraider> @thumper sorry, that is for a service, not a machine
<lkraider> this bug has some scripts that seem useful: https://bugs.launchpad.net/juju-core/+bug/1089291
<mup> Bug #1089291: destroy-machine --force <canonical-webops> <destroy-machine> <iso-testing> <theme-oil> <juju-core:Fix Released by fwereade> <juju-core 1.16:Fix Released by fwereade> <juju-core (Ubuntu):Fix Released> <juju-core (Ubuntu Saucy):Won't Fix> <https://launchpad.net/bugs/1089291>
<lkraider> but they all fail on the mongo auth
<lkraider> pymongo.errors.OperationFailure: command SON([('authenticate', 1), ('user', u'admin'), ('nonce', u'15c36a2f77cdc42d'), ('key', u'9c3a5ccb7ba361b481b5ffd2ffa86a07')]) failed: auth fails
<lkraider> @hazmat it's about your mhotfix-machine.py script
<ddellav> lazyPower: this is my new nick, i created a new one for this channel
<lazyPower> ddellav: awesome :) Glad to see you're already idling here
<ddellav> unifying usernames across everything helps
<ddellav> of course :)
<lkraider> is the only solution to destroy the environment?
<lkraider> what happens if I restart the host without destroying the environment?
<lkraider> what happens if I restart the local provider host without destroying the environment?
<lkraider> (reboot)
<lkraider> Ok, seems restarting the local provisioner host breaks the whole environment
#juju 2015-06-24
<bleepbloop> Any reason a charm would show an error in juju-gui but not on the command line? I reboot the juju gui machine and it goes away for a while and then comes back
<rick_h_> bleepbloop: what error?
<bleepbloop> rick_h_: 1 hook failed: "shared-db-relation-changed" for the mysql/0 charm, however saying juju resolved --retry mysql/0
<bleepbloop> gives back ERROR unit "mysql/0" is not in an error state
<rick_h_> bleepbloop: hmm, no. The GUI gets error statuses by asking juju about things. If you reload the juju-gui (just reload the browser window) it'll reask juju.
<rick_h_> bleepbloop: if it keeps coming back then it would seem Juju is telling one thing to the GUI and another at the CLI.
<bleepbloop> rick_h_: I just completely closed the tab and reopened it in my browser and tried incognito mode, and both still report the same error in the gui, so it seems to be reporting two different things. is there a way to debug and see what is being reported to the gui?
<rick_h_> bleepbloop: what browser are you using?
<bleepbloop> rick_h_: chrome, tried safari but the page wouldn't load
<rick_h_> bleepbloop: so in chrome, if you open the developer tools (ctrl-shift-j in ubuntu)
<rick_h_> bleepbloop: and go to the network tab there's a filter icon that allows you to filter all traffic by a type. You're looking for "WebSockets"
<rick_h_> bleepbloop: once you have that selected, reload the page with ctrl-r and you should see a single item there. That lets you investigate the data juju is sending the GUI
<bleepbloop> rick_h_: okay I see where its sending "Data":{"hook":"shared-db-relation-changed","relation-id":54,"remote-unit":"nova-compute-lxc/1"}},"AgentStatus":{"Err":null,"Current":"idle","Message":"","Since":"2015-06-24T13:06:53Z","Version":"","Data":{}}}]
<rick_h_> bleepbloop: ok, so that looks like no error there
<rick_h_> bleepbloop: so the thing is, if it comes up ok, to watch that, because it's a continuous live-updating channel from juju to the GUI
<rick_h_> bleepbloop: and see if/when something comes in from Juju that makes the GUI think an error is there
<bleepbloop> rick_h_: Sorry that was the data on the data element on "WorkloadStatus":{"Err":null,"Current":"error","Message":"hook failed: \"shared-db-relation-changed\""
<rick_h_> bleepbloop: ah yea, so there Juju is telling the GUI that the hook failed
<bleepbloop> rick_h_: might removing the relation that is giving the error and re-adding it help?
<rick_h_> bleepbloop: possibly
<bleepbloop> rick_h_: okay a couple points of interest in the mysql charm log, "juju-log shared-db:66: This charm doesn't know how to handle 'shared-db-relation-joined'.", and ""Access denied for user 'root'@'localhost' (using password: YES)")"
<rick_h_> bleepbloop: hmm, yea not sure on the mysql charm. I've not used it myself.
<bleepbloop> rick_h_: no problem, thanks for your help anyway, it seems that the mysql charm has managed to lose its password; the password in the mysql.passwd file doesn't work
<rick_h_> bleepbloop: :(
<bleepbloop> rick_h_: Thanks anyway for helping, seems to be a bug with juju honestly, just not sure whats causing it and probably couldn't provide enough details to be useful on this one
<dweaver> I'm having a problem with upgrade from 1.23.3 to 1.24.0 - I triggered the upgrade and now the jujud process stops listening on port 17070 and a message in the log says  "fatal "api": must restart: an agent upgrade is available".  Any ideas how I can recover from this?
<dweaver> I can get it to listen for a very short time by restarting the service, but it fails after a few seconds or so with the same message each time.
<mwak_> o/
<bleepbloop> When using high availability is the vip just a random IP on your network or is there a specific thing it should be set to?
<bleepbloop> specifically with the hacluster gem
<bleepbloop> charm*
<lazyPower> bleepbloop: as i understand it, the VIP interface should be set to your management interface
<bleepbloop> lazyPower: so an IP on the management interface of my choosing?
<lazyPower> I do believe so
<lazyPower> I'm 60% sure thats correct
<lazyPower> if that helps :)
<bleepbloop> lazyPower: lol I'll give it a go since its more than I know about it
<Bialogs> Is there any way to have two juju environments on one host
<marcoceppi> Bialogs: very soonish
<lazyPower> Bialogs: Multi Environment state server is coming soon, there's been some buzz about that in our recent office hours
<lazyPower> marcoceppi: was that last weeks we touched on it briefly while thumper was there?
<marcoceppi> lazyPower: yes
<Bialogs> marcoceppi: lazyPower: thanks for the info
<aisrael> Odd_Bloke: ping
<Odd_Bloke> aisrael: Pong.
<aisrael> Odd_Bloke: the three ubuntu-repository-charm mps you have -- should they be squashed and tested together?
<Odd_Bloke> aisrael: The plan was to land the charm helper update first, then the other branches; but we want them all in so I think you can test them all together.
<aisrael> Odd_Bloke: ok, thanks. Did you see rcj's comment on the charm-helper update? Any thoughts about that?
<Odd_Bloke> aisrael: It's fixed by the handle_mounted_ephemeral_disk branch.
<aisrael> Odd_Bloke: ok, cool. I hoped that was the case. Thanks!
<Odd_Bloke> :)
<Bialogs> Is there any way to specify machines in a bundle? Like using something like to: 5 in the bundles.yaml?
<marcoceppi> Bialogs: there is, let me find the docs
<jackweirdy> Hey, I'm sat looking at my openstack web interface trying to configure my environments.yaml for juju. I keep getting weird error messages which have no suggestions on what to do. currently staring at
<jackweirdy> ERROR failed to bootstrap environment: index file has no data for cloud {RegionOne http://192.168.5.92:5000/v2.0/} not found
<jackweirdy> Besides it not making grammatical sense I don't know what/where the index file is and how I can put "data for cloud" in there
<jackweirdy> And weirdly I get 2 different errors if I try to use keypair auth vs userpass auth
<marcoceppi> jackweirdy: you'll need to follow this guide
<marcoceppi> jackweirdy: https://jujucharms.com/docs/stable/howto-privatecloud
<jackweirdy> Thanks marcoceppi :)
<jackweirdy> The tools mirror seems to be dead :/ https://streams.canonical.com/tools
<jackweirdy> (or the docs outdated)
<marcoceppi> jackweirdy: the first half of that doc is a lot of pre knowledge, skip down towards the bottom
<marcoceppi> also, https://streams.canonical.com/juju/tools/
<jackweirdy> Ah, seems to be /juju/tools
<jackweirdy> thanks. Is there a way I can file a PR against the docs?
<marcoceppi> jackweirdy: https://github.com/juju/docs
<jackweirdy> Awesome, thanks :)
<marcoceppi> jackweirdy: https://streams.canonical.com/juju/tools/releases/ that's the directory you probably want, it took me far too long to find so I figured I would help spare the pain
<jackweirdy> Thanks :)
<marcoceppi> jackweirdy: someone from our docs team will review your merge soon, thanks for the fix!
<jackweirdy> No worries :)
<Bialogs> marcoceppi: ping
<marcoceppi> Bialogs: pong
<Bialogs> marcoceppi: Did you get around to finding those docs?
<marcoceppi> Bialogs: you'll have to refresh my memory on which docs
<aisrael> Having a brainfart when it comes to amulet. Where does juju_agent.py live and what's responsible for putting it there?
<Bialogs> marcoceppi: Specifying the machines that juju deploys to in a bundle
<marcoceppi> aisrael: I think it gets dumped in /tmp, and .setup() does it iirc
<marcoceppi> Bialogs: ah, one moment
<aisrael> marcoceppi: ta, thx
<marcoceppi> Bialogs: https://jujucharms.com/docs/1.18/charms-bundles#bundle-placement-directives
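A placement sketch in the v3 (juju-deployer) bundle format those docs describe; the charm names are illustrative, and note that v3 directives place units relative to other services (or machine 0) rather than arbitrary machine numbers:

    # bundles.yaml (sketch, v3/juju-deployer format)
    my-stack:
      services:
        nova-compute:
          charm: cs:trusty/nova-compute
          num_units: 2
        mysql:
          charm: cs:trusty/mysql
          num_units: 1
          to: lxc:nova-compute=0   # LXC container on nova-compute's first unit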
<aisrael> marcoceppi: any idea why a setup() would timeout when the deployment stands up?
<marcoceppi> aisrael: deployer freaking? hooks not ready
<marcoceppi> which reminds me, we should update amulet to use extended status for 1.24 and greater
<aisrael> marcoceppi: I'm kind of wondering if I'm hitting a bug due to 1.24 and extended status
<marcoceppi> aisrael: you shouldn't
<marcoceppi> extended status is 1.24 compat
<marcoceppi> extended status is backwards compat
<aisrael> http://pastebin.ubuntu.com/11770001/
<marcoceppi> agent-state still remains in juju status output
<marcoceppi> what's amulet doing?
<aisrael> amulet just hangs on d.sentry.wait()
<marcoceppi> interesting
<aisrael> I suspect it's something this test or the charm is doing, which may be causing amulet some trouble.
<Bialogs> marcoceppi: Thanks, that clarifies a lot but I still don't see one type of example...how to specify multiple machines when deploying two services. Would the syntax look like "to: 1 <return> to: 2"?
<marcoceppi> Bialogs: two services, or two units?
<marcoceppi> also, why force which unit one goes to if it's just two services?
<marcoceppi> juju will just create two machines for those services if they each only have one unit
<Bialogs> marcoceppi: Lets say I'm deploying mysql with the bundle and I need two units of mysql. One unit on machine 1, the other on machine 2
<Bialogs> marcoceppi: All of my machines are not the same and sometimes Juju selects incorrectly from what I need
<marcoceppi> Bialogs: could you expand more on your setup?
<Bialogs> marcoceppi: sure... I'm trying to deploy the kubernetes bundle and the documentation says to deploy to machines, I have specific machines I do not want docker running on because they are running openstack services
<marcoceppi> Bialogs: are you using maas?
<Bialogs> marcoceppi: yes
<marcoceppi> Bialogs: what you'll want to do is tag the machines in maas that you want for each service
<marcoceppi> then you can use constraints instead of explicit placement
<Bialogs> marcoceppi: would you mind explaining?
<marcoceppi> by doing `constraints: [tags=tag-in-maas]`
<marcoceppi> you want to set a constraint on the service, and placement won't do that for you since the machines maas gives juju are arbitrary
<marcoceppi> you want to make sure services always either end up in the same pool of machines, or the same exact machine
<marcoceppi> you can tag these machines in juju
<marcoceppi> err
<marcoceppi> in maas
<marcoceppi> either add one tag to all machines you want juju to use for kubernetes
<marcoceppi> ie, set a kubes tag on them, or tag each service individually "use this machine for docker, this for etcd, etc"
<marcoceppi> then you can set the bundle to use those constraints for the services and MAAS will only give machines that match that constraint
<Bialogs> Amazing! Thank you so much for this information
<marcoceppi> `juju help constraints` on the command line for more information, and here is constraints in bundles: https://jujucharms.com/docs/1.18/charms-bundles#service-constraints-in-a-bundle
<marcoceppi> Bialogs: no worries! Hopefully this will help streamline what you're trying to do
<Bialogs> I wish I knew what I was trying to do ;)
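Concretely, once the nodes carry a tag in MAAS (say `kubes`, applied via the MAAS UI or CLI), the juju side is just a constraint (the charm name is illustrative):

    # MAAS will only hand out nodes tagged 'kubes' for this service.
    juju deploy --constraints "tags=kubes" kubernetes-master
    # The same constraint can live in a bundle's service stanza:
    #   kubernetes-master:
    #     constraints: tags=kubes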
<lazyPower> Bialogs: hey there
<lazyPower> Bialogs: i'm one of the Kubernetes charm developers o/
<Bialogs> lazyPower: Oh hey! - Saw some strange issue earlier today where Pod object wasn't defined but we have been having all sorts of issues today and I'm writing that off as a fluke. Hope you won't mind if I ping you if I run into anything more...
<lazyPower> Bialogs: certainly. Can I also make a recommendation?
<Bialogs> Yeah go ahead
<lazyPower> we haven't backported an update to the charms/bundle in a bit, we've got a lot of active work tracking kubes - and our charms were accepted to the kubernetes repository (i have an outstanding todo to update the store copy of the charms)
<lazyPower> if you clone this repository: https://github.com/GoogleCloudPlatform/kubernetes
<lazyPower> and follow the docs here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/juju.md#launch-kubernetes-cluster
<lazyPower> you'll be running our current reference implementation of k8s
<Bialogs> I stumbled across this documentation earlier today and that probably explains why I got my earlier issue as I had deployed the implementation in the charm store
<lazyPower> Bialogs: If you have any issues with any of it feel free to ping me and let me know :) I'll be in and out over the next couple days due to Conference + travel
#juju 2015-06-25
<pitti> hello all
<pitti> I read https://juju-docs.readthedocs.org/en/latest/expose-services.html and https://jujucharms.com/docs/stable/authors-intro and two existing charms, but I can't figure this out:
<pitti> how/where do I declare the ports that "juju expose" should open?
<pitti> with my current charm, juju expose doesn't do anything
<pitti> but nowhere it is documented where/how to tell it what to open
<pitti> also, how is that actually implemented? my rabbitmq-server/0 has "open-ports: 5672/tcp" in juju status, but that's in none of its nova secgroups
<pitti> err, sorry,  I didn't expose it, it's in the juju-<env>-<machinenumber> rules as expected
<pitti> so that still leaves the question how/where to declare that?
<pitti> ah, I found it -- https://jujucharms.com/docs/stable/authors-hook-environment#open-port
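So the charm declares its ports from a hook with the open-port tool, and exposing then opens them at the provider level; a minimal sketch (the port and hook chosen here are illustrative):

    #!/bin/bash
    # hooks/start (sketch): tell juju which port the workload listens on.
    # Nothing opens at the provider until the service is exposed.
    set -e
    open-port 5672/tcp

After that, `juju expose <service>` is what actually punches the hole in the provider's firewall/security group.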
<gnuoy> coreycb, I'll take a look at   https://pastebin.canonical.com/133930/ after the call
<coreycb> gnuoy, thanks
<Syed_A> Hello Folks, Can anyone point out to me how quantum-gateway charm interacts with OpenStack Keystone. I don't see any relation between two.
<Syed_A> In my openstack environment, instances are failing to get metadata from nova-api-metadata. And apparently the reason is the metadata agent is trying to contact keystone on localhost instead of AUTH_URL
<beisner> gnuoy, still around?
<gnuoy> I am
<beisner> gnuoy, i know you're working on some amulet tests;  looking for your input on a set of MPs -- 2 reasons -- avoiding potential collisions, and getting your feedback.   fyi, coreycb will be doing a review this week on those, and i'm sure we'd both take your input as well.  ;-)  please & thanks
<beisner> https://code.launchpad.net/~1chb1n/charm-helpers/amulet-ceph-cinder-updates/+merge/262013
<gnuoy> sure
<beisner> gnuoy, the related charm MPs are linked on the c-h MP
<beisner> gnuoy, appreciate it!
<beisner> coreycb, ah heck.  your mp amulet test failed re: the inflight rename of quantum-gateway to neutron-gateway  https://code.launchpad.net/~corey.bryant/charms/trusty/neutron-gateway/proxy-none/+merge/262892    i'll update those tests shortly.
<coreycb> beisner, ok
<Bialogs> lazyPower: Hey question about the Kubernetes bundle: Is there a reason why the master/etcd charms deploy in the root container instead of lxc? Is there a problem with deploying lxc with those charms?
<dpm> hi all, I'd like to jujufy a personal Django project. What's the best way to get started? I looked at this a while ago, and there still seem to be several different django charms on the store. Is python-django (trusty) the one to use? And why is the django bundle still on precise?
<dweaver> should I expect to be able to upgrade juju agents using upgrade-juju from 1.23.3 to 1.24.0?
<lazyPower> Bialogs: ETCD has to be reachable from the other nodes, LXC networking is still a bit hinky
<lazyPower> Bialogs: however its safe to co-locate etcd on bootstrap unless you require additional resources/nodes
<Bialogs> lazyPower: what about additional resources would cause a problem for putting etcd on the bootstrap machine
<lazyPower> Bialogs: it doesn't necessarily make sense to deploy a cluster of 3 etcd machines on the same node
<lazyPower> defeats the HA model
<Bialogs> lazyPower: Thanks for the info
<lazyPower> np Bialogs
<beisner> gnuoy, coreycb - holding back the cinder-ceph amulet test mp, still some subordinate foo to resolve.
<coreycb> beisner, ok
<beisner> coreycb, the others are still re-testing, anticipate a-ok on those.
<Bialogs> lazyPower: The bundle deployed correctly except for one thing... its using the fqdn instead of the ip address and for whatever reason the machine that has the master node cannot be looked up by name
<lazyPower> Bialogs: maas provider?
<Bialogs> error: couldn't read version from server: Get http://ajunta-pall.maas:8080/api: dial tcp: lookup ajunta-pall.maas: no such host
<Bialogs> lazyPower: Also it created a new machine when there was one available in maas and it got a no available machine error. I had to manually delete that machine and service
<Bialogs> lazyPower: although this error is occurring repeatedly i killed it & i continued with the instructions...created a pod and ran it
<Bialogs> It still worked
<beisner> coreycb, neutron-gateway tests update for charm name change @ https://code.launchpad.net/~1chb1n/charms/trusty/neutron-gateway/next-amulet-update-rename/+merge/263006
<coreycb> beisner, I just had one comment
<beisner> coreycb, i wondered about that, but it looks like the oldest supported release (precise-icehouse) has python-neutronclient in the cloud archive, no?
<beisner> coreycb, looking at http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/precise-updates/icehouse/main/binary-amd64/Packages
<coreycb> beisner, ah ok so that must be from pre-icehouse
<coreycb> beisner, lgtm then
<beisner> coreycb, yep i think we're finally clear of that  \o/
<beisner> coreycb, thanks!
<beisner> coreycb, ok so the others re-tested a-ok and are more officially on to you.  tia!
<coreycb> beisner, ok cool. I'll probably have to look at those tomorrow.
<beisner> coreycb, totally not something you wanna start near eod
<coreycb> beisner, lol
<Odd_Bloke> aisrael: Thanks for looking at the ubuntu-repository-cache stuff!
<aisrael> Odd_Bloke: Happy to help!
<Odd_Bloke> aisrael: I think rcj knows something about the testing problems we've seen (and he's in the same room as me, as we're sprinting), so I'll get back to you on that.
<aisrael> Odd_Bloke: Excellent, thanks! I'd love to track that down and make sure it's fixed, if it's an amulet issue.
<Odd_Bloke> aisrael: Our response might just be "yeah, amulet is broken". ;)
<aisrael> Odd_Bloke: A perfectly reasonable response. :D If so, the good news is that there's a good example of how to recreate the failure.
<Odd_Bloke> aisrael: We need to test single units with sync-on-start; it's only really optional for multi-unit deployments.
<aisrael> Odd_Bloke: Understood.
#juju 2015-06-26
<jogarret6204> hi.  where can i change settings in .juju-persistent-config files?  for example debug settings are true, and I want them to be false for openstack components
<jogarret6204> is this on the state server somewhere?
<jogarret6204> i see them in the input yaml file for juju-deployer, and I can change them there.  But this is post-deployment with working vms.
<lazyPower> jogarret6204: i'm not sure I understand. the .juju-persistent-config files - can you break this down a bit more for me so I can get you an answer?
<beisner> jamespage, can you have a look at bug 1468511 and the linked merge proposal?
<mup> Bug #1468511: leading spaces in ceph.conf template cause functional test failure (configparser) <amulet> <openstack> <uosci> <Charm Helpers:New> <cinder-ceph (Ubuntu):New> <cinder (Juju Charms Collection):New> <https://launchpad.net/bugs/1468511>
<tvansteenburgh> jogarret6204: `juju set myservice key=val`
<jogarret6204> @lazypower - thanks.  the initial yaml files used by juju-deployer had debug : true.  I want to now change to debug:  false
<tvansteenburgh> juju set myservice debug=false
<jogarret6204> @tvansteenburgh;  thanks - will give that a try
<lazyPower> heyo tvansteenburgh
<tvansteenburgh> yo
<lazyPower> are you running cassandra tests atm?
<tvansteenburgh> yes
<lazyPower> ok, i just terminated all of them (troll)
<tvansteenburgh> omp
<lazyPower> not really - just in queue curious
<lazyPower> i had one substrate kick my test suite, the rest were cassandra, i had a big whaaaa face after the fact
<tvansteenburgh> yeah, i'm in revq and running tests for the stuff i'm reviewing
<cholcombe> lazyPower: is there something special I need to do to get the 'merge proposal' button in launchpad?
<lazyPower> cholcombe: the project you wish to merge into has to exist to get a MP button.
<cholcombe> hmm ok
<lazyPower> cholcombe: what you're looking for, I imagine, is to create a bug so you can assign a review to your branch :)
<cholcombe> i'm just trying to merge into marco ceppi's gluster branch
<cholcombe> oh i see
<lazyPower> cholcombe: https://jujucharms.com/docs/stable/authors-charm-store#submitting-a-new-charm
<marcoceppi> cholcombe: not really worth merging into mine, just submit as a new charm
<marcoceppi> cholcombe: because my charm isn't promulgated
<cholcombe> oh?
<cholcombe> alright then
<marcoceppi> cholcombe: also, it's so badass that it's written in rust
<cholcombe> lol :D
<cholcombe> thanks man
<lazyPower> marcoceppi: oh man, i dont see the whole "open a bug" workflow in our submit a charm docs :|
<marcoceppi> lazyPower: it's there.
<cholcombe> ok so i have my charm pushed to lp
<cholcombe> next is creating the bug ?
<marcoceppi> https://jujucharms.com/docs/stable/authors-charm-store#recommended-charms
<lazyPower> marcoceppi: yeah just found it
<cholcombe> ah ok cool
<marcoceppi> lazyPower: this doc sucks
<marcoceppi> new review queue for life
<lazyPower> its kind of hiding in there, i was looking for a big blinking sign
<lazyPower> alas, we have no webm's in here
<lazyPower> marcoceppi: looks like i'm primarily responsible for this doc too... I wrote this Sep 24 2014
<lazyPower> https://github.com/juju/docs/commit/a2dc2d20399b3c0c9e3cb6e602635d9929e9b8ab
<lazyPower> #blame me
<marcoceppi> lazyPower: I always do
<marcoceppi> lazyPower: hopefully July will give us some time to fix up the review queue
<jogarret6204> tvansteenburgh:  thanks.  juju set worked great.  Is there any easy way to examine all juju parameters that are available per service?
<tvansteenburgh> jogarret6204: `juju get myservice` will list them
<jogarret6204> doh!  thanks again.  was trying juju list, juju show..
<tvansteenburgh> jogarret6204: `juju help commands` will be useful as well
<aisrael> cholcombe: marcoceppi: a charm written in rust? That's awesome.
<cholcombe> :)
<cholcombe> i'm writing the bug report/ review submission now
<cholcombe> aisrael: i added you to the bug report :)
<aisrael> woo!
<cholcombe> aisrael: oh do i link people to the branch or the bug report?  I linked to the branch
<aisrael> cholcombe: The bug report would be good
<cholcombe> ok
<cholcombe> alright charmers team is on the bug
<aisrael> Perfect, thanks!
#juju 2015-06-27
<bbaqar> Does anyone know how to run metadata in openstack while deploying from charms?
#juju 2015-06-28
<jackweirdy> Hey folks, the docs say "Running Juju with virtualised containers" is coming soon - anyone know how soon that will be? https://jujucharms.com/docs/stable/config-local
<jackweirdy> Also, does anyone know if I can use the juju cli from my host machine when I'm doing local stuff with vagrant?
#juju 2016-06-27
<magicaltrout> boom, dynamically scaling DC/OS masters
<magicaltrout> nearly there
<kjackal> magicaltrout: Nice!
<magicaltrout> yeah only took 3 evenings to figure it out :P
<magicaltrout> what a geek
<magicaltrout> just converting the main tarball to a charm resource now
<kjackal> Cool!
<magicaltrout> although the docs suck...
<kjackal> Let us know when/what to review!
<magicaltrout> do I do
<magicaltrout> hookenv.resource_get ?
<kjackal> Oh you know what documentation is like...
<magicaltrout> which should give me a path according to the docs?
<magicaltrout> oh I found an example by mbruzek
<magicaltrout> should do the job
<marcoceppi> magicaltrout: there's also an example here: https://github.com/marcoceppi/layer-charmsvg
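A minimal sketch of the hookenv.resource_get usage being asked about, assuming charmhelpers (the resource name 'software' matches what magicaltrout attaches later):

    from charmhelpers.core import hookenv

    # resource_get returns a unit-local path to the attached resource,
    # or False if it could not be fetched
    path = hookenv.resource_get('software')
    if not path:
        hookenv.status_set('blocked', 'waiting on the software resource')
    else:
        hookenv.log('software resource available at %s' % path)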
<kjackal> Hello marcoceppi!
<marcoceppi> o/
<kjackal> Do you have any news for me marcoceppi?
<marcoceppi> no
<kjackal> :) cool, let me know if there is anything I can do from my side
<kjackal> Thank you marcoceppi!
<stub> cory_fu, bcsaller_ : What's happening with https://github.com/juju-solutions/charms.reactive/pull/66 ? I have some actions to write for an internal charm and need to know if I should be waiting, forking or going in a different direction.
<kjackal> Good morning kwmonroe. I am looking on the issue you reported on reading/writing topics. How do you reproduce it?
<rambit0> hello
<rambit0> I get the message missing relation messaging
<rambit0> on neutron-gateway
<rambit0> do you know where I should search
<rambit0> :D
<lazyPower> stub ooo an @action decorator
<lazyPower> magicaltrout - are you still having issues with resources (per mailing list post)?
<lazyPower> magicaltrout - i may be doing this incorrectly, but what's worked for me w/ etcd, is to push the charm code, then `charm attach` the resources, *then* publish like so:  charm publish ~lazypower/etcd --resource etcd-0 --resource etcdctl-0 (these are the resource identifiers returned from charm attach).
<lazyPower> also pardon my omission of revision on the charm entity... it should read ~lazypower/etcd-2  for example.
<magicaltrout> thanks lazyPower , the error on the list looks non fatal
<magicaltrout> just ugly
<lazyPower> right, i think the messaging could be better
<lazyPower> but i didnt want you blocked since there is a known good workflow there
<magicaltrout> well I've just got out of a meeting to find a node saying dc/os installed
<magicaltrout> so I assume its not lying
<magicaltrout> just gonna check
<magicaltrout> lets spin up 2 more and have a party
<magicaltrout> or a quorum at least
<lazyPower> woo
<lazyPower> quorum party
<magicaltrout> boo i lied
<magicaltrout> I hadn't republished
 * magicaltrout tries lazyPower's method
<magicaltrout> okay
<magicaltrout> not a clue if its broken or not
<magicaltrout> this is a bit shit
<magicaltrout> charm attach cs:~spicule/wily/dcos-master-1 software=/home/bugg/Projects/dcos-master/bootstrap.tar.gz
<magicaltrout> lazyPower: does that look right?
<lazyPower> yep, if thats the name of the resource
<lazyPower> it should yield a resource id like  software-0  when its successfully completed
<magicaltrout> I get the same squid error
<lazyPower> rick_h_ urulama any idea why we're getting squid errors on attaching resources? I've run into this before when i dont specify a charm revision... but this is clearly not the case per magicaltrout's paste above.
<lazyPower> the squid responses both vex and frustrate me magicaltrout, i feel your pain here.
<magicaltrout> aye bit of a pain when I'm just trying to clear DC/OS stuff off my desk and get it into the store for people to hack on ;)
<magicaltrout> doesn't help when I can't understand what its trying to tell me :)
<urulama> lazyPower: jrwren was looking into that. the default settings disconnect long uploads
<lazyPower> ah right there was a post to the list about this last week
<lazyPower> magicaltrout - whats the size of your payload?
<magicaltrout> 478M lazyPower
<magicaltrout> from a data centre not over my pipes
<stub> lazyPower, urulama, magicaltrout : it also looks very similar to the bug report on the mailing list about large resources failing.
<urulama-sprint> stub: yes, that's the same thing
<magicaltrout> merlijn's?
<stub> yes
<magicaltrout> k
<magicaltrout> I'll revert to wget from dropbox then :)
* lazyPower changed the topic of #juju to: Welcome to Juju! || Docs: http://jujucharms.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Youtube: https://www.youtube.com/c/jujucharms || Juju 2.0 beta-10 release notes: https://jujucharms.com/docs/devel/temp-release-notes
<cory_fu> stub: I commented on https://github.com/juju-solutions/charms.reactive/pull/66 with a recap of the recent discussion I had with bcsaller_ about that.  When he comes on, I'd like him to verify that I didn't forget anything.  Let me know if that wall of text is unclear.  :p
<cory_fu> I'd also like anyone else's input, because how actions are handled will affect everyone.
<cory_fu> I know marcoceppi has thought about this as well
<cory_fu> stub: Also, apologies for it taking so long to move on those PRs.  We've just been pulled in other directions.
<magicaltrout> I like the idea of an action annotation
<cory_fu> marcoceppi: I know the new RQ is very charm focused, but PRs like that one would really benefit from me being able to kick it up to community as a whole to make it clear that, "hey, I could use help with this"
<marcoceppi> cory_fu: it's super charm focused atm, but we could expand that
<marcoceppi> cory_fu: or, just mail the list
<kwmonroe> kjackal - re: kafka write topic failure.. i believe it's only repro-able when the broker doesn't know its own hostname.  in the older charm, we had to set advertised.host.name so kafka knew where to listen.
<marcoceppi> mm, 99% test coverage, it's been a while
<kjackal> kwmonroe: ok, so I got the kafka charm deployed on EC2 and actions were working fine
<kjackal> kwmonroe: also we were not opening the ports of kafka so that might have been an issue
<stub> cory_fu: Ta, and rebutted ;)
<stub> I wonder if the reactive phases to be run should be declared by the bootstrap script that kicks off reactive.main(), e.g. main(['setup', 'hooks', 'main']). We could extend the framework in all sorts of ways then without messing with the core.
<marcoceppi> cory_fu: is config.changed.<option> invoked on first hook run?
<cory_fu> marcoceppi: Yes
<marcoceppi> \o/
<marcoceppi> cory_fu: does config.changed reactive states mess with config().previous ?
<cory_fu> marcoceppi: The config.changed states are set based on hookenv.config().previous(), but they do not change the result of that
<marcoceppi> cory_fu: one last question
<marcoceppi> I'm rewriting an @hook(config-changed) handler; one of the config options is the version of the software
<marcoceppi> the rest are literally for the application
<marcoceppi> so I basically want to run when any configuration changes, so long as it's not just version (though version may be among the changes)
<marcoceppi> I'm trying to avoid needless spamming of writing out configuration files
<cory_fu> Hrm.  If only the version changes, are you not going to need to re-write the config anyway as part of the update / reinstall?
<marcoceppi> cory_fu: a seperate logic handles that
<marcoceppi> cory_fu: and I want to avoid a race where config.changed is processed before everything else
<cory_fu> What do you mean, before everything else?
<marcoceppi> cory_fu: well, I have two handlers now, config.changed.version and config.changed
<marcoceppi> if config.changed gets processed before config.changed.version then it's wasteful
<stub> Probably simpler to use an if statement and the data_changed() helper
<cory_fu> Ah, yeah.  I would say stub is right about the if statement, but I don't think you actually need data_changed
<marcoceppi> I just want to know if more than version was changed, is there a way to list all changed keys?
<cory_fu> marcoceppi: Not sure why you would want to do that?
<stub> [state.split('.', 2)[-1] for state in bus.get_states() if state.startswith('config.changed.')]
<cory_fu> marcoceppi: Let me write up how I would structure this.  One second
<marcoceppi> stub: that is certainly one way
<marcoceppi> cory_fu: I'll put what I'm doing in a branch if you want to optimize the rest of my crap
<cory_fu> marcoceppi: http://pastebin.ubuntu.com/17975757/
<marcoceppi> cory_fu: I want to avoid rewriting config if version is the /only/ option changed though
<marcoceppi> since it'll be written after install
<stub> marcoceppi: Only way I could find - I needed similar for the apt layer.
<cory_fu> Actually, that pastebin isn't ideal.
<cory_fu> marcoceppi: What about this?  http://pastebin.ubuntu.com/17975982/
<arosales> lazyPower: do you know where the code location for mariadb is at:
<arosales> https://jujucharms.com/mariadb/
<marcoceppi> cory_fu: I was about to do that
<marcoceppi> cory_fu: but I argued why I shouldn't
<cory_fu> If the version changes, even if other config values have changed, you don't need to worry about rewriting the config
<marcoceppi> but it doesn't matter
<marcoceppi> cory_fu: yeah
<cory_fu> I don't see the problem
<cory_fu> Only one of those two will ever run
<marcoceppi> cory_fu: yeah
<marcoceppi> cory_fu: http://paste.ubuntu.com/17976046/
<arosales> lazyPower: looks like you may have touched the charm last.  I would specifically like to know where the upstream code lives. I didn't find a branch at LP where the bug link is, and the project link just goes to mariadb.org
<cory_fu> It basically says what you asked for.  Run the first when the version changed.  Run the second when any config has changed except the version
<marcoceppi> cory_fu: earlier I hadn't refactored the mongodb.ready, but since I did I never really reconsidered this as a solution
<cory_fu> Ah
<marcoceppi> cory_fu thanks stub I will def be using that snippet later
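A sketch of the two-handler arrangement cory_fu is describing, since the pastebin links above are ephemeral (handler names and bodies are illustrative):

    from charms.reactive import when, when_not

    @when('config.changed.version')
    def upgrade():
        # version changed (possibly along with other options); the
        # upgrade/reinstall path rewrites the config anyway
        ...

    @when('config.changed')
    @when_not('config.changed.version')
    def write_config():
        # something other than version changed; just rewrite the config
        ...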
<cory_fu> stub: Your point about charms needing to react to changes made by actions is a good one.  So my only outstanding item on that PR is the CLI support.  Perhaps I can add it as part of the merge, though
<cory_fu> Unless you want to take a stab
<stub> cory_fu: I don't know where to begin on that bit
<cory_fu> Ok, no worries
<cory_fu> stub: I do think, though, that the goal for reactive actions should be to have charm-build populate actions from the templates by default.  I think marcoceppi is on board with me on that, yes?
<stub> cory_fu: If charm build discovers and creates hooks for defined actions, can you think of a way for the user to specify that an action is 'traditional' or 'reactive'?
<cory_fu> stub: If they create the action file manually, charm-build will not replace it with the reactive template
<stub> (although from what I can tell, charm build doesn't discover anything and only automatically creates hooks for interfaces)
<stub> ok. I guess that is a perfectly reasonable UI.
<cory_fu> Right.  charm-build is missing that functionality, but I want to add it
<stub> ok. I had assumed we would be stuck declaring them in layer.yaml.
<marcoceppi> cory_fu: +1 stub my thought was, interrogate the actions.yaml file, and if there's no actions/<action> file, create from boilerplate
<stub> marcoceppi: oh, yeah. discovery of actions is easy that way.
<lazyPower> arosales - i do, its still focused on the ~charmers branch in launchpad. http://launchpad.net/~charmers/charms/trusty/mariadb/trunk
<lazyPower> arosales - looks like i need to bump the metadata on the charm, this was one of the high priority items that squeaked through a couple weeks back...
<arosales> lazyPower: ok thanks
<arosales> lazyPower: if you are touching that charm may be good to put http://launchpad.net/~charmers/charms/trusty/mariadb/trunk as the contribute repo unless dbart would like to move that else where
<arosales> petevg: ^ note mariadb upstream link repo
<lazyPower> right, i wound up creating the group mariadb-charmers for pushing the charm as there wasn't one provided by dbart
<marcoceppi> arosales: if we're touching a charm, it's not going to live in ~charmers anymore
<petevg> arosales: useful. thx.
<marcoceppi> lazyPower: +1
<lazyPower> https://launchpad.net/~mariadb-charmers   for reference
<lazyPower> arosales petevg - is someone actively working on mariadb at the moment for a review/etc?
<petevg> lazyPower: arosales asked me to add a check for z Linux to make it install the package from universe, rather than from the mariadb repo.
<petevg> That's all I'm doing to it at the moment, though -- adding a check for z Linux.
<lazyPower> petevg - ok, so let's move that repository from ~charmers, and target a repository in ~mariadb-charmers then.
<lazyPower> i'll send an email to dbart, cc you as well
<lazyPower> make it nice and official
<petevg> Sounds good.
<marcoceppi> writing unit tests for reactive files is brutal
<lazyPower> marcoceppi - its reminiscent of the stacks of @patch decorators in old amulet tests
<lazyPower> i dont like it
<marcoceppi> is there a way I can, like, globally mock status_set, set_state, etc?
<marcoceppi> like in setUp?
<lazyPower> only using with() statements, problem is, if you have a large method body calling those methods, you've re-implemented the stack of patches in a less obvious way
<petevg> marcoceppi: You can manually create a mock patch, and then call stop and start on it whenever you want, instead of using the decorator.
<arosales> lazyPower: could you cc me on the email to dbart? I would like to discuss the s390x and ppc64el install paths
<lazyPower> arosales will do
<marcoceppi> petevg: you got an example?
<arosales> lazyPower: thanks
<marcoceppi> I just always want to mock these
 * arosales waves at marcoceppi
<petevg> It's been awhile since I've been foolish enough to try. Give me a moment to look it up :-)
<arosales> marcoceppi: feel comfortable with  me giving your mongodb layers a run on s390?
<marcoceppi> arosales: give me a min
<kwmonroe> cory_fu: are you worried that 00-deploy (for https://github.com/apache/bigtop/pull/120/files#diff-30135d8baafe145bd84fb0ec91657c93R1) is going to run twice in bundletester?  do we need a restrictive glob for tests in tests.yaml?
<marcoceppi> arosales: I'm just finishing unit tests and I've made a lot of changes in the past few hours
<arosales> marcoceppi: no rush I can also check in tomorrow
<marcoceppi> arosales: yas
<arosales> marcoceppi: thanks for working on it
<petevg> marcoceppi. From the docs here: http://www.voidspace.org.uk/python/mock/patch.html, you get:
<petevg> https://www.irccloud.com/pastebin/KTRbAA80/
<marcoceppi> petevg: could you do it with a real pastebin?
<petevg> Yes.
<marcoceppi> apparently IRC cloud won't let you view it unless you're in irccloud, what a worthless sack
<petevg> That's annoying.
<petevg> here's a gist: https://gist.github.com/petevg/2b22752b29f1a5f809560f448f6e7f63
<cory_fu> kwmonroe: It may run twice (I guess Tim's out, but he'd be able to answer that more definitively) but the second one would be close to a noop.  It would add only around 30s, maybe a minute, to the total run time
<cory_fu> A test pattern would not be bad, though, if you want to figure out what that should be
<kwmonroe> yeah cory_fu, but i think that's only because we luckily have reset: false in our tests.yaml.  i pity the soul that uses us as a reference and doubles their test time.
<kwmonroe> not a biggie cory_fu.. i'll scan our bits and see if it could be as simple as *.py.
<cory_fu> Well, the only reason we can use the bundle_deploy option at all is because we also use reset: false.  It wouldn't do anything useful w/o it
<petevg> marcoceppi: You can use that in conjunction with unittest's setUpClass and tearDownClass methods and have code that isn't too messy, or too prone to failure. You wind up with a lot of lines like "self.foo_patcher = ...", "self.bar_patcher=...", though.
<marcoceppi> there has to be a better way
<Odd_Bloke> marcoceppi: If you do it in a TestCase super-class and inherit from that, then you only have to write it once.
<petevg> Odd_Bloke says smart things.
<marcoceppi> Odd_Bloke: yeah, I'm trying to read up on testcase superclassing
<Odd_Bloke> marcoceppi: Like this: https://gist.github.com/OddBloke/d9a3e87bd397878a77f21bb080e05275
<marcoceppi> Odd_Bloke: you da real mvp
<Odd_Bloke> marcoceppi: Note what you'll need to do for test cases that do already have a setUp: https://gist.github.com/OddBloke/d9a3e87bd397878a77f21bb080e05275
<Odd_Bloke> marcoceppi: (And a tearDown, for that matter)
<marcoceppi> Odd_Bloke: up!
<marcoceppi> thanks
<Odd_Bloke> marcoceppi: (And if you're patching loads of things consider shoving them in a list and having your tearDown just do: `for patcher in self.list_of_patchers: patcher.stop()`)
<marcoceppi> Odd_Bloke: is setUp called for every test method, or once per testcase class?
<Odd_Bloke> marcoceppi: Per method; setUpClass is per class.
<Odd_Bloke> (You probably want it per-method, else you'll end up with previous tests' mocks, which will already have some state)
<marcoceppi> Odd_Bloke: I do, just wanted to make sure I didn't need to run reset_mock
<Odd_Bloke> :)
<marcoceppi> Odd_Bloke petevg thanks!
<Odd_Bloke> marcoceppi: Any time. :)
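Putting Odd_Bloke's and petevg's suggestions together, a rough sketch of the shared TestCase pattern (the patch targets are just examples of the calls marcoceppi wanted globally mocked):

    import unittest
    from unittest import mock

    class MockedHookenvTestCase(unittest.TestCase):
        def setUp(self):
            super().setUp()
            # start every patcher here; subclasses that define their own
            # setUp must call super().setUp() first, as noted above
            self.patchers = [
                mock.patch('charmhelpers.core.hookenv.status_set'),
                mock.patch('charms.reactive.set_state'),
            ]
            self.mocks = [p.start() for p in self.patchers]

        def tearDown(self):
            for p in self.patchers:
                p.stop()
            super().tearDown()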
<Odd_Bloke> Pretty much my entire professional career has consisted of arriving at a place, throwing up my hands at the testing, and then writing tests until I give up. ;)
<lazyPower> Odd_Bloke <3
<arosales> stub: nice work it looks like https://jujucharms.com/postgresql/ may "juju work" on s390x
<stub> cool
 * arosales has a "Live master (9.5.3) " workload status on a s390x machine atm
<bdx> is there a way to set arbitrary endpoint bindings for charms when deployed in bundles?
#juju 2016-06-28
<tzob1x> hello!!
<tzob1x> I get this error Services not running that should be: neutron-plugin-openvswitch-agent
<tzob1x> I cannot start it from terminal
<tzob1x> maybe you know what should I do?
<tzob1x> :)
<cory_fu> jcastro: Hey, I still need you to update the wiki-simple bundle since I don't have push perms for it in the store
<kjackal> cory_fu: after nodejs we hit: https://jujucharms.com/apache-processing-spark/
<kjackal> sorry
<kjackal> we hit ImportError: cannot import name 'node_dist_dir'
<cory_fu> Ok
<cory_fu> stokachu: Can you confirm that you intended https://github.com/battlemidget/charm-layer-ghost to be using https://pypi.python.org/pypi/nodejs?
<stokachu> cory_fu: never heard of that python module
<cory_fu> Oooh, I see!
<cory_fu> stokachu: Ok, thanks.  I see the issue now
<stokachu> cory_fu: :D i think i haven't updated the import path there
<stokachu> in lib/charms/nodejs
<cory_fu> Yep
<stokachu> cory_fu: are you submitting a PR or I can do it  right quick
<cory_fu> stokachu: I'll submit a PR
<stokachu> cory_fu: awesome thanks man
<cory_fu> Have some other changes I'd like to suggest
<stokachu> perfect sounds good
<mgz> cory_fu: poke about juju plugins specific change
<mgz> cory_fu: I have proposed <https://github.com/juju/juju/pull/5731> - can you eyeball it and yell if you see any issue?
<bdx> hey whats up guys? Is ppa:juju/daily no longer being maintained?
<bdx> guess I should throw that in #juju-dev
<mgz> bdx: it is... we just have the issue it was designed to be updated whenever we have a version that passes in CI
<mgz> I will let you finish that line of logic in your head
<bdx> ha
<bdx> mgz: ok, thanks
<stub> cory_fu: I wonder.... if @hook(cb) simply set cb._reactive_phase = 'hook', then the main loop just needs to perform the same loop over the handlers several times (once for each phase), ignoring handlers not in that phase. And @when and friends would need to propagate the attribute, of course.
<cory_fu> stub: That's basically what it does now.  The problem is that we want @when + @hook to trigger the when bit, but we don't want @when by itself to trigger, which is hard to do with just a "phase" var
<cory_fu> Or perhaps I'm not following your suggestion
<cory_fu> Hrm, yeah, I was distracted when reading it and I think I might have misunderstood
 * mgz tears and cory_fu
<stub> I tend to think an empty when should always trigger (in the 'other' phase) (but would think that a separate issue)
<mgz> can you take a look at my mp or wave another eco guy at it? ^^
<cory_fu> stub: Yeah, I didn't mean an empty @when (as in, @when() with no states).  I meant a @when w/o a @hook or @action.  I think I have an idea of how to resolve the issue, from your suggestion.
<cory_fu> stub: Just to be sure I understand, you're suggesting that all cb have a default phase of 'other' and @hook changes it to 'hook' which lets that handler be considered in that phase, while handlers with the default 'other' are not, and the phase checks that are currently done in @when should be dropped
<cory_fu> Hrm.  I just remembered why I didn't do it that way to start with.  External handlers can't attach metadata like that.  But I think I can make it work since there have been some changes in how external handlers are checked since the phase checking was first added
<stub> cory_fu: yes.
<stub> cory_fu: And renaming the 'other' phase to 'main' might be a good idea ;)
<cory_fu> Certainly
<cory_fu> stub: Thanks for the help ont his
<cory_fu> *on this
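A rough, self-contained sketch of the phase idea stub floats above, purely to show the shape; none of these names exist in charms.reactive as-is:

    HANDLERS = []

    # decorators tag the handler with a phase...
    def hook(*hook_patterns):
        def decorator(cb):
            cb._reactive_phase = 'hook'  # untagged handlers default to 'main'
            HANDLERS.append(cb)
            return cb
        return decorator

    # ...and dispatch runs each phase in turn, skipping handlers
    # registered for a different phase
    def main(phases=('setup', 'hook', 'main')):
        for phase in phases:
            for handler in HANDLERS:
                if getattr(handler, '_reactive_phase', 'main') == phase:
                    handler()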
<cory_fu> mgz: The plugin change seems perfectly reasonable to me
<mgz> cory_fu: thank you!
<petevg> I've got a question for anyone knowledgeable: do we have any best practices for unit testing layered charms?
<petevg> It looks like the Makefile for our layered charms has a "unit_test" entry by default, which attempts to run tox.
<rick_h_> petevg: I think the openstack folks would have loads to say on that along with cory_fu and kwmonroe who have the experience on the hadoop charms
<petevg> I picked cory_fu and kwmonroe's brains this morning. They said that I should start a convo on #juju :-)
<petevg> The hadoop charms have loads of integration tests and smoke tests, but no real unit tests yet.
<cory_fu> petevg: I know jamespage has talked about unit tests in the openstack charms, if he's still around, though I understand much of that was focused on pulling the bits that made sense to unit test out into separately packaged libraries rather than testing them directly in the charm.
<petevg> Pulling libs out of the charm doesn't sound like the worst strategy ...
<cory_fu> He also mentioned that they use a repo structure that is slightly different from what we use on the big data charms in that they have the layer itself nested one directory deep in the repo so that they have an outer layer to keep all the bits that are necessary for the layer (like unit tests) but that don't make sense in the final built charm
<petevg> Interesting.
<petevg> Hiding the unit tests somewhere makes sense.
<petevg> The main thing that I'm trying to figure out is how to build up a virtualenv suitable for running unit tests in, without duplicating the logic that populates the wheelhouse in a layered charm.
<cory_fu> That also touches on the need we have to be able to nest base layers in subfolders of repos for upstream purposes while still having them be usable during charm build
<stub> To start with, I think we need a fixture that lets us specify relevant current state, call a handler (with suitable mocks), and inspect the resulting state. So testing handlers is just like testing any other function.
<cory_fu> petevg: The wheelhouse.txt file is in the same format as a requirements.txt and can be referenced from within a tox.ini
<petevg> cory_fu: cool. That makes that easy :-)
<cory_fu> stub: Since reactive decorators don't actually wrap the functions, they can be called (and thus tested) directly already.
<stub> That was easy then :)
<cory_fu> Just pass in Mock() objects in place of relation instances.  If the handler manually checks states, of course you have to handle that.  Also, it doesn't let you verify that your state patterns do what you want
<cory_fu> I do wish we could come up with a way to render the state graph as a means of verifying it, but it's too flexible as currently implemented (which is not necessarily a bad thing)
<petevg> What I'd like to avoid doing is having tests that are so mocked out that they aren't really doing anything that a linter doesn't already do.
<cory_fu> petevg: +1
<cory_fu> That's my concern with testing reactive charms currently.  Especially with reactive charms around things like Bigtop, where all of the heavy lifting is done in something like Puppet
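A minimal sketch of cory_fu's point that reactive handlers stay plain functions and can be called directly in a test, with Mock() standing in for the relation instance (the charm module and handler names are hypothetical):

    from unittest import mock
    from reactive import mycharm  # hypothetical reactive charm module

    def test_configure_db():
        db = mock.Mock()
        db.connection_string.return_value = 'host=10.0.0.1 port=5432'
        # the @when decorator registers the handler but doesn't wrap it,
        # so this is just an ordinary function call
        mycharm.configure_db(db)
        db.connection_string.assert_called_once_with()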
<lazyPower> cory_fu - charms.reactive -p get_states is my new favorite thing ever
<stub> I still can't see a way around integration tests. Just need to minimize them since they are slow.
<mbruzek> Why did it take so long to learn about this option?
<cory_fu> lazyPower, mbruzek: I usually prefer -y (though I completely forgot there was a shortcut and always used the --format=yaml long-hand)
<cory_fu> -p is pretty nice, too
<stub> petevg: Can you just reuse the code to build a wheelhouse from charmtools.build ?
<lazyPower> http://paste.ubuntu.com/18038388/  for those following along at home and have no idea what -p does to that command :)
<cory_fu> stub: Definitely still have to have integration tests.  But like you said, we'd all prefer to minimize them.
<petevg> stub: probably. Though it looks like I don't need to do too much more than just point tox at wheelhouse.txt as its requirements.txt.
<petevg> ... though I guess wheelhouse.txt goes away once you build the charm. And before you build the charm, you are missing any layers that it depends on, so your unit tests will fail on import.
<petevg> In any case, I think that I have a place to start. Thank you, everyone.
<stub> petevg: If you need API provided by a dependency layer, you are probably going to need to charm-build your layer and run the tests there
<petevg> stub: yep. Which is not super unit testy (at least, not Python unit testy). But I'm not sure that there's a great way to work around it :-/
<stub> I think it is either that, or mock your dependencies entirely
<jose> balloons: happy birthday!
<petevg> stub: yep. The point where one starts mocking out imports is the point where one has perhaps mocked a bit too far, though :-)
<stub> Are you mocking my idea?
<petevg> Only a little :-)
<stub> petevg: ... you could assume your dependencies are in $LAYER_PATH and futz your import path and compile the wheelhouse.txts...
<petevg> True ...
<stub> c/compile/combine/
<petevg> There's no guarantee that someone will actually have their layer dependencies checked out, though.
<stub> I need to investigate my idea of embedding dependency layers in git subtrees
<petevg> that sounds interesting.
<LiftedKilt> question on the ceph-osd charm - it states that the default option for osd-journal is "no journal device will be used." Does that mean journaling is done per disk on the osd? or is journaling literally disabled
<cory_fu> stokachu: I'm hitting an issue with ghost now with encoding: http://pastebin.ubuntu.com/18040360/
<cory_fu> It looks like my default encoding is ANSI_X3.4-1968 for whatever reason (this is local provider in 1.25)
<cory_fu> I'm not sure what the right way to handle that is
<stokachu> hmm
<cory_fu> (For comparison, on my laptop, the preferred encoding is UTF-8)
<stokachu> cory_fu: try replacing with check_call('npm install --production', shell=True)
<stokachu> from subprocess import check_call
<stokachu> maybe it's that shell library im using
<cory_fu> Yeah, shell is setting universal_newlines=True which is causing the implicit decode, but I'm not actually sure why the preferred encoding isn't UTF-8 anyway
<stokachu> yea that is odd
<cory_fu> But I guess this is just more evidence that implicit conversions are a bad idea
<cory_fu> WTG subprocess and by extension, shell
<stokachu> :D
<stokachu> we should probably switch out that shell module for subprocess
<stokachu> and just handle the output ourselves
<stokachu> cory_fu: does charmhelpers have a generic shell run command?
<cory_fu> Yeah.  I always think that it would be nice to have helpers around subprocess, but then every use case ends up needing something different that makes the helpers not work
<cory_fu> No
<cory_fu> Because of that ^
<cory_fu> :p
<stokachu> ah makes sense
<cory_fu> This is  a failing of subprocess, though.  The implicit decoding done by universal_newlines should accept an explicit encoding type (and shell should also accept and pass that through)
<stokachu> cory_fu: im ok with just replacing shell with subprocess and working around it that way
<cory_fu> stokachu: Yep, I'm on it.  :)
<stokachu> cool thanks :D
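A sketch of the workaround being agreed on here: let subprocess return bytes and decode explicitly, so the locale's preferred encoding never enters the picture (the command matches the ghost layer's npm step):

    import subprocess

    # without universal_newlines, check_output returns bytes; decoding
    # is then explicit rather than locale-dependent
    out = subprocess.check_output('npm install --production', shell=True)
    print(out.decode('utf-8'))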
<cory_fu> stokachu: Why does ghost hate me?  :)  http://pastebin.ubuntu.com/18044477/
<stokachu> wow
<stokachu> are you in /srv/app by chance?
<cory_fu> Nope.  Was just watching a fresh deploy on aws
<stokachu> hmm
<stokachu> i wonder what's using /srv/app other than our attempt to remove the directory
<cory_fu> stokachu: Why is ghost set up to deploy onto a storage mount?
<stokachu> would juju resources be doing anything?
<stokachu> so nodejs pulls /srv/app from the storage functions
<cory_fu> It's a storage mount, but you would think it'd be done once the storage-attached hook fires
<stokachu> marcoceppi: ^
<stokachu> cory_fu: https://github.com/battlemidget/juju-layer-node/blob/master/lib/charms/layer/nodejs.py#L18
<stokachu> thats where we pull the app path
<cory_fu> stokachu: I'm still not seeing the benefit of having the node app live on a storage mount.
<stokachu> cory_fu: that was a marcoceppi change
<cory_fu> I can see putting app data there, but why the app itself?
<cory_fu> Ah
<cory_fu> marcoceppi: Please enlighten me
<stokachu> cory_fu: came from https://github.com/battlemidget/juju-layer-node/issues/3
<cory_fu> stokachu: Ok, lazyPower helped me sort it out
<cory_fu> It's an issue in the ghost charm
<stokachu> what was the issue?
<cory_fu> We shouldn't be calling rmtree on that path; that will try to delete the mount point itself.  We should be removing the app within it, if necessary (I'm not sure it's actually necessary)
<cory_fu> I think that rmtree / makedirs code is a hold-over from before it was a Juju storage mount
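A sketch of the fix cory_fu describes: empty the storage mount rather than rmtree'ing the mount point itself (APP_DIR is illustrative):

    import os
    import shutil

    APP_DIR = '/srv/app'  # the Juju storage mount point

    # remove everything inside the mount, never the mount itself
    for entry in os.listdir(APP_DIR):
        path = os.path.join(APP_DIR, entry)
        if os.path.isdir(path) and not os.path.islink(path):
            shutil.rmtree(path)
        else:
            os.remove(path)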
<matthelmke> I'm doing a bootstrap of juju beta 10 and it returns this error: ERROR failed to bootstrap model: no matching tools available
<matthelmke> Any idea why?
<cory_fu> matthelmke: What Ubuntu release is the machine from which you are bootstrapping?  Xenail?
<cory_fu> er, Xenial?
<matthelmke> xenial
<cory_fu> Hrm.  Odd.  It should work.  Perhaps try adding --upload-tools
<cory_fu> juju boostrap --upload-tools <cloud> <controller>
<cory_fu> That shouldn't be necessary, but might work around the issue for you
<matthelmke> Nope
<matthelmke> juju bootstrap --upload-tools --config default-series="xenial" --constraints tags=juju maas-controller maas
<matthelmke> Creating Juju controller "maas-controller" on maas
<matthelmke> Bootstrapping model "controller"
<matthelmke> Starting new instance for initial controller
<matthelmke> ERROR failed to bootstrap model: no matching tools available
<cory_fu> matthelmke: I'm afraid I don't know.  You could try asking in #juju-dev or pinging one of the core devs in this channel
<matthelmke> thx
<cory_fu> rick_h_: Maybe you could point someone out?
<cory_fu> I get confused by time zones.  :p
<cory_fu> stokachu: I just realized the current ghost charm doesn't react to changes in the 'release' option.  Should it re-fetch and reinstall the ghost app when that changes?
<stokachu> cory_fu: yea it should and we also need to add an upgrade path
<stokachu> i think the latest ghost is like 0.8.0
<stokachu> matthelmke: juju bootstrap casey maas/172.16.0.1 --upload-tools --config image-stream=daily --config enable-os-refresh-update=false --config enable-os-upgrade=false --config bootstrap-timeout=8400 --bootstrap-series=xenial --credential casey
<stokachu> matthelmke: thats the cmd i run with beta10 and it works fine
<matthelmke> Weird...this seems to be working. the only difference I see is the order of the flags
<matthelmke> juju bootstrap --config default-series="xenial" --constraints tags=juju --upload-tools maas-controller maas
<stokachu> cory_fu: how's the charm coming?
<cory_fu> stokachu: Working on it.  I'm going to have patches for the nodejs, nginx, and ghost layers
<stokachu> nice!
<kwmonroe> hey cory_fu, for hadoop-rest, was the plan to still build using layer:hadoop-client, or are we axing that from layer.yaml and putting hadoop-rest explicitly in the metadata.yaml?
<kwmonroe> i'm finally getting around to hive, but it's been so long i forgot how we're supposed to use rest.
<cory_fu> Anything using hadoop-rest should not use hadoop-client
<cory_fu> It should just use the interface directly
<kwmonroe> jolly good
<magicaltrout> ....old chap
<kwmonroe> hey magicaltrout, do you reckon PDI will need hadoop jars/libs to integrate, or do you just point at the namenode url:port?
<kwmonroe> also, i'm assuming it's just hdfs integration, right?  or does pdi do transforms with mapreduce?
<magicaltrout> both
<kwmonroe> you can't say "both" when i asked you like 4 questions.
<magicaltrout> it installs itself inside HDFS,  but you use map reduce steps in your transformation that run on the cluster
<magicaltrout> so for example, http://wiki.pentaho.com/display/BAD/Using+Pentaho+MapReduce+to+Generate+an+Aggregate+Dataset
<magicaltrout> scroll down
<magicaltrout> you'll see some screenshots
<magicaltrout> you have a Map/Reduce input & Map Reduce output
<magicaltrout> and you can do whatever you want in between
<kwmonroe> yeah, neat!
<magicaltrout> so it will need lib access AFAIK
<magicaltrout> so it can run on cluster
<magicaltrout> but i've not installed it myself before and I can't find the docs offhand
<magicaltrout> so I could be wrong
<kwmonroe> doubtful.  you're magicaltrout!
<magicaltrout> likely
<magicaltrout> you mean
<kwmonroe> yeah, i get those confused all the time
<magicaltrout> hehe
<kwmonroe> anyway, good to know.  i'm banging around on cory_fu's hadoop-rest bits.. just curious if pdi would be another consumer.  i think not, but we'll see.
<stokachu> cory_fu: im stepping away for a bit, just shoot me a ping when you have those PR's up and ill get to them tonight
<cory_fu> stokachu: Ok, just about done (finally).
<stokachu> cory_fu: cool ill check back in about an hour
<cory_fu> stokachu: Ok, three PRs ready for you whenever you get back
<cory_fu> https://github.com/battlemidget/charm-layer-ghost/pull/2
<cory_fu> https://github.com/battlemidget/juju-layer-node/pull/8
<cory_fu> https://github.com/battlemidget/juju-layer-nginx/pull/5
<stokachu> cory_fu: nice will take a look
<stokachu> cory_fu: merged thanks! I'll push a new version of ghost up soon to charmstore
<cory_fu> Awesome.  Thanks
<cory_fu> stokachu: When you do, can you comment on the issue that it's been pushed, and the new revision?
<stokachu> cory_fu: yea will do
<cory_fu> Thanks
#juju 2016-06-29
<stokachu> cory_fu: new ghost is working on xenial so far :D
<stokachu> i want to test the 0.8.0 release and get that out the door soon too
<yioup0x> hello!!
<yioup0x> some services are stuck in update-status hook is this normal?
<yioup0x> it's been some hours that they are updating
<magicaltrout> we have the best stuff ever for OLAP over all your big data stuff coming in our 3.9 release
<magicaltrout> just putting together an RC2 with juju datasource injection, automatic schema generation and stuff
<magicaltrout> it'll make analytics over the big data & SQL charms really easy
<cory_fu> stokachu: I should have mentioned, I updated the ghost charm to support changing the "release" option, so we should definitely try upgrading.  I just realized, though, it should probably re-fetch on "checksum" change as well
<cory_fu> stokachu: https://github.com/battlemidget/charm-layer-ghost/pull/3
<cory_fu> :)
<stokachu> cory_fu: merged, ill try the upgrade now
<marcoceppi> cory_fu: ping re: https://github.com/juju-solutions/charms.reactive/issues/76
<lazyPower> marcoceppi - is that a verbiage change to reactive as a whole?
<marcoceppi> cory_fu: people were really confused by `status_set` and `set_state`
<marcoceppi> given how close in nature they are, it also made it really confusing since it blurs what "state" means; people assumed this was a state engine in juju instead of on the unit level
<magicaltrout> you know what
<magicaltrout> I was going to say that exact thing the other day
<magicaltrout> and I thought I was being pedantic ;)
<marcoceppi> so the idea of flags as a way to distil the state without really changing meaning was proposed by mark
<marcoceppi> here's an example of what the code could look like: https://github.com/marcoceppi/layer-vyos-proxy/blob/master/reactive/vyos_proxy.py
<magicaltrout> +1 for something like that
<magicaltrout> also helps in amulet and stuff where you're trying to figure out what status/state/message you're trying to extract
<cory_fu> If we're going to change that, should we make it "flag_set" (even though I hate that order) to be consistent?
<lazyPower> +1 to normalizing on the method signatures
<cory_fu> (I'm not even going to bother trying to argue that we change "status_set" :p)
<marcoceppi> cory_fu: welllllllllllllll
<marcoceppi> I'd be okay with set_status
<aisrael> ^^ +1
<marcoceppi> in charms.model
<marcoceppi> ;)
<cory_fu> True
<marcoceppi> with like an from charms.model import classic_names
<marcoceppi> cory_fu: though, set_action seems odd compared to action_set
<marcoceppi> maybe not
<Prabakaran> Hello Team, I am facing the below issue in Juju 2.0 while uploading packages to the controller using the juju attach command, and also with the juju inline deployment command using resources. Error: http://pastebin.ubuntu.com/18097919/
<marcoceppi> set_action_data might be a better name in general
<Prabakaran> And also it is taking a lot of time to upload into the container when the product package is more than 1 GB. Is there any trick to increase file upload speed? I am also wondering why it takes so long to upload/download on the controller within the same subnet. Could somebody clarify this for me?
<cory_fu> marcoceppi: What about charms.model.status.set() and charms.reactive.flag.set()
<lazyPower> Prabakaran - how large is your attachment?
<cory_fu> +1 to set_action_data
<Prabakaran> 1.8GB
<marcoceppi> cory_fu: possibly
<marcoceppi> I'm still very much up in the air about charms.model
<lazyPower> Prabakaran - ah i missed your follow up. There is a bug currently that prevents large attachments from working in the current betas
<magicaltrout> lazyPower: thats push to controller is it the same?
<marcoceppi> cory_fu: right now it's more like charms.model.unit.status.set()
<stokachu> cory_fu: should we be using mariadb instead of mysql?
<lazyPower> magicaltrout - it appears so, it ran into an IO error trying ot attach to the controller
<marcoceppi> cory_fu: so I like charms.model.unit.set_status instead
<lazyPower> Prabakaran - can i get you to file a bug for that? i'd like to xref with core on this issue
<magicaltrout> ah yeah
<magicaltrout> i didn't scroll across far enough
<Prabakaran> ya sure
<cory_fu> stokachu: I did my testing with mariadb, but marcoceppi has put a fair amount of work in recently to improve the mysql charm
<magicaltrout> marcoceppi <-> work?
<stokachu> cory_fu: ok cool
<marcoceppi> stokachu: cory_fu: that's an overstatement, I removed and touched up the tests that were failing consistently because of bad setups
<marcoceppi> magicaltrout: great question!
<magicaltrout> pfft
<marcoceppi> ;)
<Prabakaran> and also is it possible for me to upload the file by getting into the controller...?
<magicaltrout> i'm looking forward to a whole room of americans with similar enthusiasm later this year marcoceppi
<magicaltrout> although I might need ear plugs, or a sick bowl ;)
<marcoceppi> magicaltrout: I'll make sure to beef up on my "English"
<magicaltrout> hehe
<magicaltrout> just ask kwmonroe for some tips
<magicaltrout> he's got the english patter down to a fine art.... if you live in the early 1900's
<marcoceppi> pip pip!
<stokachu> cory_fu: if you get time today you could run through my conjure-up enabled ghost bundle
<stokachu> https://gist.github.com/battlemidget/a35809531b34db683d54d8b046353d80
<lazyPower> Prabakaran - negative. the resources feature works with gridfs in mongo, and then some internal juju bits rely on that process to complete successfully (eg: BSON ID's of the resource)
<lazyPower> Prabakaran - so without having intimate knowledge of that, i dont think you can fake it out
<lazyPower> stokachu - oh btw i ran the cojure scripts for kubernetes, nice work :)
<lazyPower> *conjure
<stokachu> lazyPower: thanks :D ive got some updates to make to the bundle  as well
<lazyPower> nice
<marcoceppi> cory_fu: I updated the issue, I was projecting on screen when I filed it
<Prabakaran> k thanks <lazypower> what is the bug number for the timeout issue?
<lazyPower> Prabakaran - I'll need you to file the bug for that :)
<Prabakaran> can u help me on how to raise a bug?
<lazyPower> Prabakaran - it's only hit the list as far as I'm aware. There was a thread started by merlijn, but it was specifically in reference to jujucharms.com resources, not local resources on the controller
<lazyPower> Prabakaran - sure, start here https://bugs.launchpad.net/juju-core/+filebug
<lazyPower> Prabakaran - we'll need the juju version, arch, what you did, what you expected to happen, what happened, associated logs, and reproduction steps
<Prabakaran> <lazypower> i am getting timeout error while submitting bug.. " Timeout error  Sorry, something just went wrong in Launchpad.  We've recorded what happened, and we'll fix it as soon as possible. Apologies for the inconvenience.  Trying again in a couple of minutes might work.  (Error ID: OOPS-4ad94ec13b63a1d16b8f1c05cd6b031b)"
<lazyPower> Prabakaran - if you hit the back button in your browser and re-submit does it complete successfully or does it give you the same 500 error?
<kjackal> cory_fu: do you want a help in reviewing zeppelin?
<magicaltrout> it's a big balloon filled with gas and exploded in flames....
<magicaltrout> much like some juju deployments! ;)
 * magicaltrout gets his coat
<lazyPower> alternatively, its a wicked rockstar group whos music is as timeless as... well.. a zeppelin
<cory_fu> kjackal: Yeah, if you want to take a look at it, go ahead.
<lazyPower> potentially made from lead even
 * lazyPower rimshots and wanders off stage
<Prabakaran> <lazypower> i tried twice still i am getting the same... i asked my teammate to raise for the same.. bug number is https://bugs.launchpad.net/juju-core/+bug/1597354
<mup> Bug #1597354: Juju 2.0 Resource Error - cannot add resource failed to write data: read tcp : i/o timeout <juju-core:New> <https://launchpad.net/bugs/1597354>
<lazyPower> Prabakaran - that makes me want to say it might also be on your end if your tcp connections are timing out
<kjackal> cory_fu: cool, I am on it
<lazyPower> Prabakaran - perhaps do some network debugging and see if you're experiencing packet loss?
<cory_fu> kjackal: thanks
<lazyPower> but thanks for the bug, I'll shop this with the core dev release manager to see if they've seen anything like this in testing
<Prabakaran> <lazypower> i just pinged from my machine .. there is no packet loss
<stokachu> lazyPower: who can i bug for https://bugs.launchpad.net/charms/+source/elasticsearch/+bug/1589947
<mup> Bug #1589947: Use status-set when elasticsearch is active and ready <conjure> <elasticsearch (Juju Charms Collection):New> <https://launchpad.net/bugs/1589947>
<lazyPower> stokachu - thats going to be fun... you'll have to set that in the middle of the ansible routine
<stokachu> ew
<lazyPower> yeah
<lazyPower> :)
<lazyPower> I considered going for a reactive rewrite but time prevented me from doing that
<lazyPower> that and our ES charm is pretty solid right now
<stokachu> yea
 * lazyPower had a bag of bugs to sprinkle around :P
<lazyPower> i can try to hammer that out today
<lazyPower> should be pretty simple
<stokachu> hmm i wonder, juju should just pull from agent status if workload status is undefined
<lazyPower> i dont think thats right to just clone the agent status
<lazyPower> i'd rather the workload status remain unknown if the author has not put any status messaging in the charm
<lazyPower> then its pretty blatant whats happening
<stokachu> well you dont really know whats happening as there is no messages :)
<stokachu> lazyPower: also seeing this on aws https://paste.ubuntu.com/18101489/
<lazyPower> stokachu which revision of ETCD is this?
<stokachu> lazyPower: rev3
<stokachu> kubernetes is rev5
<lazyPower> hmm, flannel biffed it on registering the network
<lazyPower> is the instance still up?
<stokachu> lazyPower: yea
<lazyPower> if so can you ssh-import-id lazypower on that node and hit me with some ip <3?
<lazyPower> i'd like to see why that tanked, its been good in CI
<stokachu> you want on kubernetes or etcd?
<stokachu> or both
<lazyPower> both would be good
<lazyPower> resource-get
<lazyPower> Use 'resource-get' while a hook is running to get the local path to the file for the identified resource. This file is an fs-local copy, unique to the unit for which the hook is running. It is downloaded from the controller, if necessary.
<lazyPower> whoops
<arosales> stokachu: I am testing the good work you and cory_fu did on ghost
<stokachu> arosales: cool!
<stokachu> arosales: you should also test the upgrade
<stokachu> since it defaults to version 0.7.2 and latest is 0.8.0
<arosales> stokachu: recommended wise I see https://jujucharms.com/ghost/precise, but I was wondering if we should reach out to hatch and confirm to promote yours for trusty and xenial
<stokachu> yea i'd like to see the reactive charm go recommended
<arosales> ok, I don't see hatch here atm but I'll send a mail on it.
<arosales> stokachu: thanks
<LiftedKilt> jcastro marcoceppi: is there a best guess eta for a final 2.0 release? (are we talking weeks or months?)
<LiftedKilt> also - great job on beta10. It's been really smooth for me so far
<penguinRaider> Hi, I am running juju 1.25.5-trusty-amd64. If I run a command using juju run which times out, then the next command seems to time out too, even with a greater threshold. I checked on the machine side and it seems both processes are running and stuck http://paste.ubuntu.com/18109177/
<bdx> openstack-charmers: how do you guys feel about setting openstack-dashboard session timeout to that of the keystone token expiration?
<bdx> openstack-charmers: or possibly a new config "session-timeout" for openstack-dashboard
<cargonza> bdx, you should attend the openstack irc meeting in the #ubuntu-meeting
<bdx> cargonza: thx, on it
<bdx> cargonza: did I miss it?
<cargonza> bdx, yup... next week again - https://wiki.ubuntu.com/ServerTeam/OpenStackCharmsMeeting
<bdx> awwwwweee
<bdx> thanks
<bdx> I'll mark my schedule
<cargonza> cool!
<rick_h_> stokachu: around?
<bdx> rick_h_: hey whats up? how are cross-model-relations coming along?
<rick_h_> bdx: :P
<rick_h_> bdx: stop asking about juju post-2.0 we're still working on 2.0
<bdx> rick_h_: shucks ... I was under the impression that was in the 2.0 milestone.  Last we spoke you guys were sprinting on it ... just wondering how its coming along is all
<rick_h_> bdx: sorry, it's on the "release after 2.0" one
<bdx> no worries
<bdx> thanks
<LiftedKilt> rick_h_: how is the 2.0 release coming?
<bdx> bahaa
<bdx> yes
<bdx> rick_h_: go easy on him
<LiftedKilt> Don't want to be a bother - just curious if there is any idea of a timeline - i.e. weeks vs months
<rick_h_> LiftedKilt: weeks, just more than a couple of them :)
<LiftedKilt> rick_h_: totally understand - there's a ton going into this release, and I'd way rather it get delayed and the bugfixes added than just rushing to get it out there before it's ready
<LiftedKilt> any idea when gui will be beta10 compatible?
<jrwren> LiftedKilt: more soon than soonish.
<LiftedKilt> jrwren: haha fair enough
<stokachu> rick_h_: heyyy
<stokachu> conjure-up is beta10 compatible just fyi
<stokachu> so is macumba :)
<rick_h_> stokachu: <3
<stokachu> rick_h_: see my email?
<rick_h_> stokachu: yes, sorry, getting pulled into a customer issue atm so kina afk looking at that
<stokachu> rick_h_: all good
<junaidali> Hi everyone, I'm having an issue in one of my deployments: juju reports the ip of the node as the public-address. Does anyone know about this issue?
<kwmonroe> cory_fu: i'm doing an "if is_state(db.available); do_something(db)", but i can't remember how to get the db object.  halp.
<cory_fu> I'd recommend using @when() if at all possible.
<cory_fu> If not possible, you have to use RelationBase.from_state('db.available')
<cory_fu> But that will make your handler / code harder to unit test
<cory_fu> And petevg will get mad at you
<kwmonroe> :)
<kwmonroe> fair enough.. i was on a kick about condensing reactive handlers so i didn't have stuff like "configure_with_db()" and "configure_without_db()", but whatevs.
<petevg> Mocking out RelationBase.from_state isn't the worst thing, for the purposes of unit tests.
<petevg> I'm still working out what level of mocking we want to tolerate, though.
<kwmonroe> petevg: i don't care what cory_fu says.. you're alright in my book.
<petevg> Aw, shucks.
<magicaltrout> kwmonroe: https://medium.com/@ecesena/a-quick-demo-of-apache-beam-with-docker-da98b99a502a#.k8dnol49r
<magicaltrout> good one  for you guys,  I  mentioned it before I think,  but the blog is a good demo
<kwmonroe> yeah - neat magicaltrout.. but there goes my afternoon :/
<magicaltrout> you're very welcome ;)
<arosales> kwmonroe: to confirm, what were your plans for  cs:~bigdata-dev/xenial/apache-spark-79
<arosales> kwmonroe: specifically for promulgating the s390x pieces.
<kwmonroe> arosales: i wanted to make sure the rebuilt spark charm didn't break realtime syslog bundle, but i haven't gotten around to bundletesting it with the -dev charms.  i'll kick that off right now.
<kwmonroe> if it looks good, i'll promulgate the xenial pieces for spark
<arosales> kwmonroe: sounds good, thanks
<arosales> kwmonroe: your code never has bugs right
<kwmonroe> yeah arosales, but this has some stuff from cory in it too.
<arosales> ah, ok
<lazyPower> kwmonroe - so they are rocket powered bugs, with heisenbug tendencies
<cory_fu> Sorry, I wandered off in the middle of the conversation.  Keep in mind that you can avoid duped code across handlers by calling one directly from the other.  Or by refactoring shared code up into a lib/charms/layer/foo.py library for your charm.
<cory_fu> kwmonroe: That said, I just did the optional state plus RelationBase.from_state() approach in https://github.com/battlemidget/charm-layer-ghost/blob/master/lib/charms/ghost.py
<cory_fu> See lines 44 and 81
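[Editor's note: a rough sketch of the refactoring cory_fu describes, with shared logic in a lib/charms/layer/ library; the file, function, and state names are assumptions, and connection_string() stands in for whatever the interface actually exposes.]

    # lib/charms/layer/foo.py -- shared helper shipped inside the charm
    def write_config(db=None):
        # render the application config with or without database details
        with open('/etc/foo.conf', 'w') as f:
            f.write('db: %s\n' % (db.connection_string() if db else 'none'))

    # reactive/foo.py -- both handlers reuse the library instead of
    # duplicating the rendering logic
    from charms.reactive import when, when_not
    from charms.layer import foo

    @when('db.available')
    def configure_with_db(db):
        foo.write_config(db=db)

    @when_not('db.available')
    def configure_without_db():
        foo.write_config()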
<kwmonroe> very cool cory_fu
<narinder`> hi all, starting today i am seeing a weird issue with juju where public-ip is blank for the service and the deployment eventually fails, but status says installation is in progress:  ceilometer/0            maintenance    executing   1.25.5  2/lxc/0                                (install) Installing packages
<bbaqar> hey guys .. is there any way to connect a bridge to a VLAN interface i have created using maas? i.e. bond0.120 -> br-mgmt
<kwmonroe> arosales: syslog bundle looks good: http://54.67.62.114:9090/#/notebook/flume-tutorial  rev 10 is ready: https://jujucharms.com/apache-spark/
<arosales> kwmonroe: and assuming the current promulgated version is built from https://github.com/juju-solutions/layer-apache-spark it has your latest bits in it, correct?
<arosales> to confirm the correct charm publish workflow is:
<arosales> charm push .
<arosales> charm publish cs:~<user>/<charm>-<rev>
<arosales> and then a grant to all for read
<arosales> charm grant cs:~<user>/<charm> everyone
<arosales> My main question is whether the charm grant is needed, or if publishing to the stable channel does that by default. My testing showed the grant was needed, but I wanted to confirm that is indeed correct
<kwmonroe> arosales: correct in that the current promulgated version is built from that gh link.
<arosales> https://jujucharms.com/docs/devel/authors-charm-store is my reference
<arosales> kwmonroe: cool thanks
<kwmonroe> arosales: ymmv if you're not explicit when you 'charm push .'.  i think by default it will stick the charm in your namespace (ie, ~kwmonroe) based on the distro you pushed from (ie, xenial).  so for spark, i was more explicit: charm push . cs:~bigdata-charmers/apache-spark
<kwmonroe> and then it spit back a charmstore url, which is what you feed charm publish
<kwmonroe> grant is only needed the first time
<kwmonroe> arosales: so i just "charm publish cs:~bigdata-charmers/apache-spark-10", but didn't need to re-grant for that specific revision.
<arosales> kwmonroe: ok I was pushing a new charm to my name space and I had to charm grant to deploy it
<arosales> kwmonroe: I got an odd Ubuntu One login when I didn't grant, in the terminal no less
<arosales> very odd
<kwmonroe> arosales: i would bet if you built a new version of that charm, you could push/publish without a re-grant
<arosales> kwmonroe: but if I wanted apache-spark in my name space
<arosales> I would need to push/publish/grant
<kwmonroe> right
<arosales> ok
<arosales> I didn't think I needed the grant
<kwmonroe> (the first time, subsequent times would just be push/publish)
<arosales> but  it turns out I do
<arosales> ok
<arosales> I'll let you return to watching the beam demo now
<arosales> thanks for the help kwmonroe
<kwmonroe> :)  np
<arosales> kwmonroe: got a sec to look at some reactive bits?
<arosales> or any other reactive gurus in here
<arosales> http://paste.ubuntu.com/18126079/
<cory_fu> arosales: The configure method requires a "config" param: https://github.com/marcoceppi/layer-mongodb/blob/master/lib/charms/layer/mongodb.py#L66
<cory_fu> arosales: I'm guessing that just needs to be changed to: m.configure(config())
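[Editor's note: cory_fu's suggested fix in context, roughly; `m` is the MongoDB helper object from the paste, and config() is charmhelpers' hookenv.config, which returns the charm's config dict.]

    from charmhelpers.core.hookenv import config

    # ... m is obtained from the mongodb layer, as in the paste ...
    m.configure(config())  # pass the config dict the method requires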
<arosales> cory_fu: thanks, I'll give that a try
<cholcombe> how do you specify the repository in juju2 like juju1? --repository=blah/blah
<magicaltrout> just give it a path to your local charm
<cholcombe> mskalka, ^^
<magicaltrout> juju deploy /home/charmbuilder/charms/trusty/mynewcharm
<cholcombe> magicaltrout, thanks
<magicaltrout> no problem.... appears the charmers don't believe in deprecation of stuff! ;)
<cholcombe> lol
<petevg> cory_fu: following up on our conversation about unit testing layered charms this morning ... I may push back and say that we really do need to build a layered charm before unit testing it; the code isn't really complete until you do so.
<petevg> That said ...
<petevg> Behold the horrors that I have wrought: http://paste.ubuntu.com/18127615/
<cory_fu> Why?
<petevg> Ask again after you read the module I linked above :-)
<petevg> That code actually works, and it handles the issue where I want to import the "real" charms.layer.apache_bigtop_base in my test while mocking out charms.layer.options
<petevg> *case (not issue)
<petevg> Here's the top of my test class, to give an example of actually using it: http://paste.ubuntu.com/18127718/
<petevg> cory_fu: I'd like to spend a little more time writing tests with the thing that I linked above, just to see how weird or messy it gets in practice. It may be cleaner to just tell people to build the charm and then run "make unit_test", though ...
<igonl> Any good references or methods for negotiating a relation that sets up a database charm with a web application?
<petevg> I am happy about having written something that involves a closure, hacking the sys module, and monkey patching import. I'm not sure that it's the right kind of happy, though ...
<igonl> What's the best way to pass around the credentials?
<igonl> I'm familiar with lifecycle hooks, but I'm new to relation hooks.
<petevg> igonl: if you're writing a charm, you can use relation-get to fetch things like the password, which the database charm will generally generate for you. Take a look at the "database-relation-changed" from the walk-through for reference: https://jujucharms.com/docs/1.25/authors-charm-writing
<petevg> If you're writing a layered charm, things are a little bit different. It doesn't sound like that's what you're doing, though.
<magicaltrout> igonl: i might have something
<igonl> Yah, that looks good. I've found relation-get and relation-set helper methods.
 * magicaltrout greps around launchpad
<cory_fu> petevg: https://github.com/juju-solutions/layer-apache-bigtop-base/commit/a4c1b8355c9032d0e7f577af9ecc92e25e93f9e7
<igonl> Docs do stress that they run out of order, though, and not to rely on anything from those methods being there. Then something about -relation-changed being the best place to get variables.
<igonl> I did verify the out-of-order bit to be true, using `sleep` to stagger the writes to a file.
<cory_fu> petevg: That works for me.  The pre-import of charms.layer and the create=True are both critical, though
<igonl> I can't use Layers as it's built in python and I'm sticking with Bash-only.
<petevg> cory_fu: Yep. I was going for something where we could generate a list of all the things that we need to mock, and pass it in all at once, rather than ending up with a cascading layer cake of "with" statements.
<magicaltrout> igonl: http://bazaar.launchpad.net/~spicule/charms/trusty/saikuanalytics-enterprise/trunk/view/head:/hooks/mysqldb-relation-changed
<magicaltrout> i use that to get mysql details and store them in  a file inside my charm install
<petevg> igonl: no pressure to use layers -- I just wanted to make sure I wasn't pointing you at bash when you were looking at Python :-)
<igonl> magicaltrout: I like the juju log message in that script, thanks.
<cory_fu> petevg: My point was that I think we can avoid having to mock builtins.__import__
<igonl> I was interested in whether any of the environment variables provided, like $JUJU_REMOTE_UNIT, were being used in such charm conversations.
<petevg> cory_fu: We could actually hide multiple calls to .patch in a context, so it would look something like 'with mock_modules([<some list o' modules]): import ...'  I think that mocking out __import__ is more fun, but it is probably a better idea to leave it alone.
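[Editor's note: a minimal sketch of the mock_modules idea petevg mentions, assuming mock.patch.dict on sys.modules is sufficient; his real version goes further, keeping the genuine module under test importable while mocking its neighbours, and patches __import__.]

    import contextlib
    from unittest import mock

    @contextlib.contextmanager
    def mock_modules(*names):
        """Make `import <name>` resolve to a MagicMock for each given
        module name, for the duration of the with block."""
        mocks = {name: mock.MagicMock() for name in names}
        with mock.patch.dict('sys.modules', mocks):
            yield mocks

    # usage in a test module:
    # with mock_modules('charms', 'charms.layer', 'charms.layer.options'):
    #     import charms.layer.options  # resolves to the mock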
<cory_fu> Hrm.  It seems like removing the pre-import of charms.layer works ok after all
<petevg> It should. I was only pre-importing it in my code because I wanted it in sys.modules.
<cory_fu> Sometimes package namespaces work strangely
<petevg> ... before I did my module hacking, that is.
<cory_fu> Anyway, I have to EOD
<petevg> Cool. I am going to do the same soon. Catch you later.
<cory_fu> petevg: I got about 1/4 of the way through reviewing your hiera layer change.  :p
<cory_fu> I'll finish it first thing tomorrow
<kwmonroe> igonl: just fyi, you can write bash layers.. openjdk is an example bash charm: https://github.com/juju-solutions/layer-openjdk/blob/master/reactive/openjdk.sh
<kwmonroe> igonl: i should note that interfaces need to be in python, but the charm layers can be in bash.
<igonl> kwmonroe: had no idea. Must have developed the bad assumption off of what the docs showed and from the charms I browsed. Thanks.
<plars> anyone seen a situation where the bootstrap node gets *very* high load average? We had a power outage recently, and the maas cluster I'm using came back ok but it was slow to do anything with juju
<plars> after many failed attempts, I was finally able to ssh to the bootstrap node and it had a load avg near 500!
<igonl> Y'all, I'll take another look at layers. They did seem like one more layer of abstraction I didn't want to deal with when learning charms.
<plars> syslog is getting flooded with a lot of messages like this:
<plars> Jun 29 22:12:08 limited-club mongod.37017[20063]: Wed Jun 29 22:12:08.533 [conn918] update presence.presence.pings query: { _id: "e548e092-a274-46ef-8ed7-089b1cb954d7:1467238320" } update: { $set: { slot: 1467238320 }, $inc: { alive.36: 4294967296 } } nscanned:0 idhack:1 nupdated:1 fastmodinsert:1 keyUpdates:0 locks(micros) w:116225 116ms
<plars> (and many more, all from mongod)
<plars> restarting juju-db brings it down for a little while, but it always creeps back up
<igonl> kwmonroe: btw that's some clean code you wrote there.
<kwmonroe> THANKS igonl!  see magicaltrout?  i do good things.. ^
<magicaltrout> i never said you didn't kwmonroe
<kwmonroe> too right magicaltrout, too right.
<kwmonroe> chip chip cheerio
<magicaltrout> i just defer to cory_fu when it comes to you explaining the stuff you've done :P
<kwmonroe> lolz
<magicaltrout> ah its still online!
<magicaltrout> kwmonroe: http://www.whoohoo.co.uk/cockney-translator.asp
<magicaltrout> that is more important than apache beam
<kwmonroe> oh hairy biscuits
<kwmonroe> hey kjackal admcleod, on the off chance you're online.. how long should a hbase perf-test take on aws?
<kwmonroe> ha!  nm, just returned in 1300 seconds.. i guess that's how long.
<magicaltrout> i believe what you really mean is  kjackal admcleod, on the Frank Bough chance you're online.. 'a long should a 'base perf-test take on aws?
<kwmonroe> :)  that's exactly what i meant
<magicaltrout> of course
<magicaltrout> you have to bear in mind admcleod 's scottish background
<magicaltrout> Guid day kjackal admcleod, oan th' aff chance yoo're online.. hoo lang shoods a hbase perf-test tak' oan aws?
<magicaltrout> would be better
#juju 2016-06-30
<jacekn> could somebody check why my MP is not showing on review.juju.solutions? https://code.launchpad.net/~canonical-sysadmins/charms/trusty/apache2/apache2-storage/+merge/298617
<magicaltrout> jacekn: its probably just broken, although I think there is movement on the new review queue
<magicaltrout> marcoceppi is in europe at the mo I believe so might be able to shed some light
<jacekn> yes some info would be good, I know there were problems with the review queue months ago but they must have been fixed by now
<aisrael> jamespag`: We need to connect a charm to keystone. Is there a layer for that relation?
<jamespag`> aisrael, there is an interface for keystone which is probably what you want to use
<jamespag`> its in the index
<jamespag`> aisrael, context?
<aisrael> jamespag`: openmano charming session w/marks
<jamespag`> aisrael, right - so does it need to do endpoint registration into the keystone server catalog?
<jamespag`> aisrael, if it does not you might want to use the keystone-credentials interface to just get some credentials - really depends
<jamespag`> there is also an interface for that
<aisrael> jamespag`: ack, that's a good starting point, thanks. I'm not up to speed on how deep the keystone integration goes.
<jamespag`> aisrael, ok well around most of the day - will be out for lunch at about 11:30 UTC
<marcoceppi> jacekn: we're moments away from deploying a new review-queue, but it's stalled because it's not on IS infrastructure and I'm having problems fetching remote resources in our qa environment
<jacekn> marcoceppi: thanks, good news
<eeemil> Link to https://jujucharms.com/docs/devel/config-local from https://jujucharms.com/docs/devel/developer-getting-started is broken
<shilpa> Hi, I deployed a charm making use of juju resources. I have pushed empty packages to the charm store, and when i deploy the charm, i see that it is stuck at Unknown package_status None
<shilpa> Can anyone help why we get this status message ?
<kwmonroe> ahoy jamespage -- do you know much about this bundle?
<kwmonroe> https://jujucharms.com/openstack-midonet-liberty/bundle/0/
<kwmonroe> we have an incompatible zookeeper change, and it looks like this bundle is the only one using zk.
<kwmonroe> also jamespage, you're the current maintainer of zookeeper, so if you have a sense of how much your charm is used, that will help us in deciding how much effort to put in to make it upgradable / back compat / etc..
<cory_fu> jamespage: When kwmonroe says "incompatible" he means it's layered, so you can't directly upgrade, and that it adds a required java relation.  Otherwise, the interfaces are the same.
<magicaltrout> see kwmonroe told you we defer to cory_fu for explanations ;)
<kwmonroe> lol
<jamespage> kwmonroe, cory_fu: hmm - I know midonet uses it - and I think contrail does as well - bbcmicrocomputer would be able to confirm
 * jamespage feels sheepish about 'current maintainer' status
<bbcmicrocomputer> kwmonroe: yes, contrail uses zookeeper
<cory_fu> jamespage: I think we're comfortable taking over maintainership since it will be a Bigtop charm now, but we're concerned about the transition
<kwmonroe> thx for the info -- we'll run through https://jujucharms.com/requires/zookeeper and see how deep the rabbits live.
<tinwood> cory_fu, jamespage tells me that you've been discussing actions in the context of reactive.  I was wondering if you had an exemplar of how much of reactive to bring up when running an action? Thanks
<drbidwell> Where do I find how to set the default directory to put lxd containers in?  I am using an existing xfs file system instead of a zfs or btrfs file system.
<neil__> Hi lazyPower, are you around?
<lazyPower> neil__ i am
<neil__> (Just poking the new etcd charm...)
<lazyPower> Awesome! Hows it goin?
<neil__> lazyPower, I've deployed the new charm into my OpenStack cluster, and so far things are not fully hooking up...
<neil__> lazyPower, ...which is I think as expected because I need to do some work in the charms that use etcd client proxies.
<lazyPower> how so?
<neil__> Well I have this charm 'neutron-api', which includes installing etcd to act as a proxy.
<neil__> And right now it's not getting the initial cluster string properly.
<lazyPower> neil__ https://github.com/juju-solutions/layer-etcd/blob/master/reactive/etcd.py#L156
<lazyPower> i found the issue. We must have missed a merge in the final bits before moving for release. the interface was updated, but looks like the invocation block never made it
<neil__> Ah, I was wondering about those lines being commented out :-)
<lazyPower> give me a couple minutes and i'll get you a fixed revision
<neil__> But then I also wondered if the code under hooks/relations/etcd-proxy was intended to be a better alternative to those lines...
<neil__> How is the code under hooks/relations/etcd-proxy hooked into the rest of the charm?  I couldn't find any explicit references to the EtcdProvider class.
<lazyPower> that code is the interface code :)
<neil__> Oh I see, I think EtcdHelper in the commented out code should in fact be EtcdProvider (with an appropriate import at the top of that file), because there's nothing else in the charm that has a provide_cluster_string method.
<neil__> One other thing - could the key be 'cluster' instead of 'cluster_string'?  'cluster' is what my two proxy client charms are expecting.
<neil__> (And I believe previous iterations of the etcd charm used 'cluster' - e.g. my latest version before your recent work, at https://github.com/projectcalico/charm-etcd/blob/master/hooks/hooks.py#L80)
<lazyPower> neil__ - negative, the cluster strings are now computed from the active members coming back from etcdctl
<lazyPower> the etcdhelper is deprecated fully, it was harder to understand than modeling the client utility.
<lazyPower> what i'll wind up doing is pulling the member list of who's actively participating in the cluster and send that over with client credentials. i'm not finding that branch, so i think we legitimately missed porting this
<lazyPower> neil__ https://github.com/juju-solutions/interface-etcd-proxy/pull/3
<neil__> Understood I think.  There's also what looks like reasonable code in the hooks/relations/etcd-proxy/README.md (in case you'd forgotten that).
<neil__> That PR LGTM - thanks.
<lazyPower> neil__ - https://github.com/juju-solutions/layer-etcd/pull/32
<kwmonroe> petevg: that suggestion to use bigtop::jdk_preinstalled: false may not work.. zk builds off of the bigtop base layer, which will report blocked when_not java.ready :/
<petevg> Darn :-/
<kwmonroe> so we'd have to tweak our java check in the base layer
<petevg> That sounds entertaining.
<petevg> Actually, since we're passing in overrides to the Bigtop class, it might be able to check for that value.
<kwmonroe> let's see what cory_fu comes up with after his current meeting.. it may be a non issue if we vote to rename zk to zk-bigdata or something
<petevg> Cool.
<cory_fu> I missed the context for this.  Are we suggesting not making ZK depend on the external java relation?
<cory_fu> Can we make the charm use the Bigtop java install by default but let the java relation override / switch the java when related?
<neil__> lazyPower, Just in a meeting at the mo - will look properly at your layer-etcd PR soon after that finishes.
<cory_fu> kwmonroe, petevg, kjackal: So it sounds like we'll make the bigtop zookeeper charm xenial only and leave the existing charm for trusty
<cory_fu> With that, we could keep the required java relation.
<neil__> lazyPower, Is it a concern that the new integration code in the PR is different from what is suggested in https://github.com/juju-solutions/interface-etcd-proxy/blob/master/README.md ?  Should the latter be updated?
<cory_fu> The relation name changes wouldn't be an issue either
<cory_fu> We'd have to have separate tests for trusty vs xenial
<kwmonroe> there's a pickle in there cory_fu
<kwmonroe> let's say i deploy hadoop-processing, which is currently gonna use all trusty charms.. including openjdk.  then i add xenial zk and it says "blocked waiting for java".
<kwmonroe> so i'm all like "cool, juju add-relation zk openjdk"
<kwmonroe> and then juju's all like "ERROR cannot add relation "openjdk:java hive:java": principal and subordinate applications' series must match "
<cory_fu> tinwood: The majority of the discussion around actions & reactive is at: https://github.com/juju-solutions/charms.reactive/pull/66
<tinwood> cory_fu, yes, jamespage pointed me to that - interesting read.  I'm trying to work out if it's possible to get the relation data whilst I'm doing an action. Hmm.
<cory_fu> kwmonroe: That's a problem with subordinates in general.  It would be an issue in mixed series deployments even if both ZK series were the same charm underneath
<tinwood> cory_fu, i.e. I have  some data set from the other end, and I want to check/use it in an action.
<neil__> lazyPower, sorry, will be AFK again for a while now - back later
<cory_fu> tinwood: The goal of that PR would be to be able to use @when and @action together, so your action would be predicated on the relation state and would get the relation instance
<tinwood> cory_fu, that would work nicely.  In the meantime, can you think of any way to do it with a 'plain' action?
<plars> anyone familiar with a problem where the bootstrap node constantly has *very* high load? I'm not sure if it's the cause or a symptom, but mongodb is hammering the logs
<plars> by very high, I mean up aroun 400-500
<plars> restarting juju-db brings it down temporarily, but it creeps back up pretty quickly
<tinwood> cory_fu, could I set a state, run reactive.main() and then pick up that state and a relation state?
<kwmonroe> plars: i haven't seen that, but i think #juju-dev might know better
<plars> kwmonroe: thanks
<cory_fu> tinwood: You could, and there are some examples of charms that do that.  You could also just use RelationBase.from_state('state.name') to get the relation instance directly (or None)
<tinwood> cory_fu, that latter thing is JUST the ticket.  Thx!  (I thought I'd stared at the code, but I must have missed that).  I'll try to code it so I can revert to an @action() if it lands.  Thanks for your help.
<cory_fu> tinwood: We don't tend to promote using from_state directly because it's generally better to make your preconditions clear with the decorators.
<cory_fu> And no problem, glad to help
<tinwood> cory_fu, yes, i understand that - I'm going to try to get it as early as possible so that it can be converted to a 'proper' @action() @when('state') def... form.
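[Editor's note: a sketch of the 'plain action' fallback cory_fu describes, reaching into the reactive state from an action script; the action name, state name, and connection_string() accessor are invented for illustration.]

    #!/usr/bin/env python3
    # actions/report-db (illustrative)
    from charms.reactive import RelationBase
    from charmhelpers.core.hookenv import action_fail, action_set

    db = RelationBase.from_state('db.available')
    if db is None:
        action_fail('db relation is not ready yet')
    else:
        action_set({'connection': db.connection_string()})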
<lazyPower> neil__ - sounds good. https://github.com/juju-solutions/interface-etcd-proxy/pull/4
<neil__> lazyPower, your PRs all look good to me now.  Is there a way I can easily pull them together into a complete new charm to try out?
<lazyPower> neil__ in standup, give me 15 and i'll push an assembled version to my namespace
<neil__> lazyPower, thank you - sorry to keep bothering!
<lazyPower> neil__ url: cs:~lazypower/etcd-18  if this revision works for you, ping me back and i'll propose it against the proper charm
<neil__> lazyPower, thanks, will try that out now
<neil__> lazyPower, cycle will take about 45 mins though, so please don't hold your breath!
<lazyPower> :) I'm in no rush, I'm moving this week. I have to admit, Pods + movers = the way to do it.
<geetha> Hi, I have uploaded a resource to the controller using the `juju attach` command, but when I try to fetch it using `resource-get`, it just keeps trying to fetch the resource (log: http://pastebin.ubuntu.com/18187857/)
<lazyPower> geetha - the pastebin looks truncated. can you paste the full log, or use a utility like `pastebinit` (which is apt-get installable) to send that paste over to paste.ubuntu.com?
<mskalka> hey guys, I'm having an issue deploying a charm that I've written. It keeps spitting out the following error: ERROR POST https://10.18.150.54:17070/model/3362bfce-179e-44b0-8c02-b6b7bb238d85/charms?series=trusty: URL has invalid charm or bundle name: "local:trusty/charm-DataCenterTopology-0" Does anyone know what I should be investigating to fix this?
<lazyPower> mskalka - try removing charm- from the name in metadata
<mskalka> lazyPower, same error as before
<lazyPower> mskalka - can you link me to your layer/charm?
<mskalka> lazyPower - like to the repo?
<lazyPower> yep
<lazyPower> i'll clone it and see if i can reproduce here, and try to help you find a fix
<geetha> lazyPower: you can see the full log here: http://paste.ubuntu.com/18188302/, it keeps on going..
<neil__> lazyPower (etcd), hmm: hook failed: "proxy-relation-joined" for neutron-api:etcd-proxy  Just going to investigate further.
<lazyPower> geetha - it looks like it ran on line 1752
<lazyPower> i see voting bits below, which tells me the charm went idle
<lazyPower> how large is the resource you are trying to fetch geetha?
<neil__> lazyPower (etcd) - http://pastebin.com/r4UiF3LL
<lazyPower> doh
<lazyPower> neil__ - sorry i know what i did there... duh
<lazyPower> my python is weak today
<kwmonroe> cory_fu: petevg, hive is in the same boat as zookeeper.. the current promulgated hive is non-layered, so it would have a broken upgrade path.  fortunately, that one is only precise, so we should be able to drop in a xenial/trusty version without much hurt.
<kwmonroe> so here's what i propose.. we find all the services that might conflict with an existing promulgated non-upgradable charm, then fire off a note to the list.  if people need precise hive, they'll need to be explicit in their deployment instructions and bundles.  same for zk.
<mskalka> lazyPower - that did it! Thanks again!
<lazyPower> mskalka happy to help :)
<petevg> kwmonroe: I think that makes sense.
<geetha> lazyPower: The resource size is 163M...I have tested it already, it was not taking much time.
<geetha> I am deploying charm from charm store and attaching resource using 'juju attach' command
<lazyPower> geetha - yeah, without more information i'm not much help :( sorry. This looks like you've followed the correct process. You may have uncovered a bug; can you investigate the logs on your controller to see if there are any errors scrolling in there while it's attempting to fetch the resource?
<kwmonroe> geetha: is it still running?
<kwmonroe> geetha: you might want to 'juju ssh' to the unit and have a look in /var/lib/juju/agents/unit-ibm-im/charm/resources to see if the file is still being fetched
<kwmonroe> for anyone else seeing slow resource fetches, logs and observations would be great in bug 1594924
<mup> Bug #1594924: resource-get is painfully slow <resources> <juju-core:Triaged by dooferlad> <https://launchpad.net/bugs/1594924>
<arosales> cory_fu: promulgation issue https://github.com/juju/charm/issues/214
<arosales> cory_fu: feel free to add any examples to that issue specifically if you find it occurs with zookeeper
<lazyPower> neil__ sorry about the delay i got a bit distracted. cs:~lazypower/etcd-19 - i deployed a hollow charm and verified the data on the wire looked good this time around, no more python errors :)
<neil__> lazyPower, thanks, will give that a go now...
<neil__> lazyPower, while that's coming up, could I ask you about my setup?
<lazyPower> sure
<neil__> lazyPower, thanks. So, on the machines where I have etcd-using units, I need an etcd proxy that connects over the network on its db side (using 'cluster') and allows localhost access only on its client side.  I'm not yet sure if I need TLS-security for the localhost connections.
<lazyPower> you won't; you'll need to provide the client keys in your proxy configuration, however
<lazyPower> you won't be able to communicate with the primary etcd cluster without that key.
<neil__> lazyPower, Yes indeed, I definitely need the proxy to have keys on its connection to the non-proxies.  I believe I can get that using the requires part of interface-etcd-proxy - right?
<lazyPower> neil__ - yep. that interface is reactive based, so i dont know that you can consume it out of the box in your charm.. you're supporting a non layered approach right?
<neil__> lazyPower, Is there a cast-iron argument for saying that TLS security isn't needed for the localhost connections?  (I'll be very happy if there is, but I'm not greatly experienced yet in security questions.)
<lazyPower> neil__ - with that being said, the keys are all there for you. cluster, and the 3 ssl keys.
<lazyPower> ah, i dont think its required, thats kind of the point of the proxy right?
<lazyPower> thats solely up to you in the implementation details of that proxy
<lazyPower> i just gave you the pipeline to do so :)
<neil__> Thanks.
<neil__> At the moment, the etcd proxy function is integrated into the charms that need it (neutron-api and neutron-calico).  But I was thinking it might be nicer to have a separate etcd-local-proxy charm, and then just to deploy that alongside the other units that need it, and remove the etcd code from neutron-api and neutron-calico.
<neil__> If I did that I guess it might be quite easy to do the new charm using reactive...?
<lazyPower> yep!
<lazyPower> and you can grab that interface code too, the conversation is done for you, just follow the placement guide. You'll need to update the systemd defaults template for etcd so you can declare the tls options. that's all that's different that i can think of
<neil__> Yes, that's what I was thinking.  But what do you mean by the placement guide?
<mskalka> hey everyone, is there any way to specify machines to deploy to outside of the 'deploy --to' command? Like pre-configured in a config file or something similar
<neil__> mskalka, In the bundle, yes.
<mskalka> neil__, thanks, I'll check that out
<neil__> mskalka, Complication is, I think, that the supported declarations depend on what version of Juju you're using; and it's not well documented.
<valeech> this may be a bit off topic, but does ubuntu openstack autopilot leverage juju to deploy the services on maas devices?
<mskalka> neil__, haha 'not well documented' seems to be the phrase of the month. It gives me a place to work from though.
<neil__> mskalka, But for Juju 2 the bundle can say, for example: "to: [ bird ]", which means "put the first unit of this service in the same place as bird"
<mskalka> neil__, bird in this example being another service?
<neil__> mskalka, yes
<mskalka> neil__, that's exactly what I'm looking for actually
<neil__> mskalka, some doc claims that you can specify machine numbers, but I've not got that to work reliably
<mskalka> neil__, I'm trying to get a charm deployed to every machine another charm operates on
<neil__> mskalka, hmm, I think that's possible.  But you might also want to look up 'subordinate' charms
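[Editor's note: a minimal juju 2.x bundle sketch of the co-location neil__ describes; the charm names and paths are placeholders.]

    # bundle.yaml (illustrative)
    services:
      bird:
        charm: cs:trusty/bird
        num_units: 1
      crush-mapper:
        charm: ./trusty/crush-mapper
        num_units: 1
        to: [ bird ]   # place with the first unit of bird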
<neil__> lazyPower, looks like your latest etcd charm is good: on the etcd proxy client machine I now have /etc/default/etcd with correct ETCD_INITIAL_CLUSTER.
<mskalka> neil__, I don't need to access any of the other charms on the machine, just gather information about the machine and its relation to other machines. It's basically a charm to create crushmaps for Ceph without actually touching the Ceph charm
<neil__> lazyPower, the etcd proxy is still failing to connect to the cluster, but that's expected because I haven't implemented code to get the keys from the relation and add those into /etc/default/etcd.
<neil__> mskalka, Sounds like you just need 'juju status' then.
<mskalka> neil__, if only.. it's got to be able to do some code execution
<lazyPower> neil__ - excellent
<lazyPower> neil__ - add the TLS keys to the defaults file and you should be in like flynn
<lazyPower> do you need those? i'm pretty sure i have them in my design notes
<neil__> do I need which?
<lazyPower> the tls flags to provide the defaults file
<neil__> sorry, I'm afraid I'm lost...
<lazyPower> neil__ https://gist.github.com/chuckbutler/3649337495ca3fa95bb6a3bdb11fc70f
<neil__> sorry, still not quite understanding - what is it that supports those flags?
<lazyPower> neil__ : thats what you'll want to set in the defaults file for flags to etcd
<magicaltrout> ran a 40m cat 5 cable the length of my garden..... then found it's broken
<magicaltrout> woop
<lazyPower> so, receive and write out those client certificates (the interface code does this for you if you're using that), and use the path you saved those client certs to populate those flags in the defaults file, and you should be g2g w/ the new etcd implementation
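[Editor's note: a sketch of the consuming side lazyPower outlines: write out the client TLS material sent over the etcd-proxy interface, point the defaults file at it, and restart the proxy. The state name, accessor names, and template name are assumptions; check the interface README for the real ones.]

    # reactive/etcd_local_proxy.py (illustrative)
    from charms.reactive import when
    from charmhelpers.core import host
    from charmhelpers.core.templating import render

    @when('etcd-proxy.available')                  # state name assumed
    def configure_proxy(etcd):
        creds = {'ca': etcd.client_ca(),           # accessor names assumed
                 'cert': etcd.client_cert(),
                 'key': etcd.client_key()}
        for name, data in creds.items():
            host.write_file('/etc/ssl/etcd/%s.pem' % name, data.encode())
        # templates/etcd-defaults would render ETCD_PROXY,
        # ETCD_INITIAL_CLUSTER and the ETCD_PEER_*_FILE flags
        # pointing at the files written above
        render('etcd-defaults', '/etc/default/etcd',
               {'cluster': etcd.cluster_string()})
        host.service_restart('etcd')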
<lazyPower> magicaltrout - blarg, hardware problems
<magicaltrout> indeed
<neil__> lazyPower, Right, got it, thanks.
<neil__> lazyPower, But I see two sets of possible settings: ETCD_CERT_FILE, ETCD_KEY_FILE, ETCD_TRUSTED_CA_FILE; and ETCD_PEER_CERT_FILE, ETCD_PEER_KEY_FILE, ETCD_PEER_TRUSTED_CA_FILE.  For a proxy connecting to a non-proxy, do you know which ones I need?  (I would guess the ones without _PEER_, but not at all sure about that.)
<lazyPower> the ones without PEER assume server installation
<lazyPower> those keys are client keys not server keys :)
<lazyPower> so i'm 90% certain you want the PEER flags
<neil__> lazyPower, Thanks, I think you're right - I found a note elsewhere that says "etcd proxies communicate with the cluster as peers so they need to have peer certificates." (http://docs.projectcalico.org/en/1.3.0/securing-calico.html)
<lazyPower> solid :) sounds like you're all set. Let me know how you make out with it. I'm going to be taking off in a bit
<cholcombe> thedac, mojo question for ya
<thedac> cholcombe: sure
<cholcombe> thedac, i'm working on a mojo spec for gluster.  I'm wondering what i should put in the MOJO_SERIES field of the deploy line: deploy config=gluster-default.yaml delay=0 wait=True target=${MOJO_SERIES}
<cholcombe> it says it can't find 'trusty'
<cholcombe> thedac, i can show you the whole spec if you'd like.  i can push it up to a branch
<thedac> target is the name in the config file (gluster-default.yaml) that juju-deployer uses to deploy. In openstack specs these are names like trusty-liberty and xenial-mitaka
<thedac> But it can be anything you named it in the bundle file
<cholcombe> i see
<cholcombe> thedac, cool i think that helps :)
<thedac> cool, let me know if it does not
<cholcombe> thedac, ah yes this was the thing i was hitting.  i ran mojo again and it says 'gluster' not found; available: base.  When i try putting base, that also fails
<cholcombe> thedac, i should prob push this up to lp so i can show ya
<thedac> cholcombe: just as an example of multiple targets in a single bundle file http://bazaar.launchpad.net/~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs/view/head:/helper/bundles/full.yaml
<thedac> sure, it is probably the bundle file
<cholcombe> thedac, http://bazaar.launchpad.net/~xfactor973/mojo/gluster/revision/263
<thedac> ok, I'll take a look
<cholcombe> thedac, i'm a n00b i don't know what the heck i'm doing lol
<thedac> cholcombe: based on gluster-default.yaml you want target=base. Are you using juju 2.0? I don't think mojo is 2.0 ready
<cholcombe> thedac, i believe i'm using juju 1.x
<thedac> ok
<cholcombe> thedac, does it need a clean juju environment?
<thedac> Well, at least one without gluster deployed
<cholcombe> ok cool
<cholcombe> i think it got further
<cholcombe> it's complaining now about no charm metadata for precise/gluster/metadata.yaml
<cholcombe> i need to restrict it to trusty/xenial somehow
<thedac> ah, we need some more info in the bundle file.
<thedac> You can set series:
<thedac> cholcombe: let me fix the bundle a bit. :)
<cholcombe> thedac, much appreciated :)
<thedac> cholcombe: try this http://pastebin.ubuntu.com/18196928/
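[Editor's note: roughly what the fixed deployer bundle looks like, based on thedac's pointers: the target name heads the stanza and the series is pinned explicitly. The values are illustrative; the real fix is in the pastebin above.]

    # gluster-default.yaml (sketch)
    base:                # the deploy target name, matching target=base
      series: trusty     # pin the series so deployer stops reaching for precise
      services:
        gluster:
          charm: gluster
          num_units: 3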
<cholcombe> thedac, interesting
<cholcombe> thedac, it's doing something \o/
<thedac> :)
<cholcombe> thedac, that got me to the testing part where it says it can't find my test script
<thedac> cholcombe: let me look again
<cholcombe> i assume it copies the mojo dir onto the unit somewhere?
<thedac> yes. /srv/mojo/$MOJO_PROJECT/$MOJO_SERIES/$MOJO_WORKSPACE
<cholcombe> hmm nothing is on there
<thedac> cholcombe: on your local not the instance
<cholcombe> ok
<cholcombe> thedac, i think i have a typo
<thedac> ah yes, test_gluster_store.py vs test_gluster.py
<cholcombe> right
<cholcombe> looks like once the service is up it's quick to change stuff and retest
<thedac> Yes, the deploy is the slowest bit
<cholcombe> thedac, success!
<thedac> \o/
<cholcombe> :)
<cholcombe> thedac, alright cool i'm going to put this up for review
<thedac> sounds good. I am heads down the rest of my afternoon. but I can take a look tomorrow
<cholcombe> sure thing
<cholcombe> thedac, thanks a bunch for the help :)
#juju 2016-07-01
<Facso> How do bundles handle exposing ports? or do they not?
<Facso> I'm about to test this with a simple web and database charm deploy, but thought I'd ask.
<kjackal> marcoceppi and juju at devopsdays Amsterdam in about an hour: http://livestream.com/accounts/20260359/devopsdays-amsterdam-2016
<magicaltrout> woop
<axino> lp:charms/trusty/ubuntu-repository-cache exists, I prepared a Xenial version at https://code.launchpad.net/~axino/charms/trusty/ubuntu-repository-cache/xenial-ready/+merge/298880, is this MP the right thing to do to make it happen ?
<magicaltrout> nice flip flops marcoceppi
<neiljerram> lazyPower, good morning!
<neiljerram> lazyPower, I've created an etcd-local-proxy charm at https://github.com/neiljerram/etcd-local-proxy (by starting from and hacking layer-etcd).  I wonder if you might take a look at it and see if it looks sane to you?
<neiljerram> lazyPower, There's also a needed PR for interface-etcd-proxy at https://github.com/juju-solutions/interface-etcd-proxy/pull/6
<neiljerram> lazyPower, However, when I build and deploy etcd-local-proxy, and also cs:~lazypower/etcd-19, I'm seeing TLS auth issues, so it's possible there's something wrong in how the certificates are generated, or in the etcd config at either end.  Have you done successful end-to-end TLS testing with an etcd master and proxy?
<marcoceppi>  magicaltrout hah, thanks ;)
<magicaltrout> good talk though
<magicaltrout> looks like you got a decent crowd
<lazyPower> neiljerram - i have not. only verified the data exchange on the wire. If you need server certificates, we can do that but its a different dance over the relation wire. It involves generating a CSR and having it signed by the etcd cluster
<lazyPower> s/cluster/leader/
<lazyPower> neiljerram - i think i knwo why you're getting ssl errors as well. the fork of etcd there is generating its own certificates
<lazyPower> wait no, i jumped the gun i see you removed that from metadata
<lazyPower> mbruzek - ping
<mbruzek> Hello lazyPower
<lazyPower> hey o/  neiljerram has been working with our latest release of etcd, and had some questions. Would be good to get your eyes on https://github.com/neiljerram/etcd-local-proxy
<mbruzek> neiljerram: What TLS issues are you seeing?
<neiljerram> mbruzek, thanks - first I was seeing (IIRC) 'x509 certificate signed by unknown authority'; then a bit later 'remote error: bad certificate'
<neiljerram> I will start a redeployment now so I can provide more detail...
<mbruzek> neiljerram: OK. Can you describe the model? I have got successful etcd to etcd TLS before (not using proxy)
<neiljerram> mbruzek, the model is etcdctl ----- local etcd proxy ----- remote etcd master
<mbruzek> And you are running etcdctl on your machine or in the cloud?
<neiljerram> mbruzek, there is TLS security for the connection between the proxy and the master; but no TLS security between etcdctl and the local proxy.
<neiljerram> mbruzek, etcdctl invocations are on the same machine where the proxy is running
<neiljerram> mbruzek, so the etcd master is  cs:~lazypower/etcd-19, which is Charles's latest unreleased version of layer-etcd
<neiljerram> mbruzek, and the etcd proxy is https://github.com/neiljerram/etcd-local-proxy, which is a charm that basically just uses the 'requires' side of interface-etcd-proxy
<lazyPower> mbruzek - i pushed etcd-19 in my namespace due to the missing etcd-proxy bits. We translated the interface, but missed the reactive code to support that
<lazyPower> everything but the reactive code in layer-etcd has merged in master of their respective repositories though, so we can rev ~containers once that lands
<lazyPower> if its ready for that
<mbruzek> I just diffed etcd-19 and etcd-local-proxy; the etcd.py looks quite different.  Let me see if I understand correctly: you are connecting the etcd charm with a different etcd-proxy charm and the tls is not working?
<mbruzek> from etcdctl on proxy and the etcd on the master.
<neiljerram> mbruzek, That's all correct.   I really just used etcd-19 as a structural starting point, but etcd-local-proxy does a lot less, because it doesn't have to do leader election or registration, or generate its own certificates.  In principle - I think - it just needs to do the 'requires' side of the 'proxy' relation, save the credentials once they're available, update /etc/default/etcd, and restart the local etcd (proxy) service.
<mbruzek> neiljerram: Yeah that looks right.
<mskalka> hey everyone, is there any method for pulling the machine ID or INS-ID inside a hook or action?
<neiljerram> My deployment is up now, and here's what I see: http://pastebin.com/Xs4YLDtY
<mbruzek> neiljerram: OK please add to the pastebin: openssl x509 -in /tmp/etcd_cert -text
<mbruzek> neiljerram: That will print out the text of the certificate to make sure the one on your system is valid.
<mbruzek> I suspect it is, but want to start there.
<neiljerram> mbruzek, http://pastebin.com/4Dfa2r1V
<mbruzek> neiljerram: And these addresses are correct ? IP Address:54.164.27.251, IP Address:172.31.51.7
<neiljerram> mbruzek, 54.164.27.251 is the public IP of the AWS instance that contains etcd-19.
<neiljerram> and 172.31.51.7 is that same machine's private IP
<mbruzek> OK that is good so I believe the certificate is valid and stored correctly on the proxy machine.
<neiljerram> (the machine itself only knows its private IP)
<mbruzek> neiljerram: Yep
<mbruzek> neiljerram: Can you run the following? : etcdctl --debug --cert-file=/tmp/etcd_cert --key-file=/tmp/etcd_key --ca-file=/tmp/etcd_ca and pastebin the result?
<mbruzek> wait add the "ls /" to the end
<neiljerram> mbruzek, that seems to work: http://pastebin.com/3ze47iCz
<mbruzek> sweet!
<mbruzek> OK so I think the problem is that etcd-proxy reads the defaults file, but etcdctl does not.
<mbruzek> neiljerram: I am not familiar enough with etcdctl to know how to make those values the default; some tools use env variables, other tools have a configuration file.
<neiljerram> But I didn't think that the hop from etcdctl to the local proxy should use TLS at all...  So I'm a bit confused!
<mbruzek> neiljerram: Are you sure that etcd-proxy is properly configured with these keys/certs?
<neiljerram> I was just going to pastebin the proxy's config....
<neiljerram> ... http://pastebin.com/ewTBy5mJ
<mbruzek> I wonder if "PEER" is not valid in this case, let me do some searching
<mbruzek> Also I know they are in the defaults file, but are we sure that etcd is actually getting the values? Do you see in the etcd logs that the key/cert/ca is read in?
<mbruzek> https://coreos.com/etcd/docs/latest/security.html
<neiljerram> mbruzek, Yes, I think so: http://pastebin.com/jhg1HLve  (I just added ETCD_CA_FILE and other non _PEER_ settings to the config - but that didn't make any difference to our observations so far)
<mbruzek> neiljerram: So after doing that, you are still not able to connect using etcdctl without the  cert and key flags?
<neiljerram> mbruzek, Perhaps the etcd proxy doesn't really support TLS on one side (towards the cluster) but not on the other (towards the client).  So when it gets a client request it requires TLS auth, even if that client request is over http.  Is that possible?
<neiljerram> mbruzek, Yes, correct.
<neiljerram> mbruzek, and with those flags, I still _can_ connect
<mbruzek> neiljerram: I don't know etcd proxy well enough to answer that.
<mbruzek> neiljerram: Great. Still that is a pretty bad user experience, let me see if I can find more information about etcdctl to see if we can eliminate those flags
<neiljerram> mbruzek, Do you know who implemented the 'pillowmints' in the etcd charm?  I think they support that hypothesis.  (They set up env vars for etcdctl.)
<mbruzek> neiljerram: "pillowmints" was used by lazyPower for lack of a better name. I questioned him about that during a code review.
<neiljerram> mbruzek, It is a curious name :-) but nice.
<neiljerram> mbruzek, But it sets up these vars, for local etcdctl usage on the machine where the etcd master node is installed:
<neiljerram> ETCDCTL_CA_FILE=/etc/ssl/etcd/ca.pem
<neiljerram> ETCDCTL_CERT_FILE=/etc/ssl/etcd/server.pem
<neiljerram> ETCDCTL_KEY_FILE=/etc/ssl/etcd/server-key.pem
<mbruzek> Oh there you go!
<mbruzek> export ETCDCTL_CERT_FILE and the others and try etcdctl again
<neiljerram> mbruzek, Yes, that works.  But I still don't feel like I understand why.  How does TLS auth mix with doing a request over http:// (i.e. _not_ over https://) ?
<mbruzek> neiljerram: According to your pastebin with debug http://pastebin.com/3ze47iCz the http address is getting converted to https and my guess is that the etcdctl is making the direct https connection
<mbruzek> neiljerram: etcdctl must be talking http to the proxy but perhaps it is actually talking https to the server as well.
<neiljerram> mbruzek, Oh! So the proxy isn't really proxying; but rather redirecting!
<mbruzek> neiljerram: That would be my assessment from reading the pastebin you have there.
<neiljerram> mbruzek, It appears this is all a behaviour of etcdctl, rather than of the proxy.  'etcdctl --no-sync ls /' works even without those env vars.
<mbruzek> neiljerram: Interesting
<neiljerram> mbruzek, Thanks so much for helping me to understand all this.  I think I have all the pieces and understanding that I need now.
<neiljerram> lazyPower, Thanks also.
<mbruzek> neiljerram: Excellent, glad to help. If you need anything else ping me here.
<neiljerram> mbruzek, Thanks, I'll do that.
<mbruzek> I am the one that wrote the TLS layer so it is my responsibility. What I have found is TLS is hard
<lazyPower> mbruzek i try to help :)
<xilet> After a reboot a series of my units are stuck with "agent is lost, sorry! See 'juju status-history glance/0'"  The status-history is normal until the reboot and nothing since, any way to kickstart a container back to life? (2.0-beta)
<jose> xilet: did IP addresses change?
<xilet> No
<cholcombe> reactive is doing some odd things i'm having trouble figuring out
<cholcombe> i have a base layer that sets a state after it installs everything.  in my layer that uses it i @when() that state and for some reason the base layer never installs what it should but the state is firing
<cholcombe> i r confused :)
<cholcombe> nvm icey and i figured it out
<Yacka> It seems like --config has been removed from deploying bundles?
<Yacka> If I create a bundle for a vanillastack application and want to keep the configuration options separate what's the prescribed way in 2.0?
<Yacka> ERROR Flags provided but not supported when deploying a bundle: --config
<Yacka> Excellent.
<Yacka> Why should a deploy of a bundle be any different than a deploy for a charm?
<Yacka> They use the same command.
<kjackal> kwmonroe: thank you for the cleanup on hbase. I will study all the changes and align my work based on your pointers. Many thanks!
<kwmonroe> np kjackal -- it wasn't too complicated, just doing actions a bit better than we used to.  i also like the collapsed report_status() handler vs individual handlers per missing state.
#juju 2016-07-02
<bbaqar> Hey guys. I have a charm unit that is showing up in juju status in an error state. I dont see any error in the juju logs and when i try to resolve the charm I get the error message: "ERROR unit "nova-compute/1" is not in an error state"
<bbaqar> for more info have a look at my ticket https://bugs.launchpad.net/juju-core/+bug/1598329
<mup> Bug #1598329: juju status showing charm unit in error state but getting "ERROR unit is not in an error state" message when resolving charm <juju-core:New> <https://launchpad.net/bugs/1598329>
<bbaqar> mup yes that was me
#juju 2017-06-26
<kjackal> Good morning Juju world!
<armaan> jamespage:  Hello, for upgrading from Mitaka to Newton, is it essential that we upgrade the host OS from trusty to Xenial? Is that assumption correct?
<jamespage> armaan: Newton is only available on >= xenial for Ubuntu so yes that is the case
<jamespage> however I'm not quite sure where Juju is on series upgrades between Ubuntu releases yet
<armaan> jamespage: ok, is it possible to throw away one lxc-managed unit of a service, and then redeploy it with Xenial, instead of the in-place upgrade of a whole OS?
<armaan> jamespage: I think "juju deploy" has "--series" for that; but I'm not sure if "juju add-unit" does too?
<jamespage> armaan: no - I think the juju approach to this was to allow you to in-place upgrade units, and they inform juju that the application is now series X for example
<jamespage> but as I said I'm not sure whether that work is in 2.2 or not
<jamespage> there are other complexities to the upgrade process as well - for example having to switch from LXC -> LXD
<armaan> jamespage: You mean Mitaka units are managed via lxc and Newton units are managed via lxd?
<armaan> jamespage: is there any documentation -- which i can look into?
<armaan> jamespage: perhaps, this script will be useful in migrating lxc containers to lxd. https://github.com/lxc/lxd/blob/master/scripts/lxc-to-lxd
<stub> Anyone know where the metadata.yaml documentation is?
<stub> Or to answer my question, can charms declare their supported architectures now?
<rick_h> stub: no, arch hasn't made it to the metadata. It's still a constraint since it's a machine property vs a charm one.
<stub> rick_h: I've got people wanting to add supported architectures to the snap layer, so a unit goes blocked if you deploy it onto an unsupported architecture. I was thinking that would be better done as a deploy time check.
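[Editor's note: a sketch of the unit-side check stub describes, given that metadata.yaml cannot declare architectures; the supported list and state name are made up, and a true deploy-time check would need support in juju itself.]

    # reactive/snap.py (illustrative)
    import platform
    from charms.reactive import when_not, set_state
    from charmhelpers.core.hookenv import status_set

    SUPPORTED_ARCHES = {'x86_64', 'aarch64'}  # assumed per-charm list

    @when_not('snap.arch-checked')
    def check_architecture():
        machine = platform.machine()
        if machine in SUPPORTED_ARCHES:
            set_state('snap.arch-checked')
        else:
            status_set('blocked',
                       'architecture %s is not supported' % machine)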
<stokachu> jamespage: is the charmstore skipping a stable Ocata and waiting on Pike instead?
<aluria> hi o/ -- in Juju2, is there a way to get where an application is bound to? I see metadata.yaml's extra-bindings but I'd like to check on a live environment what the config is
<jamespage> stokachu: no
<jamespage> its just not been updated for ocata
<jamespage> yet
<stokachu> ok
<stokachu> jamespage: any eta by chance?
<chrome0> Network space question: is it possible to update juju's notion of available spaces on a machine? Eg. in cases where the provisioner's knowledge about available network resources is incomplete?
<chrome0> ^^ f.ex. a maas deployed machine which has no complete knowledge of the network resources, or when doing a juju add-machine ssh:... style manual placement
<rick_h> chrome0: hmm, so there's reload spaces but that's about updating Juju to know about the spaces the underlying IaaS substrates knows about.
<rick_h> chrome0: there's juju add-space, but honestly I've not tinkered with spaces and manual machines because typically, spaces are set as constraints during deploy "bind x to y" and so the underlying system makes sure the machines/vms have interfaces on y
<chrome0> Yep - but in this case it's really applying an override if the underlying substrate has incomplete data
<rick_h> chrome0: so the goal, or the thing that's hurting you is that you've got a machine with different networks but you're not able to tell the application which ones to use for what purposes? (matching the bindings to the networks on the host that are sitting there available?)
<chrome0> rick_h : Yea so I have net resources on a machine but juju doesn't know about them.
<rick_h> chrome0: so the test would be, can you use juju add-space and build up a mental model of what's there and then deploy with bindings ?
<chrome0> rick_h : The spaces are present (and in use on other nodes). The machine I'm referring to here is the maas bootstrap node - whose networking maas obv doesn't control
<chrome0> I'd like to add a container to that maas node and use bindings there
<rick_h> chrome0: ? so you're trying to deploy something with Juju onto the maas machine itself?
<chrome0> Yes
<rick_h> oic, you just wanted to take the hardness up a level :P
<rick_h> hmmm, so you created the container manually on the maas machine and then used juju add-machine to register it in juju?
<chrome0> Well, making use of resources and all ;-)
<rick_h> chrome0: hah, yea it's a long standing feature request from folks for sure
<chrome0> I was actually thinking of doing add-machine for the metal itself
 * rick_h cringes a bit at that
<rick_h> the worry there is...is there any pattern of deploy/remove/destroy that could try to take down the maas machine then?
<rick_h> chrome0: the normal pattern folks use is to create a container on the maas machine and then juju add-machine to the model so you can deploy to it
<rick_h> it keeps Juju from messing at anything above the container level and for most things it's just as performant/etc.
<chrome0> Ok, that works for me as well
<chrome0> rick_h : ...but I'm still unsure how juju would know about the containers' network spaces?
<rick_h> chrome0: well, spaces in the end is tracking of subnets matched against what's on the network definitions in eni
<rick_h> chrome0: so...if the container has access to IPs on the host machine network then it should match up?
<rick_h> chrome0: if not...then there's the issue in that the container network info is scoped into a locked box and won't be able to see out to the other machines in the model
<chrome0> Yep
<rick_h> chrome0: honestly, all I can say is to experiment with it. If the containers can get IPs on the host (and that's in the container that's manually setup so juju doesn't control that) it might "just work"
<chrome0> Ack, cheers
<rick_h> chrome0: if not, I'd hit up the mailing list and see if we can get the network specialist folks to poke at it and suggest tweaks
<rick_h> they're probably asleep atm
<chrome0> I gotta run now, but will give that a shot later
<rick_h> chrome0: k, let me know how it goes
<chrome0> Cheers
<skay> lazyPower: I have a filebeats charm question. the example shows it being deployed in the same environment with logstash and the service to get logs from
<lazyPower> skay: yep
<skay> lazyPower: but I'd like to use filebeats from a different environment to send to an ELK stack running elsewhere. is that possible?
<lazyPower> skay: not currently, but the layer could be extended to support a manual configuration option for the es/logstash endpoint
<skay> lazyPower: okay. thanks for the sanity check
<lazyPower> skay: np, sorry about the limited feature set
<lazyPower> i haven't touched filebeat in quite a long time, the focus moved to k8s proper.
<skay> lazyPower: no apologies
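[Editor's note: a sketch of the extension lazyPower suggests for layer-filebeat: a config option naming an external logstash endpoint, used when no relation is present. The option, state, and template names are invented.]

    # reactive/filebeat.py (illustrative addition)
    from charms.reactive import when_not
    from charmhelpers.core import host
    from charmhelpers.core.hookenv import config
    from charmhelpers.core.templating import render

    @when_not('filebeat.connected')             # state name invented
    def configure_manual_endpoint():
        hosts = config().get('logstash_hosts')  # assumed new string option
        if hosts:
            render('filebeat.yml', '/etc/filebeat/filebeat.yml',
                   {'logstash_hosts': hosts.split(',')})
            host.service_restart('filebeat')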
<rick_h> skay: on the same controller? 2.2?
<lazyPower> oh good point rick_h, xmodel relations!
<skay> rick_h: I don't know. maybe in our staging environment, but I don't know how prod is set up
 * lazyPower had cranial flatulence
<skay> rick_h: lazyPower: tell me more, docs?
<rick_h> skay: so there's a feature flagged feature in 2.2 that the team's working on that lets you add relations across models
<rick_h> skay: sec, I've got a sheet on my desk for a blog post I want to get going but not done it yet. Let me see what I can pull up
<skay> rick_h: I am way behind on juju news!
<rick_h> skay: :) https://www.youtube.com/playlist?list=PLW1vKndgh8gI6iRFjGKtpIx2fnJxlr5FF just to plug
<skay> lazyPower: also, what is k8s an abbreviation for?
<lazyPower> skay: kubernetes
<skay> aw, this is like i18n or l10n
<rick_h> skay: can you load this? https://goo.gl/bHMVkx
<skay> rick_h: yes
<rick_h> heh yea "1, 2, skip a few..."
<rick_h> skay: so that's not in the official docs yet and such, but rough notes on what it is and how it works.
<rick_h> skay: we want to get users testing it between 2.2 and 2.3 so I'll be pushing out a larger call to bang on it, but your use case really matches up with what we want to do
<rick_h> skay: so any tinkering is <3
<lazyPower> ^
<lazyPower> bugs will be helpful too if you find defects
<rick_h> +1
<lazyPower> (in layer-filebeat)
<lazyPower> (and juju core)
<bdx> how can I completely uninstall juju-2.0.1 beta from osx?
<bdx> users experiencing difficulty installing 2.2 due to pre-existing beta install
<rick_h> bdx: brew uninstall --force juju
<rick_h> bdx: I had to do this as I was tinkering with things
<bdx> rick_h: awesome, thx
<rick_h> bdx: then brew upgrade and make sure brew info juju@2.0 looks like it's what you want there
<bdx> entirely ... `brew uninstall --force juju` doesn't seem to be doing the trick
<bdx> trying to find it and rm manually
<rick_h> bdx: does it run successfully? or fail with an error?
<bdx> successfully
<bdx> it just didn't actually uninstall 2.0.1
<lazyPower> :S
<bdx> rick_h: a combination of --force install and force uninstall seemed to do the trick
<bdx> I'm sorry I don't have more specifics here
<bdx> thx thx
<rick_h> bdx: hmm, ok thanks for the heads up
<rick_h> bdx: if you find something let us know and we'll try to get the word out and make sure folks know the work around
<Budgie^Smore> o/ juju world
<rick_h> what's up Budgie^Smore
<lazyPower> \o Budgie^Smore
<Budgie^Smore> not much waiting for a scheduling email
<natefinch> howdy juju folks.  Just curious if auto scaling is a thing that is being planned anytime in the near future?
<lazyPower> natefinch: SimonKLB wrote an autoscaler charm actually
<lazyPower> natefinch: @k8s-bot ok to test
<lazyPower> gah
<lazyPower> natefinch: https://jujucharms.com/charmscaler/2
<natefinch> wow
<natefinch> that was not the answer I was expecting, but that is super cool
<lazyPower> natefinch: Happy to help :)
<lazyPower> and it's getting better every day, SimonKLB has been pretty responsive to bugs/requests.
<natefinch> lazyPower: current company is using Nomad, but it doesn't do well with things that don't work in docker, like cassandra.  and since we use several of these apps, I was thinking about juju, but autoscaling is something we'd want to have available once we get to general availability
<cholcombe> i think ec2 is still having that issue i saw on friday where instances take an hour or more to spin up
<lazyPower> cholcombe: i just ran a deploy in us-east-1 that took me less than 20 minutes to bootstrap, execute tests, and complete.
<cholcombe> lazyPower: maybe something is wrong with us-west-2 then?
<cholcombe> maybe it's just me then.  hmm
<lazyPower> cholcombe: possible
#juju 2017-06-27
<magicaltrout> if the charm store says X revisions
<magicaltrout> is it possible to find out when the last one was?
<kjackal> magicaltrout: try "charm show". There is an "archive-upload-time" field that might be the one you need
<rick_h> magicaltrout: so that's going away. It's not really true at this point and is from trying to tie the source control/etc into the charm metadata.
<rick_h> magicaltrout: that's a good point if you're looking for some sort of "last time updated" which we can do with when things were pushed to the channel in the store.
<magicaltrout> thanks chaps
<SimonKLB> tvansteenburgh1 lazyPower bundletester in charmbox still seems to experience https://github.com/juju-solutions/bundletester/issues/108
<SimonKLB> bundletester is on 0.12.2 though
<lazyPower> SimonKLB: i executed a test on a freshly bootstrapped controller. what substrate are you encountering this issue on?
<SimonKLB> lazyPower: i think there might be something else that is bringing in the latest version of the websocket-client
<lazyPower> i wonder if matrix is bringing that in
<lazyPower> but whats weird is that i didn't run into that
<lazyPower> nor did kwmonroe_
<SimonKLB> lazyPower: im not running matrix
<SimonKLB> could it be amulet?
<lazyPower> SimonKLB: referring to something else bringing in that latest version of wsc
<lazyPower> SimonKLB: rebuild omitting these lines of the setup https://github.com/juju-solutions/charmbox/blob/master/charmbox-setup.sh#L37-L41
<lazyPower> SimonKLB: lmk if that resolves the issue for you
<tvansteenburgh1> if it were that, wouldn't it have broken for you too?
<lazyPower> it depends on juju and theblues, both of which i believe pull in websocket-client
<lazyPower> tvansteenburgh: i would think so but i'm unsure if simon is still broken and i wasn't on a fresh bootstrap
<lazyPower> i'm not terribly familiar with this issue
<SimonKLB> lazyPower: tvansteenburgh it's libjuju
<SimonKLB> or maybe not
<SimonKLB> :D
<SimonKLB> Collecting websocket-client>=0.18.0 (from jujuclient>=0.53->juju-deployer->-r /tmp/bundletester-KlKNGV/charmscaler/test-requirements.txt (line 3))
<SimonKLB>   Downloading websocket_client-0.44.0-py2.py3-none-any.whl (199kB)
<SimonKLB> juju-deployer ?
<lazyPower> well thats coming from apt
<lazyPower> interesting
<lazyPower> unless one of these from source installs is overriding it
<SimonKLB> lazyPower: i had juju-deployer in my test-requirements.txt
<SimonKLB> that might not even be necessary
<SimonKLB> i think it was before i started using charmbox and needed to install it myself?
<SimonKLB> lazyPower: removing it made the test run at least, but not sure if the best solution is to remove it since people not running tests inside charmbox will be required to install it via apt first then, right?
<SimonKLB> just don't want to shoot myself in the foot here by simply removing it
<lazyPower> SimonKLB: fair, can you get a bug filed on the repo? I don't want to lose track of this and we should collab around it on the issue
<tvansteenburgh> wait
<tvansteenburgh> there's already a bug
<SimonKLB> tvansteenburgh: want me to just mention juju-deployer in the bundletester issue?
<tvansteenburgh> sure, that's fine. fixing this https://github.com/juju-solutions/bundletester/issues/108 is the correct fix
<lazyPower> tvansteenburgh: is that going to be published to the ppa or should we move charmbox to install from pypi?
<tvansteenburgh> what, bundletester?
<lazyPower> yes
<tvansteenburgh> bundletester has always been installed from pypi, it doesn't exist in a ppa
<lazyPower> tvansteenburgh: the thing is juju-deployer is coming from the ppa, bundletester is already being installed from pypi
<SimonKLB> https://github.com/juju-solutions/bundletester/issues/108#issuecomment-311365889
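A minimal sketch of the test-requirements.txt change SimonKLB landed on (the remaining entries are hypothetical; the point, per the pip output above, is only that dropping juju-deployer stops pip pulling an unpinned websocket-client in via jujuclient):

    # test-requirements.txt
    amulet            # hypothetical pre-existing entry
    requests          # hypothetical pre-existing entry
    # juju-deployer   # removed: its jujuclient dependency drags in
    #                 # websocket-client>=0.18.0, clobbering charmbox's pinned version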
<vlad___> Hey guys does anyone know of a way to log into a node before juju has acquired it from maas? I can't get an answer in other IRCs and found no docs on it.
<rick_h> vlad___: ? juju acquires it pretty much right away. Juju provides init script data as part of starting the machine.
<rick_h> vlad___: I think there's some maas script stuff you can inject in there as part of the startup script that can be in addition to
 * rick_h goes to look
<vlad___> rick_h: Awesome thanks for the quick and informative answer!
<rick_h> vlad___: check out https://askubuntu.com/questions/636837/are-there-examples-of-custom-installation-scripts
<vlad___> rick_h: Thanks again going under to read
<rick_h> vlad___: hmm, so in the maas docs it says "The older debian-installer (preseed) method has been removed. Some remnants of preseed may still be found however. See /etc/maas/preseed directory."
<rick_h> vlad___: that seems to be the way to do this type of thing I think per http://caribou.kamikamamak.com/2015/06/26/custom-partitioning-with-maas-and-curtin-2/ and https://www.stackevolution.com/node/17
<rick_h> vlad___: best I can come up with atm
<vlad___> rick_h: Awesome!
<kwmonroe> SimonKLB: i was watching your bundletester chatter with lazyPower, and it just dawned on me.. did you bundletest against a jaas controller?
<vds> Hi all, I'm having issues in bootstrapping an openstack cloud: https://paste.ubuntu.com/24964072/
<vds> username, password and project id are correct, how do I get more information about what is going wrong
<rick_h> vds: so you can try adding --debug but I think it'll show the same. Is the auth endpoint right as well?
<vds> rick_h: one thing I don't understand is : 16:21:07 INFO  cmd cmd.go:141 no credentials found, checking environment
<vds> rick_h: but I did add the credentials...
<rick_h> vds: if you run juju show-credentials they're listed?
<rick_h> sorry, juju credentials
<vds> rick_h: yep
<vds> this is what I get with --debug https://paste.ubuntu.com/24964310/
<rick_h> hmm, beisner any hint on the service urls bit from ^ ?
<Budgie^Smore> o/ juju world
#juju 2017-06-28
<SimonKLB> kwmonroe: very late response, but no, it was against aws
<SimonKLB> kwmonroe: it's solved though, i updated the bundletester issue thread about my issue
<SimonKLB> any jujucharms.com admins here? need help with a broken login due to updated user on launchpad
<Neepu> Hey. I'm looking into backing up an OpenStack system that is built with nova-lxd-containers (All-in-one Ubuntu OpenStack). Is there any documentation to help with that? I'm assuming that backup is slightly different than usual due to the use of containers.
<tvansteenburgh> SimonKLB: rick_h can probably point you in the right direction
<tvansteenburgh> Neepu: beisner or jamespage might know
<beisner_> hi Neepu - the all-in-one openstack scenario is a handy dev/test/demo scenario, but it isn't recommended for production use cases.  openstack is a cloud, and clouds are meant to be distributed.  as such, we've not explored or documented a backup routine for that use case.
<Neepu> hi beisner_ ty for the answer, if i were to backup such a system, do you have a suggestion? it is not meant to be used in a production environment.
<beisner_> Neepu - there are a number of things that could be done.  when you say back up, are you talking about backing up the backing storage for running instances, or backing up the control and data plane of the cloud itself, or both?
<Neepu> I haven't really settled on that yet, as i'm not too familiar with LXD containers yet. But the requirements i'm dealing with is that, it should be easy to restore again. The most important feature would be to backup the guestOSes, but i think there already is a feature for that.
<Neepu> yep "nova backup", which probly backups inside the nova container.
<beisner_> An OpenStack cloud as-a-whole isn't really something that's meant to be backed up and restored, as one might do with a traditional application.  It's a stack of independent applications and api services, each of which have certain needs.  The process for doing that at the application level, regardless of the substrate, would be the same as is documented in the upstream openstack docs.
<beisner_> The process for doing that on a machine (container) level, is not documented or tested for use against an entire openstack cloud, but if this is a dev/test environment, it'd be interesting to see if lxd snapshots are helpful.
<Neepu> I see, i think i'd be good with lxd snapshot to extract the backup
<Neepu> I'm not too familiar with Juju's approach to OpenStack, but would snapshotting all LXD containers and starting them let me run a functional OpenStack cloud?
<beisner_> Neepu - i don't believe that is tested.  the recommended approach to OpenStack Charms delivered via Juju is to use an HA topology, where api services are disposable, etc.
<beisner_> ie.  so when a machine dies, it's ok, there are 2 other machines that take the load while a repair/replacement is made.
<beisner_> that's the "cloudier" way to address this.
<Neepu> i see
<Neepu> cheers for the answers, i'll head off now :-)
<beisner_> Neepu - yw, happy to help.  see you around & best to ya!
<beisner_> hi vds - belated pong:  invalid region "" seems to be meaningful there.  have you set the region in your juju cloud config?  here's an example of what we do in CI:  https://github.com/openstack-charmers/charm-test-infra/blob/master/juju-configs/clouds.yaml#L27
<vds> beisner_: hi! :) how do I get the value for region from Openstack?
<beisner_> hi vds - it'd be the value set at deploy time (on nova-cloud-controller).  if one wasn't specified, it'll use the default: https://github.com/openstack/charm-nova-cloud-controller/blob/master/config.yaml#L125
<beisner_> RegionOne
<vds> beisner_: thanks
<beisner_> cheers vds :) yw
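A hedged sketch of the clouds.yaml shape the linked CI example points at (cloud name and endpoints are placeholders; the region must match what nova-cloud-controller was deployed with, RegionOne by default as noted above):

    clouds:
      mystack:                   # hypothetical cloud name
        type: openstack
        auth-types: [userpass]
        endpoint: https://keystone.example.com:5000/v3
        regions:
          RegionOne:
            endpoint: https://keystone.example.com:5000/v3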
<kjackal> 243667
<rick_h> mhilton: we're helping SimonKLB here
<rick_h> SimonKLB: broke the system!
<mhilton> hi SimonKLB could you please try logging in on the staging system https://staging.jujucharms.com/ This should help resolve the login problems on jujucharms.com itself.
<Budgie^Smore> o/ juju world
<lazyPower> heya Budgie^Smore
<Budgie^Smore> what's happening today?
<lazyPower> Not much for me :)
<SimonKLB> mhilton: confirmed working :) thaanks
<mhilton> SimonKLB, thanks for that. unfortunately I don't have access to our production system to update that myself, but it is in the queue for the people that do, it's fairly simple so should be done reasonably quickly.
<SimonKLB> mhilton: no worries, thanks for looking into it
#juju 2017-06-29
<salmankhan> Hi Guys, I am facing an issue with maas+juju 2.2. I have set the internal packages repo in maas. When I bring up the machines, maas does populate those package repos into the machines' sources.list. But when I bring up the LXD containers through juju, the LXD containers have the default archive.ubuntu.com in sources.list. They do not pick up the repos through MAAS.
<salmankhan> Is there a way to populate the internal package repo in the sources.list of LXD containers through MAAS+juju?
<salmankhan> any help?
<stokachu> salmankhan: you'd probably have to do that through whatever charm you're deploying
<stokachu> salmankhan: or add machine then juju ssh to add those repos in the container
<salmankhan> stokachu: I found the way to do it through "juju model-config apt-mirror=http://<server-ip>/ubuntu"
<stokachu> salmankhan: ah cool even better
 * stokachu should look deeper at what else you can set with that
<salmankhan> yup
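salmankhan's approach, spelled out (the mirror URL is a placeholder):

    juju model-config apt-mirror=http://archive.internal.example/ubuntu    # set on the current model
    juju model-config apt-mirror                                           # read the value back
    # and, assuming model-defaults is available in your juju version, the
    # same key can be seeded for models created later:
    juju model-defaults apt-mirror=http://archive.internal.example/ubuntu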
#juju 2017-06-30
<nileshsawant> Hi all, we are facing an issue with juju add-machine
<nileshsawant> os deployment is marked as failed in maas
<nileshsawant> is anyone facing this issue ?
<tvansteenburgh> rick_h: you around?
<rick_h> Packing, what's up?
<rick_h> tvansteenburgh: ^
<tvansteenburgh> rick_h: juju upgrade-charm foo --config cfg.yaml ... i can't figure out what the format of cfg.yaml is supposed to be. everything i've tried so far fails
<tvansteenburgh> rick_h: you happen to know anything about this? sorry to interrupt the packing
<tvansteenburgh> if you don't have time, no worries-
<rick_h> tvansteenburgh: hmm, I'd expect either a flat yaml list of config options or a root key "options" to match up with bundle format
<rick_h> tvansteenburgh: what's not working atm?
<tvansteenburgh> rick_h: okay, flat is what i tried first. then i tried giving it the output from `juju config foo -o cfg.yaml`
<rick_h> tvansteenburgh: that'd make some sense :)
<tvansteenburgh> but alas
<tvansteenburgh> i'll just keep tinkering. looks like i know what i'm doing at the docs sprint today
<rick_h> hah
<rick_h> tvansteenburgh: hmm, actually. does it need the service name in it?
<rick_h>  I'm looking atm let me see what I can find
<tvansteenburgh> well the output of `juju config foo -o cfg.yaml` has an application key in it, but juju doesn't like that if you feed it back in
<rick_h> k
<rick_h> I thought I remembered seeing some config like that. I must be remembering the output there
<rick_h> tvansteenburgh: so https://github.com/juju/juju/blob/8cb1f452f4d21e3d7d929ab0615c1a4d7b1cac42/cmd/juju/application/upgradecharm_test.go#L203 seems to imply it's just flat config
<tvansteenburgh> hmm
<tvansteenburgh> rick_h: mystery solved
<rick_h> tvansteenburgh: but not finding a lot in here that gives me confidence tbh. That's the only reference to upgrade-charm and config|options I can see atm.
<tvansteenburgh> top level key has to be the app name
<rick_h> tvansteenburgh: yay?
<rick_h> tvansteenburgh: ugh, ok that's what I was remembering but damn that's not easy to figure out. :(
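For the record, the shape tvansteenburgh arrived at (application and option names are placeholders):

    # cfg.yaml - the top-level key must be the application name
    foo:
      some-option: some-value
      another-option: 42

    # applied with:
    #   juju upgrade-charm foo --config cfg.yaml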
<tvansteenburgh> yeah, i'll get something in the docs later today, thanks for the help
<rick_h> tvansteenburgh: docs sprint is cool, but maybe update the help text for the upgrade-charm command as well?
<tvansteenburgh> +1
<rick_h> tvansteenburgh: I know evilnickveitch has updated them in the past and can walk through where that's edited at.
<evilnickveitch> tvansteenburgh, https://jujucharms.com/docs/2.2/developer-upgrade-charm is probably the most relevant page, though it doesn't really mention anything about config. The implication from the juju help is that it would be in the same format as config applied during a deploy
<tvansteenburgh> evilnickveitch: also this page https://jujucharms.com/docs/2.2/charms-config says at the bottom "you can do it with a file" but doesn't say what the format should be
<evilnickveitch> tvansteenburgh, erm, it does. There is an example and everything https://jujucharms.com/docs/2.2/charms-config#configuring-an-application-at-deployment
<tvansteenburgh> wow
<tvansteenburgh> i actually looked at this page while i was floundering around, and totally missed that :/
<tvansteenburgh> so, never mind?
<evilnickveitch> tvansteenburgh, well, I think there are a few takeaways. The 'upgrade-charm' stuff should link here to explain the config option, probably
<evilnickveitch> Perhaps it could do with a longer/better example?
<evilnickveitch> or a note that says "TIM! LOOK AT ME!"
<tvansteenburgh> definitely the latter
<tvansteenburgh> it was just my fault, didn't read carefully enough. i'm glad to see we have it. i think it would be useful to link the upgrade-charm docs to that example
<evilnickveitch> +1
<evilnickveitch> I will add some issues
<tvansteenburgh> evilnickveitch: tyvm!
<ryebot> where can I find the source for the relation-list tool?
<aisrael> Hi folks, just a reminder that the next virtual docs sprint starts at the top of the hour. We'll be diving into the charms.reactive framework. https://www.youtube.com/watch?v=rS9hxsySfNA
<aisrael> We're about to start our virtual sprint on the charms.reactive framework docs. Feel free to follow along, ask questions, open new issues, or write some docs!
<Budgie^Smore> ok today's off the wall question... can I get a juju controller to manage multiple "clouds"? for instance is it possible to get controller to have one model on AWS and one model on bare metal using MAAS? (I know that JAAS can handle multiple public clouds based on the account credentials but not sure how that was accomplished)
<rick_h> Budgie^Smore: no, can't do it currently
<Budgie^Smore> rick_h: "on the roadmap"?
#juju 2017-07-01
<erik_lonroth__> Hello you guys. I know this is a completely separate channel, but related to what I'm doing... I'm thinking about trying to play with deployment to raspberry pis and mini-pis with "ubuntu core". I have massive problems booting a raspberrypi3 with ubuntu core but can't really figure out where to ask for info. Any clues in this channel ?
<erik_lonroth__> OMG I figured out my massive boot problems
<erik_lonroth__> It was my HAT serial interface on top of the rpi!
#juju 2018-06-25
<wallyworld> vino_: a few small things, let me know if you have questions
<wallyworld> veebers: not sure tbh, charm push may not know to expand the "~"
<babbageclunk> wallyworld: ok, think I've found the problem, just checking now.
<wallyworld> yay
<vino_> wallyworld: sure
<babbageclunk> wallyworld: that seemed to work. Trying it again with a 3-machine controller
<veebers> wallyworld: /tmp/blah didn't work, ~/tmp/blah did work (both are actual files). It's ok, not blocking me (suspect it's a snap/containment thing)
<wallyworld> veebers: ah, i had it the wrong way around in my mind. that makes sense for a snap yeah
<babbageclunk> wallyworld: yay, that worked too, checking one more scenario (2 machines in the controller, so one non-voter)
<wallyworld> babbageclunk: i'll be interested to see what the fix was
<babbageclunk> It was in the upgrade step - I didn't defer-close the log store, so the raft worker just hung on startup. In the past that had always been caused by the transport not getting an address, so I was looking in the wrong place, but the logging I added showed it wasn't the problem.
<babbageclunk> wallyworld: ^
<wallyworld> ah, cool
<wallyworld> kelvin: i left a couple of small comments
<kelvin> wallyworld, thanks for reviewing, i just pushed a change to resolve ur comments.
<wallyworld> ok
<wallyworld> kelvin: looks good to land! on to next step
<kelvin> wallyworld, thanks, merging now
<babbageclunk> wallyworld: actually, my idea of testing a 2-machine controller relies on stuff in 2.4, so it's not necessary. Tidying up and pushing a PR now and then I'll do the unit tests.
<wallyworld> sgtm
<babbageclunk> (Oh, and I'll check bootstrapping 2.4 directly is still fine.)
<vino_> wallyworld: i am using IsolationSuite from github.com/juju/testing instead of BaseSuite in juju/juju/testing.
<wallyworld> that sounds fine
<vino_> it introduced a few issues in reading the EchoArgs. Trying to fix it.
<vino_> rest all comments make sense.
<wallyworld> you could just leave the base suite out
<wallyworld> i don't think any of the other charm tests use it
<vino_> yes correct. And that will fix the CI issue i was facing.
<wallyworld> so let's drop it then
<vino_> Yes. charmDir_test uses IsolationSuite
<vino_> it's required for the EnvPatching
<vino_> which is a good substitute for BaseSuite
<vino_> Just hold on - using IsolationSuite has introduced a path error. Which i need to identify.
<vino_> swapping to BaseSuite works well. So there is something diff between them. I will fix that and push the commit.
<babbageclunk> wallyworld: https://github.com/juju/juju/pull/8850
<wallyworld> ok
<babbageclunk> Sorry, laptop died at a super-inopportune moment, but all better now
<wallyworld> babbageclunk: looks ok i think. for 2.5 i think we could consider moving the common raft business logic out of the worker and into a core/raft package that the upgrader also uses
<babbageclunk> wallyworld: yeah, sounds good
<wallyworld> vino_: using IsolationSuite should be all you need as that provides the PatchEnvironment functionality. Or you could just use CleanupSuite
<babbageclunk> wallyworld: do you think the ReplicaSetMembers function in upgrades.go needs a test?
<vino_> wallyworld: yes. IsolationSuite gave me trouble. I found out this CleanupSuite which is good.
<vino_> no PathError.
<wallyworld> babbageclunk: not for this PR
<vino_> i am pushing the commit now.
<babbageclunk> cool
<vino_> wallyworld: i haven't addressed one review comment - testing.ReadArgs()
<vino_> happy to discuss.
<wallyworld> ok, i'll wait for that
<wallyworld> did you need help with it?
<vino_> not really for that.
<vino_> I am seeing a weird CI issue for this NOVCS case.
<vino_> i don't see this issue locally.
<vino_> It should be nil returned from the function.
<vino_> i am trying to look into the CI issue here.
<wallyworld> vino_: it's just a typo
<wallyworld> c.Assert(err, gc.NotNil)
<wallyworld> should be c.Assert(err, jc.ErrorIsNil)
<wallyworld> vino_: actually, all the other gc.IsNil for error checking added to the pr should also be c.Assert(err, jc.ErrorIsNil)
<vino_> ok. thank u. I was confused that i couldn't see that issue locally on my machine.
<vino_> thanks wallyworld.
<wallyworld> i'm surprised by that - it should have failed locally
<vino_> i can show u.
<wallyworld> the logic error was checking for a not nil error when it would have been nil
<vino_> i do see here all 185 OK
<wallyworld> right but the issue is that the error should not have been returned as not nil
<wallyworld> the new function should just return nil if no vcs dir exists
<vino_> Yes. I am checking that if not nil  then assert
<vino_> i am explicitly returning NIL if NOVCS
<vino_> The new function does exactly the same as u r telling.
<wallyworld> right but the PR was checking that the error was not nil
<wallyworld> line 300 was c.Assert(err, gc.NotNil)
<wallyworld> this says that we expect a non-nil error
<wallyworld> but we want the error to be nil
<wallyworld> if that code passes when you run it locally i'm confused as to how
<vino_> the new function does the correct thing. But my check here is wrong.
<vino_> is that what u mean - correct ?
<wallyworld> yes, if the PR says c.Assert(err, gc.NotNil) then that's wrong
<wallyworld> and hence the CI failure
<vino_> I get what u r saying.
<vino_> but you are telling me it all has to be changed. How come is it passing for me locally here?!
<wallyworld> that's a different issue
<wallyworld> what needs to be changed is the syntax for error checking. gc.IsNil is not how we do it as it sometimes can give wrong results if there's a pointer
<wallyworld> c.Assert(err, jc.ErrorIsNil) is what we use
<wallyworld> i didn't notice the use of c.Assert(err, gc.IsNil) before
<wallyworld> so there's 2 issues: 1. fix the incorrect assertion to make the test pass, 2. replace gc.IsNil to make the code more correct
<vino_> ok sure wallyworld.
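A minimal sketch of the two assertion styles being contrasted, using the gocheck/juju-testing checkers quoted in the chat (doSomething is a hypothetical function under test):

    package example_test

    import (
        "testing"

        jc "github.com/juju/testing/checkers"
        gc "gopkg.in/check.v1"
    )

    func Test(t *testing.T) { gc.TestingT(t) }

    type suite struct{}

    var _ = gc.Suite(&suite{})

    // hypothetical: returns nil when no VCS dir exists
    func doSomething() error { return nil }

    func (s *suite) TestNoVCSDir(c *gc.C) {
        err := doSomething()
        c.Assert(err, jc.ErrorIsNil) // preferred: flags typed-nil pointer errors properly
        // c.Assert(err, gc.IsNil)   // discouraged per the chat: can mislead when err holds a pointer
    }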
<vino_> thumper: 1:1 ?
<veebers> wallyworld: It seems (in the code) that one can provide a 'revision' as a resource arg, but I don't see mention of that in the help etc. Does that ring a bell at all?
<wallyworld> vino_: tim is away sick today
<wallyworld> veebers: resources have revisions - each time one is updated the revision increments
<wallyworld> there's a published tuple of charm rev, resource rev
<wallyworld> that's what is used by default
<veebers> wallyworld: ack, understand that; it's the "juju deploy --resource <rev>" that I was confirming, i.e. use x revision of the resource.
<wallyworld> right
<veebers> there doesn't seem to be any docs around that (at least not in deploy help)
<wallyworld> if you wanted to use a rev that's not published
<wallyworld> could be the case, haven't read the docs
<veebers> ack, no worries. Work needed on the helps docs there :-)
<vino_> ok. Thanks wallyworld
<wallyworld> babbageclunk: how goes the unit test writing?
<babbageclunk> almost done - just need to fix logging
<veebers> wallyworld: with the upcoming docker resource type, if the user supplies the resource as an arg to deploy, will we (somehow) allow them to supply secrets details too?
<wallyworld> no, these will come from the charm store
<wallyworld> the charm storage manages all that stuff, as it does with macaroons for private charms etc
<veebers> sweet, ah right you are
<vino> wallyworld: can u take a look at the Pr
<vino> when u get time
<wallyworld> ok
<adham> Hi everyone
<wallyworld> vino: have you pushed changes?
<vino> yes.
<adham> Because of this issue https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m
<adham> I deleted the VMs manually
<adham> I didn't know that this would affect kubernetes deployment
<vino> arent u seeing the changes yet wallyworld
<vino> ?
<adham> Is there a way that I can remove kubernetes deployments completely manually since conjure-down or up won't work?
<wallyworld> i can see some but not all? there's still var version and I can't see ReadEchoArgs
<vino> u wanted to discuss on the ReadArgs()  call
<adham> When I run juju list-models --debug, I see that it still connects to a vm that has been already deleted
<vino> yes.what it reads after that is the empty string.
<vino> we can discuss on that.
<vino> once it is read it resets the ptr.
<vino> thats why we cant use the ReadArgs again.
<wallyworld> adham: deleting the vms directly in k8s will leave orphaned entries in the juju model. you could try remove-machinbe --force
<wallyworld> vino: ok. so instead of the expectedVersion slice in the PR, it's cleaner to do what lines 167-170 of EachAsArgs() does
<wallyworld> to build up the string to compare with the version file contents
<adham> how can I clean up at this point? @wallyworld: I had to do this if you saw my question in StackOverFlow, I have observed a huge flood of VMs once I deployed Kubernetes
<adham> and that was a clean installation
<wallyworld> what do you want to clean? the entire controller? just the model?
<adham> So I have MAAS installed on the same server
<adham> I do not want to affect MAAS
<adham> I just want to cleanup the kubernetes and then reinstall it again (but after I know why those 70+ vms got generated)
<wallyworld> i've not used conjure-up to deploy k8s so i can't answer that question - the conjure-up guys will be online in a few hours
<adham> Ok, I can wait for the conjure team, but in the meantime, have you ever seen a list of VMs like this in the Stack Overflow post?
<wallyworld> if you want to keep the existing juju controller running, i think you will need to remove-machine --force all the machines for which you have manually deleted the k8s nodes
<adham> I am referring to the link to my question there...
<vino> wallyworld: yes. that code looks a bit professional.
<adham> how do you mean Wallyworld?
<adham> can you send me a sample command?
<wallyworld> after you have deleted from juju the orphaned machines, you should hopefully be able to then delete the model
<wallyworld> juju remove-machine --force <machineid>
<adham> how can I get the machine id?
<wallyworld> juju status
<wallyworld> will show all the k8s nodes
<wallyworld> from there you can see which ones correspond to the ones manually deleted
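The cleanup wallyworld is outlining, as commands (machine id 3 and the model name are examples):

    juju status                       # list machines and the k8s units on them
    juju remove-machine --force 3     # for each machine whose VM was deleted by hand
    juju destroy-model <model-name>   # once no orphaned machines remain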
<wallyworld> it's all a bit abstract without being totally familiar with your setup
<wallyworld> is the juju controller running on a maas node?
<adham> thx wallyworld, i'm running juju status, it's taking too long
<wallyworld> what version of juju?
<wallyworld> if you wanted to blow everything away and start again, you could kill that one maas node and allocate it back to the maas pool, and manually delete the k8s cluster
<wallyworld> you'd also need to reclaim any worker nodes
<wallyworld> but i've not done what you're doing myself so can't give specific advice
<wallyworld> adham: maybe you can get better help by asking in #conjure-up if it is a conjure-up issue
<adham> Sorry wallyworld, just received the messages now, the version is 2.3.8-xenial-amd64
<vino> wallyworld: pushed the commit. plz let me know. there was another minor fix in the debug log comment.
<adham> this server actually has the master maas controller
<adham> also, the command for juju status seems to be still no response till now
<adham> I will cancel it and re-run it with debug
<adham> ahh, ok, it's connecting to API addresses (which has been deleted I'm assuming)
<wallyworld> adham: juju status connects to the juju controller. i had thought you had only deleted k8s worker nodes?
<adham> I deleted all of the extra VMs
<adham> maybe I deleted that as well
<adham> poof
<wallyworld> vino: looks good!
<adham> Wallyworld
<adham> is htere a way that I can reverse everything about this deployment
<adham> for the kubernetes?
<vino> ok. i just pushed a last comment u gave :)
<vino> i will land it now.
<vino> wallyworld: in JUJU side i will update the dependencies.tsv first and then push the commit
<wallyworld> adham: so if the juju controller is really gone, then you probably just need to blow away the k8s cluster itself. i assume the worker nodes for that were all on maas? if so, and the juju controller node has already been deleted, you could just decommission those nodes in maas?
<wallyworld> vino: you update the juju deps after landing the charm.v6 change
<wallyworld> vino: and we need to make a small juju change also - we should trim the version string to say 100 before writing to the charm doc
<wallyworld> just to be defensive
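A tiny sketch of that defensive trim (the 100 limit comes from the chat; the helper name and the byte-based cap are illustrative only):

    // trimVersion caps a version string read from an untrusted charm zip
    // before it is written to the charm doc. Note: this caps bytes, not
    // runes, which is fine for plain-ASCII VCS revision ids.
    func trimVersion(v string) string {
        const maxLen = 100
        if len(v) > maxLen {
            return v[:maxLen]
        }
        return v
    }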
<adham> how can I blow away the k8s cluster itself then?
<wallyworld> sounds like you have already started to do that? if the k8s nodes are all in maas, and juju controller node has been decommissioned, then you could also decommission the k8s nodes as well
<wallyworld> and return all those nodes to the maas pool
<adham> In maas, I no longer see any traces for the k8s
<adham> but not sure if i'm missing any area to look into
<wallyworld> i *think* you may have managed to clean it all up then :=)
<adham> then why juju status keeps being frozen, what do I do about this?
<wallyworld> if juju controller is gone and all k8s nodes are gone then you should be ok
<wallyworld> you need to remove the controller reference
<wallyworld> from your local client
<wallyworld> juju controllers
<wallyworld> and then juju unregister <controllername>
<adham> how can I be sure that this controller won't be a reference to MAAS itself?
<adham> Because when I deployed k8s, i made it to use MAAS's cloud
<wallyworld> maas is treated in juju as a cloud. you can see it via "juju clouds". the juju unregister command removes a controller entry from a local yaml file
<adham> ahh, I see conjure-canonical-kubern-a4d
<wallyworld> juju unregister  conjure-canonical-kubern-a4d
<wallyworld> should work
<wallyworld> it just removes a yaml file entry
<wallyworld> maas will still be running
<wallyworld> and can be used again with conjureup
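The client-side cleanup, in one place (the controller name is whatever juju controllers reports):

    juju controllers                    # list known controllers; '*' marks the current one
    juju unregister <controller-name>   # drop the stale entry from the local client's yaml
    juju clouds                         # the maas cloud definition itself is untouched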
<adham> $ juju unregister conjure-canonical-kubern-a4d
<adham> ERROR controller conjure-canonical-kubern-a4d not found
<adham> conjure-up-server-01-f34*
<adham> is * part of the name?
<wallyworld> the * indicates that's the current controller juju is using
<wallyworld> but the names above differ
<adham> the last one I mentioned with * is under Controller, the one before it is under model
<wallyworld> what does juju controllers say? that's what you pass to unregister
<adham> Doing so will prevent you from accessing this controller until you register it again.
<adham> I am about to deregister it now
<wallyworld> right, but here the controller machine has been shut down
<wallyworld> you have removed the controller machine manually right?
<adham> yes
<adham> I guess so
<adham> how can I ensure that?
<wallyworld> check that there's no machine in maas with the listed ip address
<wallyworld> but since status hangs and times out, it's a good bet it's gone
<adham> no, there is no machine in maas
<adham> If that's the case, I really want to redeploy K8s again
<adham> but that flood of VMs doesn't make sense, and I don't know why they were created
<adham> Do you have any idea about those machines and why possibly that they would created like that in https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m ? I would really appreciate any knowledge around htis
<wallyworld> so unregister will remove that orphaned entry
<wallyworld> from juju client
<wallyworld> adham: that's a question best asked in #conjure-up
<wallyworld> i don't have any insight off the top of my head
<wallyworld> lots of folks use conjure-up so if there is a bug it would be good to get it fixed
<adham> no, that's alright, I really appreciate your help here, it just made me progress, I was stuck for the last 4 days
<adham> thx Wallyworld
<wallyworld> adham: sorry i couldn't help more with the conjure-up side. i'm more a juju person
<wallyworld> adham: you should be able to get the help you need in that other channel. if not come back here and we can chase them up :-)
<adham> thx, will do! I'm in conjure-up channel
<wallyworld> adham: the folks there are mainly USA based so you may not catch them for a few hours yet
<vino> wallyworld: we can chat here regarding the version string for charm.
<adham> that's why
<adham> that's fine*
<adham> so where are you based? I'm in Australia
<vino> so u want to cap it at 100 before writing to the charm manifest
<wallyworld> adham: brisbane
<wallyworld> vino: not manifest. as we read in the charm from the zip, before we write to charmDoc
<adham> lol
<wallyworld> you too?
<adham> neighbours :D
<wallyworld> vino: since we don't always control what goes into charm zip
<wallyworld> the best we can do is be careful about what we accept
<vino> wallyworld: ok before we repackage and write to charmDoc
<vino> So what we write to charmDoc needs to be sensible.
 * vino make coffee and be back
<adham> Wallyworld, I know that this is off the topic and beyond your knowledge
<adham> But I just wanted to double check with you if I have any luck
<adham> when you bring up a k8s environment, do you actually see funny-named vms?
<adham> Cuz I am conjuring up the k8s again and now I see casual-guinea, famous-jackal, etc.. Do you have similar VMs in your environment?
<wallyworld> adham: those names are autogenerated. there's a "pet names" library that is used. i can't recall the exact name
<wallyworld> they take a random adjective and put with a random animal name
<wallyworld> vino: correct
<adham> Hmm, and do you have similar names in your environment?
<adham> Is there a way to have a proper naming convention?
<adham> I think this is 100% pure conjure-up since I'm using conjure-up
<wallyworld> adham: that's actually a maas naming convention i think. ie the hostnames. juju doesn't care about or use hostnames as such
<wallyworld> juju mainly uses machine numbers 1, 2, 3 etc that it generates
<wallyworld> there might be a maas option to control hostname generation, not sure off hand
<adham> ok that explains
<adham> and would deploying k8s deploy 70+ vms?
<adham> I just conjured up with minimal installation no addons, and it's still "deploying", and yet I see 10 vms in maas?
<adham> I mean 10 new vms
<adham> 20 machines now
<wallyworld> adham: there's kubernetes-core and CDK. which bundle did you choose?
<adham> CDK
<adham> Also, shall I report the naming problem in Juju? I really recommend at least a friendly
<wallyworld> i would expect to see several VMs for that as it is full HA so several redundant nodes for easyrsa etc
<adham> name
<wallyworld> which naming problem? the machine name generation in maas?
<adham> I used to think that this is a virus or something
<wallyworld> the maas node naming is a deliberate design decision to at least give the nodes unique english names
<adham> Yes, I understood that part, I meant to put an RFE in juju asking that when juju creates a machine, it provides an english machine name at the beginning
<adham> i.e controller_1
<adham> lb_1
<adham> etc...?
<wallyworld> adham: juju lets you specify whatever controller name you want - conjure-up is what's generating the names you see
<adham> ahh, ok, then this RFE would go to conjure-up
<adham> quite tricky when you have multiple projects and many teams
<wallyworld> for the controller name and model names
<wallyworld> there may be a way to tell conjure-up, not sure
<adham> thx Wallyworld, i've redirected this to conjure-up team, I am waiting for them to be available
<wallyworld> np, good luck
<manadart> stickupkid: Took a look at your patch. Looks good. I would make 2 changes.
<manadart> 1) Put the mock that is currently in environs/mocks into environs/testing.
<manadart> 2) Instead of making a GetServerEnvironmentCert, make the cert a property. We already call GetServer on instantiation to get data from Environment - just populate it there.
<stickupkid> manadart: thanks for the comment
<manadart> stickupkid: NP.
<manadart> Need a review: https://github.com/juju/juju/pull/8855
<rick_h_> hml: thanks for that! Can you please file a bug around the OS_CACERT needing to be in OS config, part of add-cloud, etc. and I'd be curious your thoughts on implementation time for that.
<hml> rick_h_: there's a bug already along these lines: https://bugs.launchpad.net/juju/+bug/1777897
<mup> Bug #1777897: add-cloud fails when adding an Openstack with a self-signed certificate <juju:Triaged> <https://launchpad.net/bugs/1777897>
<rick_h_> hml: ah right thanks
<hml> rick_h_: will ponder time question
<knobby> wallyworld: Since conjure-up is just doing a juju deploy under the hood, I would think that the runaway VM creation would fall into juju in some way. I've never seen this with conjure-up or juju before though. I was hoping that adham had some old version of juju or something. I assume the juju debug-log would be useful if it happens again.
<rick_h_> knobby: definitely debug-log what's going on if there's some issue there
<veebers> Morning o/
<wallyworld> veebers: can you jump into release call
<veebers> wallyworld: omw
<wallyworld> babbageclunk: now that release is out, you can haz review we mentioned last week? :-D pretty please with sugar on top https://github.com/juju/juju/pull/8839
<babbageclunk> oh yes! looking now
<babbageclunk> Thanks for sending the email btw
<wallyworld> babbageclunk: no worries, i also had to fix the lp upload thing etc - can't wait for that bug to be fixed
<veebers> wallyworld: have we pushed on that bug yet? (I haven't >_>)
<wallyworld> veebers: i *think* tim has, was going to check again this week
<veebers> wallyworld: ack, cheers
#juju 2018-06-26
<babbageclunk> wallyworld: approved
<wallyworld> yay, tyvm
<vino> wallyworld: could u plz take a look at PR ?
<wallyworld> i can soon, just debugging an issue at a client
<vino> sure.
<vino> wallyworld: please ping me to discuss related to pending tasks for charmversion
<wallyworld> vino: pr is lgtm. we can discuss next steps now. want a hangout?
<vino> sure. wallyworld
<veebers> wallyworld, babbageclunk: what's the best way to produce a Reader from a struct? (i.e. I'm going to get a reader back from the charmstore that is yaml data, I want to produce a similar yaml reader from a string provided at the CLI).
<veebers> wallyworld, babbageclunk is it create a struct, yaml marshal it to get []byte, then create a buffer from that?
<wallyworld> you want to unmarshall a string into yaml?
<babbageclunk> yeah, that sounds right - you can use bytes.NewReader once you've got the []byte
<veebers> wallyworld: no (I don't think). I want to take a single provided detail i.e. ImageName and create a reader that is the yaml equiv of the backing datastructure (csclient.params.DockerInfoResponse)
<veebers> wallyworld: i.e. if I ask the charmstore for the data for a docker resource I'll get a reader that I can unmarshal into a DockerInfoResponse, I want to create a reader that I can do that with when I get just the imagename from the CLI
<veebers> babbageclunk: ack, thanks I'll give that a whirl
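A minimal sketch of babbageclunk's suggestion, with a stand-in struct (the real type in the chat is csclient's params.DockerInfoResponse; field names and yaml tags here are hypothetical):

    package main

    import (
        "bytes"
        "fmt"
        "io"

        "gopkg.in/yaml.v2"
    )

    // dockerInfo stands in for the charmstore response type.
    type dockerInfo struct {
        ImageName string `yaml:"imagename"`
    }

    // readerFor marshals the struct to yaml and wraps the bytes in a
    // Reader, mirroring what the charmstore would hand back.
    func readerFor(info dockerInfo) (io.Reader, error) {
        data, err := yaml.Marshal(info)
        if err != nil {
            return nil, err
        }
        return bytes.NewReader(data), nil
    }

    func main() {
        r, _ := readerFor(dockerInfo{ImageName: "registry.example.com/app:latest"})
        out, _ := io.ReadAll(r)
        fmt.Print(string(out))
    }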
<adham> Hi wallyworld, I'm trying to ping conjure-up, but it seems noone is really active there
<stickupkid> manadart: updated per comments https://github.com/juju/juju/pull/8856
<manadart> stickupkid: Ack.
<manadart> stickupkid: Approved it with a comment on the getter name.
<stickupkid> manadart: damn, i put Get in at the very last minute with that one
<stickupkid> i'll fix
<stickupkid> I do find it weird we have `GetCertificate` and `GetConnectionInfo` on the same struct though
<manadart> stickupkid: Those come from the embedded LXD server, and represent GET reqs, so it is a bit different; but I take your point.
<stickupkid> yeah, hence why i was 50/50
<manadart> stickupkid: Now we just hope merging yours doesn't create too much of a conflict for mine.
<stickupkid> manadart: this is either going to be easy or out and out painful - nothing inbetween :D
<manadart> stickupkid: Got another one to follow; moves the profile stuff to container/lxd. That makes just lxdStorage to move off rawProvider before it can be binned.
<stickupkid> manadart: this is great! it's hard to work out what to do with two providers for the same source
<stickupkid> manadart: i'll go over your PR as well now
<manadart> stickupkid: Cheers. I am about to add a third to that pipeline.
<stickupkid> manadart: done
<manadart> stickupkid: Ta.
<wallyworld> kelvin: 1:1?
<kwmonroe> hey wallyworld!  adham has the most unfortunate timing -- #conjure-up is only really monitored during normal hours (not this wacky void outside of UTC-5 to UTC-8).  he disconnects before i can respond, but if he comes back online asking about that totally bizarre 70+ machine spin-up from conjure-up cdk on maas, please ask for the conjure-up.log -- it's no doubt your (juju) problem, but we need the logs to prove it ;)
<kwmonroe> cory_fu req'd the same from his SO question: https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m
#juju 2018-06-27
<wallyworld> kwmonroe: will do :-)
<thumper> I'm not sure it is juju... just sayin
<thumper> although I'm biased
<thumper> wallyworld: just chatted with veebers about the launchpadlib upload bug
<thumper> it seems it is a bug in the underlying python library, but we identified a potential work around
<wallyworld> great ok
<thumper> and veebers is testing it against staging lp
<wallyworld> awesome
<wallyworld> ty
<thumper> thank veebers
<veebers> thumper: no worries, thanks for chasing that up
<veebers> thumper, wallyworld argh, need to sort out 2fa for the staging sso. I set it up years ago. I think I might still have paper codes somewhere :-\
<thumper> haha
<thumper> veebers: if it takes more than a few minutes, see if you can poke IS about it
<thumper> sometimes it is faster to get it reset...
<veebers> thumper: heh have already pinged. I think a reset is all that's possible :-)
<thumper> I noticed :)
 * veebers checks room for thumpers hidden cameras
<thumper> no, but I'm in the IS channel :)
<thumper> and I have the konversation summary view with all channels in it
<veebers> hah ack, I have something similar
<veebers> thumper: remind me that emacs mode you liked for binary/hex data
<thumper> hexl-mode
<veebers> sweet, thanks
<thumper> https://bugs.launchpad.net/bugs/1778749
<mup> Bug #1778749: error when using juju-reboot from hook <juju:New> <https://launchpad.net/bugs/1778749>
<wallyworld> babbageclunk: not sure if you'll have time today for that review? if you're in the middle of something can wait till another time
<babbageclunk> wallyworld: oh, sorry! looking now
<wallyworld> no worries, can wait if you're busy
<vino> wallyworld: I have pushed the PR. please review it.
<wallyworld> sure
<vino> i am yet to test the charm push
<vino> also please let me know if more work is needed in unit test case.
<wallyworld> vino: let me know if comments don't make sense
<vino> sure wallyworld
<vino> wallyworld: mostly name conventions.
<vino> but just one question
<wallyworld> sure
<vino> we are not returning an error where the version string is generated
<vino> so in case of error, instead of logging, what needs to be done?
<vino> i just want to make sure it's an empty string
<vino> annotate the err
<vino> ?
<wallyworld> you could argue that an error should result in the operation failing. but then again, should it be fatal if someone has a git charm checkout and no git available. i think we can start with just logging an error
<vino> ok. i will log at error.
<wallyworld> or maybe a warning
<wallyworld> i think a warning is better as an error means fatal
<vino> ok sure.
<vino> and the last comment.
<vino> is that for unit tests ?
<wallyworld> yeah, there's ArchiveTo tests
<vino> more unit tests for ArchiveTo
<wallyworld> they need to be updated or a new one added
<vino> ok.
<veebers> thumper, wallyworld: just finished testing the launchpadlib upload issue workaround and it's all good. I'll push the updated script and fire off an email so the team is aware (in case, for whatever reason, things go borked during a release)
<wallyworld> yay, ty
<wallyworld> veebers: will need to update the jenkins job to remove the warning text also
<veebers> wallyworld: it might be worth keeping it there, in case something goes wrong with the work around. Although perhaps you're right /me being over cautious
<wallyworld> if something goes wrong with the workaround that's a bug to be fixed IMO
<veebers> true, but it'll be good to have documented a workaround for the workaround, in case its work isn't around ^_^
<wallyworld> it would but that's just papering over a problem that should now be considered fixed
<wallyworld> i'm sure we could manually recover from other issues with release jobs but we don't - we fix the job
<thumper> veebers: thanks, I appreciate your effort around testing and checking
<manadart> jam: Any glaring issues with mine and Simon's PRs? Keen to merge and move it along.
<jam> will look now
<jam> manadart: there is something
<jam> probably small, thoug
<manadart> jam: Shoot.
<jam> manadart: specifically, you changed container.Metadata; instead of being a map[string]string it is now *just* a modelUUID, and we shouldn't call that Metadata
<manadart> The prior implementation of metadata was an unnecessarily heavy abstraction that materialised container config into a data structure where it was set/retrieved and transparently written as container config prefixed with "user." where necessary.
<manadart> Now, config is set directly with the prefixes, and metadata is just a method to retrieve the user-namespaced keys directly for all those custom keys - container UUID, model UUID, cloud-init user-data etc.
<jam> manadart: yeah, actually I missed that you were passing the key *into* Metadata()
<naturalblue> Hi. Is it possible to have juju run a script on deploy of a machine? i am trying to have a loopback device created on deploy. Thanks
<rick_h_> morning party people
<cory_fu> kwmonroe, wallyworld, thumper: To be fair, the most reasonable explanation that I can come up with for so many VMs is multiple runs of conjure-up, thinking that maybe some of them failed while waiting for the cluster to report ready, or during a post-deployment step.  We need to make it more obvious when a deployment fails after handing it off to Juju that it may be partially or even fully functional and how to interact with it or at least tear it
<cory_fu> down.
<cory_fu> stokachu: https://github.com/conjure-up/conjure-up/issues/1482 re ^
<stokachu> +1
<naturalblue> can you change a space binding once lxd has been deployed
<manadart> rick_h_: I said constraints should be easy, but it was *really* easy. I actually have it working here. Instance types is trivial too. I will test it properly and propose a PR in the morning.
<rick_h_> manadart: cool ty for looking. It'll be fun to play with and I'm sure bdx and magicaltrout  will be very interested
<bdx> manadart: thats great news
<hml> stickupkid: if you look at https://github.com/juju/juju/pull/8849 - i added the gomock matcher
<stickupkid> hml: OfTypeBool to me sounds like any bool would suffice, should the name be IsTrue?
<hml> stickupkid: i think i have it working for your choice of true or false
<hml> stickupkid:  at least the tests seems to indicate
<stickupkid> hml: maybe we should explicitly test for bool i.e. return o.b == b.(bool)
<hml> stickupkid: perhaps… gomock didn't like the reflect line i had to check that the value was a bool
<stickupkid> hml: https://github.com/juju/juju/pull/8849/files#r198563697
<stickupkid> hml: better explained
<hml> stickupkid: thx
<Daltron7> Hello there,
<Daltron7> Can anyone tell me if they've seen the error: DEBUG unit.neutron-gateway/0.config-changed Cannot find device "eno2" and ERROR juju.worker.uniter.operation hook "config-changed" failed: exit status 1.
<Daltron7> P.S. I already have `net.ifaces=0` configured on the MAAS server
<rick_h_> Daltron7: not sure, so the config-changed hook ran and whatever the charm is doing in that hook is failing with the error about eno2
<rick_h_> Daltron7: the config in question here is the application config "juju config neutron-gateway"
<rick_h_> Daltron7: might have to hit up the charm authors to see what's failing
<Daltron7> seems like it is trying to configure eno2 and failing: unit.neutron-gateway/0.config-changed subprocess.CalledProcessError: Command '['ip', 'link', 'set', 'eno2', 'up']' returned non-zero exit status 1.
<wallyworld> babbageclunk: thanks for review! the k8s stuff is gnarly for sure. but it works. i'll be documenting ther workflow and expected setup of storage classes etc and feedback there will help finetune the api usage
<babbageclunk> wallyworld: cool, sounds good!
<veebers> wallyworld, babbageclunk: When using state.db().Run(...) with a simple []txn.Op{} Inserting a doc that has an Id, should I expect that Id to be == _id for that entry?
<wallyworld> no, as the model uuid is prepended
<wallyworld> if you look at mongo directly
<veebers> wallyworld: ah *thats* what it is, cool thanks
<wallyworld> that's done automagically
<wallyworld> and it is stripped off on the way out
<babbageclunk> what wallyworld said
<veebers> wallyworld: I'm not seeing it stripped off when I do "coll.FindId(resourceID).One(&raw)" raw["_id"] == "some-uuid-string-here:<actual id>"
<veebers> Ah, am I circumventing the auto-niceness doing it that way
<wallyworld> it won't be there as you're looking at the raw data
<wallyworld> the doc itself will have the correct DocId
<babbageclunk> veebers: note, this also doesn't happen for collections that are marked as Global: true in allcollections.go
<veebers> ah right, that makes sense. Cheers wallyworld
<wallyworld> global collections are used across models without the need for discrimination
<veebers> wallyworld, babbageclunk oh, right saw that too (Global in allcollections). Ian, should the docker resource storage be global? I presume at the moment the uuid used in the id will be the controller?
<wallyworld> charms are per model so no
<veebers> wallyworld: ack
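A hedged sketch of the txn.Op shape under discussion (collection name and doc type are illustrative; inside juju's state package the model-uuid prefix on _id is added and stripped by the wrapped collection, not by this code):

    package state

    import (
        "gopkg.in/mgo.v2/txn"
    )

    // resourceDoc is a hypothetical doc; on disk its _id takes the
    // "<model-uuid>:<id>" form for non-global collections.
    type resourceDoc struct {
        DocID string `bson:"_id"`
        ID    string `bson:"resource-id"`
    }

    func insertResourceOps(id string, doc *resourceDoc) []txn.Op {
        return []txn.Op{{
            C:      "dockerResources", // hypothetical collection name
            Id:     id,                // juju's state layer prepends the model UUID
            Assert: txn.DocMissing,
            Insert: doc,
        }}
    }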
<wallyworld> veebers: veeeeeeeebeeeeeera
<wallyworld> staaaaaaanduuuuuup
<veebers> wallyworld: oh oop ssory omw
<hml> wallyworld: my fault :-)
<babbageclunk> good one hml
<veebers> hml: re: adding a failure cause, it's not a job change but a thing added via jenkins ui. Will walk through after standup :-)
<veebers> hml: actually I got ahead of myself, there was already a rule there that I only had to tweak (use \d+ instead of a specific number). If you have a look now at the job the failure reason is obvious
<hml> veebers: i see it.  thank you.  :-)
<veebers> hml: note, the description was wrong :-| I updated it to be correct. I'll add what I did to our discourse
<hml> ty
#juju 2018-06-28
<wallyworld> thumper: can haz review? https://github.com/juju/juju/pull/8867
<babbageclunk> veebers: I made the mistake of trying to use go-guru-callers, it is taking a long time and making my computer very sad.
<veebers> babbageclunk: hah yeah I don't think I've had anything useful from that before. I usually need to kill it before long
<babbageclunk> I think I might just be setting the scope wrong...
<veebers> babbageclunk: I *think* the scope should be something like: github.com/juju/juju/... but I'm not 100% certain
<babbageclunk> veebers: yeah - from the docs it sounds like it should be github.com/juju/juju/cmd/jujud - the package that has the main (which is the starting point). I guess the problem is that it's whole-program analysis for a too-big program, at least for my computer.
<veebers> babbageclunk: get more computers
<veebers> babbageclunk: heh yeah, I would be interested in how well it works with a smaller project etc. I notice a bit of slow down when I change branches etc. as it compiles bits to give me completion etc.
<thumper> wallyworld: with you shortly, finishing with IS
<thumper> wallyworld: omw now
<thumper> wallyworld: bug 1778970
<mup> Bug #1778970: offer will not leave model <juju:New> <https://launchpad.net/bugs/1778970>
<kelvin_> wallyworld, would u mind taking a look at these PRs when u have time: https://github.com/juju/charm/pull/249/files https://github.com/juju/jujusvg/pull/56/files  https://github.com/juju/charmstore/pull/808/files ? thanks
<wallyworld> kelvin_:  sure, will do
<wallyworld> vino: want to join HO again?
<vino> yes.
<wallyworld> kelvin_: with the svg PR, you should wait to land the latest charm.v6 change and use the dep from that one
<kelvin_> wallyworld,  yes, the charm.v6 is the dep for all the others, and I will do juju finally after all of these landed as well.
<kelvin_> wallyworld, and one more for bundlechanges please, thanks https://github.com/juju/bundlechanges/pull/41/files
<kelvin_> i will update dependencies.tsv for it.
<wallyworld> kelvin_: and for svg as well
<wallyworld> even though the PR is already proposed
<wallyworld> kelvin_: just in a meeting, will finish looking soon
<kelvin_> wallyworld, thanks
<thumper> wallyworld, vino: team standup?
<wallyworld> thumper: in meeting which was delayed
<wallyworld> trying to finish soon
<manadart> Anyone ever get this when building Juju? "readSym out of sync".
<stickupkid> manadart: never seen that before
<manadart> Need a review for constraints (cores, mem, instance type) support for LXD container machines: https://github.com/juju/juju/pull/8869
<stickupkid> manadart: looking now
<stickupkid> manadart: looks really clean...
<manadart> stickupkid: I finished it and then rewrote it. This way it's a one-liner to apply it in the provider.
<stickupkid> manadart: mines merging now
<manadart> stickupkid: Nice. want to sync up after lunch?
<stickupkid> manadart: hell yeah
<stickupkid> manadart: you've got a failure in your PR, it looks like an intermittent failure so I've just done a retry
<manadart> jam stickupkid: Looking for a review of https://github.com/juju/juju/pull/8862.
<manadart> Going to land https://github.com/juju/juju/pull/8869 when it goes green.
<rathore> trying to deploy openstack-lxd but having issues, keystone never completes the database relation even when the mysql/0 unit is ready. Ceph-mon/0,1,2 get stuck in Bootstrapping MON cluster
<rathore> any suggestions on how I can fix it?
<jam> manadart: quid-pro-quo? https://github.com/juju/juju/pull/8871
<manadart> jam: Deal.
<manadart> stickupkid: Want to jump on the guild HO?
<stickupkid> yeah
<stickupkid> ah, someone already there
<jam> manadart: reviewed 8862
<manadart> jam: Many thanks.
<manadart> jam: Approved yours too.
<jam> rick_h_: question for you about build/release/etc process.
<jam> with 2.3 being an LTS, I'd like to keep the 2.3 branch up to date and merge those changes into the 2.4 branch. However, as we are real-close to a 2.4.0 should I wait to merge a 2.3 patch into 2.4 ?
<jam> even if it is effectively a no-op? (it does have some small, eg line-numbers-in-a-diff, changes)
<jam> I do think that we generally want 2.3 => 2.4 => develop, so we know that any patches that we *do* backport to 2.3 are fully applied to new code bases.
<jam> anyway, I have https://github.com/juju/juju/pull/8872 that potentially updates 2.4, but I'm happy to wait until 2.4.0 is cut to actually merge that.
<jam> have a good weekend all
<rick_h_> jam: definitely wait atm
<rick_h_> jam: it'll have to go into 2.4.1
<rick_h_> have a good weekend jam
<w0jtas> hi, i have a fresh deployment of openstack using juju charms / conjure-up but i cannot start the first instance. in the logs i see: "Instance failed to spawn: ImageNotFound: Image  could not be found." what's weird is it looks like the image name is missing
<adham> conjure-up deploys kubernetes (CDK bundle) without setting the machines names, and so this causes MAAS to auto-pick a random name for each machine from pet names library. is there a way that we can have the conjure-up to use a naming convention?
<adham> So I'm in #conjure-up channel and they are redirecting me to here
<kwmonroe> adham: we're here too.  no, you can't have conjure-up set machine names
<kwmonroe> because juju can't set machine names
<kwmonroe> because juju doesn't care about machine names
<rick_h_> w0jtas: starting the first instance in which way?
<adham> kwmonroe, I have 70+ machines created by conjure-up kubernetes; if you saw my MAAS window you'd see how many funny names are there
<rick_h_> kwmonroe: lol, just realized the pet-names library is called "pet" when we say to stop keeping pets (servers) and start driving cattle.
<adham> you would definitely reconsider the deployment here
<adham> conjure-up & juju are great tools but honestly this little issue is destroying their greatness
<rick_h_> adham: the more machines the better. You shouldn't ever really care about the machines but what's running on them. Juju's taking the application/task based view of the world and so machines are expendable little things that you can reallocate all the time.
<rick_h_> adham: can I ask why the names are important? What task/etc are you doing that is driving you to referencing the machines individually?
<w0jtas> rick_h_: in horizon i want to start first instance with ubuntu 16.04 using lxd
<adham> rick_h_, the machines on MAAS got no tag, no definition, they are just pet names, you cannot distinguish which machine is what
<adham> it would make sense if we had, for example, lb1, controller1, master1, something relevant
<adham> but not blank
<rick_h_> adham: oh we definitely encourage putting tags on your maas machines so that you can target machines for storage/networking/etc.
<adham> you want me to tag 70+ machines that were created by conjure-up/juju?
<rick_h_> w0jtas: do you have the images used loaded into glance? I'm not sure how that's pre-seeded in an OS install. You might check with the OS folks.
<rick_h_> adham: no, in MAAS you do it once and you don't have to redo/etc. It's just part of setting up the machine infrastructure. Maybe I'm missing where you're heading there
<w0jtas> rick_h_: i have on list 2 ubuntu images to choose, 14.04 and 16.04 when creating instances
<rick_h_> adham: so Juju supports leveraging/using MAAS tags.
<adham> here is a sample of how machine names look like on my MAAS >> aware-code aware-yak casual-corgi casual-whale casual-mole clear-hound close-liger cool-troll decent-beetle divine-bug driven-drake easy-cod equal-frog equal-swan exotic-earwig expert-cow expert-slug fair-bee first-dog frank-monkey gentle-racer good-koi grown-bunny guided-eft handy-wahoo hip-hornet holy-bass holy-hen intent-bear large-kit
<adham> how can I know which one is at least load balancer, and which one is controller?
<rick_h_> adham: by looking at juju and saying "juju ssh load-balancer/0"
<rick_h_> using the task based approach
<rick_h_> adham: the machine names are part of the MAAS setup when you commission the machines though.
<rick_h_> adham: for instance, in my maas I have nuc1 through nuc8
<rick_h_> juju/conjure-up doesn't really care about the maas machine name
<adham> during Kunjure Up, I'm using our MAAS as the cloud
<adham> conjure-up*
<rick_h_> adham: right, understand. But the names of the machines in MAAS come from commissioning in MAAS, before you ever run conjure-up
<adham> Yes, if you commission a pod that doesn't have a name
<rick_h_> adham: conjure-up or juju don't change or modify the maas machine names at all
<adham> or a machine I mean
<rick_h_> adham: conjure-up doesn't comission the machine. That's done ahead of time when adding hardware to MAAS
<adham> those machines did not show up unless I ran the conjure-up because these are actually the kubernetes machines, if I deleted those machines, kubernetes will go down
<rick_h_> adham: so I'm failing to grok that statement there
<adham> I'm confused to be honest
<adham> how can I cooperate the 2 of them, or should I install kubernetes away from MAAS
<rick_h_> adham: so you have a maas, with nothing running on it. And you go to the list of nodes. Each node has a name. That name is the machine name. Before kubernetes, conjure-up, juju, anything else is involved.
<rick_h_> adham: did you conjure-up kubernetes onto your MAAS?
<adham> yes
<w0jtas> rick_h_: anything i should check on my setup ? it's a fresh conjure-up openstack / lxd setup, my first attempt so i am a newbie here :(
<adham> and our MAAS has VMs and machines on it already
<rick_h_> adham: ok, before you ran conjure-up you had a MAAS setup and that MAAS had X machines comissed into it
<adham> yes
<rick_h_> adham: and when you go to the nodelist you see those names
<adham> the funny ones are only after kubernetes deployment
<rick_h_> w0jtas: sorry, I don't know enough about openstack/glance to diagnose. I have to suggest checking out the openstack charms irc channel/mailing list.
<rathore_> anyone: how can I get 2 different configs of charms installed on same machine? I have different configurations of ceph-osd charms for 2 types of servers ? Thanks
<adham> but nothing changed to our VMs and machines (where they have proper names)
<rick_h_> adham: are the funny ones VM's registered in MAAS? and not the original MAAS machines then?
<adham> they are not original maas machines
<rick_h_> rathore_: so you have to deploy them as two different applications
<adham> they were made by conjure-up
<rick_h_> rathore_: juju deploy ceph-osd ceph-mode1
<rick_h_> rathore_: juju deploy ceph-osd ceph-mode2
<stokachu> w0jtas: what's wrong
<rathore_> rick_h_ : Thanks a lot
<rick_h_> rathore_: np, if you need different configs then you'll want to log/perform other operations/scale them differently, so it's just reusing the same charm for two purposes.
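Spelled out, the two-applications-from-one-charm approach looks roughly like this (application names and the config values are examples; check "juju config ceph-osd" for the real option names):

    juju deploy ceph-osd ceph-osd-type1
    juju deploy ceph-osd ceph-osd-type2
    # give each application its own configuration (values are illustrative)
    juju config ceph-osd-type1 osd-devices='/dev/sdb'
    juju config ceph-osd-type2 osd-devices='/dev/nvme0n1'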
<w0jtas> stokachu: i installed openstack using conjure-up / lxd  and now in horizon i want to run first instance, but it's failing and in node logs i see "Instance failed to spawn: ImageNotFound: Image  could not be found."
<rick_h_> adham: ok, so conjure-up created some VMs with pet-names that are now registered in MAAS somehow?
<adham> correct
<rick_h_> adham: is this the MAAS "devices" list or the node list?
<stokachu> w0jtas: ok, so a glance issue is happening
<rathore_> rick_h_: Cool. I will try it out
<adham> When we first time saw this, we thought that this is spam or a virus that came with kubernetes, so we deleted them, kubernetes deployment became offline
<adham> we deleted kubernetes completely and brought down the controller
<w0jtas> stokachu: how do i check glance condition then ? any status debug or whatever
<stokachu> w0jtas: sec
<rick_h_> adham: ouch yea, ok. I'm guessing this is on devices list vs the machine list?
<stokachu> https://github.com/conjure-up/spells/blob/master/openstack-novalxd/steps/01_glance/glance.sh
<stokachu> w0jtas: ^
<adham> after discussion with conjure-up and juju, we understood that this is normal and caused by the deployment because no machine names are given
<stokachu> that's basically what you need to run to import the images, maybe that failed somewhere, can you check in ~/.cache/conjure-up/openstack-novalxd/steps
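If that step did fail, doing the import by hand looks roughly like this (image name and URL are examples; the glance.sh linked above is the authoritative version):

    # run with the OpenStack admin credentials sourced
    curl -L -o xenial.img https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
    glance image-create --name xenial --disk-format qcow2 --container-format bare --file xenial.img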
<adham> we then tried to redeploy kubernetes, and here we see again the list of funny names 70+ machines
<adham> those names can be seen from MAAS
<rick_h_> adham: ok, sorry I'm catching up. So these are probably the lxd containers created for the k8s cluster registered in MAAS as devices.
<adham> and if we list the vms from the terminal on linux
<rick_h_> adham: gotcha
<adham> we still see those funny names, almost everywhere
<rick_h_> adham: so can you confirm that in MAAS you go to the nodes page and there's the table. At the top of the table is filters for "12 Machines    34 Devices    1 Controller"
<rick_h_> adham: and that the funny names only show up in the Devices filter?
<w0jtas> stokachu: where should i find glance.log file ?
<adham> yes
<adham> and also in the vm list from the terminal (outside maas) when listing the VMs
<adham> Actually it's machines, not devices, that I'm referring to
<adham> sorry, my apologies, nodes
<stokachu> w0jtas: ~/.cache/conjure-up/openstack-novalxd/steps/01_glance
<adham> I currently do not have access to the MAAS
<adham> seems like Kubernetes deployment has taken over the local load balancer on the same server
<stokachu> w0jtas: you can also juju ssh nova-cloud-controller/0
<stokachu> source the credentials file there
<stokachu> and perform glance commands
<adham> Is there a way that I can disable the kubernetes load balancer and keep the original load balancer on the server as the default?
<rick_h_> kwmonroe: ^ wasn't there something about leveraging external cloud bits?
<rick_h_> adham: will ask the k8s experts. I don't think so though because the charms are a combined solution tested together to work so pulling it apart would potentially break stuff.
<adham> thx rick_h_, I'm in kubernetes channel talking with them
<rick_h_> adham: ok
<adham> but can you please help me about anyway if it's possible at all to have juju or conjure-up to set names for hte machines?
<adham> rather than leaving them blank, and force MAAS to set funny names to them?
<rick_h_> adham: the other question is if there's any way to get conjure-up to use non-petnames for the containers created. I'm not sure how that is setup.
<rick_h_> yea, I'm not sure if conjure-up is asking maas to name them or is providing names for them. I've not registered VMs in MAAS like that.
<adham> i'm trying to bring in cory_fu from conjure-up channel
<adham> as he's the one who redirected me here
<rick_h_> adham: it looks like the add-device allows setting a name. So the question is how would you name them on deploy of the k8s cluster? I mean you don't want to be "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname...
<cory_fu> rick_h_: To clarify, it sounds like adham is using pods in MAAS such that the VMs are created on-demand rather than the older way of doing things where all VMs are pre-created with specific resource sets and managed manually via MAAS
<cory_fu> In that scenario, the names are auto-generated by MAAS and conjure-up / juju has no way to influence them
<rick_h_> cory_fu: oh, the kvm pods stuff?
<cory_fu> Yes
<rick_h_> oh, I was wondering why I'd not run into this before
<adham> thx cory_fu, appreciated...
<cory_fu> rick_h_: My understanding is that this should function very similarly to the public cloud, where you have no control over the instance name / ID, but Juju should create tags in the metadata to indicate which Juju machine is running on that instance.
<rick_h_> adham: ok, so bad news is I've got no path forward for you. I'd love if you filed a bug on bugs.launchpad.net/juju and bring up names to kvm pods in MAAS though as that might be something we need to update Juju to supply at VM creation time but I've not played with the pods stuff in MAAS yet.
<cory_fu> rick_h_, adham: For instance, on my k8s deployment on AWS, my instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0
<rick_h_> cory_fu: right, exactly.
<rick_h_> cory_fu: so we'll have to setup something using pods and see what Juju does and update anywhere we're not treating it correctly
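As an illustration of using those tags to map instances back to Juju machines (AWS CLI shown, since cory_fu's example is from AWS; the tag key is taken from his message):

    # list instances carrying a juju-machine-id tag, with their tag values
    aws ec2 describe-instances \
      --filters "Name=tag-key,Values=juju-machine-id" \
      --query 'Reservations[].Instances[].[InstanceId,Tags]'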
<adham> https://stackoverflow.com/questions/50970133/installed-kubernetes-on-ubuntu-and-i-see-a-lot-of-nodes-are-getting-created-in-m
<cory_fu> rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing.
<adham> would this help?
<rick_h_> adham: a bit, but the key thing is how the MAAS is setup regarding the pods usage/etc.
<cory_fu> I'm really not sure why more than around 9 VMs would have been created unless conjure-up was run multiple times.  That's one for the Juju controller, and one for each machine required by CDK
<rick_h_> adham: because the root thing is that this isn't a typical MAAS with bare metal machines going
<adham> exactly cory_fu "
<adham> <cory_fu> rick_h_: At the end of the day, though, I think adham's real issue is that there were too many VMs created and he can't track down why or what roles each is playing."
<cory_fu> adham: Do you still have your ~/.cache/conjure-up/conjure-up.log file available?  That should have a record of everything that conjure-up did, including requesting new machines.
<adham> this is exactly what I'm running through
<rick_h_> adham: then where did the VMs come from? What's the "virtual machine manager" tool?
<adham> Luckily, I still have this https://github.com/conjure-up/conjure-up/issues/1476
<adham> this issue happened (after deleting the machines thinking they are a virus), but I'm over it
<w0jtas> stokachu: so i see error on glance in neutron.log: keystoneauth1.exceptions.auth.AuthorizationFailure, then in keystone i see error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup"
<adham> this problem is no longer persisting, but if you are looking for the log files, it's all packaged there
<adham> virtual machine manager to list the vms
<rick_h_> adham: I'm trying to understand your setup so we can replicate it and diagnose why the tags about what the resources were used for aren't making it. You say you commissioned X bare metal nodes into a MAAS running somewhere, correct?
<adham> so rick, like cory_fu mentioned KVM, but I use the virtual-machine manager for virsh
<w0jtas> stokachu: and command not working: keystone-manage: error: argument command: invalid choice: 'pki_setup'
<cory_fu> adham: I don't see any logs attached to that GitHub issue.  Also, that connection error indicates that conjure-up tried to connect to a controller but couldn't, presumably because you had deleted the VM while Juju still had a record of it as a valid controller (in ~/.local/share/juju/controllers.yaml)
<adham> to replicate my enviornment >> follow >> https://tutorials.ubuntu.com/tutorial/create-kvm-pods-with-maas >> once you can commission machines successfully, you can proceed with conjure-up kubernetes, you will reproduce 100% what I have here
<rick_h_> adham: ok, so you have MAAS running and you used the virtual-machine-manager to create VMs and registered those VMs in MAAS?
<rick_h_> adham: ty, that's what I needed.
<manadart> jam: In case it gets lost in the torrent of Github mail. I commented on the PR. Pinning a single CPU is done via range syntax - "limits.cpu=0-0".
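For reference, the same range syntax on a standalone LXD container (the container name is an example; Juju would apply the equivalent via the provider):

    # pin the container to CPU 0 only - "0-0" is a one-element range
    lxc config set my-container limits.cpu 0-0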
<adham> cory_fu, I ran juju unregister on the controller so I think this issue is fixed
<adham> hmm
<adham> let me check
<adham> there is only one controller there which is the current one
<adham> I removed the previous controller
<adham> cory_fu, I thought I attached the logs, one moment, I'll double check
<adham> cory_fu, can you please recheck the issue?
<adham> I uploaded the logs
<cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and will ask MAAS to
<cory_fu> allocate them as it sees fit, which will likely lead to new machines being allocated from the pod.  But I'm not certain about that because I don't have much experience with pods in MAAS
<rick_h_> adham: so looking at that tutorial "... will be created and a random name will be assigned to it as soon as you hit the "Compose machine" button."
<rick_h_> cory_fu: right, Juju isn't pod-aware so the thing is Juju just asks MAAS for a machine to use and since they're generated I'll bet it just creates them
<rick_h_> cory_fu: I'll have to play with this and try it out. I've not used it yet.
<adham> I can retry with this
<adham> Guys, it's been a huge struggle to find someone I can talk to about this issue, someone who knows it and can help, at least with knowledge
<rick_h_> adham: hey, we're here most days. Happy to help as this is going to get me to play with something new in MAAS I've not done yet.
<adham> do you guys mind if I can please email both of you in a group email with updates where we can continue discussion about this?
<rick_h_> adham: sorry it took a bit to dig into what you had running there, but I think it's coming together
<rick_h_> adham: file a bug please. That's the email thread and it'll let others see/find/etc
<adham> that's fine, and even better for me
<adham> can you pls tell me where to file it
<adham> and cory_fu, can you please also watching this?
<cory_fu> adham: Of course
<adham> or stay in the loop, in case we need to refer back to conjure-up :D
<adham> thx
<adham> rick_h_, where can I file the bug?
<rick_h_> adham: bugs.launchpad.net/juju
<adham> do you need the logs as provided in the github issue?
<cory_fu> adham: Please do link to the GitHub issue and StackOverflow question, for context
<rick_h_> adham: anything you've got we'll take and look into.
<rick_h_> adham: to be clear, we can't change/set the machine names as those come from MAAS. However, we should have noted with tags or metadata in MAAS what the machines are up to.
<cory_fu> adham: The only thing I see in that conjure-up.log file is 5 failed attempts to bootstrap.  I could see that having created 5 new VMs in MAAS but I can't possibly imagine how it would have created more than that, though.  Does your MAAS have those created VMs still available?  Can you see if there were any Juju-assigned tags on them?
<adham> rick_h_: https://bugs.launchpad.net/juju/+bug/1779161
<mup> Bug #1779161: conjure-up kubernetes creates 70+ VMs on KVM managed by MAAS with funny names <juju:New> <https://launchpad.net/bugs/1779161>
<adham> cory_fu: I had a few VMs before Kubernetes --- I created this issue after deleting the VMs, when I couldn't conjure-up/down kubernetes anymore. I thought those VMs were viruses, so I deleted them manually via the virtual machine manager and MAAS; having done this corrupted kubernetes and prevented conjure-up/down from working
<adham> this is the time when I uploaded the logs
<adham> someone explained to me the details of juju, and from here I was able to tear the rest of kubernetes down manually via juju, and then I redeployed and confirmed that those vms are from kubernetes
<adham> rick_h_: I don't really expect this >> "juju deploy canonical-kubernetes --lxd1: prettyname, --lxd2: prettyname... because it doesn't make sense
<adham> but at least instance i-04c41c1309bde47d4 got the tag juju-machine-id=conjure-kubernetes-core-0a9-machine-0 << I expect the machine name to be conjure-kubernetes-core-0a9-machine-0 if name is set to blank
<adham> this way it should avoid (hopefully) the MAAS from setting those un-named machines with pet names
<adham> cory_fu, are you still here?
<adham> rick_h_ are you still here?
<rick_h_> adham: still here. But it's the work day so will go in/out sometimes with calls/etc.
<adham> ahh, no, it''s alright, just making sure that both of you got everything and at least we're all linked together on the bug ticket
<rick_h_> adham: so we'll have to see. Typically with MAAS, machine names are set up long before Juju gets the machine. With this support for kvm-pods, if MAAS now allows that to be tweaked over the API, we'll have to see what changes Juju needs to work with it.
<adham> I am going to tear down the current kubernetes installation
<rick_h_> adham: right, I've fired off an initial email making sure if anyone on the team's played with the pods stuff and honestly we'll have to find a block of time to set that all up and see how it works
<rick_h_> we've not used it yet
<adham> And try >> "cory_fu> adham: I think one thing that isn't clear from that tutorial when using MAAS with conjure-up is that once you have a pod available, if you don't tell conjure-up what pre-allocated machines to assign each unit to by providing a (if I recall correctly) tag=<machine-tag> constraint on the CONFIGURE APPLICATIONS screen of conjure-up (you have to select Configure for each application), then Juju will assume you want new machines and
<adham>  will ask MAAS to"
<cory_fu> adham, rick_h_: I can't seem to subscribe myself to the Juju bug on LP
<cory_fu> adham, rick_h_: Can one of you please try subscribing johnsca (Cory Johns) to that bug?
<kwmonroe> cory_fu: you show up as subscribed to me
<cory_fu> adham: And yes, sorry I got pulled away for a minute as well
<kwmonroe> under the "Notified of all changes" heading is Cory Johns
<cory_fu> kwmonroe: Oh, ok.  It's not showing for me.  That's fine then
<adham> I subscribed you
<adham> can you pls check cory_fu
<rick_h_> adham: so I don't understand what you mean by "And try..." with cory's quote
<adham> I'm going to see if I set the constraints
<cory_fu> adham: I guess it just doesn't show me to myself.  :p
<adham> I will avoid the MAAS autonaming
<rick_h_> adham: I can tell you that's true. Again, MAAS is allocating machines on the fly using a virtual machine setup so they won't be reused unless you specify a placement constraint
<cory_fu> stokachu: Can you please confirm the correct syntax for setting a constraint to target a specific machine in conjure-up?
<cory_fu> stokachu: Also, how do you handle the case where there are multiple units, like for kubernetes-worker?
<cory_fu> adham: Please watch for stokachu's reply to ^, because I'm not certain of the correct syntax
<adham> thanks cory_fu, will do, it's 1:43 AM here, I might go to sleep soon as I have work tomorrow at 8 AM
<adham> I can't really keep my eyes open anymore
<cory_fu> adham: I completely understand.
<adham> I am actually talking to you guys from Australia
<cory_fu> adham: One of the Juju folks that you've spoken with in the past, yeah, the ones from Australia, could also tell you the constraint syntax to target MAAS machines, since it's actually a Juju constraint that's being specified
<adham> wallyworld is the one I spoke to and who advised me to reach out to conjure-up team
<adham> I think it's probably confusing, or I'm not doing a good job of explaining
<adham> my apologies guys for the hassle
<adham> is it possible that we can use the bug ticket for discussion and mentioning? It would be really great if stokachu posted an update there
<adham> ?
<cory_fu> adham: Yes, I'll comment on there with my understanding of the issue as well
<adham> thanks cory_fu
<adham> goodnight
<stokachu> it's "tags=<tagname>"
<stokachu> see https://docs.jujucharms.com/2.3/en/reference-constraints
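Putting that together for the case discussed above (the tag names are whatever was assigned to the machines in MAAS; a sketch, not a tested recipe):

    # units of this application will only land on MAAS machines tagged "worker"
    juju deploy kubernetes-worker --constraints tags=worker -n 3
    # or set it model-wide so every new machine must carry the tag
    juju set-model-constraints tags=worker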
<magicaltrout> kwmonroe: https://www.dropbox.com/s/jtcterg4ft7f09z/Screenshot%20from%202018-06-28%2017-26-15.png?dl=0
<magicaltrout> we've got some catching up to do ;)
<rick_h_> magicaltrout: :)
<rick_h_> zeestrat: is this your stuff turned tutorial? https://tutorials.ubuntu.com/tutorial/tutorial-charm-development-part1#0
<rick_h_> bdx: ^ might be cool for your new folks when you bring them in
<zeestrat> rick_h_: no, sir. What tipped you off? Looks nice though.
<rick_h_> zeestrat: no? I thought you were working with folks on putting your docs stuff into a tutorial
<rick_h_> zeestrat: I saw it shared from the twitter account actually. https://twitter.com/ubuntu/status/1012417625685708800
<zeestrat> Nope. I know our boy Lönroth has been working on some docs/tutorial stuff so he might know.
<rick_h_> zeestrat: ok well figured I'd check
<zeestrat> Thanks for the consideration :) looks like the tutorial page could use some authorship details and perhaps a link to the source.
<PatrickD_> Hi guys, trying to install Kubernetes right now, and it seems the Kubernetes Charms are using series xenial. Any idea if it would work with Bionic ?
<tvansteenburgh> PatrickD_: yeah it should. that likely won't be the default until after the 18.04.1 release
<PatrickD_> What's the easiest way to force it to Bionic ?
<pmatulis> juju deploy cs:bionic/<charm> ?
<tvansteenburgh> PatrickD_: trying to figure that out. i thought you could do `juju deploy canonical-kubernetes --series bionic --force`, but it appears those args only work on individual charms, not bundles
<PatrickD_> yeah, we also tried that :)
<tvansteenburgh> PatrickD_: for now i'm afraid you'll have to deploy the charms individually so you can use `--force --series bionic`
<tvansteenburgh> PatrickD_: raw bundle is here: https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive/bundle.yaml
<tvansteenburgh> that'll show you what charm and relations you need
<tvansteenburgh> charms
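A sketch of that workaround (charm list abbreviated; take the full set of charms and relations from the bundle.yaml linked above):

    juju deploy cs:~containers/easyrsa --series bionic --force
    juju deploy cs:~containers/etcd --series bionic --force
    juju deploy cs:~containers/kubernetes-master --series bionic --force
    juju deploy cs:~containers/kubernetes-worker --series bionic --force
    # then wire up each relation listed in bundle.yaml, e.g.
    juju add-relation kubernetes-master:etcd etcd:db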
<PatrickD_> Or I use Xenial, and find a way to use a 4.12+ kernel (drivers for Dell 640 requirement). Any easy way to do that ?
<tvansteenburgh> PatrickD_: where are you deploying? public cloud?
<PatrickD_> on MAAS
<tvansteenburgh> PatrickD_: i'm not exactly a MAAS expert, but it seems like you could create a xenial image with the kernel you want
<tvansteenburgh> PatrickD_: also `juju model-config -h | grep cloudinit -C3`
<tvansteenburgh> could potentially upgrade kernel that way. haven't tried it.
<rick_h_> PatrickD_: honestly charm pull the bundle and edit the default series on it to bionic.
<rick_h_> PatrickD_: otherwise we rely on the charm default as the author suggests series X. No way around it without editing the bundle because it's a lot of assuming that each charm supports a series in a bundle
<tvansteenburgh> rick_h_: that won't work
<tvansteenburgh> rick_h_: we don't have bionic in the charms' series list yet
<rick_h_> tvansteenburgh: oh, at all? I gotcha.
<rick_h_> tvansteenburgh: yea, then the "bundle" isn't ready for that yet heh.
<PatrickD_> yeah, trying to use bionic kernel in xenial, which would be just fine.
<rick_h_> PatrickD_: in that case I'd deploy the bundle and then juju run the commands to get the new kernel in place/reboot
<rick_h_> PatrickD_: vs bootstrapping the whole other series (if just the kernel is what you're after)
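Roughly what the juju run approach looks like (the machine numbers and the HWE package name are assumptions to verify against your MAAS images):

    # install the hardware-enablement kernel on the worker machines
    juju run --machine 0,1,2 'sudo apt-get install -y linux-generic-hwe-16.04'
    # reboot each machine to pick up the new kernel
    juju ssh 0 sudo reboot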
<PatrickD_> Also have an issue with MAAS zones in Juju. we have 3 zones (default, A and B). It either goes to default or B, but I want it to go to A... Any way to specify MAAS zone when deploying the bundle ?
<rick_h_> PatrickD_: I thought zones were meant to be like AZ in public clouds so that they were rotated/spread across for fault tolerance.
<rick_h_> PatrickD_: so Juju should rotate them as you add-unit.
<rick_h_> PatrickD_: that said, you might try a constraint of zone=xxx
<rick_h_> but not sure on that tbh
<rick_h_> PatrickD_: I'd start with a juju deploy xx --constraints zone=yyy first
<rick_h_> and see if Juju will do that
<PatrickD_> constraint doesn't work, but what you say makes sense... maybe I should remove unused zones then.
<PatrickD_> considering that there are zero machines in default and B, it's a bit strange that it wants to go to B
<rick_h_> PatrickD_: hmm, yea that's not right
<thumper> we don't currently support zone placement in bundles
<thumper> we don't have zone as a constraint
<thumper> we have talked about it before
<thumper> a key here is that maas zones are quite different to other cloud zones
<thumper> tvansteenburgh, PatrickD_: I *think* you could specify a bundle overlay to change the default series
 * thumper takes a look
<tvansteenburgh> thumper: sure, if the charm supports the series, but it doesn't
<thumper> ah
<thumper> hmm...
<thumper> yeah
<thumper> fair call
<tvansteenburgh> PatrickD_: fwiw i'm working on adding bionic to the charms, but it won't be done today
#juju 2018-06-29
<wallyworld> veebers: i left a few comments, let me know if you need further clarification
<kelvin__> wallyworld, LGTM it's much nicer now to ensure our operator app using StatefulSet. Thanks.
<wallyworld> kelvin__: awesome, thank you. we still use deployment controllers for stateless pods though
<wallyworld> that's my understanding of best practice
<wallyworld> also, the operator pod is still only a single pod, not yet managed
<wallyworld> the workload pods using storage will be stateful sets
<kelvin__> wallyworld, ic, we are using operatorPod() to ensure operator.
<wallyworld> kelvin__: yeah, for now it's just a single pod. but we need to change that
<kelvin__> yeah, i saw the TODO
<veebers> wallyworld: sweet will do
<veebers> wallyworld: I'm inclined to move a receiver method (member method, unsure of name) that doesn't need any state into a standalone (i.e. checkFiles(files map[string]string)). Any reason I shouldn't?
<veebers> wallyworld: note, I'm changing that method from checkFiles(files map[string]string) -> checkFile(files string) as we iterate at a higher level now
<babbageclunk> veebers: channelling wallyworld I reckon that sounds great
<veebers> babbageclunk: sweet thanks babbageworld (or would that be wallyclunk?)
<babbageclunk> ha, I like either of those
<veebers> ^_^
<veebers> Mr Babbageworld Wallyclunk.
<babbageclunk> Almost as good as Engelbert Humperdinck
<babbageclunk> Or Benedict Cumberbatch
<veebers> hah, indeed!
<thumper> veebers: the only time I don't is when I use the receiver to effectively namespace the function
<veebers> thumper: ack, this is unexported helper function.
<thumper> then I think it is fine
<veebers> which , actually, does use stored state so I'll leave this one, but will do the other helper that doesn't ^_^
<wallyworld> veebers: sounds ok, i'll look once pushed
<veebers> wallyworld: I thought the use of params.DockerInfoResponse in state might be icky. Where is best to create a core struct that will be used internally?
<wallyworld> veebers: in the top level core package
<wallyworld> a subpackage under there
<wallyworld> maybe resources
<veebers> wallyworld, kelvin__ : develop doesn't build with the latest commit of charm.v6, are we still waiting for something to land in develop so it will?
<wallyworld> veebers: you sure you ran godeps?
<wallyworld> there were several upstream changes
 * veebers makes sure
<wallyworld> builds for me
<veebers> wallyworld: godeps kicks charm.v6 to 0f7685c8 (the commit before kelvin__ merged his bits), but tip for charm.v6 is 6e010c5e0
<wallyworld> ah, we may be waiting on a juju deps update
<wallyworld> to bring in tip of v6
<kelvin__> i will have to update a few deps later after bundlechanges done
<wallyworld> veebers:  so your branch should leave deps untouched
<veebers> kelvin__, wallyworld: ah ack makes sense, all good.
<wallyworld> kelvin__: if you have landed upstream, you can land anytime
<kelvin__> wallyworld, yeah, i will update all them together with bundlechanges upgrade in juju.
<wallyworld> sounds great ty
<ybaumy> hmm i tried .. conjure-up stable/beta/edge and get the same behavior http://pastebin.centos.org/878811/
<ybaumy> anyone can help me
<ybaumy> ah yes kubernetes canonical install going on
<ybaumy> only the master  is not working
<wallyworld> ybaumy: we'll need  a lot more info - k8s logs, juju logs, what substrate etc
<ybaumy> im deploying on localhost
<wallyworld> using lxd cloud?
<ybaumy> wallyworld: localhost with lxd
<wallyworld> which bundle? cdk or kubernetes-core?
<ybaumy> canonical kubernetes
<ybaumy> i enabled helm and prometheus
<wallyworld> that's a fairly heavyweight bundle for use on localhost - you may be out of resources or something. but hard to say without more info
<ybaumy> my localhost has 8 cores and 24GB ram
<wallyworld> you'll need to look at k8s logs to see why it can't schedule that pod
<ybaumy> k mom phone
<thumper> wallyworld: fyi https://github.com/juju/juju/pull/8875
<wallyworld> looking
<thumper> wallyworld: I have a follow up one that adds the values to the model config and updates api server call
<wallyworld> righto
<thumper> ybaumy: what does juju status say?
<kelvin__> wallyworld, would u mind to have a quick look on this fix? https://github.com/juju/charm/pull/250/files  thanks
<wallyworld> sure
<ybaumy> thumper: juju status says waiting for kube-system pods to start
<ybaumy> and as you can see in the pastebin .. the scheduler cannot start the pods
<thumper> ybaumy: can you pastebin the output of juju status?
<ybaumy> http://pastebin.centos.org/878896/
<ybaumy> on beta there was 1.10.4 i think
<ybaumy> and i had the same problem
<ybaumy> cpu and memory i think are not the problem. the vm is pretty much idle right now
<thumper> ybaumy: it looked like the kubernetes-master unit was still setting up
<thumper> kubernetes-master/0*      waiting   idle   9        10.48.163.224   6443/tcp                                 Waiting for kube-system pods to start
<thumper> everything else was up and running
<wallyworld> thumper: lgtm
<thumper> until that is marked as "active" you probably won't have much luck
<thumper> if it has been a long time
<thumper> there may be other issues
<thumper> wallyworld: any ideas?
<ybaumy> thumper: http://pastebin.centos.org/878811/ this goes on forever
<wallyworld> kelvin__: lgtm, i wondered about that at the time but forgot to ask
<kelvin__> wallyworld, ah. missed that place.. thanks.
<wallyworld> thumper: ybaumy: i have seen the master node take a while to start usually. sometimes a few minutes
<wallyworld> but if it doesn't start you need to look at k8s logs
<thumper> Jun 29 05:42:28 juju-161da2-9 systemd[1]: Failed to reset devices.list on /system.slice/calico-node.service: Operation not permitted
<thumper> that line looks a little suspicious
<ybaumy> wallyworld: where can i find the k8 logs
<wallyworld> i would try just the kubernetes bundle not the CDk one
<wallyworld> ybaumy: you need to use kubectl to look at the particular artifact that is not working
<ybaumy> ah ok
<wallyworld> kubectl logs -f ....
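In practice that debugging loop looks something like this (run it wherever the cluster's kubeconfig lives; the pod name is a placeholder):

    kubectl get pods -n kube-system                  # find the Pending pods
    kubectl describe pod <pod-name> -n kube-system   # the Events section usually names the scheduling problem
    kubectl logs -f <pod-name> -n kube-system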
<ybaumy> does the kubernetes bundle also have those features like integrated helm and elasticsearch?
<ybaumy> and prometheus
<wallyworld> i would start simple by trying the smaller kubernetes-core bundle
<wallyworld> just to ensure that works
<ybaumy> ok
<kelvin__> wallyworld, got another one for bundlechanges, https://github.com/juju/bundlechanges/pull/41/files thanks.
<ybaumy> i wanted to give our team a quick way to deploy a cluster on one vm with a lot of features so i thought that canonical bundle would be nice
<wallyworld> kelvin__: looking
<wallyworld> ybaumy: it is nice, but we need to step back to try and see the source of the error
<wallyworld> starting with a smaller bundle can help verify that the basics are all ok
<ybaumy> installing.. will take few minutes.. bbl. need to got to meeting
<wallyworld> and you can still add prometheus etc if you need to. but you may not need HA
<wallyworld> ok, i'll be offline soon but other folks will be here
<ybaumy> thanks wallyworld
<wallyworld> we'll get it fixed for you, it just may take a bit of debugging :-)
<wallyworld> kelvin__: lgtm!
<kelvin__> thanks, wallyworld
<w0jtas> hello, anyone could help fixing keystone error? "ERROR Signing error: Unable to load certificate - ensure you have configured PKI with "keystone-manage pki_setup""
<manadart> stickupkid: Hop on the guild HO when you are set up this morning.
<manadart> stickupkid: Review? https://github.com/juju/juju/pull/8877
<stickupkid> manadart: done
<manadart> stickupkid: Ta.
<stickupkid> manadart: on da hangout
<manadart> stickupkid: Added that test.
<stickupkid> manadart: approved the changes
<manadart> stickupkid: Ta.
<stickupkid> manadart: i've removed the remote from the lxd provider, but i'm not sure it's worth the effort removing it from tools/lxdclient - there is some logic there that i'm unsure is even worth prising out of it
<manadart> stickupkid: Sure. Propose it and we can have a look.
<stickupkid> manadart: sorry was manual testing https://github.com/juju/juju/pull/8878
<w0jtas> how can i remove dead controller from juju ?
<stickupkid> w0jtas: do you have access to the controller at all?
<manadart> w0jtas: If the controller machine is dead/gone and kill-controller does not work, you can use "unregister". Note that that just removes it from your local registry and doesn't do resource cleanup.
<w0jtas> manadart: thanks!
<manadart> w0jtas: Sure thing.
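In command form (the controller name is whatever "juju controllers" lists for you):

    juju kill-controller mycontroller   # tries to clean up the controller's resources first
    juju unregister mycontroller        # local client-side removal only; cloud resources are left behind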
<w0jtas> after running juju upgrade-charm neutron-gateway i have error on status list: "hook failed: "upgrade-charm"" what can i do here to debug ?
<w0jtas> ok nvmd, deleted setup and starting again
<stickupkid> manadart: so i've removed the rawProvider from my branch now
<manadart> stickupkid: Nice.
<rick_h_> Morning party folks
<stickupkid> manadart: so the only thing left is sorting out the lxdclient when connecting to local - i guess the storage and default network
<stickupkid> manadart: i also introduce the server spec, there is a lot of testing to do here :D
<stickupkid> manadart: also i think we should consider some of the internal namings whilst we're here tbh
<manadart> stickupkid: HO?
<manadart> rick_h_: Morning.
<stickupkid> manadart: yeap
<stickupkid> rick_h_: morning
 * manadart heads to GUILD HO.
<w0jtas> how can i change storage size in juju ? during install i've set 10GB :/
<rick_h_> w0jtas: check out the root disk size constraint.
<w0jtas> rick_h_ where ? :)
<rick_h_> w0jtas: https://docs.jujucharms.com/2.3/en/reference-constraints
<w0jtas> thanks
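For the record, the constraint looks like this (sizes are examples; constraints only affect machines provisioned after they are set, so existing machines keep their disks):

    juju deploy nova-compute --constraints root-disk=50G
    # or model-wide for future machines
    juju set-model-constraints root-disk=50G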
<w0jtas> hmm "juju get-model-constraints" doesn't return anything
<w0jtas> in nova-compute log i see "phys_disk=9GB used_disk=0GB"
<manadart> hml: From the doc: "juju upgrade-series complete 4". What is the "4"?
<hml> manadart: a machine number
<manadart> hml: Ah, got it. Thanks.
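The full flow around that command, as sketched in the doc under discussion (argument order follows that doc and may differ in released versions; machine 4 and bionic are examples):

    juju upgrade-series prepare 4 bionic   # pause units and ready the machine
    # upgrade the OS on the machine itself, e.g. juju ssh 4, then do-release-upgrade
    juju upgrade-series complete 4         # resume units on the upgraded machine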
<PatrickD_> When trying to deploy kubernetes bundle xenial with 16.04-hwe kernel (Bionic still not supported), on maas (juju on bionic, maas on bionic, if that makes a difference), I get this error in juju status : hwe_kernel": ["xenial has no kernels available which meet min_hwe_kernel(ga-18.04)
<tvansteenburgh> rick_h_: that mean anything to you? ^
<PatrickD_> still trying to find out where it comes from (juju or maas ?). my default_min_hwe_kernel is ga-16.04 (from get-config name=default_min_hwe_kernel) can't find where ga-18.04 comes from.
<rick_h_> tvansteenburgh: PatrickD_ that looks like a maas error
<rick_h_> PatrickD_: run juju status --format=yaml
<rick_h_> See if there's any other details in the machine section.
<rick_h_> PatrickD_: might also have to check the debug log/controller for provisioning feedback there.
<PatrickD_> same error there... still searching. will look at logs.
<pmatulis> PatrickD_, yes, that's a MAAS error
<pmatulis> check your kernel settings. easiest to do in their web UI
<pmatulis> currently it looks like the minimum kernel is set at 18.04 but you are requesting 16.04
<PatrickD_> there is no "min_hwe_kernel" only "default_min_hwe_kernel", which is right now set at ga-16.04, not even 18.04.
<stickupkid> hml: looks good to me https://github.com/juju/juju/pull/8874
<hml> stickupkid: ty
<stickupkid> manadart: trying to work out how to test the error messaging from lxd errors is interesting
<manadart> hml: Approved #8866, notwithstanding any implementation detail to be discussed with externalreality.
<hml> manadart: ty -
<PatrickD_> Looks like I will have to wait for Bionic charms, or for soemone in MAAS to help me with this issue :) Any ETA for Bionic Charms ?
<manadart> stickupkid: Approved #8878 with one comment on naming.
<stickupkid> manadart: naming is the hardest thing
<stickupkid> manadart: officially NewXXX is supposed to return a pointer and MakeXXX not
<stickupkid> manadart: but I guess that boat's sailed...
<manadart> stickupkid: Not going to die on that hill. Land it.
<stickupkid> manadart: I removed lxdclient completely and it all works for bootstrapping, i'll land my patch later
<stickupkid> manadart: we broke the error messaging I believe, I'm going to try and reinstate it
<manadart> stickupkid: OK.
<stickupkid> manadart: by we, I probably mean me, when I moved to the new client for getting the local address
<tvansteenburgh> PatrickD_: no ETA yet, but it'll be soon. likely next week
<stickupkid> does anyone know what is supposed to happen with juju register localhost?
<stickupkid> is that supposed to work
<hml> stickupkid:  localhost being a controller name?
<stickupkid> localhost being lxd
<hml> stickupkid: is it supposed to work… juju register requires a controller name or registration string
<stickupkid> just hangs on my end, i'm trying to work out if i've broken anything
<hml> stickupkid: i wouldn't have expected it to be possible.
<stickupkid> it doesn't work on 2.4 either
<hml> stickupkid: what are you wishing to accomplish with it
<stickupkid> hml: see if it tries to grab any credentials... etc
<hml> stickupkid: you need to boot strap a controller - add a user to it - which provie give the registration string
<hml> s/provie/provide
<stickupkid> hml: thought as much
<stickupkid> hml: cool, well i'm done for the week - have a good weekend :)
<hml> stickupkid: you too!
#juju 2018-06-30
<ybaumy> cant get either conjure-up or juju to deploy to localhost
<ybaumy> master cant start pods
<ybaumy> is there a deployment guide for localhost
<ybaumy> do i have to do something different. i used juju to deploy on vsphere a lot of times and never had any trouble with k8s packages
<ybaumy> this seems to be somehow the same kind of problem ... https://github.com/conjure-up/conjure-up/issues/1377
<ybaumy> but its not only conjure-up
<ybaumy> kube-apiserver.daemon[13489]: Error: error creating self-signed certificates: mkdir /var/run/kubernetes: permission denied
<ybaumy> 2018-06-30 04:42:07 INFO juju-log Checking system pods status: heapster-v1.5.3-6c9b557676-4f5br=Pending, kube-dns-7b479ccbc6-mnjjj=Pending, kubernetes-dashboard-6948bdb78-mvp4b=Pending, metrics-server-v0.2.1-85646b8b5d-g4682=Pending, monitoring-influxdb-grafana-v4-b54d59784-d2xbb=Pending
<adham> hello everyone
<adham> rick_h_
<adham> I'm currently stuck on removing kubernetes
<adham> So that I can go with the option of the constraints
<adham> I had a loadbalancer that was running on the server haproxy
<adham> after installing kubernetes using conjure-up, it seems it blocked my loadbalancer or at least my loadbalancer is no longer at the front
<adham> I don't know how to reverse that, I've been searching almost everywhere but no luck, routes, firewall, ip tables, no hint at all
<adham> can anyone help please?
<adham> I'm currently stuck on removing kubernetes, so that I can go with the option of the constraints
<adham> I had a loadbalancer that was running on the server haproxy and after installing kubernetes using conjure-up, it seems it blocked my loadbalancer or at least my loadbalancer is no longer at the front.
<adham> I don't know how to reverse that, I've been searching almost everywhere but no luck, routes, firewall, ip tables, no hint at all, can anyone help please?
<ybaumy> ok this is what i found out https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/262#issuecomment-401534126 .. even if i use kubernetes-core here. i dont think it matters.
<ybaumy> the lxd localhost deployment is broken .. let alone that i had to "mkdir /var/run/kubernetes; chmod 777 /var/run/kubernetes; usermod -a -G lxd ubuntu" on the master
<ybaumy> also i had to set an lxc profile which fits this deployment
<ybaumy> sorry to say this. but scattered documentation here and there cost me a few hours
<ybaumy> i would really need some help here
<adham> guys
<adham> I just figured
<adham> juju status haproxy
<adham> it shows me a controller or model
<adham> is there a way where I can remove this or unregister this controller or model?
<adham> I am just 1 minute away from resolving my issue
<adham> does anyone know how to undo juju haproxy?
<adham> or cancel it?
#juju 2018-07-01
<thumper> morning
<veebers> wallyworld, babbageclunk: What else do I need for a test, I'm getting no test to run: https://paste.ubuntu.com/p/ZMWFtBZtkP/
<babbageclunk> veebers: is that in a new package?
<veebers> babbageclunk: aye
<babbageclunk> you'll probably need a package_test.go
 * thumper was going to say that
<veebers> babbageclunk: ah right, I'll give that a whirl. Cheers
<thumper> we started the pattern of doing the setup of tests for packages in their own file
<thumper> veebers: there are two different ones
<thumper> one if you need mongo, and a different way if you don't
<babbageclunk> veebers: look at core/auditlog for a trivial one.
<veebers> sweet, cheers guys. Should be pretty simple as there is no deps needed (it's a simple test :-))
<veebers> babbageclunk, thumper sweetbix that works. \o/
<thumper> veebers: just changed our 1:1 from hangout to meet
<veebers> thumper: ack, I'll re-join
<thumper> wallyworld: could I get you to triage bug 1778033 as you have been in storge recently?
<mup> Bug #1778033: juju stuck attaching storage to OSD <4010> <juju:New> <https://launchpad.net/bugs/1778033>
<wallyworld> ok
#juju 2020-06-22
<hpidcock> thumper: looking
<thumper> hpidcock: ta
<kelvinliu> just pushed and released the new mariadb and mediawiki testing charms for CI,  got this tiny PR to upgrade those charms for test, anyone got 1min to take a look plz? thank you https://github.com/juju/juju/pull/11738
<kelvinliu> and https://github.com/juju/juju/pull/11739 ports the change to develop
<thumper> https://github.com/juju/juju/pull/11740
 * thumper looks at kelvinliu's branch
<thumper> kelvinliu: good to go
<kelvinliu> ty
<hpidcock> looking thumper
<Chipaca> thumper: 👋
<thumper> ah poo
<thumper> sorry Chipaca
<thumper> Chipaca: got busy watching some testing
<thumper> and for whatever reason, didn't actualy create a calendar item
<thumper> Chipaca: I could chat now for a bit if you are still around
<thumper> this was on me sorry, I should have made the calendar entry, then I would have remembered
<jam> thumper, I think he went back to sleep, but we'll plan for it next week
<thumper> jam: ok, sorry
<Chipaca> thumper: you are forgiven :-) i got a nice early run out of it (the "go back to bed" plan didn't pan out)
<Chipaca> the sleep-deprived me of 10 years ago scoffs at current-day me
<thumper> :)
<Eryn_1983_FL> hi guys
<Eryn_1983_FL> so i am trying to figure out what logs i need to pull for ssl cert issues on openstack/juju
<Eryn_1983_FL> barbican/1*                  waiting   idle   0/lxd/7  10.3.251.11     9311/tcp,9312/tcp  'shared-db' incomplete
<Eryn_1983_FL>   barbican-vault/11*         blocked   idle            10.3.251.11                        'secrets-storage' missing
<Eryn_1983_FL> i think something is wrong with the vault
<hml> Eryn_1983_FL: try "juju debug-log –include unit-barbican-1 –include unit-barbican-vault-1 –replay"   these are the logs for those 2 units
<Eryn_1983_FL> ty
<Eryn_1983_FL> juju debug-log –include unit-barbican-1  –include unit-barbican-vault-1 –replay
<Eryn_1983_FL> ERROR unrecognized args: ["–include" "unit-barbican-1" "–include" "unit-barbican-vault-1" "–replay"]
<hml> Eryn_1983_FL: sorry… things got auto corrected on me.  the include and replay args use 2 dashes.
<Eryn_1983_FL> ok
<Eryn_1983_FL> https://paste.debian.net/1153329/
<Eryn_1983_FL> https://paste.debian.net/1153329/
<hml> Eryn_1983_FL:  hrm.. just those 53 lines?   what are your logging levels set to?  "juju model-config logging-config"
<hml> was there anything for barbican-vault?
<Eryn_1983_FL> thats all that came out
<Eryn_1983_FL>  juju model-config logging-config
<Eryn_1983_FL> <root>=WARNING
<Eryn_1983_FL> no
<Eryn_1983_FL> i dont see it
<hml> Eryn_1983_FL: can we change it to "juju model-config logging-config=`<root>=WARNING;<unit>=DEBUG`" ?
<Eryn_1983_FL> i can try..
<Eryn_1983_FL> back ticks?
<hml> Eryn_1983_FL:  yes, or quotes work too
<Eryn_1983_FL> got it
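Transcribed without the autocorrect damage, the two commands above are (note the plain double dashes; if Juju rejects the "<unit>" tag, the docs spell the unit selector as plain "unit"):

    juju debug-log --include unit-barbican-1 --include unit-barbican-vault-1 --replay
    juju model-config logging-config='<root>=WARNING;<unit>=DEBUG'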
<Eryn_1983_FL> maybe its been too long and the errors were last week when we done it
<hml> Eryn_1983_FL: unfortunately that will only change for logs moving forward… has the charm with the relation for "shared-db" deployed?
<hml> what's the db's status?
<Eryn_1983_FL> barbican/1*                  waiting   idle   0/lxd/7  10.3.251.11     9311/tcp,9312/tcp  'shared-db' incomplete
<Eryn_1983_FL> i dont think so
<Eryn_1983_FL> https://paste.debian.net/1153330/
<Eryn_1983_FL> can we rip out and start over?
<hml> Eryn_1983_FL:  looks like the db is in an error state.
<Eryn_1983_FL> the vault and barbican
<Eryn_1983_FL> yeah
<hml> Eryn_1983_FL: i'd get the db happy - then see what happens with barbican
<Eryn_1983_FL> ok
<rick_h> petevg:  did you bootstrap to the cluster? I'm failing to bootstrap at "Contacting controller to verify accessibility" and wonder what I've missed
<jamespage> Eryn_1983_FL: is barbican directly related to mysql-innodb-cluster? it really should be related via a mysql-router subordinate
<jamespage> that might be the cause of the shared-db-relation-departed errors
<Eryn_1983_FL> not sure
<Eryn_1983_FL> he just restarted
<Eryn_1983_FL> barbican-mysql-router??
<Eryn_1983_FL> ill see what happens after the reinstalls
<Eryn_1983_FL> https://paste.debian.net/1153341/
<Eryn_1983_FL> ok guys how do i fix the certs
<Eryn_1983_FL> vault is initalized and i got the keys
<Eryn_1983_FL> https://paste.debian.net/1153347/
<Eryn_1983_FL> https://paste.debian.net/1153349/
<rick_h> petevg:  what's your az --version output looking like?
<petevg> rick_h: 2.5.1 *
<petevg> Full paste:
<petevg> https://paste.ubuntu.com/p/cCjyqBCV3J/
<rick_h> petevg:  ok, far ahead of me even on the edge snap then. Let me try all this with your version. You confirmed you got it from the deb link in your email?
<rick_h> geeze, nothing like pipe to bash...
<petevg> rick_h: I don't think that I got it w/ the deb link. I'm actually not sure what I did to install :-/
<rick_h> petevg:  ok, well getting 2.7 from the deb link now so will try that
<Eryn_1983_FL> are there juju ovn docs i can use that are for this kind of deployment?
<Eryn_1983_FL> forum?
<Eryn_1983_FL> discourse?
<hml> jamespage: ^^
<Eryn_1983_FL> anyideas what i should do rick_h ?
<rick_h> Eryn_1983_FL:  what to do about using vault? I'm not familiar with driving that tbh.
<Eryn_1983_FL> driving?
<Eryn_1983_FL> alright, who do i gotta murder to get this working right
<rick_h> Eryn_1983_FL:  configuring, managing, etc. Looking at the traceback to see what the ask is. You mention having keys?
<Eryn_1983_FL> yeah
<Eryn_1983_FL> vault is open we got keys
<Eryn_1983_FL> how do i resolve the certificate missing error
<rick_h> Eryn_1983_FL:  this is maintained by the openstack team. They've written the charms and know how to hold them right to dance for you.
<Eryn_1983_FL> network wont build yet
<rick_h> Eryn_1983_FL:  that looks like a missing relation
<rick_h> Eryn_1983_FL:  juju status --relations
<rick_h> Eryn_1983_FL:  do you see a relation on barbican-vault and vault?
<Eryn_1983_FL> Relation provider                      Requirer                                         Interface                       Type         Message
<Eryn_1983_FL> barbican-mysql-router:shared-db        barbican:shared-db                               mysql-shared                    subordinate
<Eryn_1983_FL> barbican-vault:secrets                 barbican:secrets                                 barbican-secrets                subordinate
<rick_h> Eryn_1983_FL:  oh, but your vault is in an error state "hook failed: update-status"
<rick_h> Eryn_1983_FL:  so you have to mark it resolved and get it to kick again
<rick_h> Eryn_1983_FL:  juju resolve vault/0
<Eryn_1983_FL> ok
<rick_h> Eryn_1983_FL:  might have to look at the log file on vault to see why update-status is erroring
<Eryn_1983_FL>  juju resolve vault/0
<Eryn_1983_FL> ERROR unit "vault/0" is not in an error state
<rick_h> Eryn_1983_FL:  ok, looking at your paste, is it ready now then?
<Eryn_1983_FL> here let me make anew one
<Eryn_1983_FL> https://paste.debian.net/1153354/
<Eryn_1983_FL> vault says read and active.
<Eryn_1983_FL> ready
<rick_h> Eryn_1983_FL:  ok, good there. I do see this though
<rick_h>   barbican-mysql-router/0*       error     idle            10.3.251.28                        hook failed: "install"
<rick_h> Eryn_1983_FL:  so have to check out why that's unhappy
<Eryn_1983_FL> pulling logs
<Eryn_1983_FL> would that prevent the ovn  certs from being there
<rick_h> Eryn_1983_FL:  no idea on that one. That's the question for the openstack folks. They have a community setup for the charms that would be good to check with. Let me get the link
<Eryn_1983_FL> ok
<Eryn_1983_FL> thanks
<Eryn_1983_FL>  juju debug-log --include barbican-mysql-router-0  --replay  is this supposed to take forever?
<rick_h> https://docs.openstack.org/charm-guide/latest/find-us.html
<Eryn_1983_FL> thank rick we been at this for months
<Eryn_1983_FL> ty
<rick_h> Eryn_1983_FL:  hmm, I'd suggest just juju ssh barbican-mysql-router/0
<rick_h> Eryn_1983_FL:  and from there less /var/log/juju/unit<tab>
<rick_h> and it'll complete the unit name and get you into the unit log
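Spelled out, with the tab-completion resolved (the unit log path pattern is /var/log/juju/unit-<application>-<unit-number>.log):

    juju ssh barbican-mysql-router/0
    less /var/log/juju/unit-barbican-mysql-router-0.log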
 * rick_h is old school and wants to see the log file for himself I guess
<Eryn_1983_FL> hehe
<Eryn_1983_FL> rick_h is teaching me the ways of the kung fu of juju
<Eryn_1983_FL> https://paste.debian.net/1153356/
<rick_h> Eryn_1983_FL:  lol you're getting the same thing I am on a k8s install right now
<rick_h> mystery error
<rick_h> Eryn_1983_FL:  what version of Juju?
<Eryn_1983_FL> :(
 * rick_h hopes you saying 2.8.0
<rick_h> petevg:  ^ same error, on a MAAS, 2.8.0
<Eryn_1983_FL> juju version
<Eryn_1983_FL> 2.8.0-bionic-amd64
<Eryn_1983_FL> fml
<Eryn_1983_FL> bugs?
<rick_h> manadart_:  achilleasa anyone around? this is kinda :(
<rick_h> Eryn_1983_FL:  I've been working to pull notes together to file one today.
<Eryn_1983_FL> ok
<Eryn_1983_FL> well im going to go fight with openstack peeps,
<rick_h> Eryn_1983_FL:  ok, well good news you're not alone and I don't think this is you. Bad news is I don't know why we're getting this error and have to chase down smarter folks that work on Juju to HALP!
<Eryn_1983_FL> see who wins
<Eryn_1983_FL> ok
<rick_h> Eryn_1983_FL:  lol well you can check on the certs working order, but they're not going to be able to help much with this issue
<Eryn_1983_FL> ovn vs openvswitch?
<Eryn_1983_FL> boss is wanting to put in openvswitch instead
<rick_h> Eryn_1983_FL:  yea, they can help more with that. fnordahl is a bit of an expert there and might be able to help but we'll see
<Eryn_1983_FL> ok
<pmatulis> Eryn_1983_FL, you just need to pass a CA cert to vault
<petevg> rick_h: catching up on the backscroll (I'm technically at lunch right now): I don't see the Juju bug? I do know that vault can be tetchy about certificates, and the OpenStack folks can help out w/ that.
<petevg> Speaking of the OpenStack folks, I see that pmatulis has jumped in. Thx!
<pmatulis> Eryn_1983_FL, juju run-action --wait vault/leader generate-root-ca
<pmatulis> Eryn_1983_FL, please tell me what documentation you were following so i can improve
<rick_h> petevg:  it's the exit 1 error on install that got hit on there as well.
<manadart_> rick_h: I have failures connecting to both achive.ubuntu.com and the charm store. Failed charm deployment earlier, now I can't bootstrap.
<petevg> rick_h: right. Which I think has to do w/ certs not being generated. Things fail in really bad ways if your vault isn't set up properly. (Security software is always a headache.) I'll keep an eye on the convo -- if there is a bug in Juju, I'd definitely like to get it filed and fixed.
<Eryn_1983_FL> im not sure what he followed
<pmatulis> Eryn_1983_FL, i see, you are helping someone else? (too much backscroll, sorry)
<Eryn_1983_FL> its cool
<Eryn_1983_FL> no worries
<Eryn_1983_FL> he install openvswitch now
<Eryn_1983_FL> great
<Eryn_1983_FL> now we got sqlite errors on octavia
<Eryn_1983_FL> how do i get missing relations? i need neutron-plugin and messaging
<rick_h> manadart_:  yea, I'm seeing the archive one now I think
<rick_h> manadart_:  the issue seems to be that things hang/don't get marked as an error so there's no retry/etc path really
<Eryn_1983_FL> anybody know where octavia stores its sqlite db
<jamespage> Eryn_1983_FL: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-ovn.html is the general deployment guide for the charms (that section covers ovn)
<jamespage> octavia - https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html
<jamespage> apologies - you caught me at my end of day
<jamespage> I see pmatulis is helping as well
<Eryn_1983_FL> i think we are going to try this doc
<Eryn_1983_FL> https://jaas.ai/openstack-lxd/bundle/1
<Eryn_1983_FL> and see if that helps us solve some issues,
<Eryn_1983_FL> sigh i swear every time we try to install something it breaks something else
<Eryn_1983_FL> now the networks are down on two machines
<Eryn_1983_FL> ffs
<rick_h> kwmonroe:  ping, when you get a few can I bug you please for k8s info for the demo prep?
<kwmonroe> ping away rick_h
<rick_h> kwmonroe:  https://meet.google.com/zhg-vdsi-rgi?authuser=1 I'll bring a list :)
<Eryn_1983_FL> i got a wacked error
<Eryn_1983_FL>     raise SourceConfigError(
<Eryn_1983_FL> charmhelpers.fetch.SourceConfigError: Invalid Cloud Archive release specified: bionic-train on this Ubuntu version (focal)
<Eryn_1983_FL> for ovn
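The SourceConfigError above means the charm's source config points at the bionic-train Ubuntu Cloud Archive pocket, which only exists for bionic; a focal machine cannot enable it. A hedged sketch of the usual fix, assuming the application is named ovn-central and focal's own archive carries new enough packages:

    juju config ovn-central source=distro    # use the focal archive instead of cloud:bionic-train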
<josephillips> hey
<josephillips> someone know what is the cycle of the charms? i mean, what is the charm i can download normally from juju
<josephillips> in github from stable?
<josephillips> stable/year.month right?
<Eryn_1983_FL> hi guys we are trying to do https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/app-certificate-management.html
<Eryn_1983_FL> https://paste.debian.net/1153390/
<petevg> josephillips: I'm not sure what you're asking?
<petevg> You can visit the charmstore at https://jaas.ai/store
<petevg> If you want to deploy a charm on a machine from a specific ubuntu release, you can do that by adding the --series flag to the juju deploy command.
<petevg> If you want to deploy a specific charm revision, you do so by specifying it like so:  juju deploy cs:<charm>-<revision>
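A minimal sketch combining the two forms described above (the mysql charm name, bionic series, and revision 58 are all illustrative):

    juju deploy mysql --series bionic    # choose the Ubuntu series for the unit's machine
    juju deploy cs:mysql-58              # pin a specific charm store revision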
<tvansteenburgh> hello juju friends
<tvansteenburgh> i've got a bundle where i deploy some stuff "to: [lxd:0, lxd:1]" etc...
<tvansteenburgh> when the dust settles i've got one lxd with both an ipv4 and an ipv6 address, and the rest have ipv6 addresses but no ipv4
<tvansteenburgh> is there a way to tell juju, "hey, give all my lxds an ipv4 address"
<rick_h> tvansteenburgh:  :/ that's surprising, since Juju normally hates ipv6 on lxd. Is the host machine set up with some custom lxd config?
<tvansteenburgh> rick_h: no, nothing custom. it's a machine newly provisioned by maas
<tvansteenburgh> rick_h: the machines *are* getting both ip4 and 6 addresses
<tvansteenburgh> which is fine, i'm just not sure what's happening with lxd
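For reference, a hedged sketch of the bundle placement being described (application and charm names are illustrative):

    machines:
      "0": {}
      "1": {}
    applications:
      myapp:
        charm: cs:ubuntu
        num_units: 2
        to: ["lxd:0", "lxd:1"]    # one container on each machine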
<rick_h> kwmonroe:  do we have any default alerts/dashboards for graylog that are interesting?
<Eryn_1983_FL>  hey guys how do i get the ca private key from openstack
<kwmonroe> rick_h: unfortunately we don't have any default dashboards for graylog.  we leave it completely up to the user as to what log they choose to care about.  that said, graylog's UI is pretty good for drilling down.  the System->Inputs->Show Recvd Msgs gives you a host of fields that you can drill on.  It's not a dashboard, but it's still pretty powerful ootb.
<rick_h> kwmonroe:  yea, all good just checking
<rick_h> petevg:  lol well found the memory limits on kvm pods on guimaas
<rick_h> 21 deployed machines and it's cranky now
<hml> rick_h: that's it?  :-D
<rick_h> hml:  well a few are hardware, but at 15 vms the nuc4 cried uncle and said no more
<rick_h> but my juju status is now scrolling off the terminal so I just got it going well enough lol
<kwmonroe> rick_h: tvansteenburgh, can you come back to me in https://meet.google.com/zhg-vdsi-rgi?authuser=1 re: dashboards
<rick_h> kwmonroe:  yea omw
<tlm> very tiny PR if anyone has two seconds https://github.com/juju/juju/pull/11742
<kwmonroe> rick_h: remove-unit --force doesn't mess around.
<kwmonroe> s/unit/machine
<rick_h> kwmonroe:  yea, it's pretty brutal
#juju 2020-06-23
<hpidcock> tlm: approved
<tlm> thanks hpidcock
<timClicks> I would like to change a few headings in the `juju status` output
<timClicks> "Inst id" -> "Instance ID"
<timClicks> "AZ" -> "Availability Zone"
<timClicks> "Rev" -> "Revision"
<timClicks> "SAAS" -> "Remote Application"
<timClicks> "App" -> "Application"
<tlm> sounds better to me timClicks
<timClicks> for the CMR section of `juju status`, we use the heading "Store" when we should probably use "Controller" or "Source"
<timClicks> For models on the same controller, "Store" is reporting the controller name
<kelvinliu> anyone could help to take a look plz? thanks!  https://github.com/juju/juju/pull/11743
<zeestrat> timClicks: Those suggestion sound great and make it clearer, especially the SAAS -> Remote Application one.
<manadart_> stickupkid or achilleasa: IP address extra field migration: https://github.com/juju/juju/pull/11744
<rick_h> petevg:  morning, when you're around wondered if I could sync up on the gitlab image party
<rick_h> timClicks[m]:  so the SAAS has actually been nice in these demos
<rick_h> timClicks[m]:  as folks are actively talking about building "internal SAAS" and so it's made a lot of sense the way it's worded in the business cases (just for some customer engaging contexts)
<rick_h> zeestrat:  ^
<manadart_> stickupkid: You'll like this: https://github.com/juju/juju/pull/11745
<zeestrat> rick_h: I see your point, but me and past self disagree :) https://bugs.launchpad.net/juju/+bug/1728631/comments/4
<mup> Bug #1728631: [2.3] consume feature hard to manage <docteam> <juju:Expired> <https://launchpad.net/bugs/1728631>
<rick_h> zeestrat:  I understand, it's just something that I've been surprised at how well that explains things in the past few weeks engaging with some folks in the real world
<stickupkid> manadart_, wooooow
<stickupkid> finally, hate that method
<tvansteenburgh> manadart_: can i bug you with a question?
<manadart_> tvansteenburgh: Shoot.
<tvansteenburgh> manadart_: i've got a bundle with some "to: [lxd/0, lxd/1]" directives in it. When I deploy, my machines are getting ipv4 addrs, but the lxds are getting ipv6 addrs. Is there a way to force the lxds to get ipv4 addrs?
<tvansteenburgh> I mean I know how to do it with lxd directly, but not sure how to do it since Juju is setting up lxd
<manadart_> tvansteenburgh: 1, which Juju version; 2, are the containers space constrained/bound? I.e. are they using a bridged NIC from the host, or lxdbr0?
<tvansteenburgh> manadart_: 2.7.6, and it appears they are using a bridged nic from the host
<tvansteenburgh> manadart_: https://pastebin.canonical.com/p/XKZPtPWhzF/
<manadart_> tvansteenburgh: Gimme a few.
<petevg> rick_h: so I have promised to make folks waffles this morning, which means I won't be at my desk until the sync. I have Adam Israel's gitlab charm deployed on AKS, though.
<petevg> I'm tempted to just replace the Juju one, since Adam's actually has public source code, and is up to date!
<rick_h> petevg:  ok, all good enjoy waffles. Just when you're in I wanted to say hi.
<manadart_> manadart_: Can you get me /etc/netplan/whateveritis.yaml?
<tvansteenburgh> manadart_: was that for me?
<manadart_> tvansteenburgh: Derp. Yes.
<tvansteenburgh> talking to yourself again?
<manadart_> tvansteenburgh: Someone's got to.
<tvansteenburgh> manadart_: http://paste.ubuntu.com/p/DqjMc2qSYw/
<achilleasa> manadart_: stickupkid https://pastebin.canonical.com/p/bJ2SV9fJQg/ ... boo :-(
<stickupkid> achilleasa, knew it
<achilleasa> so now I need to figure out how the firewaller works :D
<stickupkid> my bet still stands
<achilleasa> at least we know that it's not the provider... it's the firewaller
 * achilleasa needs to dig deeper
<manadart_> tvansteenburgh: This is odd. I need to look into it.
<tvansteenburgh> manadart_: ack
<stickupkid> hml, this is the charmhub find PR https://github.com/juju/juju/pull/11736
<Eryn_1983_FL> well boss finally got ovn to work, but then he rebooted and broke mysql
<Eryn_1983_FL> also removed octavia, so thats got to go back in
<Eryn_1983_FL> any idea what this means guys for mysql
<Eryn_1983_FL> https://paste.debian.net/1153489/
<Eryn_1983_FL> it cant connect to the mysql cluster
<Eryn_1983_FL> yet i can telnet port 3306
<Eryn_1983_FL> and ping just freaking fine
<petevg> beisner, jamespage: does the error Eryn_1983_FL is running into above look familiar to you?
<petevg> rick_h: Looping back to your request from this morning: want to jump into the Juju daily? I've got some time to chat.
<Eryn_1983_FL> i think i made progress
<Eryn_1983_FL> it had a cluster issue but i started replication again
<rick_h> petevg:  omw
<Eryn_1983_FL> i got 2/3 working so far
<Eryn_1983_FL> https://paste.debian.net/1153489/
<Eryn_1983_FL> ok now its working
<rick_h> tvansteenburgh:  anyone free that knows how expose works on k8s charms able to hop into a call? https://meet.google.com/dxr-hngd-beo
<Eryn_1983_FL> ok so vault is in blocked status
<Eryn_1983_FL> i restart the services and it seems ok in the lxd
<Eryn_1983_FL> but im still blocked
<Eryn_1983_FL> should i juju resolve vault?
<tvansteenburgh> rick_h: kelvinliu should know
<tvansteenburgh> oh maybe he's eod
<tvansteenburgh> somebody on wallyworld's juju k8s team
<tvansteenburgh> rick_h: i think juju expose adds an ingress rule for the charm, do you need more specifics than that?
<tvansteenburgh> rick_h: there's some stuff in Discourse about it too, see https://discourse.juju.is/t/getting-started/152 and search page for "Exposing gitlab"
<rick_h> tvansteenburgh:  yea, trying to figure it out but I'm on aks and so "ingress rule" and what needs to work is a bit fuzzy
<rick_h> everything I see is that in CK you have to configure/get a worker node IP of the cluster
<rick_h> tvansteenburgh:  so I'm not sure how this works in a hosted k8s world
<tvansteenburgh> rick_h: if you're just trying to make it work, the steps in that discourse post ^ should be sufficient
<Eryn_1983_FL> so is there a way i can check why vault is blocked?
<rick_h> tvansteenburgh:  so I deployed it with the loadbalancer config argument, grabbed the IP of the unit, set the config for juju-external... to that $IP.xip.io and exposed it and nadda
<rick_h> tvansteenburgh:  I don't get how .xip.io gets invoked into it
<tvansteenburgh> rick_h: what cloud?
<tvansteenburgh> rick_h: the ip should be that of the LB, if you have one, or the IP of a worker node if you don't have an LB
<rick_h> tvansteenburgh:  ok, so I need to get those details in an AKS setup from kubectl then probably
<rick_h> tvansteenburgh:  ok, so I've got a "kubenet" networking setup by default on AKS. And in the networking details the only interesting thing is DNS Service 10.0.0.10?
<rick_h> tvansteenburgh:  hmmm, there's a http routing option I'm turning on and see if that gives me anything
<knkski> rick_h: i just got on, so am missing the context. are you trying to get external access to your charms?
<rick_h> knkski:  yes, on aks atm
<rick_h> knkski:  looking at https://docs.microsoft.com/en-us/azure/aks/http-application-routing which seems in the ballpark but not sure how that integrates with juju's pod spec config provided
<knkski> rick_h: if you're using AKS, you're not using Charmed Kubernetes at all, right? If so, I'm not really sure how to get the external IP of the cluster, but once you do, it should be as easy as `juju config charm-name juju-external-hostname=$EXTERNAL_IP && juju expose charm-name`
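A hedged sketch of digging that IP out of a hosted cluster with kubectl and then wiring it up, assuming the namespace matches the Juju model name and the charm is gitlab:

    kubectl get services -n <model-name>    # EXTERNAL-IP of any LoadBalancer service
    kubectl get nodes -o wide               # worker node IPs, if there is no LB
    juju config gitlab juju-external-hostname=$EXTERNAL_IP
    juju expose gitlab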
<rick_h> knkski:  yea, I moved to gke because there I can get it on the load balancer, set the config and have the address but :(
<knkski> rick_h: also, what charm are you trying to access externally?
<rick_h> knkski:  gitlab or mediawiki
<knkski> rick_h: so no connection after you've exposed the charm? does `juju status` show the charm as exposed?
<knkski> and if so, what address does it show?
<rick_h> knkski:  yea gke and juju status show one pod up on 35.223.146.32
<rick_h> knkski:  i might just hijack a juju meeting later in the day tonight and make them make it work. :)
<knkski> rick_h: would you be able to send me the kubeconfig for the cluster? i can poke at it if you'd like.
<rick_h> knkski:  going into a call, thanks for the offer.
<josephillips> hi
<josephillips> question im perform a fork for a juju charm
<josephillips> but reading the code i found this in a part of the code: "If the charm was installed from source we cannot upgrade it. For backwards compatibility a config flag must be set for this code to run, otherwise a full service level upgrade will fire on config-changed."""
<josephillips> what exactly does that mean
<josephillips> is charm-swift-proxy
<josephillips> https://github.com/openstack/charm-swift-proxy/blob/0ce1ee67f8ee69f7c6fada10979aaf1415c7cf68/charmhelpers/contrib/openstack/utils.py#L1358
<josephillips> what do i have to do? just set action-managed-upgrade to true
<josephillips> on config
<josephillips> ?
<pmatulis> josephillips, what is your objective?
<josephillips> pmatulis: understand how i can perform the upgrade
<josephillips> if i use my fork
<pmatulis> josephillips, https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-upgrade-openstack.html
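From that guide, a hedged sketch of the action-managed flow the quoted docstring refers to (the swift-proxy names and bionic-train target are illustrative):

    juju config swift-proxy action-managed-upgrade=True
    juju config swift-proxy openstack-origin=cloud:bionic-train   # no longer fires an upgrade by itself
    juju run-action --wait swift-proxy/0 openstack-upgrade        # upgrade one unit at a time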
<jesseleo> Hello all, I have been having trouble getting juju bootstrapped on lxd here is the bug I filed: https://bugs.launchpad.net/juju/+bug/1884814
<mup> Bug #1884814: bootstrap localhost: ERROR Get "https://10.194.144.1:8443/1.0": Unable to connect to: 10.194.144.1:8443 <juju:New> <https://launchpad.net/bugs/1884814>
<jesseleo> let me know if you want me to provide any more info
<hml> jesseleo:  bootstrap timed out and failed.  because the juju client was unable to connect to the lxd instance created.
<jesseleo> Hey hml I can launch a container manually on the same machine and have connectivity, I just don't know how to verify the connectivity exists with the juju controller container as well
<hml> jesseleo:  correction…
<hml> jesseleo:  it's timing out, but failed waiting for the lxd server to respond to writing an lxd profile
<hml> hrm
<jesseleo> I tried reinstalling juju and lxd snaps a number of times to no avail I was wondering what the next step in troubleshooting should be?
<jesseleo> https://paste.ubuntu.com/p/4nvKhfbwXj/ it doesn't look like it's using the socket inside the snap
<jesseleo> but i could be wrong just pokin around
<hml> jesseleo:  the pastebin is usually around sudo and groups.  though i didn't think that was an issue with the snap
<pmatulis> jesseleo, what ubuntu release?
<jesseleo> 20.04
<pmatulis> confirm that you are using the snap that is installed by default?
<pmatulis> the 'lxd' snap
<pmatulis> maybe you have LXD deb packages installed too
<jesseleo> https://paste.ubuntu.com/p/8PvSG6d44q/
<thumper> jesseleo: have you logged out and back in after setting up lxd?
<thumper> I think there may be some group changes that impact permissions
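A hedged sketch of picking up that group change without logging out (assuming the lxd group was created by the snap setup):

    sudo usermod -aG lxd $USER   # make sure the user is in the group
    newgrp lxd                   # open a shell where the membership is active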
<pmatulis> jesseleo, so you're running 'lxd.migrate'. that tells me you *are* running the lxd deb packages
<jesseleo> pmatulis corret
<jesseleo> correct
<pmatulis> jesseleo, that could be the problem
<pmatulis> jesseleo, so you have existing containers that you need to move to the snap?
<pmatulis> jesseleo, if so, i suggest moving to #lxd to resolve the migration and then look at juju
<jesseleo> pmatulis ohh now I get what you mean. I just reinitialized lxd more than once so i thought I needed to run lxd.migrate. I just spun up a clean virtual box because I needed 20.04
<pmatulis> jesseleo, ah ha. you are running Ubuntu on Virtualbox
<jesseleo> pmatulis yeah running it headless then I ssh into it
<pmatulis> btw, lxd.migrate is to migrate containers that exist under the deb lxd packages to the env managed by the lxd snap
<jesseleo> pmatulis thank you good to know
<pmatulis> jesseleo, did you confirm that lxd is even supposed to work in a virtualbox environment?
<jesseleo> pmatulis yeah it works. been using it for months on my 18.04 server
<jesseleo> yeah its been working well on my other machine
<pmatulis> with Juju as well?
<jesseleo> pmatulis Yeah I was writing charms on my other machine and everything was working great. but charmcraft wouldn't install my requirements so I elected to move to 20.04 and thats where I'm running into this bootstrap issue
<pmatulis> jesseleo, can you launch a native lxd container?
<pmatulis> (lxc launch ubuntu)
<jesseleo> yeah I tried that earlier it works
<jesseleo> https://paste.ubuntu.com/p/K8DDG69sfR/
<pmatulis> it would be good to doublecheck that the versions for juju, vbox, and lxd work on 18.04 but do not work on 20.04
<rick_h> tlm:  can I bug you please about exposing a k8s app on gke/aks?
<tlm> you can but not sure i'll be able to help :|
<rick_h> tlm:  ok, no pressure but I kinda need to solve this or find someone to help or bust tbh
<rick_h> tlm:  meet you in your standup room?
<tlm> yep
<rick_h> tlm:  do you know if the mediawiki charm actually configures the wiki? Looking at the source it sets pod_spec data but when you launch it mediawiki wants to walk you through setting up the LocalSettings.php so the db relation details don't actually get you a configured db?
<tlm> I think you need to set the relation info
<tlm> kelvinliu: you need to add the relations when deploying mediawiki ?
<kelvinliu> tlm: yes, mediawiki requires the relation
#juju 2020-06-24
<rick_h> kelvinliu:  right, but looking at the relation I don't see it writing out the config actually. just setting pod_spec
<rick_h> kelvinliu:  does something write that out to the LocalSettings.php in some way?
<rick_h> kelvinliu:  once the relation is there does it actively come up as a usable wiki or still want more configuring
<kelvinliu> rick_h: which charm are u using?
<kelvinliu> u just need to relate the mediawiki with mariadb then expose
<rick_h> kelvinliu:  right, so when you expose it and load the page does it come up saying "LocalSettings.php not found.
<rick_h> Please complete the installation and download LocalSettings.php.
<kelvinliu> u see this from the wiki web page?
<kelvinliu> im not sure, for testing purpose, I just always check the webpage gives status code < 400
<kelvinliu> rick_h: im deploying it to see what's in the page now
<rick_h> kelvinliu:  k, I'm trying it on another model to see as well
<rick_h> kelvinliu:  so I setup a deploy on microk8s and I still get the "setup your wiki" and when you walk through that it asks you to configure your database manually
<rick_h> kelvinliu:  so it doesn't actually look like the db relation "works" when you look at it
<kelvinliu> rick_h: in the mediawiki podspec I can find the db host, port, db_name, pwd, username are all set
<kelvinliu> rick_h:  i think the charm doesn't config the db credential ENV name correctly
<kelvinliu> https://hub.docker.com/_/mariadb/
<kelvinliu> they should be MYSQL_<xxx> but not MEDIAWIKI_<xxx>
<kelvinliu> rick_h: https://pastebin.ubuntu.com/p/VHQKtQrHXK/
<rick_h> kelvinliu:  right I mean I understand but do you see my point that you can't use the wiki as a wiki?
<kelvinliu> rick_h: the db relation works but the charm itself does get the relation data correctly set to the podspec
<kelvinliu> does NOT
<rick_h> kelvinliu:  right
<kelvinliu> rick_h: sorry, the solution is probably not to rename the env names, but we need to generate the LocalSettings.php using the relation db data then mount into the pod using configmap
<kelvinliu> this is the thing that was missing in the charm
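A rough sketch of what that might look like in the charm's pod spec, using Juju's file-mount support; every name, path, and the PHP fragment here is illustrative, not the actual fix:

    containers:
      - name: mediawiki
        # image details and ports omitted for brevity
        files:
          - name: settings
            mountPath: /var/www/html
            files:
              LocalSettings.php: |
                <?php
                $wgDBserver = "<host from the db relation>";
                $wgDBname   = "<database from the db relation>";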
<kelvinliu_> hpidcock: or tlm could u take a look this small pr plz? ty  https://github.com/juju/juju/pull/11748
<hpidcock> kelvinliu_: lgtm
<kelvinliu_> ta
<flxfoo> Hi all,
<flxfoo> Sorry to bother... i ran `pip3 install juju` for a script some time ago. now the script takes ages to run (retrieving public ips of nodes). I also have these messages coming out.
<flxfoo> unknown facade CAASModelOperator
<flxfoo> unexpected facade CAASModelOperator found, unable to decipher version to use
<flxfoo> unknown delta type: id
<stickupkid> flxfoo, those are warnings, it's safe to ignore. Essentially there is a facade that libjuju doesn't know about, but juju advertises
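Since the warnings only mean the installed libjuju predates a facade the controller advertises, upgrading the library may silence them; a hedged sketch:

    pip3 install --upgrade juju   # pick up a libjuju release that knows the newer facades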
<stickupkid> manadart_, CR please https://github.com/juju/juju/pull/11736
<manadart_> stickupkid: Yep, need a few. If you've time, I'll swap for https://github.com/juju/juju/pull/11749.
<stickupkid> stupid apt-cache was down... argh
<flxfoo> stickupkid: thanks!
<flxfoo> a quick little one (not an issue).
<flxfoo> running the script (basically connect to controller and gather information). Sometimes it takes less than a second to finish, sometimes more than a minute... is this normal behaviour? if yes, could it depend on controller resources?
<stickupkid> flxfoo, what version of juju and what substrate you hitting against?
<flxfoo> stickupkid: version is 2.8.0
<flxfoo> public ips from a model
<flxfoo> units public ips
<manadart_> stickupkid, achilleasa: Forward merge; kind of big, but no conflicts: https://github.com/juju/juju/pull/11750.
<achilleasa> manadart_: lgtm; I noticed your description commit and am wondering whether I need to update juju/description to include the VirtualPortType for exported devices (develop branch only)
<manadart_> achilleasa: Yeah, you will need to.
<stickupkid> manadart_, https://github.com/juju/juju/pull/11736#discussion_r444760182 <- for all DTOs or just for the client?
<manadart_> stickupkid: I lean towards all, maybe even in a separate sub-package to make it as dependency free as possible - DTO dependence on client elements would become circular.
<stickupkid> manadart_, yeah ok, I'll do that now
<achilleasa> manadart_: can you take a quick look at https://github.com/juju/description/pull/81?
<manadart_> achilleasa: Yep.
<stickupkid> achilleasa, is juju-2.7 branch in bundlechanges a valid 2.7 branch or is it tainted?
<achilleasa> stickupkid: I think I bumped it recently to fix the panic for relations without endpoints; let me check if that landed on 2.7
<stickupkid> achilleasa, yeah it has
<stickupkid> wicked
<achilleasa> yes
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/11752?
<manadart_> achilleasa: I'll look at yours. Can you check out this mechanical one? https://github.com/juju/juju/pull/11753
<achilleasa> manadart_: sure
<Eryn_1983_FL> 24 Jun 2020 14:54:48Z  juju-unit  error      hook failed: "leader-settings-changed"
<Eryn_1983_FL> ahhhhh
<Eryn_1983_FL> ovn freaking broke again
<achilleasa> manadart_: got two questions on your PR
<gsamfira> hello folks. Any chance we can get Windows Server 2019 juju tools in simplestreams: http://streams.canonical.com/juju/tools/streams/v1/com.ubuntu.juju-devel-tools.json ?
<petevg> gsamfira: we've got a fix committed, but it hasn't made it into a release yet. Have you tried bootstrapping w/ --agent-stream=proposed?
<petevg> https://bugs.launchpad.net/juju/+bug/1872495
<mup> Bug #1872495: can't deploy win2019: no matching agent binaries available <simplestreams> <juju:Fix Committed by petevg> <https://launchpad.net/bugs/1872495>
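A hedged example of the suggested workaround (the cloud and controller name are illustrative):

    juju bootstrap aws win-test --agent-stream=proposed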
<stickupkid> hml, can I get a CR https://github.com/juju/juju/pull/11754 and https://github.com/juju/bundlechanges/pull/63/commits/9c9a93fa5758432ec07279ca9ece5c686008a802
<gsamfira> petevg: will give it a shot in the morning! Thanks!
#juju 2020-06-25
<thumper> https://github.com/juju/juju/pull/11755
<thumper> for anyone
<thumper> just forward porting
<hpidcock> thumper: looking
<hpidcock> thumper: any merge conflicts?
<thumper> hpidcock: I resolved them
<thumper> had to change the imports to be v2
<thumper> for workertest
<thumper> but that was it
<thumper> hpidcock: thanks
<manadart_> achilleasa: I think I addressed your comments.
<stickupkid> manadart_, you seen this https://bugs.launchpad.net/juju/+bug/1855013
<mup> Bug #1855013: upgrade-series hangs, leaves lock behind <seg> <upgradeseries> <juju:Triaged> <https://launchpad.net/bugs/1855013>
<stickupkid> manadart_, achilleasa can you do a CR https://github.com/juju/bundlechanges/pull/64 - it's a forward port of https://github.com/juju/bundlechanges/pull/63
<achilleasa> stickupkid: done
<stickupkid> ah we broke go mod in 2.8 branch, win win - trying to resolve it now
<stickupkid> merging forward (example: from 2.7 -> 2.8) will most likely break go mod
<achilleasa> stickupkid: why so?
<stickupkid> achilleasa, ho? got an issue
<manadart_> stickupkid: https://github.com/juju/juju/pull/11758
<stickupkid> manadart_, ho?
<stickupkid> daily
<stickupkid> achilleasa, https://github.com/juju/juju/pull/11759
<Eryn_1983_FL> how do i deploy another nova-cloud-controller/0?
<Eryn_1983_FL> the one i got on 0 is broken with hook failed install
<hml> Eryn_1983_FL: juju add-unit nova-cloud-controller,  if you haven't removed the application.  although depending on how the install hook failed, you might get the same results
<Eryn_1983_FL> ok
<Eryn_1983_FL> something is happening..
 * Eryn_1983_FL nervous giggle
<Eryn_1983_FL> nova-cloud-controller/1      waiting   allocating  3        10.3.251.44                        waiting for machine
<Eryn_1983_FL> so its putting it on a diff machine not even in the cluster currently
<Eryn_1983_FL> great now 0/1 are just down
<Eryn_1983_FL> wtf.
<Eryn_1983_FL> 1 is back up now, 0 still down. is this normal for machines just to go down for no reason?
<mcaju> hi, I've made juju 2.8.0 available on Archlinux's AUR. Now I just have to spread the word, somehow...
<Eryn_1983_FL> you were right hml
<Eryn_1983_FL> hook failed install
<Eryn_1983_FL> wtf.
<Eryn_1983_FL> i must pray to the wrong linux gods, for it to work on ubuntu/juju
<Eryn_1983_FL> 2020-06-25 13:25:17 ERROR juju.worker.uniter.operation runhook.go:136 hook "install" (via explicit, bespoke hook script) failed: exit status 1
<gsamfira> petevg: proposed agent stream worked great. Thanks a lot! :)
<petevg> gsamfira: glad to hear it! You're welcome :-)
<manadart_> mcaju: Nice. The thing to do would be to mention it at https://discourse.juju.is
<mcaju> manadart_: Ok
<manadart_> stickupkid: Forward port: https://github.com/juju/juju/pull/11760
<stickupkid> why...? cannot update github.com/juju/charmrepo/v5 from local cache: cannot hash "/home/simon/go/src/github.com/juju/charmrepo": open /home/simon/go/src/github.com/juju/charmrepo/internal/test-charm-repo/series/format2/hooks/symlink: no such file or directory
<stickupkid> hmmm
<stickupkid> manadart_, hml, CR please https://github.com/juju/charm/pull/310
<hml> stickupkid: looking,
<hml> stickupkid:  there are no tests for error paths?
<stickupkid> nope, just that there is an error
<hml> stickupkid: approved.  ty for that change
<stickupkid> achilleasa, hml, CR https://github.com/juju/charmrepo/pull/162
<stickupkid> or even petevg :)
<flxfoo> stickupkid: sorry for yesterday, could not get your answer (if any)
<flxfoo> any tips on floating ips with aws? I am trying to set up an ha cluster with percona cluster; of course "Resource: res_mysql_90aa447_vip not running". any idea?
<Eryn_1983_FL> so
<Eryn_1983_FL> how do i get an old version of juju
<petevg> Eryn_1983_FL: you can install anything back to Juju 2.3 via the snap. You just need to do "snap refresh juju --channel <version>/stable"
<petevg> Eryn_1983_FL: you can install anything back to Juju 2.3 via the snap. You just need to do "snap refresh juju --channel <version>/stable"
<petevg> (Whoops. Sorry for the dupe.)
<Eryn_1983_FL> ok
<petevg> You can see available versions by doing "snap info juju"
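Putting those two commands together, a sketch of moving the client to an older track (2.7 is illustrative):

    snap info juju                               # list the available channels
    sudo snap refresh juju --channel=2.7/stable  # switch the client to that track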
<Eryn_1983_FL> petevg you are so awesome
<Eryn_1983_FL> i have 2.8.0 installed..
<Eryn_1983_FL> is that bad?
<Eryn_1983_FL> how does that affect how my charms work?
<petevg> Eryn_1983_FL: 2.8 is the latest release, and it should work just fine with existing charms. There are some open issues, which will be fixed in a 2.8.1 release.
<petevg> That's just the client version, btw. Your controller won't change unless you specifically upgrade your controller.
<petevg> And newer clients can talk to older controllers.
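A quick sketch for checking both sides of that split:

    juju version       # the client's version
    juju controllers   # each known controller with its agent version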
<Eryn_1983_FL> :(
<Eryn_1983_FL> sigh i can't even get it to remove machines
<petevg> What command are you using to remove the machines? And did you ever determine a reason for the install hook failing?
<Eryn_1983_FL> no i didnt figure it out. i looked at the logs and i dont see anything but exit 1
<Eryn_1983_FL> juju remove-machine 3
<Eryn_1983_FL> the fact that my servers keeps going down and up and the ovn keeps picking a new leader, makes me wonder if something is wrong with my hw or the network..
<Eryn_1983_FL> 3        stopped  10.3.251.44   above-ram            focal   default  Deployed
<petevg> I'd definitely guess that there were hardware or network issues. I'd take a look at disk, ram and cpu utilization on the underlying host machines.
<Eryn_1983_FL> mmm ok its not down, it's just that juju status says it went down.
<petevg> The Juju agent is just a process that runs alongside your workload, reporting back to the controller with status and changes.
<Eryn_1983_FL> ok
<Eryn_1983_FL> question
<Eryn_1983_FL> if i remove node 0 will the services be started again on a different machine?
<Eryn_1983_FL> does it matter if i use bionic or focal?
<petevg> Eryn_1983_FL: bionic or focal should just work. Juju doesn't automatically reallocate services, so if you remove machine 0, you'll need to add-unit to re-add a unit for each application that was deployed to that machine.
<Eryn_1983_FL> ok
<Eryn_1983_FL> makes sense
<Eryn_1983_FL> ill have to redeploy the controller and the vault,
<Eryn_1983_FL> both were on that 0 machine,
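A hedged sketch of the re-add flow petevg described, assuming vault was one of the applications on the lost machine; the controller itself lives in its own model and cannot be recovered with add-unit:

    juju remove-machine 0   # once nothing else depends on it
    juju add-unit vault     # a replacement unit lands on a new machine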
<Eryn_1983_FL>   => There are 20 zombie processes.
<Eryn_1983_FL> fml
<petevg> Eryn_1983_FL: The controller should live in its own model, separate from any charms that you've deployed. Were you deploying charms into the controller model?
#juju 2020-06-26
<timClicks> Docs review available for anyone with a rubber stamp https://github.com/juju/docs/pull/3507
<timClicks> Actually, I'm just going to merge it in as it only brings the repo up-to-date with what's already there
<timClicks> thumper: do you have any thoughts on how the use cases for LXD should be described? with LXD clustering, I feel like saying that it's only for local development is selling it short
<thumper> yes local development only would be selling it short
<thumper> one use case I had thought about was using an LXD cluster to install maas
<thumper> using LXD clusters, you have a very lightweight cloudish thing
<stickupkid> manadart_, achilleasa : CR https://github.com/juju/juju/pull/11762
<manadart_> stickupkid: I'm looking.
<stickupkid> manadart_, quick HO?
<stickupkid> manadart_, hopefully that'll fix everything in that PR
<manadart_> stickupkid: I already approved it.
<achilleasa> manadart_: where was that code that you were showing me? My grep-fu only seems to match a call to Subnets() in client/subnets/cache.go with an UnknownId arg
<manadart_> achilleasa: environs/networking, environs/space ?
<manadart_> `ReloadSpaces` is in environs/space/spaces.go. That's where the fallback to subnet discovery is.
<achilleasa> ah just found it :D
<achilleasa> so the thing is that I can't seem to be able to get all CIDRs from the lxd api
<flxfoo> hi all,
<flxfoo> I would like to add some little things in `cloudinit-userdata` model config, but I have this error in return: The update path 'settings.' contains an empty field name, which is not allowed.
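For reference, a hedged sketch of how that key is usually set, which sidesteps the quoting mistakes that tend to produce odd settings paths (the file name and contents are illustrative):

    cat > userdata.yaml <<'EOF'
    postruncmd:
      - echo configured > /tmp/marker
    EOF
    juju model-config cloudinit-userdata="$(cat userdata.yaml)"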
<achilleasa> stickupkid: or manadart_ simple PR for a bug https://github.com/juju/juju/pull/11763
<manadart_> achilleasa: Will do it in a sec.
<achilleasa> manadart_: not in a rush. If you have a few min later let's jump back into the HO to chat subnet discovery again bec I think I'm stuck
<manadart_> achilleasa: Swap you: https://github.com/juju/juju/pull/11764
<stickupkid> hml, i'm unsure we would want a mutex in the firewaller that gates every openstack call
<achilleasa> manadart_: clear to land 11764
<stickupkid> hml, esp. when ensureGroups could take forever depending on the api calls
<achilleasa> can I get a CR on https://github.com/juju/juju/pull/11765?
<stickupkid> hml, we should be resilient against things not being there imo
<hml> stickupkid: rgr.  in that case, is it appropriate for us to be removing security group rules like this?  ones that aren't in our list of egress rules?
<stickupkid> let me check that next
<stickupkid> hml, this worries me though https://github.com/juju/juju/pull/11756/files#diff-cc8fa9cbee78e4f349622a6a1d6d9f24R570
<hml> hahahaha
<manadart_> achilleasa: Ta.
<achilleasa> status tests seem to be flaky
<stickupkid> hml, I think what the code is doing is fine, we just need to handle the issue that we may end up deleting a sg that's already gone
<stickupkid> https://github.com/juju/juju/pull/11756
<hml> stickupkid: k,  i'll review shortly
<achilleasa> hml: can you also add https://github.com/juju/juju/pull/11765 to the review queue? it's a really small patch
<hml> achilleasa:  sure
<achilleasa> petevg: you might also want to QA this before we land it ^^^
<petevg> achilleasa: roger that.
<bdx> hit this just now https://bugs.launchpad.net/juju/+bug/1885339
<mup> Bug #1885339: unit not removed from relation <juju:New> <https://launchpad.net/bugs/1885339>
<bdx> I remove a unit, but it never leaves the relation
<bdx> I'm thinking this can't be intended
<bdx> is it?
#juju 2020-06-27
<jam> bdx, is one side of the relation not responding?
<jam> IIRC, both sides have to ack that it is going away before Juju will destroy it
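One hedged way to check whether the departing unit is still listed on the relation (application and unit names are illustrative):

    juju show-unit myapp/0   # relation info includes the units still joined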
<bdx> jam: this is a peer relation
