#juju 2011-12-05
<nijaba> SpamapS: all 5 issues you reported for limesurvey should now be fixed
<nijaba> SpamapS: thanks again for your help
<marcoceppi> Can Juju manage deployments between different environments?
<marcoceppi> Like, I've got three bare metal servers, so to juju that's three environments - but I would like to relate things between environments
<_mup_> Bug #900104 was filed: watch_related_units callback is icky <juju:In Progress> < https://launchpad.net/bugs/900104 >
<SpamapS> marcoceppi: 3 bare metal servers would just be one env if you used orchestra
 * marcoceppi needs orchestra
<marcoceppi> Thanks SpamapS
<SpamapS> marcoceppi: to the bigger question.. right now envs are decent ways to group availability zones.. so crossing envs usually involves something special to handle the latency between two envs. :)
<SpamapS> hrm.. limesurvey's sessioning doesn't seem to work right
<SpamapS> loses my session whenever it load balances to another node
<_mup_> Bug #900186 was filed: no spec for machine placement story <juju:In Progress by fwereade> < https://launchpad.net/bugs/900186 >
<_mup_> Bug #900227 was filed: guarantee start hook execution on machine boot <juju:In Progress by fwereade> < https://launchpad.net/bugs/900227 >
<amithkk> !Rejoin
<amithkk> :P
<rog> TheMue: to check out the sources
<rog> it's best, i think, to create a directory to hold all the branches
<rog> in bazaar, each branch has its own directory
<rog> i use $HOME/src/juju
<TheMue> reminds me of svn
<mainerror> Houston, we have a problem! My ThinkPad stopped charging its batteries!
<rog> i usually keep one pristine directory for each branch, representing the trunk
<rog> mainerror: ouch
<TheMue> mainerror: so your name matches, that's bad (in this case)
<mainerror> -_-
<rog> TheMue: then to check out the python sources, for example, i'd do: cd $HOME/src/juju; bzr branch lp:juju juju-trunk
<mainerror> Sad mainerror is sad.
<rog> TheMue: to check out the go juju sources, bzr branch lp:juju/go go-trunk
<rog> oops, forgotten one crucial step
<rog> TheMue: before you start checking things out, you should make a shared repo so all the branches can share data
<rog> TheMue: in my case, cd $HOME/src/juju; bzr init-repository
<TheMue> ok
<rog> TheMue: then you can branch to your heart's content.
<TheMue> needs a location, so in your case just . ?
<rog> yeah
<TheMue> fine
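The checkout steps rog describes, collected in order (init-repository first) as a dry-run sketch; the paths and branch names are his personal conventions, not requirements:

```python
# rog's bazaar layout, as discussed above: one shared repository holding all
# branches. Printed as a dry run here rather than executed.
from pathlib import Path

repo = Path.home() / "src" / "juju"    # shared repo; branches share revision data

commands = [
    f"bzr init-repository {repo}",     # run once, before any branching
    "bzr branch lp:juju juju-trunk",   # pristine Python trunk (run inside repo)
    "bzr branch lp:juju/go go-trunk",  # Go port trunk
]

for cmd in commands:
    print(cmd)
```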
<rog> TheMue: before you start work on some source code, create a branch for that item of work - usually the branch directory name reflects the work that you're doing
<rog> TheMue: i usually start with a generic name then change it to something more specific when i know what the merge request does
<TheMue> rog: ok, very important hints
<TheMue> rog: I'm currently fetching the sources, so it works fine
<rog> cool
<rog> TheMue: when you actually want to submit some code for review, you should use Gustavo's lbox tool
<rog> TheMue: (you can get it with apt-get install lbox)
<niemeyer> Hello jujuers
<rog> niemeyer: yo!
<TheMue> moo gustavo
<fwereade> heya niemeyer
<TheMue> rog: which repository for lbox?
<rog> TheMue: good point, i can't remember. one mo.
<rog> TheMue: sudo add-apt-repository ppa:gophers/go
<TheMue> lbox installed O>
<niemeyer> TheMue: You can get the full workflow here: http://blog.labix.org/2011/11/17/launchpad-rietveld-happycodereviews
<TheMue> niemeyer: thx, I've already read it after you've published it. but now I have to dig deeper.
<nijaba> hey guys. Great to see that my charm has been put into lp:charm/limesurvey.  Small question for you: what's the process for me to upload changes to it from now on?  Make the changes in my branch and request a merge?
<TheMue> ok, afk for lunch
<mpl> niemeyer: greetings. nothing to report for goyaml, it builds out of the box with weekly.
<niemeyer> mpl: Fantastic
<niemeyer> mpl: I think we're good, as far as weekly goes
<niemeyer> mpl: I have a suggestion that may enable you to collaborate more closely with rog
<niemeyer> mpl: with something more involved
<mpl> cool
 * rog is listening
<niemeyer> mpl: We need to introduce ssh support
<niemeyer> mpl: We use ssh in a few different ways
<niemeyer> mpl: I suggest you spend some time looking at the Python implementation to understand some of the background
<niemeyer> mpl: Before diving onto the implementation
<niemeyer> mpl, rog: I'd like to try using exp/ssh for the forwarding
<rog> niemeyer: the zookeeper connection?
<mpl> niemeyer: when you say the python implementation, you mean the parts of juju that use ssh?
<niemeyer> rog: Yeha
<niemeyer> mpl: That's right
<rog> niemeyer: seems like a good idea.
<niemeyer> mpl: Check out juju/state/ssh*.py, and juju/providers/ec2/* for hints of how that's used
<mpl> rog, niemeyer: so how is the zookeeper connection and that forwarding you're talking about done at the moment in the go port?
<niemeyer> mpl: There's nothing
<mpl> not at all, or with some other way than ssh?
<mpl> oh ok.
<rog> mpl: haven't even got as far as starting zk yet...
<mpl> rog: how do you suggest we go about that? shall I read those bits of code and report to you when I'm done or when I have questions about it?
<niemeyer> mpl: Yeah, the first task is to understand how it works today, so you have an idea of what has to be done
<rog> mpl: i suggest you look at the way the python code does its ssh connection, and see if you can replicate that with a local zookeeper instance (started by gozk) and an independent go program, all running on the same machine
<rog> niemeyer: that sound reasonable?
<niemeyer> mpl: Basically, the zk connection is insecure at the moment, and we use ssh to fix that
<niemeyer> rog: +1
<mpl> got it, thanks.
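What the ssh forwarding amounts to, sketched as the command the Go port would need to build (the flags, user, and host here are illustrative assumptions; the actual logic lives in juju/state/ssh*.py):

```python
# Build an ssh command that tunnels the (insecure) local zookeeper port to a
# remote zookeeper, in the spirit of the Python provider code referenced
# above. The user/host and ports are illustrative.
def forward_cmd(user, host, local_port=2181, remote_port=2181):
    return [
        "ssh", "-N",                                    # forward only, no shell
        "-L", f"{local_port}:localhost:{remote_port}",  # local port forward
        f"{user}@{host}",
    ]

print(" ".join(forward_cmd("ubuntu", "ec2.example.com")))
# ssh -N -L 2181:localhost:2181 ubuntu@ec2.example.com
```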
<niemeyer> Lunch time
<niemeyer> biab
<jcastro_> marcoceppi: howdy
<jcastro_> marcoceppi: I was off on Friday other than the charm school, how's it going?
<marcoceppi> jcastro: good
<jcastro> marcoceppi: hey so, just so I have the spreadsheet clear, which one are you working on now?
<jcastro> phpmyadmin?
<jcastro> or did that one get accepted?
<marcoceppi> it's just about ready, I need to finish the package based installation: found a few bugs
<jcastro> ah awesome
<jcastro> any idea what george is working on next?
<marcoceppi> Steam is in a holding pattern while I try to get all the game's server configs sorted out. They're not included by default and poorly documented game to game, I've also got Redmine going
<marcoceppi> jcastro: no idea
 * jcastro nods
<jcastro> I see nijaba rocked out limesurvey
<marcoceppi> yeah, I gave it a spin over the weekend, it rules
<nijaba> jcastro: just keeping my word to you :)
<jcastro> nijaba: that's going to be a great charm, it's actually very useful to people and very cloudy ... up when you need it, and then goes away when you don't
<SpamapS> seriously.. limesurvey is fun...
<SpamapS> I created a survey.. "can you crash this site?" Y/N
<SpamapS> was thinking we could give it a shot with a massive scale-out
 * SpamapS moves from cold drafty downstairs to overly warm ovenlike upstairs
<jimbaker>  SpamapS, you're in LA. how can it be cold? in contrast it's already at a toasty 3 deg F outside
<marcoceppi> I noticed something that might be a bug, but I'm not sure. If you deploy the mysql charm plus a charm that requires mysql, add-relation, upgrade the non-mysql charm, destroy the charm that required mysql, then deploy again and add-relation, the MySQL charm doesn't send over the relation information.
<marcoceppi> Should MySQL delete the database after a service has broken the relation?
<SpamapS> jimbaker: 45F is "cold" here. ;)
<SpamapS> marcoceppi: no
<SpamapS> marcoceppi: but it should re-set the info
<SpamapS>     print "database exists, assuming configuration has happened already"
<SpamapS>     sys.exit(0)
<SpamapS> marcoceppi: definitely broken
<marcoceppi> Okay, so if it detects that the database is already there, it should send over the information again. Seems like that would put a lot of work on the charm to check if database already exists to avoid re-installing it
<SpamapS> marcoceppi: honestly, the "db" relation is silly the way it uses the service name for the database.
<SpamapS> marcoceppi: puts a lot of restrictions on how you can use the relation
<marcoceppi> I think, and I could be wrong - because I understand the use case of keeping the database around, a db-relation-broken should remove the database
<SpamapS> marcoceppi: mysql-shared is a more realistic use of mysql
<SpamapS> marcoceppi: no, we want to make data cleanup manual
<SpamapS> marcoceppi: thats also why machines aren't terminated automatically
<marcoceppi> cool, well I'm having issues with db-admin-relation-changed, and I just realized it shouldn't be creating a database in the first place!
<SpamapS> marcoceppi: *exactly*
<marcoceppi> So my point is moot
<marcoceppi> but it's a bug of another color
<SpamapS> there's also a bug where the username/password is *always* randomly generated..
<SpamapS> so when the relation is broken.. we can't cleanup the users
<marcoceppi> I thought MySQL charm stored that data locally for tracking?
 * marcoceppi was using an older version of the MySQL charm
<SpamapS> user = subprocess.check_output(['pwgen', '-N 1', '15']).strip().split("\n")[0]
<SpamapS> marcoceppi: thats in common.py
<marcoceppi> Should the charm track user/pass per private IP?
<SpamapS> IP?
<SpamapS> no
<marcoceppi> private host*
<SpamapS> 1 service, 1 user, but it can grant that user access from all the addresses of the service
<SpamapS> this is something that confused me early on ... you get *one* set of relation data that you can set..
<SpamapS> so the mysql unit's relation-set must work for *all* the related units on the other side.
<SpamapS> you can't have a user per unit without doing   relation-set wordpress-0-user=foo
<SpamapS> then thats pointless because they all get access to it anyway
<marcoceppi> So, it's really per service then
<marcoceppi> Should the charm store the user/pass info on a per service basis? or does it do that already?
<SpamapS> it must actually
<SpamapS> but it doesn't right now
<marcoceppi> Hum, I suppose plain text storage would be too much to ask for, in regards to security?
<marcoceppi> Because that would probably be the easiest
<SpamapS> Simplest thing to do is just to put it in files.
<SpamapS> we don't have to store the password
<SpamapS> just the usernames to clean up
<SpamapS> but frankly.. the root PW is stored (and must be).. so.. we don't have much to protect in that manner.
<marcoceppi> Oh, I see now. Each unit of a service gets its own username and password for the service's db
<SpamapS> no
<marcoceppi> Just when I thought I had it
<SpamapS> each service gets a username and password
<marcoceppi> Where is that tracked then? Bootstrap?
<SpamapS> where is what tracked?
<marcoceppi> The username and password.
<SpamapS> in the relation settings
<marcoceppi> If it's tracked in the relation settings, why can't the charm just pull the username from that when relation is broken and do a clean up then?
 * marcoceppi is poking in areas he doesn't quite understand
<SpamapS> marcoceppi: because the way broken is triggered is the relation settings node is deleted from zookeeper ;)
<SpamapS> marcoceppi: which is, I think, a bug. ;)
<marcoceppi> ah :)
<SpamapS> bug #791042 to be exact
<_mup_> Bug #791042: *-relation-broken has no way to identify which remote service is being broken <juju:Confirmed> < https://launchpad.net/bugs/791042 >
<SpamapS> marcoceppi: you're going through the same questions I did .. so.. if nothing else you're helping me feel less like an A-hole for reporting so many bugs ;)
<marcoceppi> \o/
<mpl> niemeyer: just fixed a little error on there: https://wiki.ubuntu.com/gozk :)
<mpl> SpamapS: what, you don't get a "your attention to detail is appreciated" message every time you open a new bug? :)
<SpamapS> Since most of my bugs are "This $@!% is $%#!ing broken %!@ing fix it!" ... no. ;)
<niemeyer> mpl: Cheers! :)
<mpl> np
<hazmat> SpamapS, it's not actually deleted, but i'm not sure that it's reachable via the cli in that case
<SpamapS> hazmat: good to know :)
<fwereade> gn all :)
<koolhead17> :P
<koolhead17> SpamapS: sir
<SpamapS> koolhead17: aye?
<mpl> niemeyer: I get a weird behaviour with that gozk example on the wiki. the first error check passes, but then I never get past event := <-session and I continuously get messages such as this one:
<mpl> 2011-12-05 18:54:51,868:28689(0x7f3c313d0700):ZOO_ERROR@handle_socket_error_msg@1528: Socket [127.0.0.1:2181] zk retcode=-7, errno=110(Connection timed out): connection timed out (exceeded timeout by 0ms)
<koolhead17> SpamapS: am finally close to running custom cloud image + juju + openstack
<mpl> and yes, I have zookeeper running, I can connect to it with zkCli.sh
<niemeyer> mpl: Looks like it's trying to connect to the wrong server?
<mpl> niemeyer: well, the server is indeed listening on 2181. besides, if that was the case, shouldn't it just fail at the first error check after gozk.Init, instead of filling my term with those messages?
<mpl> hmm, wtf zkCli.sh doesn't report any error when there's no server running.
<SpamapS> koolhead17: custom cloud image ?
<niemeyer> rog: Have we ever done that refactoring so that all errors are sent to event channels, including temporary ones?
<niemeyer> mpl: Well, we have tests
<niemeyer> mpl: Can you get tests to pass?
<rog> niemeyer: off for day. will have a look tomorrow.
<jimbaker> fwereade, take care!
<koolhead17> SpamapS: i have my openstack behind a proxy, and in order to get internet on an instance i have to modify the cloud image
<niemeyer> rog, fwereade: Cheers!
<SpamapS> koolhead17: sounds interesting!
 * TheMue has to admit that his Python days were some years ago. Will have to dig a bit into twisted.
<mpl> niemeyer: after a bzr pull in gozk, there still is a problem with weekly: retry_test.go:198 it seems err.Error() should be changed to err.String() here. or String() to be changed to Error() in gozk.go
<niemeyer> mpl: Crap, I may have missed that
<mpl> niemeyer: that said, it seems my default zookeeper env is not really well setup. I'll try and dig further.
<jcastro> SpamapS: did you know about this? https://launchpad.net/juju-gui
<niemeyer> mpl: There's a lost branch from rog
<niemeyer> mpl: Not sure if it'll fix it, but I'll work on getting it merged right now, and make sure it works with weekly
<mpl> niemeyer: well, I went the quick route and just changed that one to String() on my end so I could go further.
<niemeyer> mpl: Good move
<mpl> niemeyer: then the tests fail because default configuration here expects things to be in root only locations, like the log file.
<mpl> and I don't run gotest as root.
<niemeyer> mpl: Cool, I'm with you.. I'm just finishing merging that branch and solving conflicts, and will then see if there's anything else to address
<niemeyer> mpl: Definitely, we never do that
<mpl> I'll have to leave in a couple mins to get my food at the local Indian though :)
<mpl> bbl
<niemeyer> mpl: "OK: 39 passed"
<_mup_> juju/zookeeper r24 committed by gustavo@niemeyer.net
<_mup_> Merged update-server-interface branch from Roger. [r=niemeyer]
<_mup_> This improves the interface used to manage the zookeeper processes.
<_mup_> This changeset also includes fixes to get the branch integrated
<_mup_> into the current state of trunk.
<mpl> niemeyer: thx. I'll retry as soon as I can, and when I have a more suitable zookeeper env setup.
<niemeyer> mpl: FWIW, I don't have a special zk env setup
<niemeyer> mpl: I'm using packaged zk
<mpl> niemeyer: I've aptitude installed it
<mpl> and done nothing else more.
<niemeyer> mpl: Me too
<mpl> hmm
<niemeyer> mpl: Are you on Oneiric?
<mpl> nope, lynx.
<mpl> but I'm on lynx on my other laptop, and I remember the tests passed fine there.
<mpl> *laptop as well, ...
<SpamapS> Hrm, is it possible that resolved --retry does not work?
<SpamapS> http://paste.ubuntu.com/760813/
<mpl> anyway, I have a stupid flat tire to fix so I can go to work tomorrow, so ttyl.
<SpamapS> Thats what I see in the logs after retrying the db relation
<niemeyer> mpl: lynx?
<mpl> niemeyer: lucid lynx
<niemeyer> mpl: Ah
<SpamapS> http://paste.ubuntu.com/760814/
<SpamapS> after that
<SpamapS> so it didn't really retry the hook that failed
<niemeyer> mpl: Yeah, that might be a real issue
<niemeyer> SpamapS: Is that trunk or main?
<niemeyer> s/main/universe
<SpamapS> 427
<SpamapS> so.. basically trunk
<SpamapS> 426 actually
<SpamapS> ii  juju                    0.5+bzr426-1juju2~preci next generation service orchestration system
<niemeyer> SpamapS: Ok, so it's worth checking with hazmat/fwereade
<niemeyer> SpamapS: There has been work taking place in that area
<SpamapS> I'll try to narrow it to a reproducible test case
<SpamapS> But thus far, looks like any hook that errors is not retried
 * hazmat checks traceback 
<hazmat> bug 878462
<_mup_> Bug #878462: resolved --retry does not retry the hook <juju:New> < https://launchpad.net/bugs/878462 >
<SpamapS> hazmat: yeah I thought I had run into this before
<hazmat> SpamapS, before it was wrt non-rel hooks
<hazmat> SpamapS, it looks like there is a fix for this already in restart-transitions
<SpamapS> hazmat: so something in flight right now?
<hazmat> SpamapS, yup
<SpamapS> cool!
 * SpamapS changes topic..
<SpamapS> so.. philosophy question here..
<SpamapS> mysql has about 450 variables that can be set at various times
<SpamapS> some only at startup
<SpamapS> some without restart..
<SpamapS> I'm of a mind to just expose two things in config.yaml.. "Alternative my.cnf" which would just be your hand-crafted my.cnf .. and "dynamic-global-variables" which would be a set of vars that you want to set.
<hazmat> SpamapS, use postgres ;-)
 * hazmat wonders if that's a valid answer
<SpamapS> Yes, its valid as long as you'd like to spend 2x as much on scaling. ;)
<SpamapS> to be fair, you'd have 0.1% more correct data. :)
<SpamapS> for these and other numbers, please see "my rear end" which is where I pulled them from. :)
<hazmat> SpamapS, can those startup values be changed between starts?
<SpamapS> Same thing applies to postgresql actually.
<hazmat> SpamapS, postgres has a lot less hacks masquerading as config
<SpamapS> hazmat: some of them have really strange effects when you change them between restarts... but all of them *can* be changed.
<SpamapS> hazmat: http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html .. which one is more of a hack than slony?
<SpamapS> ;)
<hazmat> SpamapS, slony is so pre 9.0 ;-)
<SpamapS> ok well now that we've gotten the religious argument out of it.. lets *PRETEND* that I said postgres and I was talking about wal log sizes and buffer sizes.
<SpamapS> (lets also pretend that postgres's multi-process model can scale beyond 1000 concurrent connections too! ;)
<SpamapS> postgres has tons of tunables .. and some things that can't be changed like charsets
<SpamapS> so its the same philosophical situation
<SpamapS> you have a database, which is tunable. Should we try to model all of them into a giant config.yaml .. or just give admins the tools to set them.
<SpamapS> It would be *cooler* if they were in a config.yaml
<SpamapS> but more work now, and later when new versions of mysql come out.
<elmo> how is this not configuration management?
<SpamapS> elmo: it is config management.. we've said all along that config management tools are useful for applying the things that one has told juju about.
<niemeyer> SpamapS: Yes, they should go into config.yaml over time.. is there any disagreement there?
<SpamapS> If puppet already has a module for building my.cnf with all the settings specified or not.. I'll gladly steal that for the charm.
<SpamapS> niemeyer: I'm asking philosophically, do we want to just let people specify their own my.cnf or build the my.cnf for them from the individual settings.
<niemeyer> hazmat: It'd be nice to get the latest agreement in terms of the subordinate spec committed onto the documentation as a draft
<niemeyer> SpamapS: I'm personally fine with whatever people are happiest with, but my gut feeling is that independent options are nicer to work with
<SpamapS> niemeyer: Ok well there's another way to look at it which is to have some higher level options, like 'dataset_size' or 'transaction_safety' which adjust several options based on best practices.
<niemeyer> SpamapS: Maybe.. having conflicting options has implications, though
<SpamapS> niemeyer: specific overrides general is typically how I'd handle that.
<niemeyer> SpamapS: Sure, but it's not clear what's specific and what's general
<SpamapS> so "dataset_size=99G" is general.. "innodb_buffer_pool_size=3915852021" is general
<SpamapS> err
<SpamapS> specific
<SpamapS> :-P
<niemeyer> SpamapS: If one sets two conflicting options, one of them will win, and it's not clear what's what
<niemeyer> SpamapS: Not advocating against it, but just recognizing that there are cognitive challenges there
<SpamapS> sure it is.. if I set both of those, I'd fully expect innodb_buffer_pool_size to win even if normally dataset_size wins.. I'd also log a warning.. "dataset_size ignored for innodb_buffer_pool_size"
<SpamapS> The moment where I start doing individual variables is where I stop trusting the general settings to do anything for me.
<niemeyer> SpamapS: Me, on the other hand, being a postgres newbie, would be completely puzzled
<SpamapS> well why are you setting a specific variable if you're a noob?
<niemeyer> SpamapS: That's what I mean.. you're taking the stance of having full understanding of what's going on and expecting others to be equally informed. Doesn't work in practice.
<SpamapS> Ok not sure where I took a stance. My suggestion is that we make config.yaml comprised of big easy knobs for newbies, and tiny intricate knobs for the experts
<niemeyer> SpamapS: and my suggestion is to use caution while doing that, because setting conflicting options blindly confuses people
<SpamapS> Yeah, I think we'd have to have it documented in the description which variables it was setting.
<SpamapS> "Changes query_cache_size, innodb_buffer_pool_size, etc. etc."
<niemeyer> SpamapS: There are other things that may be done, such as using appropriate naming that gives hints
<SpamapS> override_* for the variables might help
<SpamapS> like... set dataset-size=99G override-innodb-buffer-pool-size=500G seems clear to me that I'm overriding that one variable.
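The "specific overrides general" rule being debated, sketched with one big knob and one expert variable (the knob names come from the conversation; the derivation ratio is made up for illustration):

```python
# Derive tuned settings from a general knob like dataset-size, then let an
# explicitly set expert variable win, with a warning - as SpamapS proposes.
# The buffer-pool-equals-dataset rule of thumb is illustrative only.
import warnings

def size_to_bytes(s):
    units = {"G": 1 << 30, "M": 1 << 20, "K": 1 << 10}
    return int(float(s[:-1]) * units[s[-1]]) if s[-1] in units else int(s)

def effective_config(general, overrides):
    derived = {}
    if "dataset-size" in general:
        # illustrative rule of thumb: buffer pool sized to the whole dataset
        derived["innodb_buffer_pool_size"] = size_to_bytes(general["dataset-size"])
    for key, val in overrides.items():
        if key in derived:
            warnings.warn(f"dataset-size ignored for {key}")
        derived[key] = val            # specific always wins over general
    return derived

cfg = effective_config({"dataset-size": "99G"},
                       {"innodb_buffer_pool_size": size_to_bytes("500G")})
```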
<niemeyer> SpamapS: Yeah, or the other side, wizard_*..
<marcoceppi> IMO the charm should set those depending on the size of the instance it's on
<niemeyer> s/_/-/
<marcoceppi> It's more hardware dependent than anything else
<SpamapS> marcoceppi: you don't necessarily have permission to take all the resources on the instance you're on.
<SpamapS> for LXC, that would be disaster. :)
<SpamapS> they all say they have 4G, because my laptop has 4G
<marcoceppi> blurg
<hazmat> i wonder if they would say that if we were using the mem cgroup
<hazmat> probably
<SpamapS> probably just a bug in the /proc wrapping
<SpamapS> but thats beside the point
<SpamapS> Even on a bare instance.. with subordination.. you don't know how much RAM to leave for instrumentation/extra stuff
<SpamapS> I think we should strive for charms to have defaults that work best on m1.small .. and people can tune-up from there. Eventually maybe we will be able to have config settings that default to '2*constraint(mem)'
<SpamapS> or, I suppose better would be constraint(mem)/2 ;-)
<SpamapS> Ok for mysql I'll just expose general-dataset-size and general-query-cache-mode .. if people want to set more, they can add them as needed I suppose.
<SpamapS> (though that gets back to the whole question of how do I fork and switch my deployed service to my local charm)
<hazmat> SpamapS, yeah.. we'll need to be able to upgrade across repos to support that
<hazmat> we've effectively said specifying a --repository doesn't affect the lookup order..
<hazmat> which is problematic when you're trying to say which repository to use
<SpamapS> I think it has to be an explicit decision
<SpamapS> "I was using that charm identifier, now please start using this one"
<hazmat> i guess you do so via a fully qualified reference
<SpamapS> upgrade-charm doesn't take a charm name, it takes a service name
<hazmat> juju upgrade --using=cs:~fewbar/mysql database
<SpamapS> seems like thats not an "upgrade" .. more of a switch or change
<SpamapS> More of a psychological problem :)
<niemeyer> SpamapS: Agreed.. it should be its own internal command, even if we implement it internally as mostly the same
<niemeyer> s/internal command/command
<niemeyer> juju replace database cs:~fewbar/mysql
<niemeyer> Or some such
<_mup_> Bug #900517 was filed: config-get on an int set to 0 does not return '0' but an empty string <juju:New> < https://launchpad.net/bugs/900517 >
 * SpamapS commits config.yaml for memcached
<negronjl> m_3_: ping
<hazmat> jimbaker, i think i forgot how tricky/messy this sshclient/forward code was
<jimbaker> hazmat, i tried rewriting that code twice, so i agree with that!
<jimbaker> i figured a rewrite might help understand why i was getting that strange behavior seen in bug 892254, but unfortunately it was too big of a hole, so i punted
<_mup_> Bug #892254: SSHClient does not properly handle txzookeeper connection errors <juju:New> < https://launchpad.net/bugs/892254 >
<jimbaker> hazmat, it would be worth fixing test_share_connection if you end up fixing sshclient
<Ryan_Lane> m_3_: last week we upgraded from cactus to diablo, if you guys would like to try things out again :)
<mpl> I had to deal with that stupid flat tire, so I couldn't pursue that further. I'll retry tomorrow. especially since I'll have my other machine, on which I believe the tests pass. See you tomorrow.
#juju 2011-12-06
<SpamapS> Ryan_Lane: w00t!
<SpamapS> Ryan_Lane: m_3 is on holiday all week.. best email him to be sure he gets the message. :)
<Ryan_Lane> ah. ok. cool ;)
<hazmat> jimbaker, figured that one out
<hazmat> re connection error, it was the chain aware deferred thingy on errors
<hazmat> jimbaker, i'm wondering about resurrecting your last rewrite
<SpamapS> config-get needs the same --format=shell that relation-get has
<SpamapS> config-changed runs ridiculously slow with 40 execs .. :-P
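A sketch of the --format=shell idea: pull the whole config in one go (a canned JSON string stands in here for a single hypothetical `config-get --format=json` exec) and emit shell assignments, instead of forty separate execs:

```python
# Turn a full config blob into shell-style assignments in one pass.
# The config keys/values are made up; only the conversion is the point.
import json
import shlex

raw = '{"dataset-size": "1G", "query-cache-size": 0, "tuning-level": "safest"}'

def to_shell(raw_json):
    lines = []
    for key, value in sorted(json.loads(raw_json).items()):
        name = key.upper().replace("-", "_")        # shell-safe variable name
        lines.append(f"{name}={shlex.quote(str(value))}")
    return "\n".join(lines)

print(to_shell(raw))
```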
<akgraner> ok Silly question but do you all have weekly IRC meetings and if so where can I find the wiki link to the agenda and meeting logs etc - I wanted to include juju in with the development teams weekly meetings section in UWN
<SpamapS> No there's no weekly IRC meeting.
<akgraner> well that explains why I couldn't find anything - thanks :-)
<jimbaker> SpamapS, sounds good
<SpamapS> akgraner: we do a google hangout weekly, and it definitely *needs* an agenda and minutes..
<akgraner> ahh ok when you all formalize that can you drop the link into the news channel so we can include it please :-)
<jimbaker> hazmat, i hope you have better luck in that rewrite
<akgraner> SpamapS, looking forward to the juju webinar...:-)
<jimbaker> but it makes perfect sense that it was the chain aware stuff - that was the obvious suspect
<SpamapS> akgraner: yeah should be good. Hopefully we can finish our slides tomorrow and have time to practice. :)
<_mup_> Bug #900560 was filed: after upgrade-charm, new config settings do not all have default values <juju:New> < https://launchpad.net/bugs/900560 >
<SpamapS> jimbaker: I just used juju scp btw. :)
<SpamapS> jimbaker: ty!
<negronjl> SpamapS: I just uploaded hadoop-mapreduce charm ... It's a charm that allows you to decouple your mapreduce job from your hadoop cluster.
<SpamapS> negronjl: w00t!
<SpamapS> negronjl: would love to see a web interface for that.. isn't that kind of what HIVE does?
<negronjl> SpamapS: hive is more of a layer on top of hadoop, this charm just takes care of all the setup and cleanup that is supposed to happen before and after a job is run
<negronjl> SpamapS: I'll work on hive, pig, etc. eventually :)
<negronjl> SpamapS: hive lets you use more SQL like commands to query hadoop
<SpamapS> that would be nice for demos.. dump a giant chunk of data in and start doing ad-hoc queries
<negronjl> SpamapS: I'll work on that soon ...
<negronjl> SpamapS: this new hadoop-mapreduce charm shows users how they can use their own custom mapreduce jobs, make a charm out of it and run it against a hadoop cluster that's already deployed via juju
<negronjl> SpamapS: so, it allows you to juju deploy it all :)
<negronjl> SpamapS: I'll work on hive next ( as time permits )
<Skaag> can I make my own ubuntu servers juju compatible so that my apps do not use amazon's ec2 cloud?
<SpamapS> sounds great tho
<Skaag> so it's not possible?
<SpamapS> Skaag: you can, I was responding to negronjl ;)
<Skaag> ah, very cool
<SpamapS> Skaag: right now you only can do that using the 'orchestra' provider
<Skaag> which costs money?
<SpamapS> nope
<SpamapS> just time.. ;)
<SpamapS> its kind of raw
<SpamapS> and you only get one service per machine
<Skaag> yes that sucks
<Skaag> I assume it's going to be fixed...
<SpamapS> Yeah there are two features which will help with that.. subordinate services and placement constraints.
<SpamapS> subordinate services will let you marry two things to basically create like a super charm.
<SpamapS> placement constraints will let you define where you want services to be placed based on things like cpu, ram, or just arbitrary tags that you assign to machines
<SpamapS> Skaag: so depending on what you want, your needs should be served
<SpamapS> cripes, its late.. I need to go
 * SpamapS disappears
<Skaag> sounds awesome!
<Skaag> thanks SpamapS
<shang> hi all, anyone experienced issue when deploying ganglia to EC2? (using  "bzr checkout lp:charm/ganglia ganglia" command?)
 * hazmat yawns
<jimbaker> SpamapS, glad juju scp proved to be useful
<niemeyer> Good morning!
<fwereade> morning niemeyer!
<niemeyer> rog: I've mixed things up in your review, sorry about that
<niemeyer> That's what I get for reviewing late at night
<TheMue> moo
<niemeyer> rog: You have a saner review
<niemeyer> TheMue: Yo
<rog> niemeyer: thanks, i'm on it
<rog> niemeyer, fwereade, TheMue: mornin' everyone BTW
<fwereade> heya rog
<TheMue> yo rog and fwereade
<niemeyer> rog: Btw, gocheck has a FitsTypeOf checker that does what you wanted there
<niemeyer> Assert(x, FitsTypeOf, (*environ)(nil))
<rog> niemeyer: thanks. all done. https://codereview.appspot.com/5449065/
<Daviey> why isn't this review handled on launchpad?
<niemeyer> Daviey: It is..
<niemeyer> Daviey: https://code.launchpad.net/~rogpeppe/juju/go-juju-ec2-region/+merge/84256
<niemeyer> Daviey: But codereview does inline code reviews
<niemeyer> Daviey: So we use both
<Daviey> seems a quirky workflow, but ok. :/
<niemeyer> Daviey: http://blog.labix.org/2011/11/17/launchpad-rietveld-happycodereviews
<niemeyer> Daviey: The quirky workflow is significantly simpler and faster to deal with
<TheMue> Do we have a complete paper for the used tools, workflows, standards and conventions?
<niemeyer> TheMue: Nope
<niemeyer> TheMue: You can check out the Landscape one for some background
<TheMue> something like golangs "Getting Started" would be fine. nothing big.
<niemeyer> TheMue: This blog post is a Getting Started pretty much
<TheMue> landscape one?
<rog> niemeyer: next: https://codereview.appspot.com/5449103/
<niemeyer> TheMue: Nah, nevermind.. reading it again I notice it's of little use for us
<TheMue> niemeyer: ok
<rog> niemeyer: one mo, i need to merge trunk
<fwereade> niemeyer: consider a unit agent coming up to discover that the unit workflow is in state "stop_error"
<TheMue> niemeyer: maybe I'll write down my "starter experiences" ;)
<niemeyer> TheMue: Sounds like a plan
<niemeyer> fwereade, rog: ok
<fwereade> niemeyer: it seems to me that that could only have happened as a result of a failed attempt to shut down cleanly at some stage
<niemeyer> fwereade: Hmm.. yeah
<fwereade> niemeyer, and that in that case it would be reasonable to retry into "stopped" without hooks, and then to transition back to "started" as we would have done had the original stop worked properly
<fwereade> niemeyer, please poke holes in my theory
<niemeyer> fwereade: What do you mean by `to retry into "stopped"`
<niemeyer> ?
<niemeyer> fwereade: The user is retrying? THe system? etc
<fwereade> niemeyer, I was referring to the "retry" transition alias
<fwereade> niemeyer, may as well just explicitly "retry_stop"
<niemeyer> fwereade: I'm missing something still
<niemeyer> fwereade: Probably because I don't understand it well enough
<fwereade> niemeyer, I think I've misthought anyway
<niemeyer> fwereade: Does stop transition back onto "started" when it works?
<fwereade> niemeyer, a clean shutdown of the unit agent puts the workflow into "stopped", and that will go back into "started" when it comes up again
<niemeyer> fwereade: Aha, ok
<niemeyer> fwereade: That makes sense
<niemeyer> fwereade: So the unit shuts down, and goes onto stop_error because stop failed..
<fwereade> niemeyer: so I think the general idea, that coming up in "stop_error" should somehow be turned into a "started" state, remains sensible
<niemeyer> fwereade: You mean automatically?
<rog> niemeyer: all clear now.
<fwereade> niemeyer, I do, but I may well be wrong
<niemeyer> fwereade: I'm not sure
<niemeyer> fwereade: I mean, I'm not sure that doing this automatically is a good idea
<niemeyer> fwereade: I'd vote for explicitness until we understand better how people are using this
<fwereade> niemeyer, that's perfectly reasonable
<niemeyer> fwereade: The reason being that a stop_error can become an exploded start and/or upgrade soon enough
<niemeyer> fwereade: I'd personally appreciate knowing that stop failed, so I can investigate what happened on time, rather than blowing up in cascade in later phases, which will be harder to debug
<fwereade> niemeyer, that definitely makes sense as far as it goes
 * fwereade thinks a moment
<niemeyer> fwereade: resolved [--retry] enables the workflow of ignoring it
<fwereade> niemeyer, but it leaves the unit in a "stopped" state without an obvious way to return to a "started" state
<fwereade> niemeyer, I'll need to check whether bouncing the unit agent will then do the correct switch to "started"
<fwereade> niemeyer, which I guess would be sort of OK, but it's not necessarily the obvious action to take to get back into "started" once you've resolved the stop error
<niemeyer> fwereade: What about "resolved"?
<niemeyer> fwereade: "resolved" should transition it to stopped rather than started
<niemeyer> fwereade: It always moves it to the next intended state, rather than the previous one
<fwereade> niemeyer, yes, but "stopped" is not a useful state for a running unit to be in
<niemeyer> fwereade: Indeed.. but I don't know what you mean by that
<niemeyer> fwereade: The stop hook has run to transition it onto stopped.. if you resolved a stop_error, it should be stopped, not running
<fwereade> niemeyer: it should, yes, but the purpose of that stopped state is to signal that it's in a nice clean state for restart (if the restart ever happens ofc)
<fwereade> niemeyer: (it could just have gone down for ever, but I don't think that's relevant here)
<fwereade> niemeyer: we end up past the point at which we should detect "ok, we were shut down cleanly so we can be brought up cleanly, let's bring ourselves back up"
<niemeyer> fwereade: You're implying that the stop hook is only called for starting, which is awkward to me
<fwereade> "without painful considerations like 'we're in charm_upgrade_error state, reestablish all watches and relation lifecycles but keep the hook executor stopped'"
<fwereade> niemeyer: I'm implying, I think, that the stop hook is called for stopping, and that that can either happen because the whole unit is going away forever *or* because we've been killed cleanly for, say, machine reboot
<niemeyer> fwereade: It can also happen for any other reason, like a migration, or cold backup, etc
<fwereade> niemeyer: and that when the agent comes up again, "stopped" is considered to be a sensible state from which to transition safely to "started", just as None or "installed" would be
<fwereade> niemeyer: I'm not sure I see the consequences there, expand please?
<niemeyer> fwereade: That's fine, but the point is that some of these actions are not safe to execute if stopped has actually blown up, I think
<fwereade> niemeyer: I'm comfortable with the decision not to automatically start after stop_error
<fwereade> niemeyer: I'm not confident that we have a sensible plan for transitioning back to started once the user has fixed and resolved
<niemeyer> fwereade: It feels like there are two situations:
<niemeyer> fwereade: 1) The unit was stopped temporarily, without the machine rebooting
<niemeyer> fwereade: 2) The unit was stopped because the machine is rebooting or is being killed
<niemeyer> fwereade: Are you handling only 2) right now?
<niemeyer> fwereade: Or is there some situation where you're handling 1)?
<niemeyer> rog: Review delivered
<rog> niemeyer: on it
<fwereade> niemeyer, I'm handling both, I think -- how do we tell which is the case?
<niemeyer> fwereade: Is there a scenario where the unit stops without a reboot? How?
<fwereade> niemeyer: not a *planned* situation
<niemeyer> fwereade: What about unplanned situations.. how could it happen?
<fwereade> niemeyer: but what about, say, an OOM kill?
<niemeyer> fwereade: OOM kill of what?
<fwereade> niemeyer, the unit agent... could that never happen?
<fwereade> niemeyer, even, a poorly written charm that accidentally kills the wrong PID
<niemeyer> fwereade: Wow, hold on. The unit agent dying and the unit workflow are disconnected, aren't they?
<fwereade> niemeyer: the workflow transitions to "stopped" when the agent is stopped
<niemeyer> fwereade: WHAAA
<niemeyer> fwereade: How can it possibly *transition* when it *dies*?
<fwereade> niemeyer: ok, as I understand it, when we're killed normally, stopService will be called
<fwereade> niemeyer: it's only a kill -9 that will take us down without our knowledge
<niemeyer> fwereade: and an OOM, and a poorly written charm, and ...
<fwereade> niemeyer: indeed, I'm considering that there are two scenarios, respectively equivalent to kill and kill -9
<rog> niemeyer: response made.
<fwereade> niemeyer: in neither case do we know for what reason we're shutting down
<fwereade> niemeyer: (and in only the first case do we know it's even happened)
<niemeyer> fwereade: So why are we taking a kill as a stop? Didn't you implement logic that enables the unit agent to catch up gracefully after a period of being down?
<fwereade> niemeyer: we were always taking a kill as a stop
<fwereade> http://paste.ubuntu.com/761597/
<fwereade> (given that a "friendly" kill will stopService)
<niemeyer> fwereade: ok, the question remains
<fwereade> niemeyer: ...sorry, I thought you were following up your last message
<fwereade> niemeyer: I have implemented that logic, or thought I had, but have discovered a subtlety in response to hazmat's latest review
<fwereade> niemeyer: well, in hindsight, an obviousty :/
<hazmat> g'morning
<fwereade> heya hazmat
<hazmat> i ended up rewriting the ssh client stuff last night
<hazmat> fwereade, i'm heading back to your review right now
<fwereade> hazmat: might be better to join the conversation here
 * hazmat catches up with the channel log
<fwereade> hazmat: I'm pretty sure the initial version of the unit state handling is flat-out wrong :(
<fwereade> niemeyer: ok, stepping back a mo
<fwereade> niemeyer: when the unit agent comes up, the workflow could be in *any* state, and we need to make sure the lifecycles end up in the correct state
<niemeyer> fwereade: Right
<niemeyer> fwereade: What I'm questioning is the implicit stop
<fwereade> niemeyer: cool, we're on the same page then
<niemeyer> fwereade: We should generally not kill the unit unless we've been explicitly asked to
<niemeyer> fwereade: In the face of uncertainty, attempt to preserve the service running
<fwereade> niemeyer: I am very sympathetic to arguments that we should *not* explicitly go into the "stopped" state when we're shut down
<fwereade> niemeyer: cool
<fwereade> niemeyer: am I right in thinking there's some bug where the stop hook doesn't get run anyway?
<niemeyer> fwereade: I agree that we should transition stop_error => started when we're coming from a full reboot, though
<niemeyer> fwereade: The stop story was mostly uncovered before you started working on it
<fwereade> niemeyer: heh, I think I'm more confused now :(
<fwereade> niemeyer: the discussion of start hooks on reboot seems to me to be unconnected with the actual state we're in
<fwereade> niemeyer: by its nature, that guarantee is an end-run around whatever normal workflow we've set up, isn't it?
 * fwereade resolves once again to lrn2grammar, lrn2spell
<hazmat> fwereade, why would it be an end run around to what the state of the system is?
<hazmat> errors should be explicitly resolved
<hazmat> i see the point re stop effectively being a final transition for resolved
<niemeyer> fwereade: Hmm
<niemeyer> fwereade: I agree with hazmat in the general case
<niemeyer> fwereade: What I was pointing at, though, is that there's one specific case where that's not true: reboots
<niemeyer> fwereade: Because we're *sure* the unit was stopped, whether it liked it or not
<fwereade> hazmat, niemeyer: surely there are only a few "expected" states in which to run the start hook?
<niemeyer> fwereade: That's a special case where *any* state should be transitioned onto "stopped", and then the normal startup workflow should run
<fwereade> niemeyer: so we should set up state transitions to "stopped" for every state?
<niemeyer> fwereade: That's a bit of a truism (sure, we don't run start arbitrarily)
<hazmat> i'm wondering if the reboot scenario is better handled explicitly
<fwereade> niemeyer: ...including, say, "install_error"?
<niemeyer> hazmat: Right, that's my thinking too
<hazmat> via marking the zk unit state, rather than implicit detection
<niemeyer> hazmat: Nope
<niemeyer> hazmat: Machines do crash, and that's also a reboot
<hazmat> true
<niemeyer> fwereade: No.. why would we?
<niemeyer> fwereade: You seem to be in a different track somehow
<niemeyer> fwereade: Maybe I'm missing what you're trying to understand/point out
<fwereade> niemeyer: I think the discussion has broadened to cover several subjects, each with several possible tracks ;)
 * fwereade marshals his thoughts again
<niemeyer> fwereade: There are two cases:
<niemeyer> 1) Reboot
<niemeyer> In that case, when the unit agent starts in this scenario, it should reset the state to stopped, and then handle the starting cleanly.
<niemeyer> 2) Explicit stop + start (through a command to be introduced, or whatever)
<niemeyer> In this scenario, a stop_error should remain as such until the user explicitly resolves it
<niemeyer> resolving should transition to *stopped*, and not attempt to start it again
<niemeyer> Then,
<niemeyer> There's a third scenario
<niemeyer> 3) Unknown unit agent crashes
<niemeyer> Whether kill or kill -9 or any other signal, the unit agent should *not* attempt to transition onto stopped, because the user didn't ask for the service to stop.
<niemeyer> Instead, the unit agent should be hooked up onto upstart
<niemeyer> So that it is immediately kicked back on
<niemeyer> Even if the user says "stop unit-agent".. we should not stop the service
<hazmat> 3) sounds good, the stop dance needs explicit coordination with non-container agents; decoupling explicit transitions from implicit process actions is a win.
<fwereade> niemeyer: ok, so I guess at the moment we don't have anything depending on the existing stop-on-kill behaviour
<hazmat> 2) isn't really a use case we have atm, but sounds good. 1) the handling needs some exploration: what does an install_error mean in this context
<niemeyer> hazmat: Good point, agreed
<hazmat> are we resetting to a null state, or are we creating guide paths to start from every point in the graph
<niemeyer> hazmat: Even on reboots, it should take into account which states are fine to ignore
<niemeyer> stop_error is one of them
<niemeyer> start_error is another
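[A toy sketch of the rules just described, for the record. State names come from the conversation; the transition logic is illustrative only, not juju's actual workflow implementation.]

```python
# Toy sketch of the unit-workflow rules discussed above.
# "stop_error" and "start_error" are the states niemeyer says a
# reboot may safely ignore; everything else here is illustrative.

IGNORABLE_ON_REBOOT = {"stop_error", "start_error"}

def state_after_reboot(state):
    """Case 1: after a machine reboot we *know* the unit stopped,
    so reset ignorable states to 'stopped' and let the normal
    startup workflow run from there."""
    if state in IGNORABLE_ON_REBOOT or state == "stopped":
        return "stopped"
    return state  # e.g. install_error still needs explicit resolution

def resolve(state):
    """Case 2: 'resolved' moves to the next intended state, not the
    previous one -- a resolved stop_error becomes 'stopped'."""
    transitions = {"stop_error": "stopped", "start_error": "started"}
    return transitions.get(state, state)

print(state_after_reboot("stop_error"))  # -> stopped
print(resolve("stop_error"))             # -> stopped
```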
<fwereade> niemeyer: so indeed I'm very happy with (3) and only a little confused about (2): because I'm now not sure when we *should* be in a "stopped" state
<hazmat> niemeyer, right.. by ignore you mean ignoring resolved action, and just reset the workflow to a point where we can call start, and possibly install again
<niemeyer> fwereade: Imagine we introduce three fictitious commands (we have to think, let's not do them now): juju start, juju stop, and juju restart.
<hazmat> juju transition <state>
<niemeyer> fwereade: Do you get the picture?
<niemeyer> hazmat: Ugh, nope.. mixing up implementation and interface
<hazmat> ;-)
<fwereade> niemeyer: yes, but I'm worried about here-and-now; I see https://bugs.launchpad.net/juju/+bug/802995 and https://bugs.launchpad.net/juju/+bug/872264
<_mup_> Bug #802995: Destroy service should invoke unit's stop hook, verify/investigate this is true <juju:New> < https://launchpad.net/bugs/802995 >
<_mup_> Bug #872264: stop hook does not fire when units removed from service <juju:Confirmed> < https://launchpad.net/bugs/872264 >
<hazmat> juju coffee, bbiam
<niemeyer> fwereade: Both of these look like variations of the 1) scenario.. why would we care about any erroring states if the unit is effectively being doomed?
<fwereade> niemeyer: it seems to me that if we don't stop on stopService, we will *never* go into the "stopped" state
<fwereade> niemeyer: I'm keen on making that change
<niemeyer> fwereade: Why? Stop can (and should, IMO) be an explicitly requested action.
<niemeyer> fwereade: Stop is "Hey, your hundred-thousand dollars database server is going for a rest"
<fwereade> niemeyer: back up again: you're saying that we shouldn't go into "stopped" just because the unit agent is shutting down
<niemeyer> fwereade: It doesn't feel like the kind of thing to be done without an explicit "Hey, stop it!" from the admin
<fwereade> niemeyer: so it's a state that we won't ever enter until some hypothetical future juju start/stop command is implemented?
<niemeyer> fwereade: Define "shutting down"
<niemeyer> fwereade: Shutting down can mean lots of things.. if it means "kill", yes, that's what I mean
<niemeyer> fwereade: If it means, the unit was notified that it should stop and shut down, no, it should execute the stop hook
<fwereade> niemeyer: ok, that makes perfect sense, it just seems that we don't actually do that now
<niemeyer> fwereade: No, it's not a hypothetical future.. a unit destroy request can execute stop
<niemeyer> fwereade: Because it's an explicit shut down request from the admin
<niemeyer> fwereade: But that's not the same as twisted's stopService
<fwereade> niemeyer: ok, but in that case we'll never be in a position where we're coming back up in "stopped" or "stop_error", which renders the original question moot
<niemeyer> fwereade: That's right
<niemeyer> fwereade: it's a problem we have, and that will have to be solved at some point, but it doesn't feel like you have it now
<niemeyer> Alright, and I really have to finish packing, get lunch, and leave, or will miss the bus
<fwereade> niemeyer: sorry to hold you up, didn't realise :(
<fwereade> niemeyer: take care & have fun :)
<niemeyer> fwereade: No worries, it was a good conversation
<niemeyer> Cheers all!
<hazmat> niemeyer, have a good trip
<hazmat> fwereade, so do you feel like you've got a clear path forward?
<fwereade> hazmat: still marshalling my thoughts a bit
<fwereade> hazmat: the way I see it right now, I have to fix the stop behaviour and see what falls out of the rafters, and then move forward from there again
 * hazmat checks out the mongo conference lineup
<hazmat> fwereade, fixing the stop behavior is a larger scope of work
<hazmat> fwereade, i'd remove/decouple stop state from process shutdown and move forward with the restart work
<fwereade> hazmat: ...then my path is not clear, because I have to deal with coming up in states that are ...well, wrong
<hazmat> fwereade, a writeup on stop stuff.. http://pastebin.ubuntu.com/761640/
<hazmat> i feel like i'm mixing up different things though
<hazmat> fwereade, got a moment for g+?
<fwereade> hazmat, yeah, sounds good
<niemeyer> hazmat: Thanks!
<niemeyer> Heading off..
<niemeyer> Will be online later from the airport..
<rog> quick query: anyone know of a way to get bzr, for a given directory, to add all files not previously known and remove all files not in the directory that *were* previously known? a kind of "sync", i guess.
<rog> i can do "bzr rm $dir" then "bzr add $dir" but that's a bit destructive
<mpl> rog: I'm still a bit lost between the different gozk repos. what's the difference (in terms of import) between launchpad.net/gozk and launchpad.net/gozk/zookeeper? is one of them a subpart of the other or are they really just different versions of the same thing overall?
<rog> mpl: the latter is the later version
<rog> mpl: the former is the currently public version
<rog> mpl: i just found a pending merge that needs approval from niemeyer, BTW
<mpl> rog: ok, thx. and with which one of them do you think I should work?
<rog> mpl: launchpad.net/gozk/zookeeper
<mpl> rog: good, because gotest pass with that one for me, while they don't with the public one. also, how come there is no Init() in launchpad.net/gozk/zookeeper? has it been moved/renamed to something else?
<rog> mpl: let me have a look
<rog> mpl: Init is now called Dial
<mpl> yeah looks like it, thanks for the confirmation.
<lynxman> hazmat: ping
<hazmat> lynxman, pong
<rog> mpl: seems like my merge has actually gone in recently
<lynxman> hazmat: quick question for you, we're trying to see how to do a remote deployment with juju
 * hazmat nods
<lynxman> hazmat: so we were thinking about a "headless" juju deployment in which it connects to zookeeper once the formula has been deployed
<lynxman> hazmat: part of the formula being setting up a tunnel or route to the zookeeper node
<lynxman> hazmat: what are your thoughts about that? :)
<hazmat> lynxman, what do you mean by remote in this context?
<lynxman> hazmat: let's say I have a physical platform and I want to extend it by deploying nodes on another platform which is not on the same network
<lynxman> hazmat: but it's just an extension of the first platform, in N nodes as necessary
<hazmat> lynxman, like different racks different or like different data centers different
<lynxman> hazmat: different data centers different :)
<hazmat> ie. what's the normal net latency
<lynxman> hazmat: latency would be a bit higher than usual, let's say around 200ms
<hazmat> and increased risk of splits
<lynxman> hazmat: exactly :)
<hazmat> lynxman, so in that case, i'd say its probably best to model two juju different environments, and have proxies for any inter-data center relations
<lynxman> hazmat: hmm could you point me towards proper documentation for proxying? is Juju so far okay with that?
<hazmat> lynxman, hmm.. maybe its worth taking a step back, what sort of activities do you want that coordinate across the data centers
<lynxman> hazmat: we want to extend a current deployment, let's say a wiki app, set up the appropriate tunnels for all the necessary db backend comms
<lynxman> hazmat: just to support periods of higher traffic
<lynxman> hazmat: so I'd extend my serverfarm on certain hours of the day, for example :)
<hazmat> lynxman, well, that's not exactly the well-known case.. extending it across the world during certain hours of the day on an architecture that wants connected clients is a different order
<hazmat> lynxman, so proxies are probably something that would be best provided by juju itself. it's possible to do so in a charm, but in cases like this you'd effectively be forking the charm you're proxying
<lynxman> hazmat: pretty much yeah
<lynxman> hazmat: we want to use the colocation facility of juju to add a proxy charm under the regular one and such
<hazmat> lynxman, atm we're working on making it so that juju agents can disconnect for extended periods and come back in a sane state
<lynxman> hazmat: hmm interesting, any ETA for that or is it in the far future?
<hazmat> lynxman, its for 12.04
<lynxman> hazmat: neat :)
<hazmat> lynxman, its not clear how the remote dc is exposed via the provider in this case which i assume is orchestra
<lynxman> hazmat: the idea is to add either a stunnel or a route to a vpn concentrator, which will be deployed by a small charm or orchestra itself as necessary
<hazmat> lynxman, right, but it wouldn't be a single orchestra endpoint controlling machines at each data center, they would be separate
<lynxman> hazmat: exactly
<hazmat> lynxman, so i'm still thinking its better to try and model this as separate environments
<lynxman> hazmat: it'd be configured by cloud-init
<lynxman> hazmat: hmm I see
<lynxman> hazmat: so any docs you can point me at on how to connect two zookeeper instances?
<hazmat> its not a single endpoint we're talking to, and even just for redundancy, we'd want each data center to be functional in the case of a split
 * hazmat ponders proxies
<hazmat> so in this case you'd want to have the tunnel/vpn as a subordinate charm, and a proxy db, that you can communicate with.
<hazmat> hmm.. lynxman i think the nutshell is cross dc isn't on the roadmap for 12.04, we will support different racks, different availability zones, etc. but i don't think we have the bandwidth to do cross-dc
<hazmat> well
<lynxman> hazmat: well we're trying to investigate the options on that basically
<lynxman> hazmat: our first idea was a headless juju that could deploy a charm and as part of the charm connect itself back to the zookeeper
<lynxman> hazmat: just to keep it as atomic as possible
<hazmat> lynxman, fair enough, lacking support in the core, the options are if you have a single provider endpoint, you can try it anyways and it might work. or you'll be doing custom charm work to try and cross the chasm.
<hazmat> headless juju is not a meaningful term to me
<lynxman> hazmat: headless as the head being zookeeper :)
<hazmat> lynxman, still not meaningful ;-)
<hazmat> its like saying a web browser without html
<lynxman> hazmat: hmm the idea is to deploy a charm through the juju client and once the charm is setup let it connect through a tunnel to zookeeper to report back
<lynxman> hazmat: does that make more sense?
<hazmat> less
<hazmat> it would make more sense to register a machine for use by juju
<hazmat> since a single provider is lacking here
<hazmat> and then it would be available to the juju env, and you could deploy particular units/services across it with the appropriate constraint specification
<lynxman> hazmat: so I'd need to do the tunneling part as a pre-deployment before juju using another tool, be it cloud-init or such, right?
<hazmat> but the act of registration starts up a zk-connected machine agent
<lynxman> hazmat: then just tell juju to deploy into that machine in special using a certain charm
<hazmat> lynxman, what's controlling machines in the other dc?
<hazmat> lynxman, are they two dcs with two orchestras ?
<lynxman> hazmat: best case scenario cloud-init
<lynxman> hazmat: not necessarily
<lynxman> hazmat: but cloud-init is also integrated into orchestra so...
<hazmat> lynxman, yup
<lynxman> hazmat: it's a good single point
<hazmat> lynxman, the notion of connecting back on charm deploy isn't really the right one.. juju owns the machine since it created it, and the machine is connected to zk independent of any services deployed to it
<lynxman> hazmat: that's why I wanted to pass the idea through you to know what you thought :)
<hazmat> hence the notion of registering the machine to make it available to the environment, but thats something out of core as it violates any notion of interacting with the machine via the provider
<lynxman> hazmat: exactly, it does violate the model somehow
<hazmat> as far as approaching this in a way thats supportable in an on-going fashion, i think its valuable to try and model the different dcs as different juju environments that are communicating
<hazmat> then you could deploy your vpn charm as a subordinate charm throughout one environment to provide the connectivity to the other env
<hazmat> the lack of cross env relations and no support in the core is problematic, but it sounds more like a solvable case of a one-off deployment
<hazmat> via custom charms
<hazmat> actually maybe even generically if its done right..
<lynxman> hazmat: but that's the idea, the other dc can be used on and off, different machines, different allocations
<lynxman> hazmat: that's why I was opting for an atomic solution
<hazmat> a proxy charm would be fairly generic
<mpl> rog: which merge were you talking about?
<rog> mpl: update-server-interface (revision 24 in gozk/zookeeper trunk)
<mpl> rog: bleh, I find launchpad interface to view changes and browse files really awkward :/
<rog> mpl: i use bzr qdiff when codereview isn't available
<rog> bzr: (apt-get install qbzr)
<mpl> rog: anyway, what is this merge about. are you pointing it out because it is relevant to the Init -> Dial change we talked about?
<rog> mpl: it had lots of changes in it
<rog> mpl: and quite possibly that change included, i can't remember
<mpl> ok
<hazmat> fwereade, its not clear that coming up with an installed state would result in a transition to started
<hazmat> re ml
<_mup_> Bug #900873 was filed: Automatically terminate machines that do not register with ZK <juju:New> < https://launchpad.net/bugs/900873 >
<hazmat> jimbaker, incidentally this is my cleanup of sshclient.. http://pastebin.ubuntu.com/761938/
<jimbaker> hazmat, you have a yield in _internal_connect, but no inlineCallbacks
<hazmat> jimbaker, sure
<jimbaker> hazmat, so i like the intent (the inline form is much better for managing errors imho), but is that supposed to work as-is?
<hazmat> jimbaker, do you see a problem with it?
<hazmat> jimbaker, there's a minor accompanying change to sshforward
<jimbaker> hazmat, i just wonder why it's not decorated with @inlineCallbacks, that's all
<marcoceppi> So, can a charm run config-set and set config options?
<SpamapS> no
<SpamapS> well
<SpamapS> maybe?
<SpamapS> marcoceppi: its worth trying 'juju set' from inside a charm.. but my inclination would be that it couldn't, because it wouldn't have AWS creds to find the ZK server.
<marcoceppi> Ah, okay.
<SpamapS> marcoceppi: I do think its an interesting idea to be able to adjust the whole service's settings from hooks.
<marcoceppi> For things like blowfish encryption key, I'd like to randomly generate it and have it set in the config so juju get <service> will show it
 * marcoceppi writes a bug
<SpamapS> marcoceppi: not sure ZK is a super safe place for private keys
<marcoceppi> Neither is plaintext files, but that's what I'm working with
<niemeyer> Yo
<SpamapS> marcoceppi: what you want is the ability to feed data back to the user basically, right?
<marcoceppi> more or less
<marcoceppi> yes
<SpamapS> marcoceppi: yeah there's a need for that, Not sure if a "config-set" would be the right place
<marcoceppi> mm, it's just the first thing that came to mind for me
<SpamapS> marcoceppi: bug #862418 might encompass what you want
<_mup_> Bug #862418: Add a way to show warning/error messages back to the user <juju:Confirmed> < https://launchpad.net/bugs/862418 >
<marcoceppi> thanks
<marcoceppi> Ugh, this is probably the wrong channel, but I can't get this sed statement to work. Surely someone has had to escape a variable that was a path to work with sed before
<marcoceppi> I started with s/\//\\\//g because that seems like it would logically work, but it doesn't.
<SpamapS> marcoceppi: you can use other chars than /
<marcoceppi> For a file path?
<SpamapS> s,/etc/hosts,/etc/myhosts,g
<marcoceppi> ohhh
<marcoceppi> that actually helps a lot.
<marcoceppi> I completely forgot about that
<SpamapS> Yeah, I get all backslash crazy sometimes too then remember
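[The alternate-delimiter trick spelled out; the path is illustrative:]

```shell
# sed accepts any character as the delimiter after `s`; using `,`
# instead of `/` means a variable holding a path needs no escaping.
path="/usr/lib/hadoop"
echo "INSTALL_DIR=@PATH@" | sed "s,@PATH@,$path,g"
# -> INSTALL_DIR=/usr/lib/hadoop
```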
<nijaba> hello.  Quick question: if I want to pass parameters to my charm (retrieved by config-get), do I need to explicitly mention the yaml file on the deploy command?  I could not find doc about this.  Did I miss it?
<SpamapS> nijaba: you can pass a full configuration via yaml in deploy, or use 'juju set' after deploy.
<nijaba> SpamapS: but do I have to specify the yaml or will charmname.yaml be automatically used?
<SpamapS> nijaba: if you want deploy to use a yaml file you have to mention it
<nijaba> SpamapS: ok, thanks a lot :)
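[For reference, the deploy-time file is just a yaml mapping of service name to option values; a sketch like this (the `--config` flag spelling and the option name are from memory/illustrative, check `juju deploy --help`):]

```yaml
# limesurvey.yaml -- passed explicitly at deploy time, e.g.:
#   juju deploy --config limesurvey.yaml limesurvey
limesurvey:
  admin_email: admin@example.com
```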
<nijaba> SpamapS: another question.  in my config.yaml, how do I set my default to be an empty string.  I get: "expected string, got None" error
<marcoceppi> nijaba: default: ""
<nijaba> SpamapS: nm, found the doc finally :)
<nijaba> marcoceppi: thanks :)
<SpamapS> IMO the default for type: string *should* be ""
<SpamapS> None comes out as "None" from config-get
<SpamapS> or empty, I forget actually
<SpamapS> hrm
<nijaba> "" works as expected
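[The empty-string default that works looks like this in a charm's config.yaml (the option name is illustrative, echoing the blowfish-key idea above):]

```yaml
options:
  blowfish_key:
    type: string
    default: ""
    description: Encryption key; generated at install time if left empty.
```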
<marcoceppi> SpamapS: I've got a couple of improvements for charm-helper-sh (mainly fixes to wget) should I just push those straight to the branch or would a review still be a good thing?
<SpamapS> marcoceppi: lets do reviews for everything.. we'll pretend people are actually using these and we don't want to break it. :)
<marcoceppi> <3 sounds good
<nijaba> you'd better, cause I do now :)
<marcoceppi> \o/
<marcoceppi> It's actually a bug fix that would result in an install error :\
<hazmat> SpamapS, do you have ideas on how to reproduce  bug 861928
<_mup_> Bug #861928: provisioning agent gets confused when machines are terminated <juju:New for jimbaker> < https://launchpad.net/bugs/861928 >
<SpamapS> hazmat: yes, just terminate a machine using ec2-terminate-machines
<hazmat> SpamapS, ah
<hazmat> SpamapS, thanks
<SpamapS> hazmat: I don't know if there's a race and you have to do it at a certain time.
<hazmat> SpamapS, i've been playing around with juju terminate-machine .. haven't been able to reproduce, i'll try triggering it externally with ec2-terminate-instances
<SpamapS> hazmat: right, because juju terminate-machine cleans up ZK first. :)
<hazmat> jimbaker, have you tried reproducing this one?
<jimbaker> hazmat, not yet, i've been working on known_hosts actually
<nijaba> another stupid question: can I do a relation-get in a config-changed hook?  how do I specify which relation I am talking about?
<niemeyer> Flight time..
<niemeyer> Laters
<hazmat> nijaba, not at the moment
<nijaba> hazmat: harg.  So if I have a config file that takes stuff from the relation, the logic to implement config-changed is going to be quite complicated...
<hazmat> nijaba, you can store it on disk in your relation hooks, and use it in config-changed
<hazmat> nijaba, or just configure the service in the relation hook
<nijaba> hazmat: ah, cool.  thanks a lot
<hazmat> nijaba, don't get me wrong, that is a bug
<nijaba> hazmat: I don't, I just like the workaround
<SpamapS> nijaba: one way I've done what you're dealing with is to just have the hooks feed into one place, and then at the end, try to build the configuration if possible.. otherwise exit 0
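[hazmat's workaround, sketched: the relation hook persists whatever relation-get returned, and config-changed reads the cached copy instead of calling relation-get. Pure-Python sketch; the relation-get plumbing is elided, and the paths and keys are illustrative, not juju conventions:]

```python
import json
import os
import tempfile

def save_relation_settings(path, settings):
    # called from e.g. db-relation-changed with relation-get output
    with open(path, "w") as f:
        json.dump(settings, f)

def load_relation_settings(path):
    # called from config-changed; returns an empty dict if the
    # relation hasn't fired yet, mirroring SpamapS's "build the
    # config if possible, otherwise exit 0" approach
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

cache = os.path.join(tempfile.mkdtemp(), "db-relation.json")
save_relation_settings(cache, {"host": "10.0.0.2", "user": "wiki"})
print(load_relation_settings(cache)["host"])  # -> 10.0.0.2
```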
<_mup_> txzookeeper/trunk r45 committed by kapil.foss@gmail.com
<_mup_> [trivial] ensure we always attempt to close the zk handle on client.close, stops background connect activity associated to the handle.
<SpamapS> nijaba: tsk tsk.. idempotency issues in limesurvey charm. ;)
<nijaba> SpamapS: can you be a bit clearer?
<SpamapS> nijaba: I've got a diff now.. :)
<SpamapS> nijaba: if you remove the db relation and add it back in.. fail abounds ;)
<SpamapS> nijaba: or more correctly, if you try to relate it to a different db server
<nijaba> SpamapS: ah.  let me finish the current test and will fix
<SpamapS> nijaba: no I have a fix already
<SpamapS> nijaba: I'll push the diff up
<nijaba> SpamapS: ok thanks, will look
<SpamapS> mv admin/install admin/install.done
<SpamapS> chmod a-rwx admin/install.done
<SpamapS> This bit is rather perplexing tho
<nijaba> SpamapS: for "security" reasons, install procs are to be moved away or the admin interface will complain
<nijaba> SpamapS: they recommend removing them completely, but that's my way of doing it
<nijaba> SpamapS: just so that it could be reused
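[editor's note: a sketch of the guard nijaba describes; the paths follow the mv/chmod commands above, while run_once_install and the placeholder install step are hypothetical names]

```shell
#!/bin/sh
# Disarm the web installer after first use, but keep it around (renamed
# and made unreadable) so it can be reused rather than reinstalled.

run_install_step() { :; }   # placeholder for the real install work

run_once_install() {
    appdir="$1"
    if [ -e "$appdir/admin/install.done" ]; then
        return 0    # already installed; idempotent re-run
    fi
    run_install_step "$appdir"
    mv "$appdir/admin/install" "$appdir/admin/install.done"
    chmod a-rwx "$appdir/admin/install.done"
}
```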
<robbiew> negronjl: you ever see this type of java error with the hadoop charms? -> https://pastebin.canonical.com/56816/
<negronjl> robbiew: checking
<SpamapS> nijaba: ok, makes sense
<negronjl> robbiew: bad jar file ...
<negronjl> robbiew: Where is the jar file itself in the system ?
<robbiew> yeah...but the same example worked when I deployed using default 32bit oneiric system...this is a 64bit
<SpamapS> nijaba: seems rather silly to remove perfectly good tools.. ;)
<robbiew> negronjl:  /usr/lib/hadoop
<robbiew> i can unzip it
<negronjl> robbiew: did you do this all with juju ?
<negronjl> robbiew: if so, was it in AWS ?  Do you have the AMI so I can re-create ?  I normally use 64-bit oneiric ones and I haven't seen that error ..
<negronjl> robbiew: using the following ....      default-image-id: ami-7b39f212
<negronjl>     default-instance-type: m1.large
<negronjl>     default-series: oneiric
<negronjl> robbiew: in my environment.yaml file
 * robbiew double checks his
<robbiew> hmm
<robbiew> negronjl:  default-image-id: ami-c162a9a8
<negronjl> robbiew: I normally use the latest m1.large, oneiric AMI that I can find in http://uec-images.ubuntu.com/oneiric/current/
<negronjl> robbiew: Today's latest one ( for oneiric, m1.large ) is: ami-c95e95a0
<negronjl> robbiew: I am currently testing that one just to be sure
<robbiew> negronjl: do we do them for m1.xlarge?
 * robbiew was playing around with types 
<negronjl> robbiew: I haven't tried them but, now is as good a time as any so, trying now :)
<negronjl> robbiew: give me a sec
<robbiew> lol
<SpamapS> nijaba: http://bazaar.launchpad.net/~charmers/charm/oneiric/limesurvey/trunk/revision/10
 * SpamapS should have pushed that to a review branch.. bad bad SpamapS
<negronjl> robbiew: no xlarge images
 * SpamapS runs off to dentist appt that starts in 8 minutes
<negronjl> robbiew: I see cc1.4xlarge though
<negronjl> robbiew: test that one ?
<robbiew> negronjl: yeah...so I'm clearly off in the weeds :)
<robbiew> eh...nah
<robbiew> let me use the right settings first ;)
<robbiew> I had it working with the defaults
<negronjl> robbiew: k ... let me know if I can break anything for you :)
<robbiew> so I figured it was user error
<robbiew> lol...sure
<nijaba> SpamapS: thanks :) but this means that the install proc may run multiple times even for the same db.  does this work?
<nijaba> SpamapS: just proposed a merge for you
<fwereade> hazmat, you're right; I don't think a unit would automatically transition from installed to started, in the current trunk
<fwereade> hazmat, but the only valid starting state for the original code was None
<fwereade> hazmat, and given the clearly-expressed intent of the install transition's success_transition of "start", it seemed like a clear and obvious thing to do :)
<fwereade> gn all
<marcoceppi> SpamapS: I'm still writing tests for the helper, so I'll just push that up as a different merge request later
<robbiew> hazmat: any chance we could get some workitems listed on https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-roadmap?  at least around the features we want to deliver?  I need to confirm, but i think we can attach any bugs we want to fix.
 * robbiew is sure SpamapS would love to help you guys with this (wink wink)
 * hazmat takes a look
<robbiew> hazmat: doesn't necessarily have to be you....if there's a list of who's doing what, I can put the info in myself...is it the kanban?
<hazmat> robbiew, its in other blueprints
<robbiew> oh
<robbiew> hazmat: juju project blueprints?
 * robbiew can look there
<hazmat> robbiew, https://blueprints.launchpad.net/juju
<Daviey> upstream vs distro blueprints.
<robbiew> hazmat: cool, thx
<robbiew> negronjl: so I'm still getting the error...maybe I'm missing something
<robbiew> do I need to use the hadoop-mapreduce charm now?
<robbiew> I was simply deploying hadoop-master, hadoop-slave, and ganglia
<robbiew> then relating the master and slave services...and ganglia and slave services
<negronjl> robbiew: you shouldn't ....
<negronjl> robbiew: the hadoop-mapreduce is there for convenience ... what are the steps of what you are doing ?
<robbiew> relate master slave....relate ganglia slave
<robbiew> pretty much m_3's blog post
<negronjl> robbiew: let me deploy a cluster ... give me a sec
<robbiew> it worked with default type and oneiric today....only difference is the 64bit types.
<robbiew> ok
<_mup_> juju/upgrade-config-defaults r428 committed by kapil.thangavelu@canonical.com
<_mup_> ensure new config defaults are applied on upgrade
<negronjl> robbiew: deploying now .. let me wait for it to complete and I'll see  what it does
<robbiew> cool
<robbiew> negronjl: I wonder if it's a weird java bug
<robbiew> negronjl:  was following the steps here: http://cloud.ubuntu.com/2011/11/monitoring-hadoop-benchmarks-teragenterasort-with-ganglia-2/
<robbiew> fyi
<robbiew> gotta run and pick up kids...and do the dad thing...will be back on later tonight.  If you find anything just update here or shoot me a mail
<negronjl> robbiew: ok ... I'll email it to ya
<robbiew> cool...good luck! :P
<SpamapS> nijaba: yes install just exits if it is run a second time on the same DB
<nijaba> SpamapS: k, good :)
<nijaba> SpamapS: do you code your charms locally or on ec2?
<SpamapS> nijaba: ec2
<SpamapS> the ssh problem with the local provider makes it basically unusable for me.
<SpamapS> nijaba: I was using canonistack for a while but it also became unreliable. :-P
<SpamapS> EC2 is still the most reliable way to get things done with juju. :)
<nijaba> SpamapS: which one?  canonistack or yours?
<SpamapS> us-east-1 ;)
<nijaba> was talking about openstack
<SpamapS> nijaba: pinged you back on the merge proposal.. I think the right thing to do is just re-delete the "already configured" check
<nijaba> sounds good to me
<SpamapS> nijaba: I only ever tried against canonistack.
<nijaba> k
<SpamapS> nijaba: I really like limesurvey. Its a lot more fun than wordpress for testing things. :)
<SpamapS> just have to convince it to stop using myisam. :)
<nijaba> SpamapS: hehe, it's been my learning project for both packaging and juju now :)
<nijaba> SpamapS: I still can't get the package in though
<nijaba> SpamapS: been ready for 2 years, lacks a sponsor for both debian and ubuntu
<nijaba> SpamapS: my merge should help you use InnoDB, it's now in the config
<SpamapS> nijaba: packaging PHP apps is so pre-JUJU
<nijaba> SpamapS: true, true
#juju 2011-12-07
<nijaba> SpamapS: merge done
<nijaba> resubmitted
 * nijaba off to see morpheus
<SpamapS> nijaba: merging now, THANKS!
<nijaba> SpamapS: my pleasure
 * nijaba really having fun
<_mup_> juju/ssh-known_hosts r427 committed by jim.baker@canonical.com
<_mup_> Initial commit
<nijaba> SpamapS: was wondering why having a readme is not mandatory for charms. Aren't they a bit dry without embedded docs?  Shouldn't juju offer a "man" option to access documentation about charms?
<SpamapS> nijaba: I've thought about exactly that, having a 'charm info xxx' command that intelligently looks for README* readme* and cats them together into $PAGER would be cool. :)
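[editor's note: a minimal sketch of the proposed subcommand; the function name and the assumption that READMEs sit at the top of the charm directory are mine]

```shell
#!/bin/sh
# Sketch of 'charm info': find README variants in a charm directory and
# feed them, concatenated, into the user's pager (less by default).
charm_info() {
    find "$1" -maxdepth 1 -type f -iname 'readme*' -exec cat {} + \
        | "${PAGER:-less}"
}
```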
<SpamapS> nijaba: I think they also need a maintainer: field in metadata.yaml
<nijaba> SpamapS: this last bit was acked a few days ago, IIRC
<nijaba> SpamapS: I'll open a bug :)
<adam_g> SpamapS: hey clint, should i push this lonely precise branch (lp:~gandelman-a/charm/precise/rabbitmq-server/900440) directly to lp:charm/precise/rabbitmq-server since there's nowhere to file a proper merge proposal?
<SpamapS> adam_g: how about you push the oneiric branch into precise, then do a MP against that?
<_mup_> juju/ssh-known_hosts r428 committed by jim.baker@canonical.com
<_mup_> Support machine recycling
<_mup_> Bug #901017 was filed: Juju should have a "info" or "man" option <juju:New> < https://launchpad.net/bugs/901017 >
<adam_g> SpamapS: ahh
<adam_g> SpamapS: hmm. no dice pushing the current lp:charm/rabbitmq-server up to lp:charm/precise/rabbitmq-server http://paste.ubuntu.com/762302/
<SpamapS> adam_g: right.. I guess I have to do the "initialize" step before we can push to that series. :(
<SpamapS> adam_g: can you at least push it to ~gandelman-a/charm/precise/... ?
<adam_g> SpamapS: yeah, already there: lp:~gandelman-a/charm/precise/rabbitmq-server/900440
<SpamapS> adam_g: so you can probably push the oneiric one to lp:~charmers/.....
<SpamapS> adam_g: then do the MP
<SpamapS> adam_g: meanwhile I'll try to figure out how we initialize the series
<adam_g> that might work
<adam_g> i could push to lp:~charmers/charm/precise/rabbitmq-server/trunk, suppose i'll propose against that...
<SpamapS> yeah that will work
<_mup_> juju/upgrade-config-defaults r429 committed by kapil.thangavelu@canonical.com
<_mup_> use lazy computation of default values instead of recording them to config state
<_mup_> juju/upgrade-config-defaults r430 committed by kapil.thangavelu@canonical.com
<_mup_> config value validation no longer returns defaults
<_mup_> juju/upgrade-config-defaults r431 committed by kapil.thangavelu@canonical.com
<_mup_> no longer explicitly touch defaults in upgrade, the lazy computation suffices.
<_mup_> Bug #901043 was filed: switch charm subcommand to change origin of charm and upgrade <juju:New> < https://launchpad.net/bugs/901043 >
<hazmat> SpamapS, is bug 900517 different than the upgrade config defaults issue?
<_mup_> Bug #900517: config-get on an int set to 0 does not return '0' but an empty string <juju:New> < https://launchpad.net/bugs/900517 >
 * SpamapS reads
<SpamapS> hazmat: its entirely possible that this was actually the same effect.
<SpamapS> hazmat: easy to test that hypothesis
 * hazmat does a UTSL
<hazmat> SpamapS, i still haven't managed to reproduce bug 861928, i suspect its timing dependent, if you do manage to reproduce, it would be helpful to attach the entire provisioning agent log
<_mup_> Bug #861928: provisioning agent gets confused when machines are terminated <juju:New for jimbaker> < https://launchpad.net/bugs/861928 >
<SpamapS> hazmat: interesting
<SpamapS> hazmat: you know.. kees was experiencing it on the oneiric version (r398) .. its possible that its been fixed inadvertently with some of the ZK / API fixes
<hazmat> SpamapS, yeah.. jimbaker fixed another provisioning agent bug post oneiric afaicr
<SpamapS> have we broken backward compatibility with r398 at all? I have half a mind to propose that we just put r427 in oneiric-updates
<SpamapS> features be damned. ;)
<SpamapS> only problem is.. we can't actually upgrade deployed environments
<hazmat> SpamapS, i doubt that's an issue in practice
<SpamapS> that we can't upgrade the provisioning agent?
<hazmat> SpamapS, that there are long-lived juju environments extant
<SpamapS> kees had one very long lived for doing sbuild fanout
<hazmat> SpamapS, but fair enough
<hazmat> SpamapS, he shut it down though.. 45 USD spent
<SpamapS> until it stopped working
<SpamapS> Anyway, I agree, nobody should have a long lived 11.10 juju cluster. :)
<SpamapS> would be good to come up with an upgrade story for 12.04's juju
<SpamapS> if william finishes the upstart job stuff.. we can at least put in the packages to stop/start the agents on upgrade
<hazmat> indeed that will be key, we can probably do some kind of dance around that, but the biggest question mark on the upgrade story is just coordinating a code drop/rev across a cluster of different release series
<hazmat> ideally just a binary drop..
<SpamapS> Which is why I think we're going to eventually have to host juju packages on the juju service nodes
<SpamapS> Otherwise precise won't be able to play in a "Q" managed cluster
<SpamapS> should be fairly easy... each juju package just needs to include a script which builds itself for every series you want to support
<SpamapS> and of course, we have to build a test suite which makes sure that actually works ;)
<hazmat> SpamapS, re the config set to 0, afaics its not an issue
<hazmat> hmm.. maybe it is
<hazmat> something sounds familiar
<SpamapS> I was thinking it might be an issue where the value might not be carefully checked for None
<_mup_> juju/sshclient-refactor r428 committed by kapil.thangavelu@canonical.com
<_mup_> refactor the sshclient (zk over ssh tunnel)
<_mup_> juju/sshclient-refactor r429 committed by kapil.thangavelu@canonical.com
<_mup_> increase the default timeout
<_mup_> juju/sshclient-refactor r430 committed by kapil.thangavelu@canonical.com
<_mup_> robust zk conn
 * SpamapS cheers hazmat on
 * hazmat falls asleep
<rog> mornin'
<TheMue> moo rog
<rog> TheMue: yo
<mpl> rog: is the example in the example dir of zookeeper working for you (that is, once you've replaced Init with Dial and fixed the err.String() calls)?
<rog> mpl: i'll try it
<mpl> rog: here I have two problems with it. 1) it doesn't return as it should if I don't have any zookeeper server running. 2) I get loads of error messages, coming apparently from this point: event := <-session (it doesn't get past there apparently).
<rog> mpl: yeah, me too - loads of time out errors
<rog> mpl: i think the timeout must be wrong
<rog> mpl: yeah, the timeout should be 5e9 not 5000
<rog> mpl: BTW i'm not sure what it should do if there's no zk server running
<rog> mpl: here's my updated version: http://paste.ubuntu.com/762565/
<mpl> rog: well, I don't know what it should do, but err should be != nil when Dial fails, and it seems it's not the case for me.
<rog> mpl: i'm not sure that Dial can ever fail
<mpl> oh
<mpl> how come?
<rog> mpl: because the connection itself is asynchronous
<mpl> ah yes
<mpl> good point, thx
<mpl> so that err check is pretty moot
<rog> mpl: i think that's wrong, and gustavo and i have talked about changing it in the past, but the changes haven't been made yet
<mpl> ok, another thing I don't get, why do I get tons of messages and not just one? that chan read is not in a loop.
<rog> mpl: looking at zk C source, it looks like the only way it can return an error is if the hosts arg is malformed
<rog> mpl: the messages are printed by the zk client code
<rog> mpl: (logging is turned on by default, which i think is wrong too)
<mpl> rog: you mean they come from underlying calls of Dial?
<rog> mpl: yeah - they come from within the C API
<mpl> rog: and not in any case as a result of this: "event := <-session" ?
<rog> mpl: indeed - that blocks until the connection is made. i don't know if zk ever decides that it can't connect.
<mpl> rog: ok, that's reassuring then,  thx.
<rog> mpl: you can turn the debugging messages off
<mpl> ah cool, it finally worked.
<rog> mpl: zookeeper.SetLogLevel(0)
<mpl> good to know, thx.
<mpl> rog: ok, I'll elaborate from that example to play with ssh.
<rog> mpl: sounds good
<TheMue> re
<hazmat> g'morning
<TheMue> moo hazmat
<TheMue> for documentation purposes: are there some special bazaar configuration settings for juju?
<rog> TheMue: not as far as i know
<TheMue> fine, makes it easier
<TheMue> I'm working on a "Getting Started"
<fwereade> hazmat, is there some reason you know of for the particular shape of the code around CharmUpgradeOperation?
<fwereade> hazmat, because the workflow is perfectly capable of synchronising the state if we make the charm upgrade much more like a normal transition, but it's much hairier if there's a reason *not* to do it as a normal transition
<hazmat> fwereade, not sure what you mean
<hazmat> fwereade, you mean push more of the operation out of the watch callback and into the transition?
<fwereade> hazmat, that everything done in CharmUpgradeOperation ought IMO to be done on the lifecycle, like the other things that happen as part of a state transition
<fwereade> hazmat, and if we do that we can easily just call "self.workflow.synchronize(executor)" in place of the boolean tangle in the original MP
<hazmat> fwereade, hmm. so my thought there is it's not something that is manageable completely internal to the lifecycle, it depends on external mutable persistent settings, which is very different from anything else in the lifecycle
<fwereade> hazmat, on the service's charm id?
<hazmat> ie. you can't just call lifecycle.upgrade() and expect it to work, the external state needed to be put in place first.. whereas you can call any of the other lifecycle methods
<hazmat> fwereade, on the upgrade flag
<fwereade> hazmat, hmm, hadn't had that perspective
<hazmat> fwereade, i thought the plan was not to do anything on upgrade_error
<hazmat> fwereade, how does this issue arise?
<fwereade> hazmat, you recall the plan to make the workflow know how to set up the lifecycle and executor to match the current state
<fwereade> hazmat, to do so, we need to be able to detect the errors which occur while the executor is paused, so we can restore it correctly
<hazmat> fwereade, i thought we'd moved on to: it's an easy thing to distinguish in the upgrade transition, and we'll be dealing with disconnected op sync anyways, so an exact match isn't necessary (queueing in the background)
<hazmat> fwereade, the error from when the executor is paused is noted in the state
<fwereade> hazmat, how is it noted?
<fwereade> hazmat, we don't even try to fire a transition until some time after we've stopped the executor
<hazmat> fwereade, although juju could probably use a more robust setup there from pause, to enclose the rest in a try/except block
<hazmat> fwereade, so from pause to transition, its set a zk value, and extract a charm to disk
<hazmat> if the transition/hook fails we'll get into a recorded error state
<fwereade> hazmat, and if anything goes wrong during the extract or the zookeeper set, we'll be in a weird state
<hazmat> fwereade, a try/except around the others can manually fire transition to an error state
<hazmat> on error
<hazmat> fwereade, it's an odd scenario regardless if we have a half-extracted charm on disk
 * hazmat ponders
<fwereade> hazmat, agreed, but I don't think we can guarantee that that will *never* happen
<hazmat> fwereade, agreed, although we can do a better job of minimizing it, but it's not clear that attaching more to the error state is helpful wrt retry, the coordination state is gone on retry
<hazmat> the flag is cleared, and we don't know that we can safely execute the upgrade hook again, because we don't know the state on disk or zk of the charm
<hazmat> and if we re-enter the entire upgrade operation, we don't have the coordination state to trigger any changes, and it will early exit
<fwereade> hazmat, isn't it just down to the order of operations?
<hazmat> perhaps
<fwereade> hazmat, if we extract, then set in ZK, then fire the hook
<hazmat> fwereade, i don't see how that helps, the flag is cleared
<hazmat> fwereade, and you can't set the flag in an error state
<hazmat> fwereade, you're right though, an error here should be recorded as a charm upgrade error
<fwereade> hazmat, because we can know by the unit charm id whether or not the extraction of the latest charm has completed; if it has we can move straight on to firing the hooks(or not) according to the "resolved" command
<fwereade> hazmat, if the charm ids don't match, we start the operation from scratch
<fwereade> hazmat, (when we retry)
<hazmat> fwereade, so right now error states always refer to hook errors..
<fwereade> hazmat, from the POV of the workflow state, which represents what the unit is actually doing, I feel that "half-extracted charm that's 100% broken" should absolutely represent an error
<hazmat> fwereade, it definitely should, i'm just trying to work through the implications of changing the meaning of an error state, what retry means in this context, and changing the interactions/responsibilities of lifecycle compared to any extant uses.
<hazmat> fwereade, there's a notion that upgrades flags shouldn't survive restarts, which is one reason why we cleared the flag early
<hazmat> i'm trying to recall if there was more to it than that
 * SpamapS stretches and yawns
<hazmat> fwereade, so when would the upgrade flag get cleared?
<fwereade> hazmat, my idea is that we clear the upgrade flag as soon as we see it, but we kick off an upgrade_charm transition, which is "started"->"started"
<fwereade> hazmat, if we're not in a started state we just bail before we even try the transition
 * hazmat nods
<fwereade> hazmat, the lifecycle.upgrade_charm will do the early parts before stopping the hooks and quietly bail out on errors, equivalently to now
<fwereade> hazmat, but once we hit the stop-hooks-start-messing-with-disk-state point, any subsequent errors should come out and be detected as transition failures
<hazmat> fwereade, how do you re-enter the upgrade charm state?
<hazmat> error state that is
<hazmat> on a process restart
<robbiew> hazmat: just got an email from fernanda...TZ mixup?
<fwereade> hazmat, it's just an existing workflow state, I'm already in that state when I come up
<hazmat> robbiew, doh.. indeed that is tz mixup, i thought it was +1 hr
<hazmat> fwereade, but the process mem state is different
<hazmat> fwereade, ah.. so the executor is still stopped
<hazmat> because we never started the lifecycle, and we're not listening to any rel lifecycles
<fwereade> hazmat, lifecycle.running and executor.running are not especially closely related
<hazmat> fwereade, yup.. so if we restart in a charm upgrade error state.. the lifecycle is stopped, the exec is running, but nothing feeding into it
<fwereade> hazmat, the executor needs to be stopped during upgrade error states
<fwereade> hazmat, all the rest of the time it's fine
<hazmat> fwereade, how does it get stopped on restart
<fwereade> hazmat, we just don't start it explicitly, we let the workflow do so if it's in a state which needs it
<hazmat> fwereade, and how is it any different than the lifecycle just being stopped
<fwereade> hazmat, so it's just "self.workflow.synchronize(self.executor)" and then we're in the state we must have been in when we left off last time
<fwereade> hazmat, from outside perspective no different, I guess -- no hooks are executing -- but... well, why exactly are we explicitly stopping the executor when we could just stop the lifecycle like we do with, say, configure?
<fwereade> hazmat, ...only just thought of that :/
<hazmat> robbiew, just rescheduled for 20m from now
<robbiew> hazmat: cool
<hazmat> fwereade, because the ability to run a hook now (ahead of any queued hooks) has a safety notion that the executor is stopped, in part to guarantee that there are no other currently executing hooks
<hazmat> fwereade, i need to switch tracks for a little bit, but i'll definitely ponder this some more
<fwereade_> hazmat, isn't the reason that the unit relation lifecycles' schedulers could still be busily executing queued hooks at any stage?
<hazmat> fwereade, not sure if you saw this.. because the ability to run a hook now (ahead of any queued hooks) has a safety notion that the executor is stopped, in part to guarantee that there are no other currently executing hooks
<fwereade_> hazmat, exactly so
<hazmat> fwereade, i need to switch tracks for a little bit, but i'll definitely ponder this some more.. i think this is worthwhile.. part of the issue though, on either an extract failure or a state change failure, is that it signals a significant problem
<fwereade_> hazmat, I think it comes down to my conviction that we're better off restoring process state on startup -- which state can be encapsulated in 2 bools -- than we are by complicating the logic we run all the time
<fwereade_> hazmat, ok, ttyl -- ping me to continue when you're free :)
<hazmat> fwereade_, isn't restoring the state as simple as is -> if not self.running: self.lifecycle.start, else self.executor.start()
<fwereade_> hazmat, well, "started" implies both running, but yeah, it's not complicated
<fwereade_> hazmat, you seemed at one stage to be arguing against it
<hazmat> fwereade_, actually i was hoping for that since it was the simplest thing, but the notion that upgrade error should encapsulate non hook errors has some merit
<hazmat> fwereade_, definitely worth exploring, and i think a good track
<fwereade_> hazmat, I think it is the simplest thing
<SpamapS> http://www.ustream.tv/channel/vclug-venturaphp  .. me.. talking about juju to a local LUG ... unfortunately, the demo failed because I had a lucid AMI in my environments.yaml
<SpamapS> totally forgot that I had been monkeying around with the AMI. :-P
<SpamapS> Pretty much flies off the rails at 22:00
 * SpamapS goes off to get the family out so he can get work done.
<hazmat> fwereade, connectivity problems?
<fwereade> hazmat, yeah, sorry about that, didn't actually notice it happening until just now
<hazmat> fwereade, no worries
 * kees waves "hi"
<kees> so, I discussed some of the trouble I had with the provisioner here last sunday. not the best time for catching people, i realize.
<kees> SpamapS pointed me to where cloud-init does its work, but ultimately I wasn't able to get the provisioner back on its feet.
<kees> hazmat: what's the best way for me to help debug the troubles I ran into?
<SpamapS> kees: using the PPA version would go a long way to figuring out if this is already fixed or not.. which I suspect it may have been
<SpamapS> kees: we still need to make the agents more robust and restartable, which fwereade is working on right now.. but I think some of the ZK stuff has been fixed since 11.10 released
<kees> SpamapS: how do I find AMIs with the PPA version built-in?
<kees> SpamapS: and why not SRU these fixes to Oneiric?
<SpamapS> kees: you don't need an AMI.. you just add 'juju-origin: ppa' to your environment settings
<SpamapS> kees: Its hard to isolate the fixes because there have been massive changes.
<kees> SpamapS: hrm, let me try...
<SpamapS> kees: also if your client version is from the PPA, it will automatically deploy with the PPA
 * SpamapS curses himself for forgetting to run the test suite before commit to trunk.. https://launchpadlibrarian.net/86855907/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr428-1juju2~precise1_FAILEDTOBUILD.txt.gz
<kees> SpamapS: if I just set "juju-origin: ppa", is that sufficient, or do I need to also install juju from the PPA?
 * SpamapS puts on the cowboy hat
<hazmat> kees, that's sufficient
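[editor's note: assembled, such an environments.yaml section might look like the fragment below; the env name, bucket, and secret values are placeholders, not from the log]

```yaml
environments:
  sample:
    type: ec2
    control-bucket: juju-sample-abc123    # placeholder; autogenerated normally
    admin-secret: 0123456789abcdef        # placeholder
    default-series: oneiric
    juju-origin: ppa                      # deploy new machines from the PPA
```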
<SpamapS> Has the ZK schema bumped since r398?
<hazmat> SpamapS, no
 * kees attempts a bootstrap...
<hazmat> SpamapS, there's been some minor additions, but no changes to the cli interactions
<kees> one of the really goofy bugs I ran into was that --environment seemed to be ignored by a lot of commands
<hazmat> kees, that's odd just about every command takes that option
<hazmat> kees, it has to be specified after the sub command.
<kees> i.e. I tried to do  juju bootstrap --environment sample2 after my "sample" environment's provisioner freaked out.
<kees> and then juju status --environment sample2 always failed.
<kees> then I destroyed sample2, and then juju status couldn't find sample any more
<kees> so I had to hard-code the instance list in the source to get control back.
<SpamapS> kees: did they both have the same control-bucket ?
<kees> what is a control-bucket? :)
<SpamapS> kees: the thing that uniquely identifies an environment in the provider...
<hazmat> kees, its an s3 bucket that's spec'd in environments.yaml.. its env specific
<hazmat> it gets autogenerated the first time around, but it can't be copied between multiple environments, without causing issues
<kees> ah, I see that now. does that get added automatically? I don't remember adding that or admin-secret
<kees> yeah, that would totally be what happened then
<kees> I just copied the entire "sample" section and changed the name.
<hazmat> hmm.. we should probably warn/error if we see that come up
<kees> heh, d'oh.
<SpamapS> hazmat: yeah control-bucket should have the env name in it.. so we should be able to error out.. "control bucket foo has env name X not Y"
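[editor's note: the error SpamapS sketches could come from a check along these lines; the function name and message wording are hypothetical]

```shell
#!/bin/sh
# Sketch: refuse to use a control bucket that doesn't embed the
# environment name, catching copy-pasted environment sections like
# the sample/sample2 mixup described above.
check_control_bucket() {
    env_name="$1" bucket="$2"
    case "$bucket" in
        *"$env_name"*) return 0 ;;
    esac
    echo "control bucket $bucket has env name mismatch with $env_name" >&2
    return 1
}
```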
<kees> seems like that should be stored somewhere else instead of injected into environment.yaml
<kees> okay, well, that explains that glitch at least. :)
<SpamapS> kees: its used by clients to find the ZK server, so it has to be in environments.yaml
<SpamapS> Tho one thing that would work is to change it to control-bucket-prefix: .. and by default just prepend that to the env name.
<hazmat> SpamapS, that would create an implicit fail scenario around changing an env name
<kees> it might be nice to have the finding of the master instance show up in --verbose (i.e. the processing of the ec2 instance list, etc)
<hazmat> although for local provider it already is
<hazmat> since we use the env name on disk
<kees> I spent a lot of time trying to figure out how juju was deciding which was a master instance when I broke it with sample2.
<hazmat> kees, its always machine 0 atm
<SpamapS> hazmat: err, env name can't be changed AFAICT, its used for so many things... ec2 group names for one.
<hazmat> SpamapS, ugh.. good point
<kees> hazmat: I mean the stuff before "Connecting to environment".
<hazmat> SpamapS, that sounds quite sensible then.. along with a nice warning in the doc about it
<kees> hazmat: when I bootstrapped using the same control-bucket, suddenly juju would only talk to the new instance
<rog> could we derive the control bucket name by combining the env name and the access id in some way?
<rog> thus removing the need for a user to invent another name
<kees> https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment <- this could add some details about what the control bucket is.
<_mup_> Bug #901311 was filed: automatically prefix control bucket with the environment name <juju:New> < https://launchpad.net/bugs/901311 >
<rog> hazmat: could that work?
<_mup_> juju/ssh-known_hosts r429 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<SpamapS> rog: I'm a little hesitant to make use of the access key id in any permanent context
<hazmat> rog, not sure we want to include access id, it's tied to an external/provider notion
<rog> e.g. envname + salt + hash(salt+accessid)
<SpamapS> rog: they can be created and discarded quite often
<hazmat> rog, &
<kees> is there documentation on the potential contents of environment.yaml?
<hazmat> rog, take orchestra for example.. what's an access id.. or local provider, its a provider specific notion
<kees> e.g. how would I discover "juju-origin: ppa" otherwise?
<rog> hazmat: does orchestra have a control-bucket field?
<hazmat> kees, https://juju.ubuntu.com/docs/provider-configuration-ec2.html?highlight=origin
<hazmat> rog, doh.. good point
<kees> hazmat: ah-ha! thanks. I knew I'd found that before at some point.
 * SpamapS goes OTP
<kees> hazmat: maybe link to that from https://juju.ubuntu.com/docs/getting-started.html#configuring-your-environment ?
<hazmat> rog, the other thing with access id, is it assumes the identity is shared across all users of the env
<hazmat> rog, which is true/required atm for bootstrap/destroy-environment
<kees> what about making "juju-origin" be "PPA" by default, since that should always be the latest/greatest? that could be SRUed to oneiric.
<rog> hazmat: that's true.
<rog> hazmat: but it might be a useful default
<rog> hazmat: if there's no entry for control-bucket, for example
<hazmat> rog, maybe not though.. they need access to bucket, which we have setup as private by default atm.. i just want to leave options open for delegation of access
<rog> hazmat: if we want multiuser access, the bucket must be readable by other users, right?
<hazmat> rog, yeah.. i'm not sure we'd ever make that not a required arg for ec2; if it's an auto/deterministic setting.. well, you can change your id or switch accounts, and then poof, your env is gone
<kees> hazmat: okay, so, I spawned a bunch of units, and I've hit exactly what I saw on Sunday.
<kees> machines:
<kees> ...
<kees>   6: {dns-name: '', instance-id: i-0b65094c}
<kees> ...
<kees>       builder-debian/5:
<kees>         machine: 6
<kees>         public-address: null
<kees>         relations: {}
<kees>         state: null
<kees> machine 6 hasn't been noticed, and the unit stays "public-address: null"
<rog> hazmat: isn't that already true? (given that the bucket is private)
<hazmat> kees, also fwiw the latest client btw shows more information on status regarding machine state (pending from the provider, running, etc)
<hazmat> kees, public-address is null till the machine actually comes up and starts the machine agent..
<hazmat> it's not instantaneous
<hazmat> it takes a minute for the machine to launch, have packages installed, and become available
<kees> ah, well, it just came up. heh. sunday I waited though. it wasn't up after an hour.
<hazmat> kees, definitely broken then, but it's not something you can determine instantaneously is all i'm saying
<kees> hazmat: right, absolutely.
<hazmat> kees, what i'm trying to verify though is.. A) is the bug something we've already fixed in the ppa B) if not what's the provisioning agent log look like
 * kees nods
<kees> let me try to trigger the missing machine fault, one sec.
<kees> kaboooom
<kees> here were my steps:
<kees> $ juju terminate-machine 10
<kees> oops, ignore that
<kees> steps:
<kees> $ juju remove-unit builder-debian/7
<kees> $ juju terminate-machine 10
<kees> $ juju add-unit builder-debian
<kees> at which point the provisioner explodes with python backtraces
<kees> 2011-12-07 08:52:01,217 provision:ec2: twisted ERROR: KeyError: 'Message'
<kees> 2011-12-07 08:52:01,217 provision:ec2: twisted ERROR: Logged from file provision.py, line 156
<kees> what logs can I provide? :)
<hazmat> kees, awesome. the log is in /var/log/juju .. i think its provisioning-agent.log but i'm not sure of the exact filename
<hazmat> kees, its on machine 0 of the env
<hazmat> i think i kept using destroy-service instead of remove-unit when i was trying to reproduce this
<drt24> So I am trying to use orchestra as per http://cloud.ubuntu.com/2011/09/oneiric-server-deploy-server-fleets-p2/ and this is failing because following those instructions does not appear to result in pxe booting being setup correctly on the provisioning server.
<kees> hazmat: http://paste.ubuntu.com/762905/
<drt24> I now have got the dhcp server running but it still isn't configured to do pxe things properly.
<drt24> and so I get "No filename" errors when trying to boot client VMs
<hazmat> kees, thanks thats very helpful
<hazmat> that looks like a bug in txaws
<drt24> (this is on oneiric VMs)
<kees> hazmat: cool, excellent.
<kees> hazmat: I assume that moving the ppa fixed the bring-up bug, or it's a hard race to lose and I just got "lucky" on sunday
<hazmat> kees, it's really not racy normally, i'm sorry that was your first juju experience. the client cli status reporting is much better now about keeping the user informed about what's going on (is the provider machine up, is juju ready on the machine).  the provisioning bug in particular has been a little hard to reproduce, and it's been unclear what version and what the bug is.. but i think thanks to your help we should be able to fix that in the next day or two. and indeed it seems to be a bug in txaws, in that it varies/reproduces based on ec2 error response variation.
<kees> cool, thanks for looking into it!
<TheMue> hazmat: you wrote about a presentation about juju. would you please send it to me?
<kees> it was frustrating for sure, but it was still _way_ easier to bring up a bunch of identical instances this way.
<kees> the charm stuff is nice :)
<hazmat> TheMue, we have them shared in an ubuntu one folder atm
<TheMue> hazmat: ah, ok. still have not used my account. so I'll try it now.
<TheMue> hazmat: does it cover the dependencies of external components (like zk) and internal/external modules and libraries?
<hazmat> TheMue, no
<hazmat> TheMue, its a very high level architecture diagram
<TheMue> hazmat: ok, but I think it will help
<fwereade> hazmat, btw, need an opinion on how it's acceptable to detect unexpected shutdowns during the critical window of filesystem-screwage during upgrade-charm
<fwereade> hazmat, the workflow state seems like such an obvious place to put it, but I don't think it's a good idea to fire a transition while midway through executing another transition
<fwereade> hazmat, so if I were to do that I'd have to have a callback on workflow that called set_state on itself explicitly
<fwereade> hazmat, which feels like a bit of a perversion of the state machine
<fwereade> hazmat, hm, I have to stop now :( I'll pop back on later
<hazmat> fwereade, doh.. sorry.. definitely i think your idea is  good (collapse part of upgrade op into the transition), go for it
<fwereade> hazmat, the issue is that I feel I should be able to handle the fact that the process could suddenly die while we're half way through extracting the charm
<hazmat> fwereade, that's independent really of the workflow aspect
<fwereade> hazmat, well, the trouble is it's intimately bound up with it, because if we come up from an incomplete upgrade we need to go into upgrade_charm_error state
<fwereade> hazmat, it certainly can't go on the lifecycle, we don't want that explicitly controlling the workflow
<kees> hazmat, SpamapS: if you're interested, I've got another set of juju blog posts up now:
<kees> http://www.outflux.net/blog/archives/2011/12/07/juju-bug-fixing/
<kees> http://www.outflux.net/blog/archives/2011/12/07/how-to-throw-an-ec2-party/
<hazmat> fwereade, sure, that state can be signaled by the error handler, but the aspect of doing the upgrade in such a way as to handle unexpected errors is independent of the location of the code
<fwereade> hazmat, I guess it could go on the unit agent itself, but it's a step in the opposite direction from the (IMO nice) move of state-reconciliation from unit agent to workflow
<fwereade> hazmat, the workflow really feels like the right place for it
<hazmat> fwereade, so what happens on a retry?
<hazmat> of upgrade_error
<fwereade> hazmat, the usual: if unit charm id doesn't match service charm id, download and unpack before running the hooks
<fwereade> hazmat, and if it does, we know we're recovering from a state post-successful-replace, and we just fire the hooks if we're asked
<hazmat> fwereade, sounds good
<fwereade> hazmat, I'm just trying to figure out whether an "unlicensed" state transition, that doesn't go through the normal transition logic, is in any way acceptable
<hazmat> fwereade, just make an additional transition
<hazmat> fwereade, what's the scenario?
<fwereade> hazmat, and it's explicitly OK to fire a transition in the course of another transition?
<hazmat> fwereade, no.. but the lifecycle can call other lifecycle methods
<fwereade> hazmat, when we hit the point of no return *something* needs to record the fact that we're in a risky state
<kickinz1_> hi!
<fwereade> hazmat, as said above I think the workflow is the right place for it
<SpamapS> kees: ty, reading your posts now. ;)
<hazmat> fwereade, huh? the transition handler itself is supposed to be risky/failable.. that's the benefit of it, and i thought the point.. it will record failures
<kickinz1_> May I ask a question?
<SpamapS> kees: btw, you should be able to use us-west-2 now ;)
<hazmat> kickinz1_, sure
<jimbaker> kees, cool post. i'm working on the ssh key management now, so that will take one step out of your process
<kickinz1_> I'm in the process of using juju with orchestra
<kickinz1_> When creating the boot strap, it fails with this error:
<kickinz1_> /root/.juju/environments.yaml: environments.orchestra.default-series:
<kickinz1_> The only place I see this is on bugs, but while using etckeeper.
<hazmat> fwereade, ah.. i think we're agreeing.. i think the point of no return stuff should be in the transition handler with a conditional guard, hence failures there record state, and can be retried. sounds good.
<kickinz1_> (https://bugs.launchpad.net/bugs/872553)
<_mup_> Bug #872553: [SRU] upon creating a node via juju & orchestra, etckeeper hangs <verification-done> <Orchestra:Invalid by andreserl> <etckeeper (Ubuntu):Fix Released by kirkland> <orchestra (Ubuntu):Invalid by andreserl> <etckeeper (Ubuntu Oneiric):Fix Released by kirkland> <orchestra (Ubuntu Oneiric):Invalid by andreserl> < https://launchpad.net/bugs/872553 >
<fwereade> hazmat, I'm not totally certain whether we're talking past one another or not, 1 sec
<SpamapS> kickinz1_: can you maybe pastebin the whole error, like from $ juju ....   to the next $ ?
<kickinz1_> ok
<fwereade> hazmat, I'm talking about something like this: http://paste.ubuntu.com/762978/
<fwereade> hazmat, on UnitWorkflowState
<fwereade> hazmat, damn, really must go, bbl
<kickinz1_> http://pastebin.com/NNqBkiNn
<kickinz1_> any idea?
<kickinz1_> I'm using precise
<hazmat> fwereade, the state changes should go in the watch callback not the workflow
<hazmat> fwereade, the existing upgradecharm op will continue to exist, and it can do some basic checks, but it will kick off the state change after clearing the upgrade flag, the transition handler holds the rest of the code to the upgrade, it should be retryable cleanly, if it fails the unit goes into an upgrade_charm_error.
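The retry behaviour fwereade and hazmat converge on could be sketched like this. A hypothetical illustration only, with invented names; the point from the chat is that the transition handler's work is conditionally guarded so an interrupted upgrade can be retried cleanly.

```python
# Sketch of the retryable upgrade step: on retry of upgrade_charm_error,
# only redo the risky download/unpack if the unit's charm id still differs
# from the service's; otherwise we recovered post-successful-replace and
# just fire the hooks.
def retry_upgrade(unit_charm_id, service_charm_id, download_and_unpack, run_hooks):
    actions = []
    if unit_charm_id != service_charm_id:
        download_and_unpack()        # incomplete upgrade: redo the extraction
        actions.append("unpacked")
    run_hooks()                      # fire upgrade hooks either way
    actions.append("hooks")
    return actions
```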
<hazmat> kickinz1_, do you have a default-series set in your environments.yaml ?
<_mup_> Bug #901343 was filed: juju.control.tests.test_status.StatusTest.test_render_dot broken <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/901343 >
<kickinz1_> no
<kickinz1_> I'm getting the source of juju to look at what it expects.
<kickinz1_> Funny names...."astounding, magnificent, overridden, puissant"...
<kickinz1_> thanks! default-series: oneiric made it work!
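The key kickinz1_ was missing lives in ~/.juju/environments.yaml. A minimal sketch of the relevant fragment (layout for illustration only; other orchestra-specific keys the provider requires are omitted here):

```yaml
environments:
  orchestra:
    type: orchestra
    default-series: oneiric   # the missing key behind the bootstrap error
```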
<niemeyer> Hello!
<mainerror> o/
<niemeyer> mainerror: Yo
<niemeyer> rog: You'll like some of the upcoming improvements on lbox..
<rog> niemeyer: cool
<niemeyer> Just need to test them now.. no Launchpad connection on the flight :)
<rog> niemeyer: a couple of new reviews for you BTW
<niemeyer> rog: and you just got one
<rog> niemeyer: yay!
<rog> niemeyer: make that 3 new reviews - i'd forgotten about that one!
<rog> niemeyer: i've updated the cloudinit package merge proposal
<niemeyer> rog: Sorry, btw, I did a big mess before leaving while working on lbox..
<niemeyer> rog: Repeatedly sending the same message
<rog> niemeyer: that's fine. i just ignored 'em all :-)
<rog> niemeyer: was there any signal in there, in fact?
<drt24> solution to my problem: run sudo orchestra-import-isos and then add and remove the cobbler server configuration
<niemeyer> rog: Any signal? How do you mean?
<rog> niemeyer: did any of the messages mean anything?
<niemeyer> rog: No, in the end I was on crack consistently, because both changesets were already merged
<rog> niemeyer: i thought so. just checking.
<rog> niemeyer: http://codereview.appspot.com/5444043/ in case you didn't get a notification email
<rog> niemeyer: (that one's independent of the others)
<niemeyer> rog: Thanks
<niemeyer> rog: That was one of the things I fixed in the plane, btw
<rog> niemeyer: cool
<niemeyer> rog: It should now send a ptal
<niemeyer> rog: The other is to detect the -cr automatically after first use
<rog> niemeyer: ideally it should let me look at the codereview page before sending any mail
<niemeyer> rog: and the other is to checkout target branches automatically for diffing
<rog> niemeyer: just to do a last sanity check
<niemeyer> rog: and finally I've added support for default flags
<rog> niemeyer: with Go reviews, i often end up uploading several times before mailing
<niemeyer> rog: Most of that is untested, though, obviously.. will be fun to see what works :-)
<niemeyer> rog: That was done too
<rog> niemeyer: +50 for auto downloading!
<niemeyer> rog: there's a new -prep flag now
<rog> oh yes, this one often bites me
<rog> :
<niemeyer> rog: You can use it at any time to upload without requesting the review
<rog> i'll do -target ../foo-trunk
<niemeyer> rog: It will also leave the Merge Proposal in Launchpad as Work In Progress, rather than Needs Review
<niemeyer> rog: We should put that in the branch itself
<rog> and lbox propose doesn't check that the dir exists until after the file's been edited
<niemeyer> rog: It looks for ".lbox"
<rog> (the description)
<niemeyer> Ok, let me give you some quick reviews
<kees> SpamapS: us-west-2> yay! I will save a little money and a little latency. :)
<kees> jimbaker: excellent! I look forward to that. :)
<niemeyer> kees: <envy>
<kees> niemeyer: ?
<kees> niemeyer: oh, that I have an ec2 region in my state?
<niemeyer> kees: Yeah :-D
<SpamapS> https://code.launchpad.net/~clint-fewbar/juju/fix-dot-test/+merge/84827
<SpamapS> Would appreciate a quick review cycle on that.. fixes the test suite on trunk.
<kickinz1_> bye!
<nijaba> SpamapS: Hello.  Can you think of anything else that would be needed for Limesurvey's charm, or should I move on to roundcube?
<SpamapS> nijaba: if it has the ability to make use of readonly slaves so we can scale it out even more, that would be cool, but its not really necessary. ;)
<nijaba> SpamapS: I do not think this is possible in Limesurvey
<SpamapS> nijaba: I plan to write a mysql-proxy charm when subordinate charms land that will direct SELECT to a slave, and all others to a master. Should be interesting. :)
<nijaba> SpamapS: sounds really cool :)
<SpamapS> I wonder if MySQL cluster works in any useful way on EC2.. probably not with the latency spikes.
<SpamapS> 7.2 will have memcache protocol access built in, that should be cool. :)
<nijaba> SpamapS: ok, so I'll move on to Roundcube, making a first version of it that carries smtp/imap server addresses in the config.  Will update it once someone has charmed a mail server to depend on it
 * nijaba wonders if dependencies can be made optional
<SpamapS> nijaba: really even after the mail server is charmed, it will be useful to be able to just set it in the configs and not have to relate anything.
<SpamapS> nijaba: yes, optional: true can be added as an attribute after the interface: xxx
<SpamapS> which I find completely brain imploding.. requires: optional..
<SpamapS> :-P
<nijaba> SpamapS: so that's what we should do
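As a concrete sketch of where SpamapS says `optional: true` goes in metadata.yaml, a hypothetical requires block for the planned roundcube charm (the relation and interface names here are made up for illustration):

```yaml
requires:
  mail:
    interface: smtp
    optional: true   # charm can fall back to smtp/imap addresses set in config
```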
<rog> niemeyer: off for the day, see ya tomorrow?
<rog> ttfn all
<niemeyer> rog: Yeah, have a good evening
<niemeyer> rog: and you have another review
<_mup_> juju/sshclient-refactor r431 committed by kapil.thangavelu@canonical.com
<_mup_> cleanup cli output when connection refused
<SpamapS> bcsaller: I noticed you had some subordinate branches in review. How close are we to having some things to play with? I had this *crazy* idea for a charm..
<SpamapS> mk-query-digest can take tcpdump output, and tell you what queries sucked
<SpamapS> So.. throw that on your apps for 5 minutes, relate it back to somewhere to store the output.. and you can get like, an instant picture of your app
<SpamapS> and where it sucks
<bcsaller> SpamapS: that's cool. While the feature set is getting closer to alpha, it's still at the starting gate in terms of reviews.
<bcsaller> SpamapS: and history shows that always takes a while
<SpamapS> Yeah
<SpamapS> I'm eager
<SpamapS> I have a bunch of cool ideas and I want to try them out. ;)
<SpamapS> negronjl: btw, would appreciate a review on this https://code.launchpad.net/~clint-fewbar/charm/oneiric/mysql/add-config/+merge/84697
<negronjl> SpamapS: ok ... working on it now
<_mup_> juju/ssh-known_hosts r430 committed by jim.baker@canonical.com
<_mup_> Do not create known_hosts files in actual home directory when testing
<_mup_> juju/ssh-find-zk r430 committed by jim.baker@canonical.com
<_mup_> Initial refactoring
<_mup_> juju/ssh-find-zk r431 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<negronjl> SpamapS: looks good ... deployed with multiple changes in config.yaml and it all works good as far as I can tell.  Approved.
<negronjl> SpamapS: should I merge it as well ?
<SpamapS> negronjl: no I think the proposer should merge if they're a member of charmers
<SpamapS> negronjl: and thanks for the review!
 * SpamapS is eager to start writing tests as well
<negronjl> SpamapS: no prob.
<SpamapS> Pushed up to revision 69.
 * SpamapS giggles like beavis and butthead
<hazmat> niemeyer, i wanted to try one of the lbox -cr reviews, but it seems to have trouble.. it wants the -for branch to point to something on disk, but it doesn't like a trunk checkout, any ideas?
<_mup_> Bug #901463 was filed: SSH Client code and output cleanups <juju:In Progress by hazmat> < https://launchpad.net/bugs/901463 >
<hazmat> also wasn't clear on the -bp if it wants the name of a blueprint or a link,
 * SpamapS goes to get a RedBull from 7-11...
<hazmat> bcsaller, can subordinate charms talk to each other in the same container level isolation that they can with the master?
<bcsaller> hazmat: they need a relationship defined
<hazmat> bcsaller, but would that be a normal s2s rel? or a container scoped one
<bcsaller> hazmat: we talked about that, it wasn't clear that it was a high priority use case, I would think we'd honor the subordinate flag on the relationship. It would be a special case though as they are not subordinate to each other
<bcsaller> hazmat: what use case do you see?
<hazmat> bcsaller, in terms of impl if the container scope is just another type of relation
<hazmat> bcsaller, i was thinking about doing something with volume management and backup for cassandra, a volume manager that can attach volumes to the node, and a backup cassandra plugin, that could snapshot and transfer data to the volume
<_mup_> juju/ssh-find-zk r432 committed by jim.baker@canonical.com
<_mup_> Fix tests to support refactoring
<bcsaller> hazmat: so even without special support those could both be subordinate with a normal relationship and they would be able to filter for the right pair
<bcsaller> but I think we can do better and you seem to as well
 * SpamapS chugs redbull...
<SpamapS> http://bit.ly/surjbS
<SpamapS> BUG TRIAGE RAMPAGE!!!
 * SpamapS storms off into launchpad
<marcoceppi> louel, Gotenks.
<Daviey> hazmat: bug 804203, is https://issues.apache.org/jira/browse/HBASE-2418 related?
<_mup_> Bug #804203: Juju needs to communicate securely with Zookeeper <security> <juju:Confirmed for hazmat> < https://launchpad.net/bugs/804203 >
<SpamapS> Daviey: its related in that HBASE needs to implement the same level of auth controls as Juju does when working with zookeeper.
<hazmat> yup
<hazmat> basically node level acls to protect portions of the zk tree from anon clients
 * hazmat heads out to dinner
<hazmat> Daviey, its not on the roadmap atm for 12.04
<elmo> *blink*
<elmo> seriously?
<Daviey> SpamapS / hazmat: thanks
 * SpamapS is somewhat frustrated about that one as well
<niemeyer> hazmat: Hmm
<niemeyer> hazmat: What does bzr info print for your checkout
<niemeyer> ?
<SpamapS> Hmm, so config settings can't contain non-ASCII data
<SpamapS> print >>stream, str(result)
<SpamapS> UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-10: ordinal not in range(128)
<SpamapS> seems like just changing that to unicode(result) would work
<SpamapS> of course, really, I just want the raw bytes no matter what..
<niemeyer> Woohay.. new features of lbox working well
<niemeyer> SpamapS: Ugh..
<niemeyer> SpamapS: That's a super well known wart of Python :-(
<SpamapS> wart in what way?
<SpamapS> unicode is tricky?
<niemeyer> SpamapS: Luckily 3.0 is fixing it, so people will stop doing it all the time
<niemeyer> SpamapS: It's not on itself
<niemeyer> SpamapS: The problem is how Unicode evolved within the language
<SpamapS> Seems like a wart of all programming done before 2005 :-P
<SpamapS> Java tried, even they got it wrong. :-P
<niemeyer> SpamapS: >>> u"é" + "é"
<niemeyer> Traceback (most recent call last):
<niemeyer>   File "<stdin>", line 1, in <module>
<niemeyer> UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 0: ordinal not in range(128)
<niemeyer> SpamapS: Just too easy to get wrong
<niemeyer> SpamapS: 3.0 is fixing that by separating raw bytes from *human text* more clearly
<SpamapS> oh thats good
<niemeyer> SpamapS: 3.X, that is
<SpamapS> so how do you get raw bytes with 2.7 ? unicode(var) ?
<SpamapS> that seems wrong
<niemeyer> SpamapS: "é"
<niemeyer> SpamapS: That's raw bytes
<niemeyer> SpamapS: But was also the correct way to do human text until several years ago
<niemeyer> SpamapS: Hence the mess
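The wart niemeyer demonstrates is Python 2's implicit ASCII coercion between byte strings and unicode strings. The 3.x separation he mentions can be shown directly; this sketch is written for Python 3, where the chat's Python 2 example is no longer expressible:

```python
# Python 3 separates raw bytes from human text as two distinct types,
# and mixing them fails loudly instead of guessing ASCII.
raw = "é".encode("utf-8")    # bytes: b'\xc3\xa9', two raw bytes
text = raw.decode("utf-8")   # str:   'é', one character of text
assert isinstance(raw, bytes) and isinstance(text, str)
assert (len(raw), len(text)) == (2, 1)
try:
    text + raw               # Python 2 would attempt an implicit ASCII decode here
except TypeError:
    pass                     # Python 3 refuses outright
```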
#juju 2011-12-08
<SpamapS> Eh, but I have a variable with raw bytes from a file.
<SpamapS> and I want to print them out
<niemeyer> SpamapS: foo.decode("utf-8"), for instance
<SpamapS> actually.. I think for this..
<niemeyer> SpamapS: Which yields unicode
<SpamapS> config-get and relation-get need to have a --format=raw ..
<niemeyer> Or rather, the Python notion of unicode
<SpamapS> because right now we're tacking on a \n
<niemeyer> SpamapS: Yeah, possibly
<SpamapS> Ok, well at the moment, high-byte chars in config/relation settings will likely cause a stacktrace
 * SpamapS tries it
<SpamapS> actually no .. hm
<SpamapS> json format encodes it in proper json notation
<SpamapS>  config-get --format=json disable-large-pages
<SpamapS> "\u5b50"
<kees> å­ made me look
<SpamapS> hahaha
<SpamapS> Thought about going with a 3 byte char but thats just not playing fair ;)
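The `"\u5b50"` that SpamapS sees is just JSON's ASCII-safe string escaping. Python's json module shows the same behaviour (whether juju's --format=json path goes through this module is an assumption here):

```python
import json

# With the default ensure_ascii=True, non-ASCII characters are escaped,
# exactly matching the config-get --format=json output in the chat.
print(json.dumps("子"))                      # prints "\u5b50"
print(json.dumps("子", ensure_ascii=False))  # prints "子"
```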
<SpamapS> ok so unicode does cause an error.. but config-get and relation-get just silently swallow it
<SpamapS> or rather.. high byte chars
<_mup_> Bug #901494 was filed: config-get and relation-get silently drop values with high-byte characters <juju:New> < https://launchpad.net/bugs/901494 >
<_mup_> Bug #901495 was filed: config-get and relation-get silently drop values with high-byte characters <juju:Triaged> < https://launchpad.net/bugs/901495 >
 * SpamapS continues his bug triage RAMPAGE
<hazmat> elmo, it's on the wishlist
<elmo> hazmat: not having a go at you, but FWIW if I didn't work in Canonical, I can't imagine using juju in a production environment without that bug being fixed
<hazmat> elmo, part of the issue with bothering with the acls, is that there isn't any transport level security
<hazmat> elmo, but fair enough... does puppet or chef have any notion of multi-user?
<hazmat> granted at the level of acl granularity without transport level security, its just about consistency from compromised nodes,
<hazmat> which a ca will give you..
 * hazmat wonders about keystone
<SpamapS> hazmat: puppet and chef do have to at least verify each node individually before they're allowed to do things like export configs or shove data into data bags.
<SpamapS> $ killall java
<SpamapS> so true
<osa> hi folks, can you please explain how i can change a "unit" status from stop to start using juju?
<SpamapS> doh
<smoser> SpamapS, i'm mainly walking through http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<smoser> trying to get localdev . oneiric host, i pulled juju from bzr.
<smoser> after 'juju bootstrap', i got
<smoser> SSH authorized/public key not found.
<smoser> 2011-12-08 03:44:11,349 ERROR SSH authorized/public key not found.
<smoser> then made myself a key (ssh-keygen)
<smoser> now juju status shows
<smoser> http://paste.ubuntu.com/763416/
<SpamapS> smoser: there are a bunch of logs in weird places you can check
<smoser> i destroyed and started over and fine.
<SpamapS> smoser: but ultimately, I think your main machine is not started
<smoser> probably a bug though if you don't have .ssh/id_rsa
<SpamapS> smoser: have you tried a destroy-environment and bootstrap again?
<smoser> yeah, and that worked.
<smoser> we can poke at a later date.
<smoser> heres the initial stab i made at juju local provider -> openstack
<smoser> http://paste.ubuntu.com/763456/
<smoser> (unsuccessful atm)
<SpamapS> smoser: I like the way the config file pans out... one big yaml for the whole thing
<smoser> thats all adam_g
<smoser> do we know what that internal error network issue is about ?
<smoser> that forces a reboot?
<SpamapS> the pty thing?
<SpamapS> I think it must be an lxc bug
<SpamapS> if you're talking about how ssh starts to fail on the containers
<smoser> not pty, SpamapS
<smoser> libvirt network issue on 'juju bootstrap'
<smoser> was reproducible 2/2 for me on oneiric instance in canonistack.
<SpamapS> OH
<SpamapS> That I don't know
<SpamapS> smoser: the way virsh is used is kind of weird..
<smoser> i thought so too
<SpamapS> I believe its done that way so that juju can just sudo out for those few things
<SpamapS> but.. seems like juju could just fork and sudo exec itself..then use libvirt.
<smoser> nah, its not that.
<SpamapS> or leave the network alone
<smoser> not just the groups.
<smoser> something else i think.
<SpamapS> my old crappy lxc provider just let libvirt's default network do the dirty work
<SpamapS> smoser: is a bug filed about the network problems?
<SpamapS> smoser: IIRC there was one that involved non-english locales that was fixed right after 11.10 released
 * SpamapS should just go through the 29 commits since 11.10 and cherry pick the bug fixes into an SRU
<smoser> don't know if there is a bug
<smoser> it is documented at http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<SpamapS> smoser: documented pretty scantily
<smoser> google "juju local"
<smoser> thats what you come up with
<smoser> yes, its crappy documentation of that bug. and there should be a bug.
 * SpamapS went on a triage rampage today, so may be leaning a bit too much toward "File a bug"
<SpamapS> took new bugs from 75 -> 55 .. but .. still 184 open bugs.. we have to figure out which ones of those actually matter. :-P
<TheMue> moo
<rog> TheMue: mornin'
<TheMue> Wanna set up an EC2 instance today. Any useful hints before I step into a well-known trap?
<mpl> hi all
<TheMue> hi mpl
<TheMue> hmm, mysql up and running on a second instance. how may I login via ssh to an instance?
<koolhead11> hi all
<nijaba> TheMue: via juju ssh?
<TheMue> nijaba: Ah, that's the missing link. thx
<nijaba> np
<nijaba> TheMue: I tend to use this a lot less than 'juju debug-hooks' in my (small) practice of charm dev
<TheMue> neat
 * koolhead11 installed oneiric and will try juju on local system
<TheMue> nijaba: yep. I'm just doing my first steps and want to look around a bit.
<nijaba> marcoceppi: did you change ch_get_file recently, or was I wrongly assuming that the downloaded file would always be /tmp/download?
<koolhead11> jcastro: around
<nijaba> koolhead11: jcastro is on vacation this week
<koolhead11> nijaba: ok. am using http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<nijaba> (which does not mean he is not around though)
<koolhead11> nijaba: :P
<nijaba> koolhead11: sorry, never tried myself yet
<koolhead11> nijaba: np. i am giving it a try :D
<nijaba> koolhead11: I would be interested in your findings :)
<koolhead11> sure
<koolhead11> smoser: i think it's my openstack and networking which put me in all the trouble
<koolhead11> i want to play with juju so i need to see that working :D
<koolhead11> customized cloud image would not be having any issue.
<koolhead11> #optimistic thinking :D
<koolhead11> Am i supposed to do any configuration for networking? http://paste.ubuntu.com/763669/
<drt24> could someone fix: https://help.ubuntu.com/community/Orchestra/Juju so that the orchestra environment has a default-series as otherwise juju complains of it being invalid (currently having problems with my ubuntu sso)
<drt24> Also having fixed that I get "Could not find any Cobbler systems marked as available and configured for network boot." though I have two VMs who booted off it.
<jseutter> juju bootstrap fails with "error: internal error Network is already in use by interface virbr0".  Does anyone know how to fix this?
<koolhead11> jseutter: reboot your machine
<koolhead11> i had same error
<jseutter> koolhead11: I've tried that twice, with no change
<koolhead11> and you may also need to set net.ipv4.ip_forward=1 in your /etc/sysctl.conf.
<jseutter> k, taking a look
<jseutter> hm, do I need to reboot after modifying sysctl.conf?
<jseutter> rebooting.
<koolhead11> aah not needed
<koolhead11> marcoceppi: hey there
<koolhead11> jseutter: it was not needed
<koolhead11> now you will get an error for missing keypair
<koolhead11> so generate one
<koolhead11> :D
<jseutter> koolhead11: ah.
<koolhead11> :P
<jseutter> koolhead11: I still get the same error: Network is already in use by interface virbr0
<koolhead11> jseutter: but i didn't pastebin the whole output
<koolhead11> i just added net.ipv4.ip_forward=1 to /etc/sysctl.conf
<koolhead11> this line and did juju bootstrap
<koolhead11> next error i got now is for keygen
<jseutter> koolhead11: hm...  I must have a different issue then...
<koolhead11> jseutter: you're using Oneiric?
<jseutter> koolhead11: yes
<koolhead11> http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<koolhead11> see this
<koolhead11> am thinking of editing this, putting <bold> in the error part and also adding the missing key error
<drt24> solution to bootstrap not working: sudo cobbler system edit --name your-system-name --netboot-enabled=True --mgmt-classes=orchestra-juju-available
<jseutter> koolhead11: hm.  Yeah, that askubuntu page you referenced results in me getting the same error still
<jseutter> koolhead11: unless I'm misunderstanding something you're trying to tell me
<koolhead11> jseutter: i also tried juju while having conversation with u and faced same errors
<koolhead11> drt24: ?
<jseutter> koolhead11: ah.  I was originally following the CharmSchool page on the juju site, then found the error, then came here to irc.
<jseutter> koolhead11: ahasenack has it working.  I have a mac - what about you?
<koolhead11> jseutter: well i have ubuntu :d
<jseutter> :P
<jseutter> koolhead11: just wondering if it is hardware related.
<koolhead11> jseutter: so you're running ubuntu inside some VM or however it works on a mac
<jseutter> koolhead11: no, I'm running ubuntu on mac hardware
<nijaba> jseutter: any chance you are connected to network via wifi?
<jseutter> nijaba: yes, I'm on wifi
<nijaba> jseutter: I don't think wifi supports bridging
<drt24> koolhead11: I asked a question earlier, and that was the solution.
<nijaba> jseutter: in fact I know for a fact it does not
<nijaba> jseutter: limitation in the wifi drivers
<koolhead11> aaaaaaaaaaah nijaba if that is the case am going to edit askubuntu doc again :D
<nijaba> koolhead11: we had the same pb with kvm a few years back
<koolhead11> ooh. ok
<nijaba> "Warning: Network bridging will not work when the physical network device (e.g., eth1, ath0) used for bridging is a wireless device (e.g., ipw3945), as most wireless device drivers do not support bridging!"
<nijaba> from https://help.ubuntu.com/community/KVM/Networking#bridgednetworking
<ahasenack> nijaba: the bridge does not involve the wifi interface
<ahasenack> nijaba: it's just like firing up local kvm instances that use libvirt
<ahasenack> nijaba: the guest is not getting an IP address from the LAN dhcp server, but from libvirt
<ahasenack> it's 192.168.122.X
<nijaba> ahasenack: ah...  ok, sorry
<nijaba> koolhead11: jseutter: ^^
<jseutter> nijaba: nod.  I'm sitting beside ahasenack
<koolhead11> nijaba: ooh am going to remove the comment i just added :(
<koolhead11> nijaba: i am done with $ juju expose wordpress
<koolhead11> but state seems to be null http://paste.ubuntu.com/763702/
<nijaba> koolhead11: whoa, weird
<koolhead11> nijaba: :P
<jseutter> hm..  If I run juju bootstrap as root, then it works
<koolhead11> http://paste.ubuntu.com/763706/
<koolhead11> in verbose
<koolhead11> jseutter: where is your config file located?
<jseutter> ~/.juju/environments.yaml
<koolhead11> and its owned by ordinary user with sudo access correct?
<koolhead11> i just started the debug log
<drt24> Any idea on what to fix when using orchestra and deploying with juju fails with "Invalid host for SSH forwarding: ssh: Could not resolve hostname small0.orchestra.lan: Name or service not known"? (The orchestra server does indeed not know what IP small0.orchestra.lan should point to - though cobbler does.)
<koolhead11> and i can see 2011-12-08 17:45:58,864 Machine:0: unit.deploy DEBUG: Creating master container...
<jseutter> logging off to save battery
 * koolhead11 wonders if he should file a bug.
<TheMue> After adding a relation between mysql and wordpress (like in the tutorial), why is the relation called db? Is this part of one of the both charms?
<marcoceppi> nijaba: The path to the downloaded file is returned from the func. You should be doing something like DOWNLOADED_FILE=`ch_get_file ... ...`
<nijaba> marcoceppi: yes, I figured that, but it seemed to have worked differently previously, or I was on crack
<koolhead17> nijaba: i figured out what i was doing wrong. :)
<koolhead17> it takes a good while to reach state: started
<marcoceppi> nijaba: I did, but it didn't work for things like downloading from source forge, or dynamic download scripts since the file was being saved as "download" or "download.php?id=x". This update ensures the file name stays intact, the downside is it's not saved in /tmp/ anymore
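The calling convention marcoceppi describes can be sketched in shell; the stub below stands in for the real ch_get_file helper (the URL and the /var/tmp destination are illustrative, not the helper's actual behavior):

```shell
#!/bin/sh
# Sketch of the convention discussed above: ch_get_file prints the path
# of the downloaded file on stdout, so callers must capture that output.
# This is a stand-in stub, not the real charm-helper implementation.
ch_get_file() {
    url="$1"
    name=$(basename "$url")    # keep the original file name intact
    dest="/var/tmp/$name"      # illustrative destination, not /tmp/
    touch "$dest"              # the real helper would wget/curl here
    echo "$dest"
}

# The caller captures the printed path, as discussed:
DOWNLOADED_FILE=$(ch_get_file "http://example.com/limesurvey.tar.gz")
echo "saved to: $DOWNLOADED_FILE"   # → saved to: /var/tmp/limesurvey.tar.gz
```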
<koolhead17> and also i have to use $ juju deploy --repository=/usr/share/doc/juju/examples local:wordpress
<SpamapS> TheMue: db is just a free-form identifier name that is supposed to put the interface in context.. so for instance in mediawiki there is a need for a slave: and a db:, both with interface: mysql
<nijaba> marcoceppi: ah, so I was not on crack then :P
<koolhead17> hola SpamapS i finally have juju running :D
<marcoceppi> I don't know why your charm didn't get picked up. I pulled all the official charms down and ran a grep for anyone using ch_file_get to make sure it was being assigned
<nijaba> marcoceppi: my charmed worked, then broke, so I was surprised
<koolhead17> via LXC though
<marcoceppi> I should go check the charms again
<nijaba> marcoceppi: fixed now, so no worries.  Will have to wait for someone to approve my merge proposal
 * koolhead17 is happy!!
 * marcoceppi reviews
<marcoceppi> nijaba: Why are you also echoin' the error in limesurvey-common?
<nijaba> marcoceppi: easier to debug?  would that be a pb?
<marcoceppi> It's not a problem, I was just curious since no one would really see the echo - It would just show up in the juju-log :)
<nijaba> marcoceppi: unless you are using juju debug-hooks
<marcoceppi> Oh, duh.
<marcoceppi> It is a little early here for me
<nijaba> np :P
<marcoceppi> nijaba: Shoot, just realized that the upgrade-hook wasn't updated to use the new ch_get_file
<nijaba> marcoceppi:  it should just be a link...
<nijaba> marcoceppi: to install
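nijaba's "it should just be a link" refers to a common charm convention: the upgrade hook is a symlink to the install hook so both run the same code. A minimal sketch (the charm path and hook body are hypothetical):

```shell
#!/bin/sh
# Inside a charm's hooks/ directory, point the upgrade hook at install
# so one script serves both events.
mkdir -p /tmp/example-charm/hooks
cd /tmp/example-charm/hooks

cat > install <<'EOF'
#!/bin/sh
echo "installing (or upgrading)"
EOF
chmod +x install

ln -sf install upgrade-charm   # the upgrade hook now runs install's code
./upgrade-charm                # → installing (or upgrading)
```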
 * marcoceppi wanders off to get coffee
<TheMue> SpamapS: Thx, found the definition in the charms.
 * nijaba got his roundcube charm working :)
<koolhead17> nijaba: :D
<SpamapS> nijaba: *Saweet*
<SpamapS> BTW, sabdfl and I are running a webinar in 1 hour...
 * SpamapS fishes out the link
<SpamapS> http://www.brighttalk.com/webcast/6793/39309
<nijaba> SpamapS: just retweeted about it
<SpamapS> nijaba: ty, retweeted
<koolhead17> SpamapS: good luck!! :)
<SpamapS> nijaba: in your limesurvey charm.. you shouldn't need to juju-log *and* echo..
<SpamapS> juju-log should be echoing its message
<nijaba> SpamapS: 1300 registrations to your webinar!
<SpamapS> only 1300? ;)
<hazmat> nice
 * hazmat wades through the aftermath of SpamapS's bug-rampage
<drt24> how do I tell if juju bootstrap is making progress?
<SpamapS> drt24: thats the one step that we don't have much insight into, because it starts the first juju agents on new machines
<nijaba> SpamapS: regarding juju-log, this is not what I am observing :(
<drt24> SpamapS: is there a log somewhere of what it is doing?
<SpamapS> nijaba: hrm, we should fix that, though I can see why it might not.. because then the messages would double up in the charm log
<koolhead17> drt24: juju -v bootstrap
<koolhead17> does give some info
<SpamapS> yes, but that will not tell you what the new instance is doing
<SpamapS> drt24: if its an ec2 instance, you can use ec2-get-console-output .. but that sometimes lags by a couple of minutes
<SpamapS> drt24: juju status will try to wait a bit for the instance to be started
<drt24> I am using orchestra
<SpamapS> drt24: for orchestra, you should have logs from the install on the orchestra server
<SpamapS> drt24: I don't recall the exact location for them, but /var/log/syslog *might* have it
<drt24> juju status just gets "ERROR Connection refused" after juju bootstrap says it has finished successfully.
<SpamapS> drt24: right thats because the install hasn't completed yet
<koolhead17> drt24: juju -v status
<drt24> syslog has lots of "Could not open dynamic file '/var/log/orchestra/rysyslog/.../cron' - discarding message
<SpamapS> drt24: yeah I think thats a bug.. have to create the dirs underlying those logs and it will fill them up
<nijaba> duh, I wrongly changed bug #795479 to fix released instead of proposed, and now I can't change it back :(
<_mup_> Bug #795479: Charm needed: roundcube <new-charm> <juju Charms Collection:Fix Released by nijaba> < https://launchpad.net/bugs/795479 >
<nijaba> SpamapS: sounds great :)
<drt24> "ERROR 'dict' object has no attribute 'read'" when running juju bootstrap.
<drt24> File "/usr/lib/python2.7/dist-packages/juju/providers/orchestra/launch.py" line 45
<hazmat> rog, what version of go are we using? stable, weekly, tip?
<rog> hazmat: weekly.
<rog> hazmat: it's fast moving at the moment, but stable is just too old
<hazmat> rog, cool, thanks
<mpl> rog: so I've played with exp/ssh a bit (just connecting a client to a server), and it seems at first sight that there's no out of the box mechanism to do port forwarding (which is what python juju uses iiuc). Gonna investigate some more.
<rog> mpl: interesting.
<kickinz1> Hi!
<rog> kickinz1: hi
<kickinz1> I just have another question....May I?
<SpamapS> kickinz1: please do!
<kickinz1> I'm re-installing an orchestra server to use it in conjunction with juju
<kickinz1> (managing an openstack cloud)
<nijaba> marcoceppi: thanks for reverting my mistake
<kickinz1> The other time (11.10) I had some little troubles that I was able to manage, so now I roughly know what to do, but...
<marcoceppi> nijaba: which, what?
<nijaba> marcoceppi: fix-released/proposed
<kickinz1> In cobbler I don't have anything but hardy by default, I have to use cobbler-ubuntu-import, why?
<marcoceppi> nijaba: Oh! No problem :D
<kickinz1> The other time I didn't need to do that.
<kickinz1> Is there a special command to run to download all repos?
<kickinz1> Cause I don't have the profiles with *_juju now...
<kickinz1> Maybe I'm not clear enough?
<kickinz1> I don't have mgmtclasses too....
<SpamapS> kickinz1: hm
<SpamapS> kickinz1: you do need to run the import script, it should end up with all supported releases
<_mup_> juju/robust-zk-connect r422 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<kickinz1> SpamapS: Thanks, but which script? cobbler-ubuntu-import?
<SpamapS> kickinz1: yeah I think thats it
<kickinz1> I ran it, but it didn't work; I have the repos but not the profiles (cobbler-ubuntu-import natty-x86-64 oneiric-X86_64 .. )
<drt24> kickinz1: or orchestra-$something? that got some of my brokenness fixed
<kickinz1> SpamapS & drt24: thanks, orchestra-import-isos was the right one!
<whit> hazmat: howdy kapil
<hazmat> whit, greetings
<whit> hazmat: if I wanted to setup a single box setup of openstack, is that possible to bootstrap with juju?
 * whit has a spare ubuntu box, and needs to get a openstack environment set up to play with
<nijaba> marcoceppi: thanks for your review.  I have now addressed your remarks (or I think I have)
<hazmat> whit, i think i'd recommend just using the devstack script in that scenario.. http://devstack.org/
<whit> hazmat: yeah...
<SpamapS> whit: smoser has been working on making a juju version of devstack tho
<SpamapS> hazmat: ^^
<SpamapS> smoser: how's that juju-devstack script coming btw?
<whit> heard that in the webcast this morning, hence me showing up here :)
 * whit has futzed with devstack script
<smoser> "working".
<smoser> not much.
<whit> was tempted to hack it to use supervisor rather than a host of screen calls
<SpamapS> Actually the screen bits are nice. I wouldn't mind having that for the juju local provider's consoles.
<smoser> only tried a bit last night. hopefully adam_g will be able to help some. but the goal is that its largely just the same juju openstack providers that he's working on.
<smoser> whit, "supervisor" ?
<SpamapS> smoser: yeah I'd expect it to be a short list of juju commands and the specialized "scaled down" config.yaml
<SpamapS> smoser: I think he meant supervisord
<whit> smoser: supervisord? http://supervisord.org/
 * whit shrugs
<whit> it's nice for tying together processes that make up a single service and managing those processes together
<SpamapS> whoa.. 11 hours for recipe build queue.. :-P
<smoser> interesting.
<smoser> the niceness of the screen or tmux is that you go to a window for the log of nova-compute
<smoser> kill it
<smoser> edit code
<smoser> press up a couple times, start it
<smoser> cycle is very fast.
<smoser> clearly something else could provide the same, even monitoring writes and re-loading like the django development webserver does.
<smoser> but it works well enough
<whit> smoser: yeah, that is nice.  the cycle with supervisorctl is similar
<whit> for example, I have a series of applications sandboxed in virtualenvs, managed by supervisor
<whit> I have one command to start everything
<whit> I issue one command to stop something I want to work directly on
<whit> start it by hand in the foreground, futz with it
<whit> get it to where I want, restart it under supervisor
<SpamapS> whit: yeah, vagrant users cite similar things. We kind of want you to have that in juju, but also have the ability to move the same thing into production easily.
<rog> team meeting now?
<marcoceppi> nijaba: Looks good to me!
<rog> hazmat: ^
<hazmat> rog, indeed
<marcoceppi> nijaba: Just need someone else to look it over :)
 * whit nods at SpamapS 
<whit> makes sense
<whit> though, we leverage that system to move things to prod easily, since supervisor is also running everything in prod
<whit> but it's all handrolled
 * whit shrugs
<hazmat> rog, fwereade, jimbaker, bcsaller, invites out
<jimbaker> hazmat, ok
<niemeyer> Hello there!
<nijaba> marcoceppi: I am sure SpamapS will be happy to review ;)
<mpl> 'lo gustavo
<nijaba> Question: should charm enable ssl as an option or should ssl be handled by the loadbalancer?
<SpamapS> nijaba: good question
<SpamapS> nijaba: I think there's a decent argument for apache and apache ssl to become subordinate charms to accomplish things like that
<nijaba> SpamapS: so at the moment you would recommend not to implement it as an option in charms?
<SpamapS> nijaba: since subordinate charms are still a long way off, it might be interesting to see it done like that.
<nijaba> SpamapS: I think it would make sense for roundcube.  I'll work on it...
<marcoceppi> The problem with SSL is getting a trusted certificate
<hazmat> http://theopenphotoproject.org/
<marcoceppi> Not sure how you would work that in, but I'm interested
<nijaba> marcoceppi: in the meantime, I like snakeoil better than no ssl on my emails :)
<marcoceppi> True true :)
<SpamapS> well we could charm up openca :)
<SpamapS> Also you can have Amazon do your SSL w/ cloudfront
<nijaba> if I need to store a value that can be used by all my unit.  This is a value that should be generated once for the first unit, then reused by all others. is there a 'config-set' that I could use?
<marcoceppi> ah, dash is giving me headaches again
<adam_g> is it assumed that `unit-get private-address` from one side of a relation will be the same as `relation-get private-address` from the other?
<koolhead17> SpamapS: is it possible to get the presentation slides of today`s juju cast/talk somewhere
<SpamapS> koolhead17: I believe its attached to the webinar somehow
<koolhead17> SpamapS: i have not finished watching it. cool i will wait for the link :)
<koolhead17> SpamapS: nopes, i was not able to get/see any link for the slides :(
<marcoceppi> I'm wondering about the best way to handle this test for charm-helper-sh
<SpamapS> marcoceppi: tests should be relatively easy to write
<marcoceppi> SpamapS:  I'm trying to do this in all.sh (so you can include _all_ libraries)
<SpamapS> marcoceppi: you can "mock" wget with an alias.. or.. even better.. fire up a tiny webserver
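Shadowing wget as SpamapS suggests is easiest with a shell function rather than an alias (aliases are off in non-interactive shells); a sketch, with the URL and the fake "download" behavior purely illustrative:

```shell
#!/bin/sh
# Shadow wget with a function so tests never touch the network.
# Shell functions take precedence over binaries found via PATH.
wget() {
    FAKE_FETCHED_URL="$2"            # assumes a "wget -q URL" call shape
    touch "/tmp/$(basename "$2")"    # fake the downloaded file
}

wget -q "http://example.com/limesurvey.tar.gz"
echo "mock fetched: $FAKE_FETCHED_URL"
```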
<marcoceppi> However, I can't reliably get the dirname when sourcing. I'm trying to avoid hardcoded paths :\
<marcoceppi> I've got a test case written :) It does a test against every type of file pull
<SpamapS> marcoceppi: OH I've had this problem before.
<SpamapS> marcoceppi: sourcing does not set $0
<marcoceppi> The internet is failing to help; since they're written in dash I can't use $BASH_SOURCE
<SpamapS> thats sort of why BASH_SOURCE exists. :)
<marcoceppi> DASH_SOURCE doesn't seem to exist either :\
<SpamapS> marcoceppi: no, dash is minimalistic by design
<SpamapS> -rwxr-xr-x 1 root root 950896 May 18  2011 /bin/bash
<SpamapS> -rwxr-xr-x 1 root root 109768 Oct 27 11:20 /bin/dash
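The $0-versus-BASH_SOURCE problem under discussion can be shown in a few lines; the paths are hypothetical, and the BASH_SOURCE part only works under bash, which is exactly the dash wall marcoceppi hit:

```shell
#!/bin/sh
# A *sourced* file cannot find its own directory via $0: $0 belongs to
# the caller. bash offers BASH_SOURCE as an escape hatch; dash has no
# equivalent.
mkdir -p /tmp/ch-demo
cat > /tmp/ch-demo/lib.sh <<'EOF'
SEEN_ZERO="$0"                          # the caller's $0, not lib.sh
BASH_LOC="${BASH_SOURCE:-unavailable}"  # set only under bash
EOF

. /tmp/ch-demo/lib.sh
echo "plain sh: sourced file saw \$0 = $SEEN_ZERO"

# Under bash, BASH_SOURCE does name the sourced file itself:
bash -c '. /tmp/ch-demo/lib.sh; echo "bash: BASH_SOURCE = $BASH_LOC"'
```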
<marcoceppi> Right, I understand the goal I'm just hitting a wall with this
<SpamapS> marcoceppi: One way to go is to have users "exec" a command to get all..
<SpamapS> marcoceppi: another is to call it all.bash , and don't let people use "all" from dash.
<marcoceppi> I suppose exec /usr/share/charm-helper/sh/all.sh would be a good solution
<marcoceppi> will anything that file sources be available in the global scope?
<SpamapS> marcoceppi: I'd be in favor of the latter.. keep all simple for the people who just want bash... let dash/csh/ksh weirdos use the individuals
<SpamapS> marcoceppi: no you'd do this
<marcoceppi> So, have an all.bash
<SpamapS> eval `/usr/share/charm-helper/bash/all.bash`
<marcoceppi> looks kind of ugly
<SpamapS> and it would spit out the source lines
<SpamapS> marcoceppi: you're going to have to figure this out when two helpers share code anyway ;)
<marcoceppi> So, right now I have this in the file: http://paste.ubuntu.com/764170/
<marcoceppi> Which obviously won't work when sourced, but it's the idea I had
 * marcoceppi had ideas for a ch_require function when separate ch modules required functions from another module
<SpamapS> marcoceppi: right, just add 'echo' in front of the . and you can eval/exec it.
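SpamapS's pattern — have all.bash *print* the source lines so the caller can eval them into its own scope — might look like this (the /tmp layout and ch_is_url helper are hypothetical stand-ins for the /usr/share/charm-helper one):

```shell
#!/bin/bash
# all.bash emits "." commands on stdout instead of sourcing directly,
# so `eval` runs them in the *caller's* scope and the functions stick.
mkdir -p /tmp/charm-helper/bash
cat > /tmp/charm-helper/bash/net.bash <<'EOF'
ch_is_url() { case "$1" in http://*|https://*) return 0;; *) return 1;; esac; }
EOF
cat > /tmp/charm-helper/bash/all.bash <<'EOF'
#!/bin/bash
for lib in /tmp/charm-helper/bash/*.bash; do
    [ "$(basename "$lib")" = all.bash ] && continue
    echo ". $lib"
done
EOF
chmod +x /tmp/charm-helper/bash/all.bash

eval "$(/tmp/charm-helper/bash/all.bash)"   # functions now defined here
ch_is_url "http://example.com" && echo "helper loaded"
```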
<marcoceppi> hum, so there really isn't any way around that scoping issue then? I'll go ahead and do that then open a bug to see if we can avoid having to use eval
<_mup_> juju/robust-zk-connect r423 committed by jim.baker@canonical.com
<_mup_> Backoff for 1s if environment pending because no addr assigned
<_mup_> juju/trunk r430 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] yield on add_auth calls in tests [f=832384]
<SpamapS> marcoceppi: I'm guessing there isn't. But maybe its time to evaluate whether "sh" is worth it. :)
 * marcoceppi *nods furiously*
<marcoceppi> :)
<marcoceppi> I'll give this a go then! thanks :D
<SpamapS> marcoceppi: it may be a good idea for us to just say "never mind that was a bad idea, use bash"
 * SpamapS has lots of bad ideas
<marcoceppi> SpamapS has more good ones than bad
<fwereade_> hazmat: ping
<SpamapS> nijaba: isn't @ubuntucloud supposed to retweet stuff I tweet @ubuntucloud ?
<hazmat> fwereade_, pong
<fwereade_> hazmat, if a relation goes away while a unit agent is down, can you think of a way to reconstruct the workflow such that we can call the departed hook?
<hazmat> fwereade_, i don't know that we  need to do that in the workflow
<fwereade_> hazmat, well, ok, maybe we just need the lifecycle
<hazmat> fwereade_, its not something we would call the departed hook for, just relation-broken if the relation was removed.
<hazmat> fwereade_, when the down agent comes back it has a delta to apply or queue for hook execution, the relation-broken hook in that sense is no different
<fwereade_> hazmat, hm, ok, but wouldn't we call departed if it went away while we were running?
<hazmat> fwereade_, departed is called when other nodes that where participating in the relation go away
<hazmat> fwereade_, the on disk record of state, needs a delta computation against current state when it comes back up, to determine what the missed events/hooks are
<fwereade_> hazmat, ok, scratch nonsense about departed
<fwereade_> hazmat, the question remains, how do I construct a lifecycle which I can then cause the relation-broken hook to fire?
 * hazmat ponders
<fwereade_> hazmat, the unit_relation appears to be needed for the HookScheduler -- if it were just the watches it were needed for, we could just about get away with a localised hack
<SpamapS> fwereade_: I had some thoughts on that
<hazmat> fwereade_, i think you just create one ad hoc
<fwereade_> hazmat, heh, ok, that looked doable but kinda evil
<SpamapS> fwereade_: really, when the client says "that relationship is no more" .. it should insert a "TODO" item for each side to ACK the state
<SpamapS> fwereade_: so if one side is offline, thats fine, when it comes back it will see that it has a TODO and do it, then ack it (possibly just by deleting the TODO)
<hazmat> fwereade_, actually even simpler than that
<hazmat> fwereade_, a static method on the unit rel lifecycle, just give it the executor and the rel name.
 * fwereade_ peers at the code
<hazmat> and it will queue a broken hook
<hazmat> fwereade_, look at the depart lifecycle method, thats basically the same notion
<hazmat> hmm... the relhookcontext..
<hazmat> so effectively its already broken for depart hooks
<fwereade_> hazmat, the unit relation is still needed.. yep ^^
<fwereade_> hazmat, hadn't thought of it that way but, yeah, I think so
<fwereade_> SpamapS, I think what you say makes sense, but it's a bit far outside current mental scope for me to be sure
<SpamapS> Avoids having to store state locally if you keep everything in ZK
<fwereade_> SpamapS, this is true, I can't actually think of a good reason to store that state locally, but solving this specific problem feels like a pretty fearsome change...
<SpamapS> fwereade_: not knowing the code, I can't comment much on how hard it will be.. only that I think this would work. ;)
<fwereade_> hazmat, wait, hold on, will the UnitRelationState actually be *gone*? it suddenly looks to me as though it *is* deleted immediately if the *service* goes away, but may be left around if just the *relation* goes away
 * fwereade_ reads some more with furrowed brow
<hazmat> fwereade_, so right now it appears that the full hook cli api for relations would work in a rel-broken hook.. if the underlying rel states haven't been garbage collected... SpamapS's notion of capturing intent/acks has some value, as it can help a gc know when it's safe to clean out the rel structure. an offline node could be offline for a while.. ie. it should be expected that the relation is gone... so that comes back to what are the expectations of a relation-broken hook execution and the applicable apis that can be used. i can't get around the notion that at a minimum the hook should have access to its own unit's relation settings, so we do need the structures marked explicitly for cleanup as they execute -broken hooks.
<hazmat> fwereade_, if the rel goes away its removed in  _process_service_changes
<fwereade_> hazmat, ok, if unit rel state really always stays around (which I don't think it does when the service is destroyed)
<hazmat> so the memory structures are pruned for unit-rel state, the underlying zk structures are still present atm (lacking gc), and so the unit rel can be constructed on the fly/transiently for the broken hook execution, but that needs the ack coordination around gc of the state.
<fwereade_> hazmat, I had the impression that GC didn't exist yet
<fwereade_> hazmat, I was just poking around, and I guess I *couldn't* find anything that deleted unit-relation states
<hazmat> fwereade_, self._relations.pop
<hazmat> fwereade_, you mean the in memory  state or zk state?
<fwereade_> hazmat, sorry, zk state
<fwereade_> hazmat, things I am now pretty sure of
<fwereade_> hazmat, actually I'm not that sure, thinking again
<hazmat> fwereade_, right so the gc doesn't exist, but it has an obvious implementation for most of things, this is one place where coordination would be needed, at the very least that should get captured.. the same problem arises in the stop case (except the states are indeed gc'd), we need to capture intent/ack instead of justs modifying the state
<hazmat> in this case the state is being modified atm, but it will be, so the need for ack is still there, just creating a new node for the unit would suffice upon depart.
<hazmat> fwereade_, i wonder though if we don't also need a separate hook context for broken, it should never see any other members of the rel
<fwereade_> hazmat, I was just looking upon RelationHookContext.get_members, with a sour expression
<hazmat> nor set settings, just be able to retrieve its own settings
<fwereade_> hazmat, even that may be tricky, doesn't service destruction delete the unit containers?
<hazmat> fwereade_, yeah.. this is a more generic problem we have multiple places
<hazmat> we express intent by modification of the state, and then have actors change their env to match state
<hazmat> but the state we're modifying is identity, and we create missing identity scenarios for the actors
<hazmat> we need to capture the intent as a state change that can be coordinated upon by the actors while they retain their identity
<fwereade_> hazmat, ok, so basically the unit agent is responsible for finally trashing its own state in ZK as it dies, and for trashing the last of the service state as well (if the service is marked as going-away, I guess)
<fwereade_> hazmat, not *is*, *should be*
<hazmat> fwereade_, actually it formalizes a new trusted actor, a gc agent, that can enact the final removal of actor identity from the topology
<hazmat> but the gc actor needs coordination with any actor that it has completed the intent
<hazmat> the actor can of course attempt to clean up its own nodes
<fwereade_> hazmat, ah, ok: so for example the client marks "dying", each unit agent marks itself "dead", GC cleans up
<hazmat> but the topology node in particular is off limits to most actors in the system as it represents truth in the system, as expressed by the user
<hazmat> the notion then is that the topology can be extended to capture these intents from users (destroy-service/remove-unit) which still trigger the requisite state changes to enact behavior, but with the identity changes being async to that
<fwereade_> hazmat, sure, there's no reason to mess with the topology itself, surely the dying flag can live in the actor's own nodes?
<hazmat> so sort of like on ec2-terminate-instances the instance isn't immediately dead, and it continues to appear in the output of describe-instances for some time, till its identity is finally gc'd
<hazmat> fwereade_, only if you trust the actor ;-)
<fwereade_> hazmat, ...ha, yes
<hazmat> i'm not dead yet ;-)
<fwereade_> hazmat, far from it :)
<SpamapS> arg.. argparser.HelpFormatter is not an extendable class. :/
<fwereade_> hazmat, ok; so that would be cool, but my proximate problem is in calling relation-broken
<fwereade_> hazmat, which it seems is broken anyway
<fwereade_> hazmat, I'm somewhat taken by the idea of a less-featureful RelationDepartedHookContext
<fwereade_> hazmat, but there wouldn't be much difference between that and a plain HookContext
<fwereade_> hazmat, (but that specifically is not really a relevant consideration atm)
<fwereade_> hazmat, this would definitely qualify as an API change
<hazmat> fwereade_, that sounds good to me, you'd need stub impl of the relhookcontext, and you'll need at least rel-name identity to queue up the broken hook.
<fwereade_> hazmat, does that feel like the least worst accessible solution to you?
<fwereade_> hazmat, yeah, I can probably keep track of the name :p
<hazmat> fwereade_, yeah.. that seems okay.
<hazmat> the api change is sane as well
<hazmat> but should get to the list
<hazmat> i'll need to write up the intents stuff
<fwereade_> hazmat, indeed, I'll start writing it up now, may not land until tomorrow morning
<hazmat> fwereade_, sounds good
<hazmat> SpamapS, yeah.. we use raw formatter normally..
<fwereade_> hazmat, cheers
<hazmat> SpamapS, i'm pretty disappointed with the extensibility of argparse
<hazmat> its a mess down there
<hazmat> it works, and is very useful, but extending it is ugly
<SpamapS> hazmat: technically this is violating its API:
<SpamapS> class JujuFormatter(argparse.HelpFormatter):
<SpamapS> "Only the name of this is considered public API"
<SpamapS> hazmat: doesn't seem to be designed for flexibility.. just seems to want to be the simplest to use.
<hazmat> SpamapS, yeah.. and its still miles better than what it replaced.. the python stdlib has gone through 3 gens of option parsing
<hazmat> SpamapS, alternatively we could use commandant (extracted from bzr)
<hazmat> jimbaker, could you review the ssh cleanup branch.. ideally they should go in together to avoid the output oddities.
<hazmat> SpamapS, just wanted to confirm txzk in the ppa is a nightly from txzk trunk?
<_mup_> Bug #901901 was filed: error in provisioning agent when terminating machine <juju:New> < https://launchpad.net/bugs/901901 >
<hazmat> niemeyer, how's the conference?
<jimbaker> hazmat, i reviewed that branch, so it's all set
<hazmat> jimbaker, cool, just need one more +1
<jimbaker> hazmat, sounds good. bcsaller, you want to look at this branch, lp:~hazmat/juju/sshclient-refactor ? it's a pretty sweet refactoring imho, and will allow us to get rid of perhaps the most annoying issue in juju :)
<niemeyer> hazmat: It was great so far
<niemeyer> hazmat: Today is actually a more "compact" meeting with just the more active users/developers
<niemeyer> hazmat: Several interesting brainstorms
<niemeyer> hazmat: Tomorrow is the real MongoSV conference..
<niemeyer> hazmat: Expected to have 1000+ participants (!)
<hazmat> niemeyer, right on, wow
<jimbaker> hazmat, now that bcsaller has approved your branch, just need to get a final OK on lp:~jimbaker/juju/robust-zk-connect. it's a very small change, just adds a sleep to back off while waiting for the addr to be assigned instead of hammering away
#juju 2011-12-09
<SpamapS> hazmat: looking now
<SpamapS> hazmat: looks like it is .. note that it won't be installed on anyone's system because the version is < than the one in 11.10
<SpamapS> hazmat: 11.10 has 0.8.0-0ubuntu1 , the ppa has 0.8.0-0juju45~oneiric1 .. j < u
<hazmat> bummer
<SpamapS> hazmat: IMO its a good thing. :) This PPA shouldn't have anything more than you *need* to run juju.
<SpamapS> hazmat: we probably need a dev PPA for stuff like that.
<hazmat> SpamapS, its not a breaker yet, but it will be in future ppa revs of the juju
<SpamapS> hazmat: at that point we will put the backport in the PPA.
<hazmat> jimbaker, i'm wondering if it would be faster to just reset the groups on shutdown of ec2
<hazmat> rather than playing the waiting game
<jimbaker> hazmat, that does sound reasonable and equivalent
<jimbaker> i think it was just an attempt to not create too much garbage
<jimbaker> in terms of lots of security groups hanging around
<jimbaker> hazmat, i'm pretty certain this is what was done in an earlier version, i don't know if that ever went through review
<jimbaker> although the reset then was done at SG acquisition, so a bit different i guess
<hazmat> hmm.. yeah
<hazmat> jimbaker, group removal at shutdown almost never works for me
<hazmat> it always gives up
<hazmat> so i'm wondering if its worth the bother
<jimbaker> hazmat, hmmm... it does tend to work for me, but i tend to just run the wordpress stack at most
<hazmat> effectively.. i wait 30s.. and then.. 2011-12-08 19:14:20,668 ERROR Instance shutdown taking too long, could not delete groups juju-public-0
<hazmat> and it moves on
<jimbaker> yeah, and without ill effect, since it can just use those SGs anyway
<hazmat> well it will try to delete them later as i recall
<hazmat> and fail if can't delete them
<hazmat> ie. if you try to bootstrap immediately
<hazmat> resetting the security group means no waiting or errors
<jimbaker> hazmat, that does sound like a valid diff approach then
<hazmat> on bootstrap we can go ahead and clear out any detected garbage
<hazmat> ugh..
<hazmat> that sounds rather odd though.. but the reality is the sgs are still present, so its better than nothing
<jimbaker> hazmat, it sounds reasonable to me. cleanup is supposed to solve the bounce problem seen in yes | juju destroy-environment && juju bootstrap - so if it doesn't, or not reliably, we need to revisit
<hazmat> interesting that error kees saw only exhibits in the us-west-1 region
<hazmat> the response from ec2 is different
<hazmat> so txaws parsing goes awry
<hazmat> when stringifying the error msg
<_mup_> juju/provisioning-agent-bug-901901 r431 committed by kapil.thangavelu@canonical.com
<_mup_> let the logging package format the exception
<jimbaker> hazmat, that is very interesting
<adam_g> is it possible to change default-image-id at deploy time?
<fwereade_> hazmat, is it deliberate that there's no RelationWorkflow transition from error -> departed?
<adam_g> http://paste.ubuntu.com/764414/  :|
<hazmat> fwereade_, i believe it was, but in retrospect it seems reasonable that there should be one
<fwereade_> hazmat, cool, cheers
<hazmat> hmm
<fwereade_> hazmat, even if we don't want to fire a departed hook I think we need to be able to make that transition
<fwereade_> hazmat, I could be convinced either way on the fire-hook question
<osadmin> Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this?
<osadmin> Anyone, Whenever I reboot my host juju reports it as stopped even though it is running. Anyone know how to fix this?
<hazmat> osadmin, having juju survive reboots is a work in progress atm, which provider are you using?
<osadmin> hazmat, provider? not sure but I am running the most uptodate ubuntu server version
<hazmat> osadmin, are you running juju services on ec2, or physical machines via orchestra, or local/lxc dev on a machine
<osadmin> hazmat, running on physical machines via orchestra. hosts are running openstack
<hazmat> osadmin, could you pastebin the output of juju status
<hazmat> osadmin, at the moment, agents that juju launches aren't set to come back up on machine boot, its something thats being worked on though.
<osadmin> hazmat, will do, and fyi here is the doco I followed to create the env
<osadmin> hazmat, https://wiki.edubuntu.org/ServerTeam/UbuntuCloudOrchestraJuju
<osadmin> hazmat, http://pastebin.com/HuzfJqiq
<osadmin> hazmat, is there anyway I can manually reset the agent status?
<hazmat> osadmin, yes, its a little involved, but the command that launched the agent is in the cloud-init userdata
<hazmat> osadmin, its the output of... sudo cat /var/lib/cloud/instance/user-data.txt
<hazmat> er. its in the output of
<osadmin> hazmat, that would be great as I am using "juju ssh" to access the hosts
<osadmin> hazmat, ok I have logged into the host and am looking at that file now.
<osadmin> hazmat, what do I do with this? Sorry (noob to this stuff)
<hazmat> osadmin, hm.. that will start the machine agent.. but that won't start the unit agents..
<hazmat> osadmin, so for example this is what i have my in output of that file.. http://pastebin.ubuntu.com/764439/
<hazmat> osadmin, the command to run the agent is embedded in there... for that output its this one.. JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181    python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
<hazmat> you'd just run that with a sudo prefix on the cli
<hazmat> the machine will start reporting in, it looks like it will restart the unit agents, so that should do it
<osadmin> hazmat, lost my irc for a moment, back now and will look over the pastebin
<hazmat> <hazmat> osadmin, the command to run the agent is embedded in there... for that output its this one.. JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181    python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
<hazmat> <hazmat> you'd just run that with a sudo prefix on the cli
<osadmin> hazmat, ok
<hazmat> osadmin, fwiw i'd recommend running from the ppa, we keep it pretty stable, and when the restartable feature/bug fix lands, it will be there first, there's also some additional status output and fixes that are useful for orchestra usage.
<osadmin> hazmat, getting errors I will paste what I did
<osadmin> hazmat, http://pastebin.com/dr6BSMZe (added sudo to this command)
<hazmat> osadmin, there's a trailing '] that shouldn't be there
<osadmin> hazmat, oh, I removed that and got an error, will paste the error
<osadmin> http://pastebin.com/CBAqj0gj
<osadmin> hazmat, http://pastebin.com/CBAqj0gj
<hazmat> osadmin, the full command should look like this..
<hazmat>  JUJU_MACHINE_ID=3 JUJU_ZOOKEEPER=ip-10-176-22-254.us-west-1.compute.internal:2181    python -m juju.agents.machine -n --logfile=/var/log/juju/machine-agent.log --pidfile=/var/run/juju/machine-agent.pid
<hazmat> ie. it specifies environment variables
<hazmat> the whole line needs to be used
<osadmin> hazmat, I did the following was this wrong?  export JUJU_MACHINE_ID=4; export JUJU_ZOOKEEPER=oscc-01.itos.deakin.edu.au:2181
<hazmat> osadmin, that should be fine
<hazmat> osadmin, you can't use sudo then
<hazmat> the shell environment won't persist through the sudo
<hazmat> you'd have to use a root shell if you're going to do it that way
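The manual restart hazmat walks through above can be scripted: pull the agent start line out of the cloud-init user-data and re-run it in one shot. A sketch, assuming the user-data path from the discussion; the actual run is left commented out since it needs a real juju machine:

```shell
#!/bin/sh
# Sketch: restart the juju machine agent after a reboot by re-running
# the command embedded in the cloud-init user-data.

USERDATA=${USERDATA:-/var/lib/cloud/instance/user-data.txt}

AGENT_CMD=""
if [ -f "$USERDATA" ]; then
    # grab the line that launches the machine agent, env vars included
    AGENT_CMD=$(grep -o 'JUJU_MACHINE_ID=.*machine-agent\.pid' "$USERDATA" | head -n1)
fi

echo "agent command: $AGENT_CMD"

# Running it via 'sudo sh -c' keeps the JUJU_* variables, which a plain
# 'export' followed by 'sudo python ...' would drop:
# sudo sh -c "$AGENT_CMD"
```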
<osadmin> ok
<osadmin> trying
<osadmin> no errors
<osadmin> hazmat, juju status has not changed however
<osadmin> hazmat, I can now "juju ssh" into the host, I will recheck juju status again
<osadmin> hazmat, status still says stopped
<hazmat> osadmin, can you pastebin the machine agent log file /var/log/juju/machine-agent.log
<osadmin> ok
<hazmat> osadmin, there's a cli tool that makes that easier.. apt-get install pastebinit
<hazmat> and then you can.. cat /var/log/juju/machine-agent.log | pastebinit
<hazmat> and it will give you a url
<osadmin> thx
<hazmat> bcsaller, jimbaker could i get a +1 on this trivial.. http://paste.ubuntu.com/764452/
<osadmin> hazmat, host may not be able to get out at this stage. May have to do it the old fashion way.
<bcsaller> hazmat: lgtm
<osadmin> hazmat, here is the tail of the file you requested http://pastebin.com/Z1QgvpEC
<osadmin> hazmat, here is the whole log file. http://pastebin.com/u9SWwc5x
<hazmat> hm..
<hazmat> osadmin, could you paste log file at /var/lib/juju/units/nova-compute-1/charm.log
<hazmat> osadmin, the machine agent looks like its running fine.. the charm.log will show the service unit agent log file
<osadmin> hazmat, ok fyi: here is the juju status output. http://pastebin.com/qAhdTggJ
 * hazmat nods
<_mup_> juju/trunk r431 committed by kapil.thangavelu@canonical.com
<_mup_> [trivial] provisioning agent fix, let the logging package format the exception [f=901901][r=bcsaller]
<osadmin> hazmat: tail of the file for starters. http://pastebin.com/Ge63NQvg
<osadmin> hazmat,whole of the requested log file is here: http://pastebin.com/9jChCVnS
<osadmin> hazmat: 2nd try http://pastebin.com/iHfLuWUh
<osadmin> hazmat, lol, grabbed too much with that last pastebin, u may have to scroll down a bit to see the contents of the log file
<hazmat> osadmin, yeah.. that's not going to recover without some surgery.. you're probably better off just removing the unit, terminating the machine, and adding a new unit
<hazmat> ie. juju remove-unit nova-compute/1, juju terminate-machine 4, juju add-unit nova-compute
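hazmat's three-step recovery, shown as a dry run: a stub `juju` function stands in for the real client so the order of operations is visible without a live environment (the unit and machine numbers are the ones from this session):

```shell
#!/bin/sh
# Dry-run sketch of the recovery hazmat suggests: remove the wedged
# unit, terminate its machine, then add a fresh unit.
# The stub juju() just echoes; drop it to run against a real environment.
juju() { echo "would run: juju $*"; }

juju remove-unit nova-compute/1
juju terminate-machine 4
juju add-unit nova-compute
```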
<osadmin> hazmat, thanks. Will do but first, will doing this delete any apps from nova-comput/1?
<hazmat> osadmin, it will
<hazmat> well.. it probably will
<osadmin> om
<osadmin> ok
<hazmat> i'm not sure if orchestra is going to reinstall the machine when its cleared out
<hazmat> er.  shutdown
<hazmat> for the next boot.. my understanding is atm it doesn't, so the data would still be there, but i wouldn't count on it
<osadmin> I guess I could wait until the fix is released
<osadmin> hazmat, d u think the fix will be a while away?
<hazmat> osadmin, the fix won't help for an existing installation, there's a branch in review which implements it
<hazmat> so not to far away
<hazmat> probably another week or two
<osadmin> hazmat, thats ok, I will be rebuilding this very soon. If timing is right, I will build with the fixed version. D u think release bfore xmas is poss?
<osadmin> ok
<osadmin> thanks
<hazmat> osadmin, np
<osadmin> hazmat, what d u use juju for mainly?
<nijaba> Good morning
<Sander^work> does juju work with vmware ?
<nijaba> SpamapS: @ubuntucloud will republish your tweets, except if your tweets start either by "@ubuntucloud" or "RT" or "›".  Hence why your tweet was not retweeted
<nijaba> SpamapS: so move @ubuntucloud toward the end, and it will be retweeted
<shafiqissani> how to deploy wordpress to a single instance
<shafiqissani> i.e. bootstrap instance + mysql instance + wordpress instance all on the same instance
<rog> shafiqissani: you can't do that currently.
<shafiqissani> I see
<fwereade> shafiqissani, some people have been bringing up single EC2 instances and running the local provider on just that one instance
<fwereade> shafiqissani, so it's not *impossible*, but it is not a configuration we would recommend for production
<shafiqissani> fwereade: I know it is not the optimal configuration but imagine it to be on the line of shared hosting
<shafiqissani> fwereade: a site or service that does not require high availability and get very little traffic would a scenario for such a configuration
<fwereade> shafiqissani, indeed, there are interesting possibilities when units can share machines, and we plan to do something about that -- but it's not on the current roadmap yet
<shafiqissani> hm so the solution for now is an ec2 instance with all the deploys runnning on local configuration using lxc as base
<shafiqissani> man virtualization inside of virtualization! ... is it just me or does that sound crazy :D
<fwereade> shafiqissani, yep; that's the current one-machine solution
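The one-machine setup discussed above hinges on the local provider's entry in environments.yaml. A minimal sketch; the field names are recalled from the 2011-era provider-configuration-local docs and should be treated as assumptions to check against that page, and the file is written to a scratch path so nothing real is touched:

```shell
#!/bin/sh
# Sketch of a minimal environments.yaml for the local (LXC) provider.
# Field names recalled from memory of the docs -- verify before use.
JUJU_DIR=${JUJU_DIR:-./juju-example}
mkdir -p "$JUJU_DIR"

cat > "$JUJU_DIR/environments.yaml" <<'EOF'
environments:
  local:
    type: local
    data-dir: /tmp/local-juju
    admin-secret: some-secret
    default-series: oneiric
EOF

cat "$JUJU_DIR/environments.yaml"
```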
<fwereade> shafiqissani, heh, I take your point, but juju isn't necessarily working with ec2 "machines": it could be working with real hardware managed by orchestra
<shafiqissani> fwereade: right, the service level abstraction ... got it
<nijaba> Has anyone used juju scp command successfully?
<rog> nijaba: jimbaker's the one to ask about that :-)
<nijaba> rog: actually his mail to the ml describing it is more useful than the help for the command.  Got it to work now!
<rog> nijaba: cool!
<hazmat> Sander^work, no re vmware virtualization, yes wrt to cloud foundry, rabbitmq, etc.
<hazmat> nijaba, unfortunate.. it probably should be the help for the command
<hazmat> nijaba, what's unclear about the output of juju scp -h
<nijaba> hazmat: I think it just lacks an example.  or maybe the "[remote_host:]file1" should be "[remote_host:]sourcefile1" and  [remote_host:]file2 be [remote_host:]destfile1
<nijaba> hazmat: also, what would be really cool, is to be able to use scp from a charm to the bootstrap machine.  This way I could put some file on bootstrap and scp the files from it to the charm automagically
<nijaba> hazmat: but I guess I am trying to work around bug 814974
<_mup_> Bug #814974: config options need a "file" type <juju:Triaged by jimbaker> < https://launchpad.net/bugs/814974 >
<_mup_> Bug #902143 was filed: juju set <service_name> --filename does not work <juju:New> < https://launchpad.net/bugs/902143 >
<hazmat> nijaba, indeed
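The example nijaba is asking for might look like this, shown as a dry run with a stub `juju` function; the unit names and paths are hypothetical:

```shell
#!/bin/sh
# Hypothetical "juju scp" invocations of the kind the help text lacks.
# The stub juju() makes this a dry run; drop it with a live environment.
juju() { echo "would run: juju $*"; }

# copy a local file up to a unit
juju scp ./backup.sql wordpress/0:/tmp/backup.sql

# fetch a file back from a unit
juju scp wordpress/0:/var/log/juju/unit-wordpress-0.log .
```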
<koolhead11> marcoceppi: hey
<hazmat> bcsaller, none of your branches show up on the kanban view
<Sander^work> hazmat, Can juju install several wordpress installations to one apache and one mysql server?
<fwereade> hazmat, UnitLifecycle._process_relation_changes has an interesting little dance where all removed relation workflows are explicitly stopped before any depart transitions are fired
<hazmat> Sander^work, no, juju would model those as separate services, the wordpress charm is not done in a multi-tenant fashion
 * hazmat puts on his dancing shoes
<fwereade> hazmat, this seems to be intended to ensure that no other hook executions (from joined, say) can sneak in once we know that we're departing
<hazmat> fwereade, interesting, indeed, thats seems quite correct
<fwereade> hazmat, but I don't see how it can work; stop itself will yield
<hazmat> fwereade, the logical flow to depart takes into account the yield
<fwereade> hazmat, sorry, don't follow, restate please
<hazmat> fwereade, at the end of the stop, the scheduler is stopped, there may be a hook execution that happens before the depart, but the depart will be last
<Sander^work> hazmat, so it's possible to create a new wordpress charm that can be deployed twice to one instance?
<hazmat> fwereade, the concurrency on the yield isn't relevant in this context, because at the end of the stop method the scheduler which serves as a sync point is stopped, and concurrent notifications/executions go through the scheduler,  the depart directly schedules on the executioner, and it will be post any concurrent activity from the rel.
<hazmat> Sander^work, juju doesn't do density outside of the local provider atm
<hazmat> and the local provider isn't routable
<fwereade> hazmat, ...if that's the case, why don't we just stop inside the do_depart method on workflow?
<fwereade> hazmat, which we do in fact do
<Sander^work> hazmat, what is a local provider?
<fwereade> hazmat, I'm worrying we really shouldn't execute normal relation hooks at all once we know we've departed, because we can't be sure that all the relevant state still exists
<hazmat> Sander^work, https://juju.ubuntu.com/docs/provider-configuration-local.html
<hazmat> fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens
<hazmat> fwereade, we execute stop immediately after we're notified
<fwereade> hazmat, but this may just be because I'm still a little bit unsure about (1) what state needs to exist to run a relation hook and (2) what state may or may not be suddenly cleared by client operations
<hazmat> fwereade, and the zk structures are in place
<fwereade> hazmat, if all the necessary zk structures will remain in place throughout all client operations, then there's no need for the dance, right?
<hazmat> fwereade, the comment directly reasons why the dance is there
<hazmat> to avoid things like.. modify after depart
<Sander^work> hazmat, what do you mean by "do density" ?
<hazmat> Sander^work, multiple units on a single 'machine'
<fwereade> hazmat, you just said "fwereade, moving stop to inside do_depart is fine, but i don't see how that changes what happens"; I'm confused
<Sander^work> hazmat, ah, ok. Is there any reason why it dosn't do density outside of the local provider?
<fwereade> hazmat, either all we care about is stop-before-depart, in which case we can move it; or the little stop-everything-and-only-then-depart-everything dance is unnecessary
<fwereade> hazmat, ...right?
<fwereade> hazmat, sorry, scrambled something there
<fwereade> hazmat, stepping back
<fwereade> hazmat, (1) the only thing we care about is that no other relation hooks can fire once the relation-broken hook has done so; agree?
<fwereade> hazmat, (2) once we've called stop(), we can be sure that no other relation hooks will fire; agree?
<hazmat> Sander^work, there's some work that will achieve density in a consenting adults fashion via unit placement/resource constraints, there's additional work being done to allow subordinate charms to live in a container with a parent/master charm for things like logging etc. The main reason for lack of density in a rigorous fashion is that juju allows for dynamic port usage by a charm, and this is problematic when putting two independent
<hazmat> charms with port conflicts on the same machine, as the conflict is undetectable a priori. there's some talk of using like a soft network overlay to alleviate that for density, but its not on the roadmap atm
<fwereade> hazmat, (3) therefore, we can call lifecycle.stop() in workflow.do_depart(), and we can guarantee that from that point on no further hooks can be scheduled , so we're safe to just run lifecycle.depart(); agree?
<hazmat> 1) yes, 2) yes, but one may be currently executing, 3) yes
<fwereade> hazmat, and if you do agree with all the above, I don't understand the purpose of the dance, because it's just duplicating work already done in do_depart
<hazmat> fwereade, the purpose of the dance is to immediately stop all broken hooks
<hazmat> fwereade, if you do it in depart, you're having executions of depart hooks while more hooks for broken relations can be executing, as the rels are serially stopped.
<hazmat> where as the dance ensures all rels that are broken are stopped, and then executes their individual depart hooks
<hazmat> er. broken hooks via depart transition
<Sander^work> hazmat, I would like to see a difference on density when it comes to applications that use another service's port. 2x Wordpress can easily be installed into one apache instance without any port issues.
<hazmat> Sander^work, you could write a wordpress charm that encapsulated that capability, ie multi-tenant wordpress hosting in a single unit
<fwereade_> hazmat, is that correct, or am I still missing something?
<hazmat> fwereade, say i have 5 broken relations, the current dance ensures all 5 are stopped before executing any of their depart hooks
<hazmat> fwereade, you're suggesting that we go through each of the rels, stop it, execute its broken hook, and then process the next
<fwereade_> hazmat, what would be the negative consequences of failing to do so?
<fwereade_> hazmat, really that we just go through each and fire the departed transition, and trust the transition to ensure the lifecycle is stopped
<hazmat> fwereade, the problem is there may be events for those 5, that are happening and scheduling/executing hooks while you're executing for the one.. ie you're processing them in serial
<hazmat> which means you're getting hook execution for those not processed, even though the rel is known to be broken
<Sander^work> hazmat, Am I understanding it right?.. So I then can deploy wordpress installs on demand into customer's directories for one fixed apache instance?
<fwereade_> hazmat, ok, that's fine; but we can't be sure that won't happen anyway, can we? we yield several times in the course of stopping all those lifecycles, and the not-yet-stopped ones could still be scheduling hooks
<hazmat> Sander^work, a charm can do whatever it wants to do on a machine, in this case you'd have to write the charm yourself
<hazmat> the existing wordpress charm doesn't address that use case
<fwereade_> hazmat, and if it's a situation we're already prepared to accept, I don't see that reducing its incidence is exceptionally important
<hazmat> fwereade_, indeed its an optimistic guarantee not an absolute, if there is concurrent activity happening at that sec
<Sander^work> hazmat, Ok. Do you know about any documents I should read to be able to write a charm like that?
<fwereade_> hazmat, and the consequences of unjustified optimism could be, at worst, ..?
<hazmat> fwereade_, the goal is minimizing hook execution for hooks known broken, waiting on a scheduler is minimal
<hazmat> waiting on hook executions creates a large gap
<fwereade_> hazmat, ok, thanks for clearing that up; the original comment seemed to me to be suggesting that the stop would prevent *any* extra hooks from slipping in
<hazmat> fwereade_, we could probably offer a better guarantee of that, if we stopped the executor, but given that's a shared resource i felt more comfortable with minimizing the possibility.. and the reality is that there is the possibility that a rel hook is executing when we get the notification the rel is broken
<hazmat> since the schedulers feed into the executor, stopping it there suffices
<marcoceppi> koolhead11: hey
<fwereade_> hazmat, yeah, I pondered stopping the executor, it wouldn't be a nice solution
<hazmat> and the currently executing rel hook
<hazmat> is always a possibility
<fwereade_> hazmat, I must be missing something about the significance of a currently executing rel hook
<hazmat> fwereade_, feel free to add to the comment about this
<fwereade_> hazmat, I will :)
<hazmat> Sander^work, well the general understanding of charms helps, but first just figuring out how you do it outside of charms is helpful
<hazmat> Sander^work, http://askubuntu.com/questions/82683/what-juju-charm-hooks-are-available-and-what-does-each-one-do  http://askubuntu.com/questions/84656/where-can-i-find-the-logs-of-irc-charm-school
<SpamapS> http://www.debian-administration.org/article/Installing_Redmine_with_MySQL_Thin_and_Redmine_on_Debian_Squeeze  ... looks like a charm to me. ;)
<jimbaker> nijaba, sure, sounds like a good idea to augment juju scp (and other commands that need it) with more example-oriented help
<SpamapS> jimbaker: we call that "man pages"
<SpamapS> and you guys wanted me to make juju auto-generated which I've been looking into
<SpamapS> err.. language.. not quite unthawed from sleep.. rrrrrr
<TheMue> We don't have a kind of "juju retrieve-environment ..." to retrieve an environment set up somewhere else and merge it into one's own.
<TheMue> The intention is that a 2nd new operator can easily extend  his environment to take over the administration of an environment.
<Sander^work> hazmat, is it possible to write a charm that deploys eg. wordpress over an ftp connection?
<SpamapS> TheMue: I think that would be brilliant
<SpamapS> Sander^work: no, juju is built on the ability to own whole servers.
<SpamapS> Sander^work: you could write a charm which deploys a webservice + ftp onto a machine which accepts wordpress uploads. ;)
<TheMue> SpamapS: Aaargh, "bootstrap" has to be renamed! I always make the same typo here. (smile)
<Sander^work> SpamapS, Okay.. Is it possible to deploy a charm.. where an ldap database defines which uid/gid the files deployed are owned by?
<SpamapS> Sander^work: certainly
<SpamapS> Sander^work: things like system policy are hard right now.. dev work has just begun on a feature to separate system policy charms from servie charms.
<SpamapS> service even
<Sander^work> I'm using apache with an ldap module and mod_fcgid so every vhost gets its own uid.
<SpamapS> Sander^work: yeah, that would be quite doable
<Sander^work> Would love to be able to deploy our whole architecture through a set of charms :-)
<SpamapS> TheMue: one thing to consider with the idea of retrieve-environment is that there is a desire, eventually, for environments.yaml to be limited to only facts that help you find and authenticate to the environment...
<SpamapS> TheMue: any of the settings would be stored and managed inside ZK
<SpamapS> Sander^work: we'd love for you to be able to do that too.
<SpamapS> Sander^work: charms are just scripts in whatever language you want... so you can just duplicate whatever you have now into a charm. :)
<TheMue> So the new admin only should get those facts. Once added his commands would use the ZK on the bootstrap instance, wouldn't they?
<mchenetz> I tried asking this on the Vagrant chat, but i think everyone is asleep. :-) Has anyone tried to implement Juju in Vagrant? I would be interested in working on that if not.
<TheMue> SpamapS: Where do I find the environment on the bootstrap instance? Only in ZK or does a file exists?
<nijaba> SpamapS: roundcube charm now has https support
<SpamapS> mchenetz: No but I figure its probably possible
<SpamapS> mchenetz: the local provider is basically vagrant-like tho
<mchenetz> Spamaps: hmmm I am just learning Juju… What is the local provider?
<SpamapS> mchenetz: spins up 'machines' by way of LXC containers
<SpamapS> mchenetz: instead of using EC2 or a hardware provisioning system
<SpamapS> mchenetz: so its quite useful for testing things disconnected
<mchenetz> hmmm, interesting. I will look into that. I still think it would be nice to integrate it into Vagrant as i use it a lot and it already has chef and puppet...
<SpamapS> mchenetz: juju is more like vagrant than chef or puppet
<mchenetz> Definitelyâ¦ I do a lot of deployments in the cloud for some hugh customersâ¦ Juju is definitely going to be a big part of my future!
<mchenetz> I watched the webinar yesterday and my head is spinning with ideas
<SpamapS> mchenetz: so it wouldn't really make sense for vagrant to run juju at the same level as chef or puppet... juju doesn't have a DSL or a big library of configuration tools. Its just for coordinating and orchestrating these encapsulated services.
<SpamapS> mchenetz: I was "Clint" from the webinar. :) any questions?
<SpamapS> mchenetz: and thanks for watching!!
<SpamapS> mchenetz: I'm quite interested to hear how your vagrant knowledge maps to juju.
<mchenetz> hehe, i asked the security question the other day. I am mainly an enterprise security consultant. So, i am thinking about how i can create charms that would encompass some security vm's into the solution. I am thinking about creating some special firewall and ids modules that integrate with juju charms
<mchenetz> I will definitely keep you informed on how Vagrant and juju map up. :-)
<SpamapS> mchenetz: complex networking, thus far, has not been a part of the juju conversation.. but the colocation (or actually, subordination) work that is going on will enable that quite nicely.
<SpamapS> mchenetz: note that the security model of juju is still evolving, I'd love to hear your input on how important it is. There are a few bugs tagged "security" that are sort of our second priority.
<mchenetz> I would like to be able to say add-firewall port-80 relation or something to that effect and it will add a firewall and maybe some die monitoring too
<mchenetz> not dieâ¦ ids...
<SpamapS> mchenetz: well in EC2 nothing is accessible from outside -> inside
<SpamapS> mchenetz: we use the ec2 ingress firewall extensively
<mchenetz> thats trueâ¦ I am not just thinking ec2 thoughâ¦
<SpamapS> mchenetz: you could write a firewall subordinate charm and do exactly what you're talking about
<mchenetz> thats what i am thinking about
<SpamapS> mchenetz: subordinate charms are just charms that live inside the same container as other charms
<mchenetz> yeah.. i'm a little familiar with how the charm structure works now. I am quickly getting up to speed.
<mchenetz> I would love to help out on the security side if you guys need any assistance
<TheMue> Hmmm, funny, I can expose a wordpress w/o a mysql instance. I would have expected an error due to the unfulfilled requirement.
<SpamapS> TheMue: the wordpress charm should not have any open port yet though
<SpamapS> TheMue: open-port 80 should only happen after the db is configured
<SpamapS> TheMue: since the system is async.. its not an "error" .. you just don't get any open port
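What SpamapS describes would live in the charm's db-relation-changed hook. A sketch, where relation-get and open-port are the real hook tools juju provides inside a hook, but the setting names are illustrative; the `|| true` guards just let the script be read outside a hook context:

```shell
#!/bin/sh
# Sketch of a wordpress-style hooks/db-relation-changed: only call
# open-port 80 once the database settings have actually arrived.
DB_HOST=$(relation-get host 2>/dev/null || true)
DB_NAME=$(relation-get database 2>/dev/null || true)

if [ -z "$DB_HOST" ] || [ -z "$DB_NAME" ]; then
    # settings not present yet; juju re-runs the hook when they change
    :
else
    # ... write the wordpress DB config using $DB_HOST / $DB_NAME here ...
    open-port 80
fi
```

Since the system is async, "no open port yet" is the normal state until the relation data shows up, which matches the behaviour TheMue saw.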
<koolhead11> i am trying to deploy a charm and i need some assistance
<koolhead11> i have moved the charm from /usr/share/doc/juju/oneiric directory
<koolhead11> to my /home/juju directory
<SpamapS> hazmat: I'm still really confused why docs needs to be a separate series and why we can't just agree that the docs dir under the trunk has a different policy. I'm *very* concerned now that the docs will get out of sync w/ trunk.
<koolhead11> when i am trying  juju deploy --repository=/home/atul/juju  local:mysql
<TheMue> SpamapS: I understand, and I should have had a debug-log open. *gna*
<koolhead11> ERROR Charm 'local:oneiric/mysql' not found in repository /home/atul/juju
<SpamapS> TheMue: I don't necessarily think that having debug-log going all the time is a good idea ;)
<SpamapS> koolhead11: you need the series in there
<SpamapS> koolhead11: mkdir /home/atul/juju/oneiric
<SpamapS> koolhead11: and move the charms into that dir
<koolhead11> SpamapS: ok
<TheMue> SpamapS: debug-hooks are better? I currently want to see what's going on.
<koolhead11> so SpamapS my charm will be in /home/atul/juju/oneiric
<SpamapS> TheMue: while developing and learning its probably a good idea.. I think though at some point we have to look at it as users of the charm, who won't necessarily be able to consume all of that data.
<koolhead11> and i will deploy with
<SpamapS> koolhead11: right
<koolhead11> juju deploy --repository=/home/atul/juju  local:mysql
<koolhead11> ok
<SpamapS> koolhead11: that is necessary so that we can match the OS series with the charms for that OS
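The layout SpamapS spells out, as commands; paths follow koolhead11's example, and the copy and deploy steps are left commented since they need the charm files and a bootstrapped environment:

```shell
#!/bin/sh
# Local charm repository layout: charms sit under a directory named
# for the OS series ("oneiric" here), and --repository points at its
# parent so juju can match the OS series with the charms for that OS.
REPO=${REPO:-./juju-repo}

mkdir -p "$REPO/oneiric"

# cp -r /usr/share/doc/juju/oneiric/mysql "$REPO/oneiric/"
# juju deploy --repository="$REPO" local:mysql

ls "$REPO"
```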
<mchenetz> Where do i find information on using a local provider in Juju?
<hazmat> SpamapS, let's give it a try, we can evaluate before 12.04 if its not worthwhile and move it back, but i'm hoping its still a benefit to getting doc contributions
<SpamapS> hazmat: as long as we agree to actually put a version number on juju so the disconnected docs can be written to a specific version, it should work. I'm just not confident about that. ;)
<hazmat> TheMue, there was a spec out for doing import / export of environments, but it ran afoul of want for a design of service groups aka stacks as a first class entity that was modeled and agreed upon.
<hazmat> SpamapS, we call winners on that bet at uds ;-)
<SpamapS> hazmat: we should maybe think about putting version strings in juju and having a release process now that we have, you know, users. ;)
<hazmat> SpamapS, i should investigate read the docs some more. i know we tried it and moved on, but i believe it has support for multiple versions
<TheMue> hazmat: thx for the info
<_mup_> Bug #902219 was filed: config values of 0 are discarded <juju:New> < https://launchpad.net/bugs/902219 >
<hazmat> SpamapS, sounds good, would you mind putting in  a bug for that?
<mchenetz> Found the doc: https://juju.ubuntu.com/docs/provider-configuration-local.html, Doesn't survive reboots? That isn't good for my scenario as i stage development code in the local environment.
<hazmat> mchenetz, it will survive reboots for 12.04, but no it doesn't survive reboots, or even hibernation, at the moment.
<hazmat> mchenetz, you'd also have to manually connect the bridge that the lxc containers are bound to allow external connectivity to them off the host.
<hazmat> or port forward from the host
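The port-forward option hazmat mentions could be a DNAT rule on the host. A sketch that only prints the command rather than running it, since it needs root; the container address and port are made up:

```shell
#!/bin/sh
# Sketch: forward a host port to an LXC container so a service inside
# the (non-routable) local provider is reachable off the host.
# Echoed instead of executed -- the rule requires root.
CONTAINER_IP=${CONTAINER_IP:-192.168.122.101}
HOST_PORT=${HOST_PORT:-8080}

echo iptables -t nat -A PREROUTING -p tcp --dport "$HOST_PORT" \
     -j DNAT --to-destination "$CONTAINER_IP:80"
```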
<mchenetz> okay. good to know. can i make a charm for that. :-)
<mchenetz> What i would like to do is give developers a local environment to develop in and then move the units to the cloud for test an production.
<mchenetz> Which i think is the whole purpose of juju. Easily move and provision units.
<SpamapS> hazmat: this feels like a blueprint, there are really 3 things that need to happen. 1- add versions to the juju --help, 2- ratify and agree to maintain a stable branch. 3- Setup a PPA that just has the latest stable release.
<hazmat> mchenetz, well.. when you say move.. its not moving the data, you can develop/stage local and then copy the configuration/charms to a different cloud, but that doesn't sync data.. you'd need a separate charm/service for data syncing, right now.. juju environments do not bridge clouds.
<hazmat> each environment is specific to a provider, but you can have multiple environments in a given provider.
<SpamapS> mchenetz: for what you're talking about.. you'd just repeat the local deployment into the cloud
<mchenetz> I was thinking that i can utilize charms that would say, install mysql, and apache, and then create the relations for multiple environments. As long as i create charms that have the glue code then it should work. Correct?
<mchenetz> And againâ¦ I am still learning Jujuâ¦ I learn very fast, but if it sounds like a stupid question it's just that i haven't learned it all yet. :-)
<hazmat> mchenetz, yes the charms are meant to capture the configuration/best practices for a service in a provider independent fashion
<SpamapS> mchenetz: eventually there's the idea that we'd be able to create a relationship between two environments .. but thats not done yet. ;)
<hazmat> mcclurmc, so you could deploy the same mysql/apache/appserver setup in multiple environments
<mchenetz> Spamapsâ¦ That sound awesome
<marcoceppi> SpamapS: I eagerly await that idea :)
<SpamapS> mchenetz: its already possible.. you can write a cloud-bridge charm that exchanges anything you need to exchange between the two envs.. and just use service configs to get them talking to each other.
<hazmat> and even then you need an underlying setup that syncs data
<SpamapS> I bet I could get the mysql charm to expose config settings to allow external slaves/masters
<hazmat> well maybe not it could just offer access, but wan connectivity solutions are better at a data tier
<mchenetz> I don't mind creating "glue" to connect disparate environments. I just need to know the limitations so that i implement things in the most efficient way.
<hazmat> marcoceppi, mchenetz so the first cut at gluing disparate environments is a proxy charm that will relay notifications to a remote endpoint
<hazmat> you'd deploy a proxy service in each environment, bind it locally to the relations of interest, and then connect the proxy endpoints
<hazmat> at least that's one option
<SpamapS> I took a stab at making an 'othercloud' charm that would use the juju client to talk to another juju env but the lack of wildcard interfaces made it not work like I wanted.
 * hazmat nods
<hazmat> SpamapS, it would need support in the core, for basically assuming the interfaces of the proxy target
<mchenetz> hazmat: that makes senseâ¦ To me, it sounds like i would use ssh to create a tunnel between the envirnments
<marcoceppi> SpamapS:  hazmat: I was under the impression that orchestra was preferred for stringing multiple clouds into one env?
<SpamapS> really.. if you want cross-cloud cross-AZ .. you probably want to make conscious decisions about what crosses those boundaries.
<mchenetz> Then just run commands locally and remotely
<hazmat> mcclurmc, any secure transport would work
<hazmat> whoops
<SpamapS> marcoceppi: that wouldn't really work. ;)
<hazmat> mchenetz, i was thinking zeromq with encrypted messages would do it
<mchenetz> I haven't used that. I will definitely look into it
<hazmat> but i'm very much thinking like an app developer ;-)
<SpamapS> hazmat: yeah, thats when I gave up on it, when I realized unless I can make relations dynamic it just won't work.
<mchenetz> It's interestingâ¦ I grew up as a hacker of code with bbs's in the early days and then became a network engineer. So, i think in terms of both code and network infrastructure. :-)
<SpamapS> hazmat:  I think an ops guy would be fine with that as long as it was simple to understand and monitor.
<SpamapS> mchenetz: WWIV vs. Telegard ... GO
<mchenetz> hehe, i ran rbis originally and then WWIV, good old wayne bell
<hazmat> SpamapS, so this goes further into a notion of charms that juju distributes and core services; given those, we could offer additional syntax for cross-env relations
<SpamapS> hazmat: yeah that would make sense.
 * SpamapS prepares for an Ubuntu bug triage rampage today
<mchenetz> To me, as long as you create the appropriate abstraction on the top level and the disparate environments have similar functionality then it really shouldn't matter what environment you are on...
<marcoceppi> I guess I'm just confused about how to best tackle a scenario using Juju
<mchenetz> there should be the idea of move-unit [environment]
<mchenetz> and again.. i don't know juju that much yet...
<marcoceppi> I have three bare metal machines, lets say, each running an acceptable provider by Juju - I assume each would be its own juju environment then?
<marcoceppi> nevermind actually
<hazmat> mchenetz, that assumes integrated volume management storage, even for just unit migration within an environment, and frankly at scale moving data across wans is a non-transparent operation for QOS.
<hazmat> its potentially a  huge impact on network resources, and a multi-day operation
<mchenetz> hazmat: As long as the backend code accommodates the variables for that. Why would it matter? You can give the instructions for syncing code and throttling and so forth...
<mchenetz> syncing db's and such...
<mchenetz> It could say use-link [interface]  throttle [50%] of link or something like that
<mchenetz> this is all conceptual
<mchenetz> It could then set the proper qos tagging and such in the backend and setup the interface to use and maybe the timeframe
<hazmat> mchenetz, move-unit is a generic capability to any service.. what a service/charm chooses to expose can be accommodated by something like a proxy without charm knowledge, or the functionality could be incorporated directly into a charm.
<hazmat> mchenetz, more interesting though, juju right now is in its infancy wrt how it approaches networking.. i'm curious though what you would think of juju managing a soft overlay network that spanned machines
<mchenetz> That would be very interesting. So, are you talking about creating a networking abstraction that would be unrelated to a single machine?
<mchenetz> Can you elaborate on what you are thinking?
<hazmat> mchenetz, yes.. this is a while out most likely.. but the notion of getting unit density on a machine, where each unit is an lxc container,  to be abstract to a provider, we need to establish a soft overlay net that we'd plug the lxc containers into, probably with something like openvswitch or just using openstack's quantum
<hazmat> part of the problem is that we end up needing a bridge to reconnect the overlay, but the notion is for exposed services we would port forward
<hazmat> it gives us much better capabilities to expose in terms of setting up vlans etc
<hazmat> but its also a pita
<mchenetz> hmmm… interesting.. You could potentially keep the environments networked permanently through the virtual switch and then exchange data and move things where they need to be. I like it… It doesn't seem like it would take too much either.
<mchenetz> I will definitely have more to contribute in the upcoming weeks as i learn juju. It's definitely a project i would like to be involved in.
<mchenetz> I am just ingesting all of the knowledge right now. :-)
<hazmat> mchenetz, awesome, probably the best way to get introduced to juju is to write a charm or have a look at some existing ones.. http://charms.kapilt.com
<mchenetz> I am planning on writing many charms and looking at existing ones. ;-)
<koolhead11> hazmat: revision and config.yaml are compulsory files to be with a charm
<koolhead11> ?
<koolhead11> i am learning writing juju with writing simplest charm which does things simply with apt-get isntall
<koolhead11> *install
<koolhead11> i moved mysql example in same directory and same part worked and charm got initialized
<hazmat> koolhead11, config.yaml isn't, revision is
<koolhead11> hazmat: i am using the existing mysql example
<koolhead11> and i see a file with name revision there
<koolhead11> nothing mentioned about same in config.yaml file
<koolhead11> so i created both files accordingly and created hooks sub directory inside it
<koolhead11> added options: {} in config.yaml file
<koolhead11> i am just clueless why this thing is not working :(
<hazmat> koolhead11, what do you mean by not working?
<hazmat> can you deploy your charm?
<koolhead11> hazmat: i get error in deploying charm i wrote
<koolhead11> hazmat: http://paste.ubuntu.com/765110/
<koolhead11> i have created a directory inside example named "oneiric"
<koolhead11> and put the charm for boa inside it
<koolhead11> and executing juju deploy --repository=example local:boa
<koolhead11> while my pwd is /home/atul
<hazmat> koolhead11, the path to boa should be /home/atul/example/oneiric/boa
<koolhead11> hazmat: that is where boa is
<mchenetz> I see some charm developers are using augeas to create/modify configs. This seems interesting. I never heard of that tool
<hazmat> that directory should contain the metadata.yaml file, if your using a recent ppa, and there's a syntax error in the charm, it should report it
<koolhead11> i am using oneiric and installed default juju
<hazmat> mchenetz, its a bit like a generic dom api for configuration, some folks prefer writing out the whole config, some prefer patching in place.
<koolhead11> from the repo rather than the PPA
<hazmat> koolhead11, so you are using the ppa?
<mchenetz> Is there a standard you guys like in terms of directories? I notice the .aug files are in the root instead of the hooks directory...
<koolhead11> hazmat: i have not added any PPA manually, installed juju which came with default
<koolhead11> with oneiric
<mchenetz> I really should read the documentation. :-)
<koolhead11> hazmat: /home/atul/example/oneiric/boa   its very much here
<koolhead11> and also i have metadata.yaml file there
<hazmat> koolhead11, then you probably have a yaml error
<hazmat> koolhead11, the ppa version will detect and report yaml errors, the default version in oneiric just won't find the charm
<koolhead11> then why the error log says error in path
<koolhead11> hazmat: point me to PPA am upgrading juju from there
<hazmat> koolhead11, sudo add-apt-repository ppa:juju/pkgs && sudo apt-get update && sudo apt-get upgrade juju
<koolhead11> cool
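The local-repository layout being debugged above can be sketched as follows. The repository/series/charm nesting, the `revision`-required/`config.yaml`-optional rule, and the `local:boa` deploy form are taken from the conversation; the metadata fields themselves are illustrative placeholders, not the real boa charm.

```shell
# Minimal local charm repository layout, per the discussion above:
#   <repository>/<series>/<charm>/{metadata.yaml,revision,hooks/}
# revision is required; config.yaml is optional (juju of this era).
REPO=$(mktemp -d)                      # stand-in for /home/atul/example
mkdir -p "$REPO/oneiric/boa/hooks"
cat > "$REPO/oneiric/boa/metadata.yaml" <<'EOF'
name: boa
summary: illustrative placeholder metadata
description: minimal sketch of a charm's metadata.yaml
EOF
echo 1 > "$REPO/oneiric/boa/revision"
# Deploying would then be (needs a bootstrapped environment, not run here):
#   juju deploy --repository="$REPO" local:boa
ls "$REPO/oneiric/boa"
```

With this layout in place, the deploy command's `--repository` must point at the top directory, not at the series or charm directory.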
<SpamapS> koolhead11: can you push your charm up to a branch on launchpad?
<koolhead11> SpamapS: sure once am home.
<koolhead11> catch u guys in sometime
 * koolhead11 rushes 4 home
<nijaba> SpamapS: if you feel like reviewing a charm, feel free to take a look at my roundcube one ;)
<SpamapS> nijaba: I have some other queues to tend to today (server bug triage and SRU's), but I won't be able to resist reviewing your charm all weekend. ;)
<SpamapS> marcoceppi: did you already have a look at it?
<nijaba> SpamapS: he did
<SpamapS> Oh, so, what do you need me for? ;)
<nijaba> SpamapS: to make it official? can't wait for your comments either, especially on the https handling
<SpamapS> as the #4 contributor to lp:charm (see https://launchpad.net/charm) I'd say he's quite qualified to ack and promulgate it :)
<fwereade__> I need to stop for a little while, back later
<SpamapS> I've actually wanted roundcube for some time as I plan to replace my crappy hastymail solution with it. :)
<nijaba> SpamapS: he said he would feel more comfortable with you reviewing it first.  He might need some re-assurance :)
<mchenetz> I know this is a matter of opinion but… I have been using Eucalyptus for a long time because it is API compliant with Amazon EC2. Is there any advantage to going over to openstack? Anything from a juju side?
<SpamapS> nijaba: roger that. I'll take a look between ubuntu bugs and SRU's ;)
<nijaba> SpamapS: no hurry. cheers
<SpamapS> mchenetz: Euca is very expensive to scale up
<mchenetz> spamaps: I always hear that
<SpamapS> mchenetz: if you have a working euca solution with a narrow focus, probably best to just stick with it.
<nijaba> mchenetz: and hard to make HA (if possible)
<mchenetz> I am definitely thinking about scalability for my customers. I think i am going to have to look at openstack...
<mchenetz> I think i will keep my dev in Eucalyptus
<nijaba> mchenetz: the fact that more and more providers are announcing public clouds based on OpenStack feels very re-assuring
<SpamapS> mchenetz: OpenStack is also more loosely coupled.. I find that attractive.
<mchenetz> I will definitely have to put it on my agenda to get familiar with Openstack...
<SpamapS> robbiew: hey how did your BOF go?
<mchenetz> Thanks for the comments
<robbiew> SpamapS: was great..attendance was so-so...but the BoFs started at 8pm
<robbiew> after dinner
<robbiew> luckily mine was BEFORE Google's beer and "icecream" social
<robbiew> :P
<robbiew> we should definitely have a Charm School at next year's
<robbiew> TOTALLY our crowd here
<robbiew> and next year will be in San Diego...not Boston.
<mchenetz> Any plans on an east coast charm school? I live close to Philly and about 1hr from NY
<robbiew> mchenetz: jcastro is the man with the plan on Charm Schools...I would expect we'd have one though
<robbiew> we'd prefer it to be tied to some sort of pre-existing event though....the ones on IRC are pretty good too ;)
<mchenetz> I would have to get in touch with him. I don't think i saw one when i looked
<mchenetz> There's irc ones?
<SpamapS> robbiew: *w00t*
<SpamapS> mchenetz: I'd hope we can co-locate a charm school with Surge
<SpamapS> mchenetz: we plan to have a charm school every couple of months on IRC
<mchenetz> SpamapS: cool… sounds good
<marcoceppi> SpamapS: Cool, wasn't sure if you wanted a two person review or not :)
<marcoceppi> nijaba: I'll take a look at the SSL implementation and promulgate it after lunch :)
<SpamapS> marcoceppi: only if you feel like you aren't 100% sure its ok
<marcoceppi> SpamapS: gotchya, cool
<nijaba> marcoceppi: thanks :)
<robbiew> SpamapS: I did a local deployment of ThinkUp to demo the LXC stuff...worked perfectly
<robbiew> then I showed them a real ec2 deployment of the same thing...already running to compare
<robbiew> then I blew both away...and then did a live hadoop ec2 deployment
<robbiew> ran terrasort...and scaled it live
<robbiew> BAM!
<rog> i'm off for the weekend now. see y'all tuesday (i'm off monday)
<robbiew> had ganglia setup with auto-refresh plugin for chrome browser
<robbiew> everyone was really impressed...and they GOT it, b/c this crowd knows their shit
<hazmat> robbiew, nice
<hazmat> rog, cheers
<nijaba> robbiew: impressive!
<nijaba> welcome home, koolhead17 ;)
<robbiew> then I told them I learned all this on Tuesday
<robbiew> :)
<koolhead17> hehe nijaba:)
<robbiew> though I've obviously known HOW to do it for quite sometime...but never got my hands "dirty" until this week
<nijaba> rog: have a good long we
<robbiew> I've found juju to be a bit addictive...like, "what can I deploy now?"
<nijaba> robbiew: tell me about it.  I'm finding myself asking what else can I charm :D
<robbiew> this whole juju thing just *might* have legs
<robbiew> lol
<SpamapS> robbiew: \o/ .. sounds awesome
<robbiew> SpamapS: yeah...LISA'12 is going on my "give them money and love" list for next year
<SpamapS> robbiew: we should write a script that just deploys the entire charm store on t1.micro's and relates everything that can be related
<SpamapS> Right now that would be like, 40 t1.micro's .. so it would cost about $1.
<robbiew> lol
<robbiew> one charm to rule them all!
<SpamapS> juju deploy *
<robbiew> oh man
<robbiew> you'd have to script the relations though
<robbiew> or just talking about deploying only
<SpamapS> yeah
<SpamapS> 1 haproxy would not actually be able to serve every website, unfortunately
<robbiew> heh
<marcoceppi> Juju was great, until I got my Amazon bill :P
<SpamapS> haproxy is a "monogamous" charm.
<SpamapS> marcoceppi: LOL!
<SpamapS> marcoceppi: this is all an evil plan to get people to setup openstack clouds
<marcoceppi> SpamapS: I'm seriously considering setting up an openstack cloud at my house
 * koolhead17 searching 4 a two-minute bzr guide
<jelmer> koolhead17: how about 5 minutes? http://doc.bazaar.canonical.com/latest/en/mini-tutorial/
<koolhead17> jelmer: 3 mins is ok :) thanks
<koolhead17> jelmer: am on LTS so all these commands will work for it too. :)
<mchenetz> SpamapS: I plan to have an Openstack server running at my house this weekend. ;-)
<koolhead17> LTS lucid
<robbiew> mchenetz: nice!
<EvilBill> I'm itching for more info on deploying openstack with juju.
<robbiew> fwiw, we're making sure openstack is easily deployable from the installer in 12.04
<robbiew> EvilBill: talk to adam_g :)
<EvilBill> I will. Played with juju about three weeks ago when I was between gigs, dug it a lot, but curious as to why the bootstrap node can't co-exist with where the juju client is running.
<EvilBill> or maybe I'm just conceptually missing something.
<EvilBill> I was trying to multitask between spending time with the family and learning something new, and I think I didn't do either thing very well.
<robbiew> family? what's that?
<EvilBill> lol
<SpamapS> EvilBill: the bootstrap is really just ZooKeeper + the provisioning agent. It can't live with the client because clients can come and go (laptops.. workstations, etc)
<EvilBill> OK, well, what I did was setup an orchestra server and had it working with two other machines at home with wake-on-lan, etc.
<EvilBill> so Orchestra would rev up a bare-metal box on command
<EvilBill> tied that into juju, and when I'd do a juju bootstrap, a machine would turn on and go install and become the bootstrap node
<SpamapS> EvilBill: You *can* make the bootstrap node a VM on the orchestra server
<EvilBill> but it sounds like a full machine JUST for bootstrapping seems silly.
<EvilBill> That's what I didn't get around to playing with or figuring out
<SpamapS> EvilBill: we even talked about having that be the default at one point.
<SpamapS> EvilBill: its fairly simple to provision VMs in cobbler just like regular machines.
<EvilBill> ok, so that begs the next question, what's the preferred VM framework?
<EvilBill> my orchestra machine is an old laptop with a Core Duo, so it's not 64-bit capable.
<EvilBill> which means I don't think it'll run KVM.
<marcoceppi> I'd like to get a public openstack cloud open for charm testing and development against, but I feel that might be something for next year
<SpamapS> EvilBill: 'kvm-ok' will tell you that
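What `kvm-ok` reports can be roughly approximated by hand: it looks for the vmx (Intel) / svm (AMD) CPU flags, and additionally checks `/dev/kvm` and BIOS settings, which this sketch deliberately skips. This is an illustrative check, not a substitute for the tool.

```shell
# Count hardware-virtualization flags in /proc/cpuinfo (one per core).
# kvm-ok does this plus /dev/kvm and BIOS checks, omitted here.
FLAGS=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null) || true
FLAGS=${FLAGS:-0}
if [ "$FLAGS" -gt 0 ]; then
  echo "CPU advertises hardware virtualization (vmx/svm)"
else
  echo "no vmx/svm flag: KVM acceleration unavailable"
fi
```

On EvilBill's 32-bit Core Duo the count would come out 0, matching his suspicion that the box can't run KVM guests.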
<SpamapS> EvilBill: you *could*, in theory, have it provision an LXC container, but you'd have to figure out how to run the "late_command" bit to get juju to start its agents.
<SpamapS> EvilBill: it might also be possible to simply have the cobbler machine register *itself* as a cobbler system, and then do the same thing.
<SpamapS> marcoceppi: we've talked about having an openstack cloud which we make available to ubuntu members who want to work on bugs/testing
<marcoceppi> SpamapS: I think it's a cool idea
<marcoceppi> Limit it to charm-contributors even
<marcoceppi> ?
<koolhead17> EvilBill: i think there is a wiki which spells charm magic 4 openstack deployment already!! :D
<SpamapS> marcoceppi: I'd like to see it open to even those who are not interested in juju.
<marcoceppi> SpamapS: so would this just be a free cloud for anyone to play in?
<SpamapS> marcoceppi: I think we'd require ubuntu membership and have a limited number of seats available
<marcoceppi> I see
<adam_g> http://wiki.openstack.org/FreeCloud
<adam_g> ^^ not sure what the status of that is ATM, tho
<koolhead17> SpamapS: https://code.launchpad.net/~koolhead17/charm/oneiric/boa/trunk
<koolhead17> hope you will not laugh, as it has nothing magical
<koolhead17> made it to understand juju better :)
<SpamapS> adam_g: *nice* I didn't know it had even been written down like that. :)
<SpamapS> koolhead17: thats fine. It should work.. its not clear at all to me why its not working. :-/
<koolhead17> SpamapS: hehe. i would love to see if someone tests it on AWS
<koolhead17> i have no idea why i was getting the error i mentioned earlier on LXC
<koolhead17> i created a directory name example/oneiric/boa*
<koolhead17> and repository = /home/atul/example
<koolhead17> :P
<koolhead17> i moved mysql charm from the examples directory to that path it worked but not the boa one
<SpamapS> koolhead17: ls -l /home/atul/example/oneiric
<koolhead17> it will have boa and mysql directory :)
<koolhead17> SpamapS: am home now siir!!
<SpamapS> koolhead17: so.. um, you can't test at home, but you can't push to bzr at work? :-/
<SpamapS> koolhead17: you should be using the local provider at home
<koolhead17> SpamapS: i tried juju formula writing on LXC at work, synced it to my home computer and then pushed it to bzr once i came home
<koolhead17> i will install Oneiric tomorrow
<koolhead17> its been in list for ages, :D
 * koolhead17 is addicted to LTS
<SpamapS> koolhead17: yeah, LXC isn't quite usable in 10.04 unfortunately
<koolhead17> SpamapS: honestly am waiting 4 Precise b4 i can format my poor lappy, that is why i do all charming stuff in office :P
<koolhead17> 2 GB baby
<mpl> niemeyer: I think I've got something going for zk + ssh. but it's still very messy so I'm gonna clean it up before showing it to you guys.
<niemeyer> mpl: Ohh, sweet
<mpl> gonna go home, dinner, and rest a bit first though. ttyl
<marcoceppi> I can't find config-changed anywhere in the hooks documentation
<hazmat> marcoceppi, yeah.. that's a problem.. its in a separate document on service config
<hazmat> i just started to pull the docs into a separate branch to hopefully facilitate making them easier to contribute to.. there at the bzr branch lp:juju/docs
<marcoceppi> Ah, cool - thanks for the heads up hazmat
<hazmat> marcoceppi, re the config-hook docs atm https://juju.ubuntu.com/docs/drafts/service-config.html#creating-charms
<marcoceppi> After a charm is reviewed and deemed needs more work, should I just remove the new-charm tag?
<_mup_> juju/ssh-known_hosts r431 committed by jim.baker@canonical.com
<_mup_> Refactored
<_mup_> juju/ssh-known_hosts r432 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<nijaba> marcoceppi: just put the bug back to state incomplete
<nijaba> marcoceppi: once the problems are addressed, the requester will put it back to state "fix-committed"
 * nijaba takes off. Have fun
<EvilBill> SpamapS: Coming back after a lengthy meeting… you could get cobbler to register itself? Hm, that's an idea, but I wouldn't want it to try to PXE boot itself...
<koolhead17> bye nijaba
<_mup_> juju/ssh-known_hosts r433 committed by jim.baker@canonical.com
<_mup_> Simplify by keeping public keys in ProviderMachine itself
<_mup_> juju/ssh-known_hosts r434 committed by jim.baker@canonical.com
<_mup_> Refactored bootstrap
<_mup_> juju/ssh-known_hosts r435 committed by jim.baker@canonical.com
<_mup_> Fix error handling for refactored bootstrap
<_mup_> Bug #902384 was filed: Service units get stuck in debug mode. <juju:New> < https://launchpad.net/bugs/902384 >
<SpamapS> EvilBill: no you wouldn't pxeboot it .. you'd just run the final "late command" that juju sticks into the pre-seed
#juju 2011-12-10
<EvilBill> SpamapS: Cool, I'll have to dig into that later then.
 * nijaba is back (temporarily)
<SpamapS> EvilBill: its not something I've done.. but I have been trying to figure out ways to try out orchestra + openstack with < 6 machines available. ;)
 * SpamapS looks around for a fork to have stuck in him.. just about done
<zirpu> negronjl: are you going to post your slides from mongosv?
<_mup_> juju/ssh-known_hosts r436 committed by jim.baker@canonical.com
<_mup_> Key injection, a start
<nijaba> has anyone worked with peer relations? I have 2 questions about those:
<nijaba> (1) if all peers in a relation die and I start a new unit for them, do I get access back to relation variables that were set previously for it?
<nijaba> (2) how can I copy a file from one peer to another?
 * marcoceppi PROMULGATE PROMULGATE in best Dalek voice
<mchenetz> trying to use local provider, the machine is up, but doing, "juju ssh 0", gives me a connection refused. Is there not a ssh server by default? Do i have to use a charm first?
<mchenetz> sorry, got disconnected. Let me ask this one more time
<mchenetz> trying to use local provider, the machine is up, but doing, "juju ssh 0", gives me a connection refused. Is there not a ssh server by default? Do i have to use a charm first
<mchenetz> how do you get rid of state:pending under units?
<mchenetz> never mind... just took time :-)
#juju 2011-12-11
<SpamapS> looks like the EC2 account that WTF uses has exceeded its S3 bucket allowance
<SpamapS> Bootstrap aborted because file storage is not writable: Error Message: You have attempted to create more buckets than allowed
<SpamapS> http://wtf.labix.org/429/ec2-wordpress.out.FAILED
<marcoceppi> SpamapS: What is WTF?
<SpamapS> marcoceppi: niemeyer's mini-jenkins. ;)
<SpamapS> marcoceppi: wtf.labix.org , with no args, shows the two tests that run on each commit to juju trunk
<SpamapS> good for tracking down problems
<SpamapS> ok, now its time to go stuff myself silly at IHOP
<SpamapS> marcoceppi: hey, did you ever start writing tests for net.sh ? I just started poking at doing that.
<marcoceppi> SpamapS: I did a little bit, really nothing major.
<marcoceppi> https://code.launchpad.net/~marcoceppi/charm-tools/helper-tests
<marcoceppi> Still trying to figure out how to do an "all.sh"
<SpamapS> marcoceppi: ok good, I have a bunch fleshed out
<marcoceppi> Cool, if you've got the bandwidth then by all means :D
<SpamapS> marcoceppi: I'm looking for a tiny simple webserver to run on 127.0.0.1 to test ch_get_file
<SpamapS> boa might work
<marcoceppi> I was going to have it try to fetch a file from LP, but a lightweight webserver might be good too
<marcoceppi> Could use CherryPY
<SpamapS> I want it to be self contained
 * marcoceppi *nods*
<SpamapS> Because I'd like to run the tests in the package build, which has no network access
<marcoceppi> gotchya
<elmo> SpamapS: dear lord, dude
<elmo> boa is ridiculous overkill
<SpamapS> I don't know it at all
<SpamapS> just heard "embedded"
<elmo> you have python
<elmo> why not just run one of the simple http server modules?
<SpamapS> I 'spose thats simpler. :)
<elmo> python -m SimpleHTTPServer 8000
<elmo> will serve files from the current directory on port 8000
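elmo's one-liner is the Python 2 spelling; in Python 3 the same module is named `http.server`. A quick hedged check that it really does serve files from the current directory (the port 8002 and temp paths here are arbitrary, chosen to avoid clashing with the examples above):

```shell
# Python 3 renamed SimpleHTTPServer to http.server; same behaviour.
DIR=$(mktemp -d)
cd "$DIR"
echo hello > index.html
python3 -m http.server 8002 >/dev/null 2>&1 &
SRV=$!
sleep 1                               # let the server bind the port
BODY=$(python3 -c 'import urllib.request as u; print(u.urlopen("http://127.0.0.1:8002/index.html").read().decode().strip())')
kill "$SRV"
echo "$BODY"
```

Which is exactly why it fits the self-contained test requirement SpamapS mentions: no packages to install, no network beyond loopback.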
<marcoceppi> Well that seems easy enough, now how can we make it more complicated?
<elmo> I dunno - rewrite it in Go?
<marcoceppi> Seems like the only _obvious_ option is to write an HTTP server in Bash
<SpamapS> as fun as that sounds
<SpamapS> its been done so many times.. ;)
<SpamapS> marcoceppi: these tests have already turned up a few bugs in net.sh when running under set -u
<SpamapS> marcoceppi: I'm not sure I agree with the "try host then dig" approach
<marcoceppi> SpamapS: What would you recommend?
<SpamapS> hrm.. SimpleHTTPServer doesn't write a pid file.
<SpamapS> and dash's job management sux0rz
<SpamapS> Hm I guess I can exec it from another script
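SpamapS's workaround (exec the server from a wrapper script so a pid file gets written first) can be sketched like this; the file names, port, and the `--directory` flag (Python 3.7+) are illustrative assumptions, not from the original scripts.

```shell
# The stdlib HTTP server writes no pid file, so wrap it: the wrapper
# records its own $$ and then execs the server, so the recorded pid
# becomes the server's pid (exec replaces the shell process in place).
DIR=$(mktemp -d)
cat > "$DIR/serve.sh" <<'EOF'
#!/bin/sh
echo $$ > "$1/server.pid"
exec python3 -m http.server 8003 --directory "$1"
EOF
sh "$DIR/serve.sh" "$DIR" >/dev/null 2>&1 &
sleep 1
kill "$(cat "$DIR/server.pid")"   # the pid file now controls the server
```

This sidesteps dash's weak job control: the test harness only needs the pid file, not shell job bookkeeping.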
<marcoceppi> *whistles* or Bash?
<marcoceppi> :)
<SpamapS> marcoceppi: meh
<marcoceppi> <3
<SpamapS> marcoceppi: lp:~clint-fewbar/charm-tools/add-tests
<SpamapS> marcoceppi: probably just needs a tiny bit of love to get working
<SpamapS> marcoceppi: I have to run.. up against a hard stop.. but will be back later to finish it off
<osadmin> hazmat, I have added ppa as you recommended
<osadmin> hazmat, I see two packages ensemble and juju. Should I install both and how will it impact the default oneiric orchestra/juju/cobbler packages?
<marcoceppi> SpamapS: Cool, I'll take a look
<marcoceppi> SpamapS: I made a few small changes: lp:~clint-fewbar/charm-tools/add-tests ch_get_file tests are still failing halfway through though
<marcoceppi> lp:~marcoceppi/charm-tools/add-tests even
#juju 2013-12-02
<smoser> hey all
<smoser> per http://www.dreamhost.com/newsletter/1113.html#a2
<smoser> dreamcompute is now "officially in beta"
<smoser> so you could sign up at http://www.dreamhost.com/cloud/dreamcompute/
<smoser> and presumably get a account and then point juju at it
<smoser> (i say presumably because i have not gotten a response to my signup from earlier this morning)
<thumper> o/ smoser
<lazypower> Thats good news!
<marcoceppi> smoser: sweet, I'll give it a go as soon as I get creds
<hatch> marcoceppi hey wb :)
<marcoceppi> o/
<hatch> now go review my Ghost charm :P
 * hatch snickers and runs away
<marcoceppi> hatch: it's not in the queue, go put it in the queue
<Luca__> marcoceppi: I am back now. So in regards of network infrastructure, could you point some document that explains how networks are set up using charms?
<marcoceppi> hatch: If it's not listed here: http://manage.jujucharms.com/tools/review-queue then I'm not going to review it
<hatch> hmm
<hatch> marcoceppi so https://juju.ubuntu.com/docs/authors-charm-store.html#submitting is not a complete list of steps then?
<marcoceppi> hatch: it is, link to the bug
<marcoceppi> I'll see what's missing
 * marcoceppi is working on a juju charm submit command to make this process a little more automated
<hatch> https://bugs.launchpad.net/charms/+bug/1229377
<_mup_> Bug #1229377: Charm needed: Ghost blogging platform <Juju Charms Collection:New> <https://launchpad.net/bugs/1229377>
<hatch> I don't see where it says to add it to the review queue....or how for that matter :)
<lazypower> hatch: Did you add the "charmers" group to the bug?
<marcoceppi> hatch: huh, this should be in the queue. Charmers is watching the bug and it's in a new or fix committed status.
<hatch> lazypower yup
<marcoceppi> hatch: I'll open a bug and make a note to review this first tomorrow
<hatch> marcoceppi cool, so I did everything correctly then?
<hatch> I'm not really too concerned I just want to make sure that the documentation is up to date so outsiders don't run into issues
<marcoceppi> hatch: looks like it, I'll ping charmworld and figure out why this didn't get scraped
<hatch> great thanks
<hatch> marcoceppi in that list it says "store error on ghost"
<marcoceppi> hatch: that's something different, but equally interesting
<hatch> oh :) cool well I was just joking about rushing to get it approved, take your time
<marcoceppi> hatch: too late, rush order already processed and your card has been debited $125
<hatch> awwww man
<marcoceppi> there's a cancellation fee of $200
<hatch> man you must be taking business lessons from the government
<hatch> haha
<hatch> well thanks in advance, lemme know if you need any help with anything
<marcoceppi> hatch: hey, I got it to show in the queue
<marcoceppi> hatch: you need to assign the bug to someone, preferably yourself. I'll update the docs. Hopefully `juju charm submit` will be easy enough that the majority of instructions can be condensed to that one command
 * marcoceppi searches for other charms that may have fallen through the cracks
<hatch> marcoceppi ahh interesting thanks
<Luca__> Hi there
<Luca__> does anybody know how to reuse a machine once issued a juju destroy-machine command?
<Luca__> it seems even this machine goes back to ready status in MAAS, once picked up again for a different service provision it just stuck and does not do anything
<davecheney> Luca__: that is weird
<davecheney> it shouldn't do that
<davecheney> after destroy-machine
<davecheney> can you confirm that the machine is no longer listed in juju status ?
<Luca__> davecheney: I confirm it is no longer in the juju status
<Luca__> my understanding is that it should be reused and go through the OS reinstallation
<davecheney> yup
<davecheney> the only thing which could prevent that is constraints
<davecheney> /var/log/juju/machine-0.log on machine-0 will have the details of what is going on
<Luca__> I am checking now the machine, status is pending, however not doing any installation
<Luca__> let me check
<Luca__> what should I exactly look for? there is a bunch of logs
<Luca__> 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0
<davecheney> it will say worker/provisioner
<Luca__> nothing about worker/provisioner
<Luca__> the messages I can find are worker/firewaller
<Luca__> 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/0 2013-12-02 04:46:28 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit mysql-hacluster/1 2013-12-02 05:02:03 DEBUG juju firewaller.go:346 worker/firewaller: started watching unit rabbitmq-server/0 2013-12-02 05:02:03 DEBUG juju firewaller.go:495 worker/firewaller: started watching machine 13 2013-12-02 05:02:03 D
<davecheney> Luca__: nothing from the provisioner ?
<davecheney> in the whole file ?
<Luca__> yes, nothing in the whole file
<Luca__> but I could deploy mysql and ceph
<Luca__> hold on
<Luca__> filtering for provisioner return some
<Luca__> 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "13" pending provisioning
<Luca__> 2013-12-02 05:02:03 INFO juju.provisioner provisioner_task.go:239 found machine "14" pending provisioning
<Luca__> 2013-12-02 05:02:03 DEBUG juju.provisioner provisioner_task.go:303 Stopping instances: [nfqwq.master:/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ ac69q.master:/MAAS/api/1.0/nodes/node-ee42a908-59d4-11e3-9087-525400378304/ pwq4h.master:/MAAS/api/1.0/nodes/node-472158c0-59d6-11e3-9087-525400378304/]
<Luca__> 2013-12-02 05:02:05 INFO juju.provisioner provisioner_task.go:367 started machine 13 as instance /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ with hardware <nil>
<Luca__> but does not seem doing anything
<davecheney> Luca__: ok, time to check the maas log
<davecheney> if maas says that machine is in ready state
<davecheney> then it will have a record of why it refused to start a machine
<Luca__> from MAAS Web machine was ready
<davecheney> i think what has happened is things happened too fast
<davecheney> 3 seconds between maas stopping the machine then reusing it
<davecheney> not a lot of time for all the bookkeeping maas has to do
<davecheney> you could try deleting the service, unit and machine
<davecheney> or actually just delete the unit
<lifeless> davecheney: maas shouldn't have 3s of work to do
<davecheney> juju remove-unit $UNIT
<Luca__> ok
<davecheney> juju destroy-machine 14
<davecheney> then watching the log
<Luca__> ok
<davecheney> waiting til you see maas remove the unit
<davecheney> then juju add-unit $SERVICE
<Luca__> MAAS logs or JUJU logs?
<Luca__> 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/0
<Luca__> 2013-12-02 05:31:43 DEBUG juju firewaller.go:338 worker/firewaller: stopped watching unit rabbitmq-server/1
<davecheney> the juju machine-0 log for a start
<davecheney> but watching the maas logs is recommended as it can be very terse otherwise
<Luca__> it returns some errors but not sure whether it is related
<Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3057,"Type":"Provisioner","Request":"Life","Params":{"Entities":[{"Tag":"machine-14"}]}}
<Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3057,"Response":{"Results":[{"Life":"dying","Error":null}]}}
<Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3058,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-14"}]}}
<Luca__> machine 13 and 14's life is dying
<davecheney> ok, those arent' errors
<davecheney> have machine 13 and 14 returned to maas ready state ?
<Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}}
<Luca__> nope
<davecheney> it's possible they haven't been properly released back to maas so now nobody owns them
<davecheney> ok, this isn't good
<davecheney> maas has lost track of the machine
<Luca__> state is still dying
<davecheney> which version of juju are you using ?
<davecheney> which version of maas are you using ?
<Luca__> maas: 1.4+bzr1701+dfsg-0+1718+212~ppa0~ubuntu12.04.1
<Luca__> juju-core 1.16.3-0ubuntu1~ubuntu12.04.1~juju1
<Luca__> on 12.04.03
<davecheney> hmm, i think that maas install is quite old
<davecheney> hmm, actually, error: null is fine
<davecheney> i just checked the code
<Luca__> ok
<davecheney> so, maas gave you back the same machine you deleted
<davecheney> what state is maas saying /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304 is in ?
<Luca__> life: dying
<davecheney> in maas or in juju ?
<Luca__> juju status
<davecheney> what about maas ?
<Luca__> I am looking under /var/log/maas/
<Luca__> is there any specific file I should check? maas.log is empty and the others do not seem to have any information related to this node
<davecheney> so there is no record that maas assigned that node to match
<davecheney> 16:39 < Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:172 -> {"RequestId":3064,"Response":{"Results":[{"Error":null,"Result":"/MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/"}]}}
<Luca__> this is what I get from bootstrap node
<Luca__> but if I check maas node I cant find any reference to those under /var/log/maas/
<davecheney> what is the request that matches that response ?
<Luca__> 2013-12-02 05:32:53 DEBUG juju.rpc.jsoncodec codec.go:107 <- {"RequestId":3064,"Type":"Provisioner","Request":"InstanceId","Params":{"Entities":[{"Tag":"machine-13"}]}}
<davecheney> ok, that is getting weirder and weirder
<davecheney> how many machines did you delete
<davecheney> it looks like /MAAS/api/1.0/nodes/node-de6db216-59d4-11e3-8f27-525400378304/ has been assigned as machine/14
<Luca__> Yes I agree. I have never been able to reclaim a destroyed machine and what I was doing was replacing the virtual HDD with a new one, going through the discover process in maas and making the machine available once again
<Luca__> davecheney: any thought about that?
<Luca__> I did delete 2 machines
<Luca__> #13 and #14
<Luca__> which were the machines where I tried to install rabbitmq
<Luca__> getting this 2013-12-02 05:02:15 ERROR juju runner.go:200 worker: fatal "api": agent should be terminated
<Luca__> From here machine is in dying status
<makara> hi. Juju is stuck on "juju status". I've destroyed environments, setup .bashrc to handle keys, and then "juju init; juju bootstrap; juju status" without any other customizations
<makara> what's going on?
<makara> what's the difference between 'juju init' and 'juju generate-config' ?
<lazypower> makara: nothing, generate-config is an alias for init
<lazypower> makara: What do you mean juju is stuck on juju status? Can you provide logs?
<makara> i type in the command and it hangs
<makara> sorry, I'm too tired to send logs
<makara> i was hoping somebody else had the same problem :)
<Luca__> Is there anybody that could deploy openstack with charms here?
<marcoceppi> makara: are you sourcing your .bashrc?
<marcoceppi> makara: run juju bootstrap with --debug and --show-log then paste the output to http://paste.ubuntu.com
<makara> marcoceppi, by sourcing...i've added environment variables to .bashrc
<marcoceppi> makara: cool, when you have a moment run bootstrap with the extra flags and let us know the output
<makara> marcoceppi, http://pastebin.com/x1QCGGnD
<marcoceppi> makara: give it a few mins and run juju status --debug --show-log
<makara> marcoceppi, do I need any special ports open? I'm behind a fairly restrictive firewall
<makara> 2013-12-02 10:04:21 DEBUG juju.state open.go:88 connection failed, will retry: dial tcp 54.226.118.252:37017: connection timed out
<davecheney> makara: yes, you need 37017 and 27017
<marcoceppi> you need access to 37017 or a proxy server outside of your firewall
<davecheney> marcoceppi: um, i don't think juju cli can use a proxy
<davecheney> although you could deploy the gui
<marcoceppi> davecheney: :(
<marcoceppi> davecheney: no you can't, he can't reach the bootstrap node
<marcoceppi> davecheney: i thought you guys respected proxies. maybe not. could use sshuttle as a proxy i guess. either way we should find and document a work around
<davecheney> marcoceppi: it's complicated
<davecheney> we respect the apt proxy
<davecheney> but the mongo traffic isn't proxyable
<davecheney> and that sort of carried through into the api
 * davecheney raises a bug
<marcoceppi> i wonder if quickstart would fix this. you could get a gui on the bootstrap node directly without using the deploy command
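The firewall diagnosis above (makara's `dial tcp 54.226.118.252:37017: connection timed out`) comes down to whether ports 37017 and 27017 on the bootstrap node are reachable. That can be checked directly without juju; a minimal sketch, not part of any juju tooling:

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports the juju CLI needs open to the bootstrap node, per the discussion above.
JUJU_STATE_PORTS = (37017, 27017)

def check_bootstrap(host):
    """Map each required port to its reachability from this machine."""
    return {port: port_reachable(port=port, host=host) for port in JUJU_STATE_PORTS}
```

Running `check_bootstrap("54.226.118.252")` from behind a restrictive firewall and seeing False for 37017 would reproduce exactly the timeout makara hit.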
<teknico> jamespage, hi, got an MP for the glance charm, any chance having a look? thanks https://code.launchpad.net/~teknico/charms/precise/glance/preload-ubuntu-images/+merge/196341
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1256849
<_mup_> Bug #1256849: cmd/juju: no support for proxies <juju-core:New> <https://launchpad.net/bugs/1256849>
<davecheney> marcoceppi: maybe we should just always deploy the gui on the bootstrap node
<marcoceppi> +10000000000000000000000
<marcoceppi> or have a flag --without-gui
<jamespage> teknico, ah
<marcoceppi> for those that dont want it
 * davecheney goes to raise another bug
<davecheney> https://bugs.launchpad.net/juju-core/+bug/1256852
<_mup_> Bug #1256852: cmd/juju: juju should always deploy the gui when bootstrapping <juju-core:New> <https://launchpad.net/bugs/1256852>
<jamespage> teknico, ok - we need to discuss how we integrate simplestreams sync with glance; I'm not convinced this is the right way todo it
<teknico> jamespage, we do. I'm not convinced either, it couples the two of them too much
<jamespage> teknico, agreed
<jamespage> I'd envisaged a simplestreams-mirror charm
<jamespage> that you deploy (probably in a container)
<jamespage> and then relate to keystone
<jamespage> so it can register a product-streams endpoint and find out where things like glance are
<teknico> a separate charm altogether?
<teknico> I was considering just calling the tools/sstream-mirror-glance script, but it's not packaged (yet)
<teknico> (that script is in simplestreams)
<makara> hi. I've setup a juju control instance so I shouldn't have problems with ports. How can I use a specific SSH key when I deploy with juju?
<makara> i also want to install instances into a VPC
<marcoceppi> makara: we don't support VPC at this time, and you can specify ssh keys in your environments.yaml file (~/.juju/) using the authorized-keys option
<makara> marcoceppi, where can I find a complete list of options for the .yaml?
<davecheney> makara: if you do juju init --show
<davecheney> then all the options are presented
<davecheney> commented out (ie, set to default) in the sample config
<marcoceppi> davecheney: yeah, but we don't have /all/ of the options in there
<marcoceppi> like authorized-keys
<makara> and its not in https://juju.ubuntu.com/docs/config-aws.html
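For reference, the authorized-keys option marcoceppi mentions goes in the environment's section of ~/.juju/environments.yaml; a hedged sketch for juju-core 1.16 (the environment name and key material are placeholders):

```yaml
environments:
  amazon:
    type: ec2
    # Not emitted by `juju init --show`, but honoured by juju-core:
    # a literal public key to authorize on every provisioned machine.
    authorized-keys: ssh-rsa AAAA... user@host
    # Alternatively, point at a key file instead of inlining the key:
    # authorized-keys-path: ~/.ssh/id_rsa.pub
```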
<makara> what exactly does bootstrap do?
<makara> i've had to create an instance with juju on EC2 to get past company firewall issues. Now I just want to start deploying
<makara> can I just bootstrap to the instance with juju installed on it?
<makara> my pc > dmz instance with juju > bootstrap > wordpress
<makara> is a little verbose
<jamespage> teknico, yeah - a separate charm
<teknico> jamespage, invoking a simplestreams script would decouple the glance charm from simplestreams internals, would that be enough?
<jamespage> teknico, I'm not worried about that so much; but the sync process needs to be able to deal with
<jamespage> a) multiple instances of the glance charm in different service units for scale-out/ha
<jamespage> b) periodic re-syncing of images
<jamespage> c) endpoint registration for things like juju to use
<jamespage> if we do it right, we should also be able to sync juju tools into a openstack cloud in the same way
<teknico> wow, this expands the scope of the thing quite a bit :-)
<jamespage> by splitting it into its own charm, we don't have to worry about scale out (it's an async, periodic process)
<jamespage> teknico, I don't really want to have a feature which conflicts/confuses those objectives in the glance charm itself
<teknico> yeah, I understand it being a separate and extrinsic concern for the charm
<teknico> jamespage, can you expand a little on "register a product-streams endpoint" please?
<teknico> (which I believe is what your c) point refers to)
<jamespage> teknico, juju uses the keystone service catalog to lookup both image information and juju tool information
<teknico> oh right, the service catalog in keystone
<jamespage> the metadata for these are normally stored in swift, with the images in glance for ubuntu images, and in swift for juju tools
<jamespage> the sync charm is something that communicates with a number of services in openstack, not just glance :-)
<teknico> when you say "sync charm is" you mean "will be", right? you're not referring to something that already exists
<teknico> jamespage, and yeah, the current MP talks to both keystone and swift via simplestreams
<jcastro> hey evilnickveitch
<jcastro> http://discourse.ubuntu.com/t/how-to-translate-juju-docs/1284
<evilnickveitch> jcastro, ok, cool
<jcastro> jamespage, how familiar are you with eclipse virgo?
<jamespage> jcastro, still using juno right now
<jcastro> hey so they're writing a charm and have it all mostly working, but are having some upstart problems, I was thinking of pinging you when they hit the review queue to have you take a look?
<jcastro> It won't be soon
<jcastro> https://github.com/glyn/virgo-charm/ if you are bored though. :)
<X-warrior> marcoceppi: are u there?
<marcoceppi> X-warrior: yup
<X-warrior> oh I thought you were one of the creators of elasticsearch charm, but looking on git commit tree you just did the latest commit
<yolanda>  hi jamespage, pushed some updates to heat charm
<jamespage> yolanda, cool - I'll take a peek later!
<yolanda> thx
<X-warrior> Is instance-type not working on 1.16.3?
<marcoceppi> X-warrior: correct, it hasn't been fully ported from juju 0.7
<marcoceppi> X-warrior: https://juju.ubuntu.com/docs/reference-constraints.html
<benji> marcoceppi: I'm ready when you are: https://plus.google.com/hangouts/_/76cpit9hbj5c3kr1burqftqmk0?hl=en
<lazypower> Buenos Dias everyone
<X-warrior> If I deploy something with juju, it will save it on charmcache and in future if I deploy again, it will deploy the same version. Right?
<marcoceppi> X-warrior: so long as the environment is alive. If you destroy-environment the cache goes away
<X-warrior> marcoceppi: so if I destroy the machine where that one service is deployed
<X-warrior> it will still be in cache...
<X-warrior> Based on that, I think that upgrade-charm should work even if we don't have any instance with that service running...
<X-warrior> if that service exists in cache, it should be updated imo
<lazypower> window 3
<zzecool> Hello fellas i need your help , i just install juju locally on my pc and when i try to deploy the juju-gui  on the "watch juju status" i see that it creates a second machine that is pending . Is this normal ?
<zzecool> anyone ?
<marcoceppi> zzecool: yes, that's expected
<marcoceppi> zzecool: if you wanted to not have it create another machine, you can tell juju-gui to deploy to the bootstrap node using this line instead:
<marcoceppi> juju deploy --to 0 juju-gui
<zzecool> ohh thank you :)
<marcoceppi> zzecool: since you've "deployed" the juju-gui already, you'll need to first destroy it, before deploying again.
<marcoceppi> juju destroy-service juju-gui
<marcoceppi> juju deploy --to 0 juju-gui
<zzecool> how long does it take to deploy for the first times?
<marcoceppi> juju terminate-machine 1
<marcoceppi> zzecool: it depends on the cloud provider
<zzecool> it locally
<marcoceppi> and the charm, some charms can take a long time, others are pretty quick
<marcoceppi> zzecool: ah, for local deployments it needs to download the ubuntu cloud image, it's about 250MB so it can take some time
<zzecool> i see
<zzecool> really thank you
<zzecool> :)
<marcoceppi> zzecool: also, you can't use deploy --to 0 on local provider, so don't worry about it creating a new machine
<zzecool> ohh
<marcoceppi> zzecool: the 0 machine (bootstrap node) is technically your computer for the local provider
<marcoceppi> you wouldn't want to put the gui directly on your machine!
<marcoceppi> since you're not "paying" for the machines in the cloud provider, having an extra machine for the gui won't hurt
<marcoceppi> zzecool: what version of juju are you using? (juju version)
<zzecool> the latest
<zzecool> i used the ubuntu ppa
<marcoceppi> zzecool: cool, perfect
<zzecool> i think i misunderstood the juju thing
<zzecool> so all the deployment thing doesn't install the actual applications on my pc
<zzecool> is it something like a sandbox?
<marcoceppi> zzecool: no no, it's using LXC to create containers that simulate a cloud on your machine
<marcoceppi> except, with the local provider, machine 0 is technically your laptop/desktop. The rest of the machines juju deploys are LXC machines running on your laptop/desktop
<zzecool> LXC is like virtual machines right?
<zzecool> so i guess i cant use machine 0 to deploy things for security reasons ?
<zzecool> marcoceppi: Thank you for you help :)
<marcoceppi> zzecool: right, only with local provider, machine 0 is off limits
<zzecool> i see
<marcoceppi> LXC is conceptually the same thing as virtual machines, just faster and lighter weight
<zzecool> then my whole concept is a fail . The reason i installed juju was to easily install observium locally to monitor my pc
<zzecool> so i guess if i install observium i will monitor the LXC machine (1) and not my real pc...
<marcoceppi> zzecool: correct
<marcoceppi> zzecool: juju is a service orchestration tool, meant to drive bare metal and cloud machines
<marcoceppi> the local provider is a way to develop and hack on charms, what juju deploys, without needing lots of servers or spending money on a cloud
<zzecool> It is so nice though , it should be used for local install as well instead of synaptic or apt-get
<thumper> marcoceppi: you there?
<thumper> marcoceppi: FWIW, you can't deploy units to machine 0 with the local provider
<thumper> as the host machine IS machine-0
<thumper> zzecool: if this is the first time with the local provider, it is probably syncing the underlying ubuntu cloud images
<thumper> marcoceppi: with the local provider, machine-0 isn't able to host units (and should be constrained in the data model)
<thumper> if it thinks it can, it is a bug
<zzecool> thumper: yeap he already told me this
<thumper> zzecool: cool, just flicked through the chatter without reading in depth :)
<zzecool> <marcoceppi> zzecool: also, you can't use deploy --to 0 on local provider, so don't worry about it creating a new machine
<zzecool> :)
<zzecool> thumper: np thanks though
<dpb1> If I'm on a juju deployed unit, is there anyway for me to signal that I'm leaving the service and shutting down?
<thumper> dpb1: not yet
<thumper> dpb1: I'm assuming you mean the unit itself wants to leave?
<dpb1> Hey thumper, I just wrote to juju@ as well.  Yes, right.
<thumper> what is the use case for this?
<thumper> ah...
<dpb1> I guess the same use case that shutdown on the command line of an instance presents
<thumper> just read the email
<dpb1> ok, I think that makes it more clear
<thumper> I think we added 'destroy-machine --force'
<thumper> which works in this case (I think)
<thumper> not sure which release that is coming in though
<dpb1> ok... I can check that
<dpb1> thx for the pointer.
<davecheney> marcoceppi: really ? that is le suck
<marcoceppi> davecheney: which is?
<davecheney> marcoceppi: the fact we don't document all the options in juju init > sample.yaml
<marcoceppi> davecheney: oh, yeah. It is
<marcoceppi> we don't document it anywhere
<marcoceppi> you have to crawl through the code
<davecheney> i'm sure evilnickveitch won't appreciate us dropping that one in his lap
<marcoceppi> at least, we don't document all the fringe options
 * marcoceppi makes a bug
<marcoceppi> I might do it, well I might gather all the config options, I'll let someone smarter figure out what they mean
<davecheney> `juju is`
<marcoceppi> <3 easter eggs
#juju 2013-12-03
<yolanda> jamespage, btw, were you able to take a look at heat?
<mthaddon> if I have an extremely high revision number for a charm is that likely to cause problems when running upgrade-charm? "date +%s > revision && juju upgrade-charm" - my upgrade-charm hook seems to be running for a long time, taking a lot of CPU
<marcoceppi> mthaddon: interesting Why are you even changing the revision? Juju should do that for you during upgrade-charm
<ashipika> hey all.. just a quick one.. on bootstrap (in a VM): ERROR juju supercommand.go:282 unrecognised architecture: precise
<marcoceppi> ashipika: precise isn't an architecture, it's a series. What bootstrap/deploy command are you running?
<ashipika> juju bootstrap --upload-tools --show-log.. null provider..
<ashipika> i know it's a series, that's why i'm confused
<mthaddon> marcoceppi: this charm doesn't have a revision file in the tree, so I need to manually create it - I realise this is possibly non-standard behaviour, but when you're relying on juju upgrade-charm for code rollouts having that file versioned can be problematic
<marcoceppi> mthaddon: interesting concept. I have no idea what would cause juju to hang though. Try increasing the log verbosity for the service and seeing what shows up in the unit/machine log
<mthaddon> marcoceppi: upgrade-charm seems to be checking if each previous revision exists for this env somehow
 * mthaddon is just going to start from scratch
<ashipika> marcoceppi: found an error.. when trying to ssh to the computer it fails to add the host to the known_hosts file (permission issues), so there is an extra line in the output of the detection script.. all lines get shifted by one line down..
<ashipika> that is why it fails to properly parse the architecture
<ashipika> and gets the series instead
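ashipika's failure mode — a stray ssh warning shifting every field of positionally-parsed output by one line — is the classic argument for labelled parsing. A hypothetical sketch (juju's real hardware-detection script works differently; the field names here are illustrative):

```python
def parse_hardware(output):
    """Parse 'key: value' lines from a detection script, ignoring any
    noise (ssh warnings, MOTD lines) that doesn't match the labelled
    format. Positional parsing would misread shifted output; this won't."""
    expected = {"arch", "series", "mem", "cpu-cores"}
    fields = {}
    for line in output.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in expected:
            fields[key.strip()] = value.strip()
    return fields
```

With this approach the extra known_hosts warning line is simply skipped instead of making "precise" land in the architecture slot.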
<X-warrior> morning
<Luca__> marcoceppi: I keep getting this agent-state-info: 'hook failed: "ha-relation-changed"' when trying to add hacluster to mysql...
<Luca__> any idea?
<jamespage> marcoceppi, I'm helping Luca__ with that issue - worth noting that all of the openstack charms default to using eth0 for HA - but juju now configured a bridge automatically so it needs to be br0
<marcoceppi> jamespage: ah, cool
<jcsackett> sinzui: fighting g+, be at the 1x1 in a few.
<jcastro> jamespage, for the docs, for charm features we have "the service shouldn't run as root"
<jcastro> I want to crosslink to upstart docs to give people a clue on how they can do this
<jcastro> http://upstart.ubuntu.com/cookbook/#run-a-job-as-a-different-user
<jcastro> is this the right section?
<jcastro> marcoceppi, do we have a link to something like "how to use amulet" for charm authors?
<marcoceppi> jcastro: no, I'm writing that now actually
<marcoceppi> in preparation for the release
<jcastro> do you know where it will live URL wise?
<jcastro> I'd like to crosslink
<jamespage> jcastro, yes - thats right
<marcoceppi>  jcastro howto-amulet.html probably
<jcastro> authors-amulet.html seems to match more?
<jcastro> actually howto is fine
<jcastro> ok let's do that. :)
<jcastro> marcoceppi, hey I think I messed up an MP for docs
<jcastro> I just pushed a branch but it appears to have updated another one I had done earlier?
<jcastro> https://code.launchpad.net/juju-core
<marcoceppi> jcastro: it looks like you did a bzr push twice for two different branches?
<jcastro> yeah I dunno why it did that, sent nick a mail and he'll just pull  from the right one
<jcastro> hey sinzui
<jcastro> I am doing my talk proposal for SCaLE 12x in February.
<jcastro> " Any Ubuntu machine with an open SSH port can be managed by Juju." won't be a lie by then right? :)
<jcastro> aka how's the manual provider looking these days?
<marcoceppi> jcastro: thumper was saying he landed stuff to make it better last night
<sinzui> jcastro, buggy and specifically with ssh
<marcoceppi> err
<jcastro> sinzui, buggy as in bad or buggy as in "I have 2 months worth of work until the talk"
<marcoceppi> misspoke
<sinzui> jcastro, We do want this production ready in January
<jcastro> perfect, so submitting that statement for a February conference won't get me killed by the audience
<sinzui> jcastro, or...I use your conference as an excuse to escalate a cluster of ssh bugs
<jcastro> that would be fine
<jcastro> is there a tag for manual provider specific bugs I can follow along?
<sinzui> manual-provider actually
<utlemming> Just announced the Juju Quickstart Images: http://blog.utlemming.org
<jcastro> oh cool, I'll share that around in a minute utlemming
<arosales> utlemming, very nice blog post http://blog.utlemming.org/2013/12/beta-cross-platform-juju-development.html
<arosales> utlemming, I think there is a "quickstart" plugin that is different from yours though.  I think your vagrant images is more of a local charm development work flow
<arosales> for clarity
<arosales> utlemming, but really nice work !
<jcastro> yeah you can't call it quickstart
<jcastro> unless we start naming various tools and aliases under "quickstart"
<jcastro> which is fine by me if they behave the same
<utlemming> hrm, I agree. If the effect is the same, Quickstart plugin vs Quickstart images convey the same idea: getting started quickly
<jcastro> yeah
 * marcoceppi is having flashbacks to bundle discussion wrt naming
<jcastro> utlemming, you going to announce anywhere else or want me to handle it now? Don't want to double post if you plan on
<utlemming> jcastro: it landed on planet Ubuntu. But seeing as you're the man with all the connects, if you would like to spread the message, that'd be great.
<jcastro> utlemming, typo "Chose" the box in your header
<jcastro> also why not recommend 12.04 by default?
<rick_h_> utlemming: jcastro just a heads up that we might want to watch the quickstart branding as we bring the juju quickstart command up to users. https://launchpad.net/juju-quickstart
<jcastro> rick_h_, yeah, we should def keep that in mind
<lazypower> Hey thats cool. I was experimenting around with vagrant based juju development a while ago and had some marginal success using sshuttle to route the requests into vagrant....
<lazypower> marcoceppi: this is basically the route we took when you came up to visit.
<mxc> have some juju security questions
<mxc> which i haven't been able to answer from reading the docs
<mxc> specifically fire walling, creating juju instances, without exposing them (on azure at least) opens up a public port 22 to the world
<mxc> is it possible to deploy charms (say mongodb for example) with very strict ufw/iptables configs?
<mxc> or, would I have to basically fork the charms to do that?
<sarnold> mxc: investigate the subordinate charms
<mxc> thanks, but I may be missing something.
<mxc> here's my situation, basically i want to have an environment like :
<mxc> [ haproxy ] <--> [ app servers ] <--> [mongodb ]
<mxc> where the app servers and mongodb are completely unreachable from the outside world
<marcoceppi> mxc: by default only port 22 is available to servers
<marcoceppi> all other ports are disabled unless you explicitly enable them
<marcoceppi> you can supplement this, by creating a firewall charm
<marcoceppi> that you can deploy on all of the service you care about, that restrict access by creating ufw, iptables, whatever on that machines
<marcoceppi> this is done with a subordinate charm
<mxc> aaah ok. now I get how the subordinate charms help
<mxc> i create a ufw subordinate charm, add it to my config for the mongo charm etc
<mxc> thanks!
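The ufw subordinate marcoceppi describes boils down to a hook that keeps the default deny-incoming policy and allows a service port only from the addresses of related units (e.g. only the app servers may reach mongodb). A sketch of the rule-building step; the function and its inputs are illustrative, not an existing charm:

```python
def ufw_rules(port, allowed_addrs):
    """Build the ufw commands a subordinate's relation hook would run.
    ufw evaluates rules in the order they are added, so the per-peer
    allows go in before enabling, under a deny-incoming default."""
    rules = ["ufw default deny incoming"]
    for addr in allowed_addrs:
        rules.append("ufw allow from %s to any port %d proto tcp" % (addr, port))
    rules.append("ufw --force enable")
    return rules
```

A relation-changed hook would collect peer addresses via relation-get and hand them to something like this, then run each command.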
<med_> marcoceppi, is co-location still called co-location (kind of a meta question)
<marcoceppi> med_: you're talking about the --to command, and I assume containerization?
<med_> marcoceppi, related: can you have a constraint that only deploys "NEWCHARM" if "OLDCHARM" exists?
<med_> marcoceppi, thanks, that's probably what I needed, "--to"
<marcoceppi> we have "hulk smash" colocation, where you can just smash two services together with --to, we're making a better container story so you can truly co-locate two services on a physical machine via something like LXC
<med_> "hulk smash" is probably more what I was thinking of then. And I'm more worried about "already available to users" vs "coming soon"
<marcoceppi> med_: there's no constraint like that, you'd have to codify it yourself
<med_> nod, that's what I thought
<marcoceppi> med_: but we have plugins, so you could create `juju deploy-only-if`
<med_> ah, I've found: https://juju.ubuntu.com/docs/charms-deploying.html#deploying-to-machines
<marcoceppi> med_: juju help deploy should also contain a good amount of information
<med_> where do I read up on "deploy-only-if" or are you saying that's a plugin I need to write?
<med_> I think you are saying I could write that myself in my copious free time.
<marcoceppi> med_: yes, you could write that if you so wished to :)
<med_> thanks marcoceppi . Fount of knowledge as per usual.
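A juju plugin is just an executable named juju-<name> on $PATH, so the `juju deploy-only-if` idea is mostly one decision function over `juju status` output. A hypothetical sketch (this plugin does not exist; the status structure shown matches `juju status --format json` of that era):

```python
def should_deploy(status, required_service):
    """Return True only if required_service is already deployed in the
    environment, where status is parsed `juju status` output."""
    return required_service in status.get("services", {})

# A plugin wrapper would parse `juju status --format json` into a dict,
# call should_deploy(status, OLDCHARM), and only then exec
# `juju deploy NEWCHARM`.
```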
<marcoceppi> hey sinzui for the charts on the review-queue how far back do they go?
<sinzui> marcoceppi, The data varies. Some only from the moment the feature arrived in production.
<marcoceppi> sinzui: anyway to have only show the last X months? Or possibly reset the stats?
<sinzui> marcoceppi, not at this moment. jcsackett do you have any thoughts about marcoceppi 's question regarding http://manage.jujucharms.com/tools/review-queue
<marcoceppi> sinzui jcsackett it's just defeating that the max wait time is 22 months
<marcoceppi> that's not entirely accurate
<marcoceppi> and average wait time is more like 5 days now, but because of such outliers it's going to take a long time to drop down
<sinzui> I would think a breakdown of week, month, and quarter/year would be more informative
<marcoceppi> I'm really just concerned with what the last 30 days have been as an average
<marcoceppi> anything past that is just historical
<sinzui> marcoceppi, I have some list of metrics to gather. We can definitely tune the sparklines since we need to revisit the data to gather the metrics
<marcoceppi> sinzui: thanks! appreciate it
<jcsackett> sinzui, marcoceppi: i think we can probably alter the window we're looking at without too much work. but truthfully the problem is i think you're interested in current max, not historical max. which is also probably not too difficult a tweak.
<marcoceppi> well, max is one example. Really I just want to see a moving average of how long it takes for us to respond
<jcsackett> marcoceppi: is it fair to say that min and max aren't really that useful? we added it in on one mention, but average is all i hear anyone talking about, really.
<marcoceppi> I need to know, lets say, in the past X days, our average response time is X (hours hopefully, if not days)
<jcsackett> marcoceppi: so i'm hearing a yes. you're not looking at min and max, really.
<marcoceppi> Min and max are nice if they follow that same time window, but they're not nearly as important i don't think
<jcsackett> sinzui, marcoceppi: how do we feel about just dropping display of min and max?
<marcoceppi> jcastro: ^
<jcsackett> i'd like to keep collecting it in case we need to craft a different report, but on review queue we could just show average.
<sinzui> jcsackett, I don't think the min/max are interesting.
<jcsackett> well, if jcastro doesn't object, i can whack the display of that and tweak average to only show the last 30 days worth of info in not too much time at all.
<marcoceppi> jcsackett: Okay, executive decision, do that
<marcoceppi> I don't think jcastro will object
<sinzui> jcsackett, given that the edges are exceptional, we may want to drop them
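The windowed statistic marcoceppi is asking for — average response time over the last 30 days, so a 22-month outlier stops dominating the headline number — is straightforward once each review keeps a completion timestamp. A sketch with made-up field shapes, not the review-queue's actual schema:

```python
from datetime import datetime, timedelta

def recent_average_wait(reviews, days=30, now=None):
    """Average wait in hours over reviews completed within the last
    `days` days. Each review is a (completed_at, wait_hours) pair.
    Returns None if nothing falls inside the window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=days)
    waits = [wait for (done, wait) in reviews if done >= cutoff]
    return sum(waits) / len(waits) if waits else None
```

With this, the ancient max drops out of the displayed average automatically, which is the effect jcsackett's "current vs historical" tweak is after.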
<mxc> marcoceppi: does juju on ec2 use restrictive security group settings?
<marcoceppi> mxc: what do you mean by restrictive security group settings?
<mxc> well, basically i want to make sure that the mongo instances are only accessible from the app server instances
<mxc> and the app servers are only accessible from haproxy
<mxc> on azure, juju doesn't set up endpoint ACLs which is why your idea of an iptables subordinate service might have to be the way to go
<marcoceppi> mxc: yes, juju on ec2 creates a sec group per unit and applies a restrictive setting. Only when juju expose is run does it open ports
<mxc> ah, that was my question
<mxc> so that would save me from having to create a separate instance
<marcoceppi> mxc: if I understand you correctly, yes
<mxc> i mean separate service
<mxc> subordinated
<marcoceppi> ah, then yes
#juju 2013-12-04
<blackboxsw> hi folks, anyone know why a config-get cmdline call from inside a debug-hooks would not match the values returned from an external call to "juju get haproxy"?
<blackboxsw> I'm finding all "type: int"   values are being replaced with an empty unicode string for the haproxy charm when I'm inside a debug-hooks config-changed
<marcoceppi> blackboxsw: that's odd, what version of juju?
<blackboxsw> marcoceppi, 1.15.0-saucy-amd64
<blackboxsw> used --upload-tools on my bootstrap
<blackboxsw> deploying to cloud currently, will try lxc and see if I can reproduce the problem.
<blackboxsw> string type variables seem to be intact.
<blackboxsw> yeah will have to play in lxc land for a bit to see if the error is reproducible thx marcoceppi
<blackboxsw> marcoceppi, also, I was working off of juju trunk, heading back to the ppa
<blackboxsw> marcoceppi, yep problem solved... was versionitis of an old trunk juju updating to the PPA in saucy fixed it: 1.16.3-saucy-amd64
<lazypower> Is it safe to assume we will always be using ubuntu in a charm? or should I decidedly do sanity checks on environment for custom configurations?
<lazypower> To scope this question specifically, rsyslog has been default for a while now. Should I assume that's the default or should I build in detection for other systems like legacy syslog, syslog-ng?
<sarnold> lazypower: well, there's charms published to the juju charmstore and then there's the charms that other groups might write; I could imagine a sles, rhel, or centos user liking what juju has to offer but wanting to build upon their existing tools, and choosing to write charms that use e.g. systemd journal as their logging service of choice
<lazypower> Ok thats insightful. So to promote adoption I should do sanity checks for additional daemons and provide those utilities as well - this just changed scope. Thank you sarnold.
<sarnold> lazypower: well, that's up to you to define the scope of what you'll support with your charms.
<sarnold> lazypower: some shops might be all-fedora and want to embrace the systemd way of life; other shops might be heterogeneous and want to deploy charms on top of anything. They'll have more work than shops with tighter goals..
<lazypower> I'm in fairly constant contact with upstream about the charm I'm working on. They support everything under the sun - and if the guys on the other side of the fence want to help report bugs, I'll help them maintain it - to an extent. I dont want to try to make one charm ot rule them all, I'm currently running that mentality at my 9-5 and its exhausting.
<lazypower> but I see no reason to pigeon hole anyone because I'm lazy and dont want to read a configuration guide.
<sarnold> sounds like you'll have happy users :)
<lazypower> Lets hope, here's to keeping my promises
<sarnold> :)
<lazypower> Is there a good place for offtopic juju chat or have the IRC channels boiled down to business over pleasure?
<lazypower> looks like the hooks documentation just turned into chunky salsa after an edit or deployment - https://juju.ubuntu.com/docs/authors-hook-kinds.html
<lazypower> So, i've been tasked with finding an alternative to the remote_syslog rubygem. I found nxlog as a viable alternative but the repositories don't exactly provide it for 12.04 LTS. What would an acceptable alternative be for the charm? Is maintaining a PPA a viable alternative or is PPA use frowned upon in charming?
<sarnold> hey lazypower :) I'd again say this is up to you as a charmer to decide the scope. I'd say it is probably fine to use a PPA if the software is clearly not in the archive or your PPA version provides significant functionality beyond the version in the archive; it'd be best to document in the README which PPA is used and perhaps make archive/ppa/compile-from-source options -- or allow specifying which ppa to use...
<lazypower> ... choices... i dont know what to do with myself.
<sarnold> it's tough; if you make too many choices available to the end user they might wonder what benefit you provide compared to doing it all themselves. There are definitely times when it helps to be opinionated -- I love reading advice from opinionated people, even if I don't follow the advice. :)
<lazypower> Well played sir.
<X-warrior> I destroyed a service but it stays in the juju status list marked as 'life: dying' and never goes away. How do I proceed?
<X-warrior> http://pastebin.com/z3Mbi8HJ
<marcoceppi> X-warrior: are any of the elasticsearch units in an error state?
<X-warrior> nope, "agent-state: started"
<X-warrior> marcoceppi: nope, there is no machine/service on error state as far as I can see
<X-warrior> marcoceppi: did you say something my internet dropped
<ev> Following https://bugs.launchpad.net/juju-core/+bug/1257705 would someone mind kicking off a new upload of juju to brew?
<marcoceppi> X-warrior: nope. Can you show the full juju status output?
<marcoceppi> ev: as soon as a new juju is out? sinzui did we have another release?
<sinzui> marcoceppi, juju 1.16.4 in a few hours
<X-warrior> http://pastebin.com/B11PkKZt
<X-warrior> marcoceppi:
<marcoceppi> sinzui: excellent! thanks
<ev> cool, thanks
<marcoceppi> ev: homebrew will be updated in a few hours
<ev> whoop
<X-warrior> marcoceppi: did you find anything wrong?
<marcoceppi> X-warrior: nothing immediate, can you juju ssh elasticsearch/0 and run ps -aef | grep juju
<marcoceppi> wonder if there's a long running hook or something
<X-warrior> marcoceppi: I don't think so. http://pastebin.com/UU63aFqK
<marcoceppi> X-warrior: correct. What happens if you run juju destroy-service again?
<X-warrior> it just runs normally
<marcoceppi> X-warrior: actually, what machine was logstash-indexer on?
<X-warrior> 11
<marcoceppi> X-warrior: juju ssh 11; ps -aef | grep juju
<X-warrior> ok
<X-warrior> marcoceppi:  http://pastebin.com/ifZGMSt6
 * X-warrior tired
<iri-> I tried to `juju set-environment access-key=ASDF secret-key=GHJK` and I get `ERROR The AWS Access Key Id you provided does not exist in our records.`. Ping @rogpeppe
<X-warrior> marcoceppi: so any idea about this?
<iri-> (I did also change it in the environment.yaml first, which may have been a mistake)
<rogpeppe> iri-: changing it in environments.yaml first *shouldn't* have made a difference
<rogpeppe> iri-: where do you see the error being printed?
<iri-> rogpeppe: in the terminal where I ran juju set-environment
<rogpeppe> iri-: do you have a go dev environment on your local machine?
<iri-> rogpeppe: yes
<rogpeppe> iri-: perhaps you could try this:
<rogpeppe> iri-: go get code.google.com/p/rog-go/cmd/ec2
<rogpeppe> (that fetches a little utility i wrote for dealing with ec2 stuff)
<rogpeppe> iri-: then:
<rogpeppe> iri-: export AWS_ACCESS_KEY_ID=ASDF
<rogpeppe> iri-: export AWS_SECRET_ACCESS_KEY=GHJK
<rogpeppe> iri-: ec2 instances
<rogpeppe> iri-: if your key is ok, that should print your current set of running instance ids
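Collected in one place, the credential check rogpeppe walks through above (the keys shown are the placeholders from the conversation; assumes a Go toolchain is installed):

```shell
# Fetch the small ec2 utility and query EC2 directly, bypassing juju,
# to confirm that the keypair itself is valid.
go get code.google.com/p/rog-go/cmd/ec2

export AWS_ACCESS_KEY_ID=ASDF         # placeholder: your access key
export AWS_SECRET_ACCESS_KEY=GHJK     # placeholder: your secret key

ec2 instances   # lists running instance ids if the credentials work
```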
<iri-> rogpeppe: indeed it does.
<rogpeppe> iri-: hmm
<rogpeppe> iri-: what output do you get if you run juju status?
<rogpeppe> iri-: actually, that was probably a bad suggestion, as status takes ages
<rogpeppe> iri-: what do you see if you add --debug to the set-environment flags?
<iri-> rogpeppe: nothing out of the ordinary
<iri-> INFO juju.provider.ec2 ec2.go:193 opening environment "ec2" and the usual line containing a giant json object
<rogpeppe> iri-: so it didn't fail then?
<iri-> it said "ERROR" (as in the first line I said to the channel) and then didn't seem to update the environment
<rogpeppe> iri-: no other info on the ERROR line?
<iri-> rogpeppe: nope, it's as I showed
<rogpeppe> iri-: have you deprecated the old keys already?
<iri-> I made them inactive, yes
<iri-> (@ rogpeppe)
<rogpeppe> iri-: could you try reactivating them and then doing the set-environment again?
<rogpeppe> iri-: unfortunately we can't tell *which* access key id doesn't exist in their records
<iri-> rogpeppe: that worked! Damn good job I didn't delete the old credentials..
<iri-> rogpeppe: I wouldn't have expected to need the old one to work in order to revoke it..
<rogpeppe> iri-: indeed
<rogpeppe> iri-: i'm not quite sure why it does (the latest version of juju definitely does not)
<rogpeppe> iri-: i've managed to duplicate your error anyway
<iri-> rogpeppe: great.
<rogpeppe> iri-: does this file exist for you: ~/.juju/environments/ec2.jenv ?
<iri-> yes
<rogpeppe> iri-: ah, ok (i thought you were using an earlier juju version)
<rogpeppe> iri-: in that case, *that* is the place that's consulted for current environment keys by the client
<iri-> rogpeppe: so.. I should have edited that file? I'm not following..
<rogpeppe> iri-: it has an entry "bootstrap-config" containing all the keys that the environment was bootstrapped with
<rogpeppe> iri-: it shouldn't be necessary, but yes, editing that file would have fixed the problem
<rogpeppe> iri-: i'm just looking to find out why it failed the way it did
<rogpeppe> iri-: ah, i understand now
<rogpeppe> iri-: it does need the provider credentials to read the s3 bucket that contains the details of how to find the bootstrap instance
<rogpeppe> iri-: in the future we will be caching that instance's address locally, but in general there's no easy way around it
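For reference, the `.jenv` file rogpeppe mentions is YAML; a rough sketch of its shape (field values invented, and exact keys may vary between juju versions):

```yaml
# ~/.juju/environments/ec2.jenv (illustrative sketch, values invented)
user: admin
password: secret
state-servers:
  - 10.0.0.1:17070
bootstrap-config:
  type: ec2
  access-key: ASDF    # the credentials the client actually consults
  secret-key: GHJK
```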
<yolanda> jamespage, i updated heat charm with different auth key generation, and adding more tests, coverage is now 85%
<jamespage> yolanda, better!
 * jamespage looks
<ashipika> hi all..  any idea why mongo would fail to start on bootstrap?
<ashipika> Starting MongoDB server (juju-db)
<ashipika> Connection to 10.0.0.1 closed
<ashipika> ERROR juju.environs.manual bootstrap.go:105 bootstrapping failed, removing state file: exit status 1
<lazypower> ashipika: is there any other relevant information in the unit log?
<lazypower> I've experienced this behavior a few times when I've hastily tried to add shards to the cluster before the master node is spun up, but I attribute that to user error.
<ashipika> looking at it.. it's on a livecd.. so might be it ran out of disk space..
<ashipika> but i doubt it..
<lazypower> I won't be able to help much aside from pointing you at places to look; in about 3 hours I'll be at home and can try to reproduce what you're seeing if that helps.
<ashipika> ok.. i'll try to pastebin the mongodb log
<ashipika> if that helps
<lazypower> certainly. I'd be happy to look over the output
<ashipika> any other logs that might help?
<lazypower> Can you include the juju controller log for completeness? I'd like to see the communication between the nodes
<ashipika> http://paste.ubuntu.com/6521644/
<ashipika> hmm.. i'm doing all this on the same machine
<ashipika> same VM that is.. and there are no juju logs... strange
<lazypower> LXC?
<ashipika> just /var/log/juju/all-machines.log, which is  empty..
<ashipika> vmware player
<lazypower> ok, well looking at your mongodb log - this line itself http://paste.ubuntu.com/6521644/
<lazypower> there's something going on in one of the hooks that is restarting the daemon possibly?
<ashipika> this: got kill or ctrl c or hup signal 15 (Terminated), will terminate after current cmd ends ?
<lazypower> how about any logs in $HOME/.juju/local/logs?
<lazypower> Correct. That line says the daemon process received an interrupt
<ashipika> no such logs :(
<lazypower> hmm. What environment are you running JUJU in?
<ashipika> juju version: 1.17.0-precise-amd64
<ashipika> precise livecd
<ashipika> null environment
<lazypower> ah ok. I have not started experimenting with the null environment
<lazypower> the behavior changes a bit when using the null provider
<ashipika> i know.. experimental :)
<ashipika> that's what i'm doing.. experimenting..
<ashipika> trying to create a custom livecd that would boot a bootstrapped juju host
<lazypower> nice
<ashipika> crazy :)
<lazypower> it can be both :D
<ashipika> seems so, yes :)
<ashipika> hmm mongo version 2.0.4
<ashipika> latest stable mongo version should be 2.4.8
<ashipika> but that should not be an issue
<ashipika> lazypower: do you know, perhaps, who is working on null environment?
<dpb1> Is local provider not working currently on 1.16.3 and .4?   My machines stay in pending.
<dpb1> I get this in the cloud-init.log: http://paste.ubuntu.com/6521479/
<dpb1> It also seems like the instances don't get addresses assigned, and the console spins and waits for the network to come up.
<marcoceppi> ashipika: are you using mongodb from the cloud-tools archive?
<ashipika> sorry.. had to grab a bite.. i ran: juju bootstrap --upload-tools.. so whatever is used there
<hazinhell> dpb1, you need the cloud-init-output.log to see what's actually happening
<hazinhell> dpb1, lxc seeds cloud-init with a direct file inject for userdata
<dpb1> hazinhell: I found the answer: http://curtis.hovey.name/2013/11/16/restoring-network-to-lxc-and-juju-local-provider/
<dpb1> basically, an old dhcp package
<dpb1> removing the cached images fixed it
<hazinhell> ah.. 12.04.2 in lxc cache
<hazinhell> yeah.. that's bitten me before
<hazinhell> there really isn't a good way to tell what version is in the cache because it has a totally generic name
<dpb1> it was from February!
<dpb1> ya.
<dpb1> I think maybe juju should ship with a maintenance job for it or something, but there are concerns with that approach too. :)
<hazinhell> really it should be lxc's responsibility..
<dpb1> ya, you are probably right
<hazinhell> it's the one keeping the cache, it should be responsible for sanely updating it
<dpb1> true.  agreed.
#juju 2013-12-05
<aquarius> marcoceppi, ping about some emails not getting sent out?
<Luca__> anybody with some familiarity with nova-cloud-controller charm?
<makara> I've been playing with Juju on AWS. Looks like a great way to scale.
<makara> Problem I have is money. I'd like Wordpress, MySQL, Varnish, Nagios deployed, but then that amounts to 4 instances and volumes.
<makara> is it an idea to pack all that functionality onto one meaty instance?
<thumper> makara: yes, that can be done
<thumper> juju deploy mysql --to 0
<thumper> will put mysql on the bootstrap node
<makara> thumper, i have this problem that I'm behind a firewall, so any web interface can only be port 80
<makara> this creates the problem that if I install Wordpress, Juju-GUI, Nagios, and another simple static page server, they are not accessible
<makara> i could create multiple elastic IPs to that single machine
<makara> then I need to edit the Apache confs
<makara> can Juju handle this for me?
<thumper> not out of the box
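The colocation part thumper describes would look roughly like this (service names from the conversation; the single-port-80 constraint still needs manual handling, e.g. one reverse proxy or extra elastic IPs):

```shell
# Pack the whole stack onto the bootstrap node (machine 0) via --to placement.
juju deploy mysql --to 0
juju deploy wordpress --to 0
juju add-relation wordpress mysql
juju deploy varnish --to 0
juju add-relation varnish wordpress
juju expose varnish   # still only one service can own port 80
```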
<lazypower> wait.. you mean i could have deployed juju-gui to node 0?
<lazypower> Aye de mi thumper! i could hug you.
<_spacemonkey_> Greetings fellow jujulians. Er, jujubees. Uh, oh whatever.
<_spacemonkey_> Anyone here playing around with the jenkins and jenkins-slave charms?
<_spacemonkey_> So I used juju to deploy jenkins which works just fine.
<_spacemonkey_> Then I deployed jenkins-slave on another machine, and it is green as well :)
<_spacemonkey_> However from within jenkins they refuse to talk to each other.
<_spacemonkey_> When I ask jenkins to configure the jenkins-slave via java webstart, it gives me a different URL string than what is actually running on the slave (via juju).
<_spacemonkey_> I remembered the relationship, but that seems to be the part that is bustomicated. Any ideas?
<rick_h__> lazypower: check out quickstart as well. It deploys the gui to 0 for you, does some new hotness. http://jujugui.wordpress.com/2013/12/05/new-juju-gui-0-14-0-juju-quickstart-0-5-0/
<marcoceppi> _spacemonkey_: We use jenkins and jenkins-slave quite a bit, but I've found some odd stuff happens when you add relation it doesn't always wok
<_spacemonkey_> Yeah, definitely no wok here. ;-)
<marcoceppi> _spacemonkey_: fwiw, the charm store version of the jenkins charms is a bit out of date. I think we've done work in a personal branch with the intention of perfecting it and merging it back in.
<_spacemonkey_> Ok will manually config the slave agent on the second machine.
<marcoceppi> _spacemonkey_: let me see if I can find these
<_spacemonkey_> Oh, cool!
 * _spacemonkey_ waits patiently
<_spacemonkey_> Grazie mille.
<lazypower> rick_h__: Awesomeness. Do you have a link for quickstart or is it part of juju core?
<rick_h__> lazypower: it's in the stable ppa I believe. The project is at https://launchpad.net/juju-quickstart
 * lazypower hat tips
<adam_g>  what is the best method for getting juju tools for a MAAS environment w/o access to S3?  the environment has access to a public swift bucket containing all the tools i need
<adam_g> mgz, ^
<mgz> adam_g: you should be able to use sync-tools and give it a local dir
<adam_g> mgz, dont suppose i can sync tools from a URL? :)
<mgz> ...but if it's in swift already, you can just set the tools-url config
<adam_g> ah
<mgz> or whatever we renamed it too
<adam_g> yeah, trying to find what that setting is called
<mgz> adam_g: tools-metadata-url, needs the simple streams bits as well
<mgz> which you can generate with the juju-metadata plugin
<adam_g> mgz, simple stream bits == a populated image-metadata-url ?
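In environments.yaml terms, mgz's suggestion looks roughly like this (the environment name and swift URL are placeholders; the container must also hold simplestreams metadata, which the juju-metadata plugin can generate):

```yaml
# environments.yaml fragment (placeholder values)
my-maas:
  type: maas
  # ... usual maas settings ...
  tools-metadata-url: https://swift.example.com/v1/AUTH_abc/juju-tools
```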
<_bjorne> Hello, does anyone know why only one of my nodes installs lxc and mongodb and runs a dist-upgrade? I have two nodes and it only happens on one of them.
<_bjorne> is anyone here?
<sarnold> _bjorne: you may need to hang around for a little while
<_bjorne> ok.. I'll see if anyone has some information about my problem :) I'll be without hair soon :) if I can't solve this.
<ashipika> bjorne: sorry.. what is your question again?
<_bjorne> I have some problems with maas and clients. Should every client (node) run a dist-upgrade and install lxc and mongodb after I set up a new node? Only one node does that, not the others... I wonder why?!
<_bjorne> I'm using ubuntu 13.10 as server and clients.
<ashipika> so.. maas.. when you go to the web ui.. what is the status of your computers?
<_bjorne> bofh is allocated to root if you mean that?
<ashipika> they should be in ready state, i think.. disclaimer: i experimented with maas a bit a few weeks ago..
<ashipika> so i do not know much about it..
<zradmin> _bjorne: as someone who's been down that path... go with 12.04 LTS as the charms are all written for that. This link will get you the latest juju/maas for 12.04 https://wiki.ubuntu.com/ServerTeam/CloudToolsArchive
<ashipika> and then when you do juju bootstrap one node should be assigned to juju
<_bjorne> no, I'm new to maas, not new to computers :)
<ashipika> +1 on zradmin's point.. use 12.04
<_bjorne> why does only one node run dist-upgrade and install lxc and mongodb? shouldn't that be installed on every node by some automatic script?
<ashipika> only the bootstrapped node has mongodb.. this mongo stores all the configuration and state data, i think
<_bjorne> ok, so if a node doesn't have lxc and mongodb, juju can't find it? or is something else wrong?
<ashipika> jujud daemon should run on all nodes.. sorry.. i'm new with juju, just trying to help with what i think i know :)
<_bjorne> on the maas server I installed juju-core and ran juju bootstrap, and after that I can see the first node in juju status but not the other node, because some script didn't run like it should... on the second node...
<_bjorne> on the first node, after a fresh install, the node runs a dist-upgrade, installs lxc and mongodb and starts it.. and that doesn't happen on the second node... I can't find any information about it...
<ashipika> when you say juju bootstrap it will allocate a new node by default..
<ashipika> brb
<bladernr_> Hey, I have a dumb question...  when I bootstrap with juju, does the bootstrap node always use the same Ubuntu release as the juju controller? or can I specify a different release... e.g. can I have juju on a Saucy laptop but have it bootstrap to another machine using Precise as the OS on the bootstrap node?
<sarnold> bladernr_: that should work well; juju will pick precise by default, and I expect once 14.04 LTS is out, juju will probably default to trusty instead. (but don't quote me on -that- :)
<bladernr_> how would I specify to bootstrap using precise?
<sarnold> bladernr_: ah, here we go, "default-series" key in the environment: https://juju.ubuntu.com/docs/config-aws.html
<bladernr_> ahhh.. thanks.  I was looking for that but kept finding more basic stuff...
<bladernr_> much appreciated
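sarnold's answer above as an environments.yaml fragment (the environment name is a placeholder):

```yaml
# environments.yaml fragment: always bootstrap precise,
# regardless of the release on the client machine
amazon:
  type: ec2
  # ... credentials etc. ...
  default-series: precise
```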
#juju 2013-12-06
<zradmin> are the quantum/neutron charm experts on at the moment?
<zradmin> i was able to rebuild my entire environment today from maas controller to openstack and the database still isn't being created/configured for neutron
<melmoth> hi there ! I have been asked if there is a way to forward juju-log to a remote syslog server.. I guess the answer is no.
<melmoth> but hey, may be someone here knows how to do that ?
<jamespage> mgz, around? I have a python question
<jamespage> mgz, nm - I figured it out
<_bjorne> is anyone here who knows maas? when a node does a dist-upgrade and installs lxc and mongodb, why does it fail? it only works on one node... not the other nodes.
<Guest40944> What does this mean? requires: juju-info: interface: juju-info scope: container
<Guest40944> I'm trying to deploy logstash-agent (to a new machine) but it doesn't execute; in juju status it just shows as "logstash-agent: charm: cs:precise/logstash-agent-0 exposed: false"
<jamespage> X-warrior, that's a subordinate charm; you deploy it and then relate it to a normal service, and it will deploy and configure alongside it
<jamespage> that relation is used between the principal charm and its subordinate to pass information
<jamespage> and can be typed like a normal relation
<X-warrior> jamespage: interesting I will try it later, have to go now, thanks for the help!
<jamespage> np
<dpb1> hi -- on openstack, is there anyway to get juju-core to use "daily" images instead of "released" ones?  I'm trying to spin up a trusty box.
#juju 2013-12-07
<mxc> been banging my head against the wall on this for a while:
<mxc> 2013-12-07 02:21:11 INFO juju.environs open.go:156 environment info already exists; using New not Prepare
<mxc> 2013-12-07 02:21:13 ERROR juju supercommand.go:282 failed to get blob "provider-state": Azure request failed: AuthenticationFailed - Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
<mxc> RequestId:75c99034-8f50-452f-bb47-a948949e6fba
<mxc> trying to bootstrap in azure using the same set up that has worked on an azure machine before
<mxc> the difference is that i'm trying to run this from a non-azure machine
<mxc> could that be part of the problem?
<mxc> also, is there a way to get more verbose output?  I've tried -v and -vvv
<mxc> ugh, timezone issue
<mxc> the time on my virtual machine got out of sync
<_bjorne> is anyone here who knows cloud-init and maas? why do the nodes fail to read user-data?
<_bjorne> is someone here who can explain whether I need to make a user-data file and cloud-config.txt on the maas server?
<_bjorne> someone live?
<ashipika> hi guys..
<ashipika> having problems deploying juju-gui charm
<ashipika> here's the log: http://paste.ubuntu.com/6537049
<ashipika> seems to fail when it wants to chown something to a nonexistent "ubuntu" user
<ashipika> anybody?
<InformatiQ> ashipika: what provider?
<ashipika> null
<ashipika> ?
<InformatiQ> ashipika: i meant provider
<InformatiQ> local or aws or what?
<ashipika> null..
<ashipika> you mean environment
<rick_h__> ashipika: yea, it's expecting an ubuntu user account on the system, which might not be true in your own systems.
<rick_h__> ashipika: can you file a bug with your paste https://bugs.launchpad.net/juju-gui please?
<ashipika> sure! thnx for the info
<rick_h__> ashipika: to work around it you can create an ubuntu user on the system for now
<rick_h__> ashipika: not sure what else you'll hit as you're hitting a code path not planned for in the charm atm
<ashipika> rick_h: is there another way to install juju-gui?
<rick_h__> ashipika: hmmm, I mean you can manually install things, but it's non-trivial compared to the charm
<rick_h__> the best thing I can think to do would be to duplicate the environment expected with an ubuntu user with sudo permissions
<rick_h__> I don't recall if it uses the sudo permissions for anything, but that's the only other thing I can see hitting compared to say aws, openstack, etc
<rick_h__> on most cloud systems the ubuntu user has password-less sudo access
<ashipika> hmmm.. not the safest thing imo
<ashipika> ok.. filed a new bug https://bugs.launchpad.net/juju-gui/+bug/1258827
<_mup_> Bug #1258827: Juju-gui deploy requires an "ubuntu" user <juju-gui:New> <https://launchpad.net/bugs/1258827>
<rick_h__> ashipika: thanks, I'll bring it up on Monday to the team
<ashipika> thanks.. for now i will follow your advice and create an ubuntu user with sudo permissions..
<rick_h__> ashipika: cool, sorry you hit it. that provider is kind of new and we've not tested with it much. I think when we have, we've brought up ec2 instances which have the ubuntu user and haven't hit it.
<ashipika> i know i know :) love playing with cutting edge things ;)
#juju 2014-12-01
<themonk> lazyPower, hi
<themonk> are you there?
<themonk> i am facing a problem, after juju-core update from apt
<themonk> this is the error: WARNING discarding API open error: unable to connect to "wss://172......:17070/environment/4aaa9a83............/api" ERROR Unable to connect to environment "amazon". Please check your credentials or use 'juju bootstrap' to create a new environment.
<themonk> does anyone know how I can fix it?
<mbruzek1> Hello themonk
<mbruzek1> so juju was working prior to the update?
<themonk> mbruzek1, hello, yes
<mbruzek1> What was the operation that generated this error?
<themonk> juju status
<lazyPower> themonk: has your bootstrap node IP changed since you bootstrapped?
<mbruzek1> themonk: Can you figure out the IP address of the Juju bootstrap node?  Perhaps you can use your AWS control panel.  See if that matches what Juju thinks is your bootstrap node.
<mhall119> jcastro: I feel like you need to remix http://xkcd.com/303/ and s/compiling/deploying/ for Juju
<khuss> When we install OpenStack using Juju, is there any option to specify the  storage network?
<designated> does installing openstack using juju charms account for the use of multiple bonded NICS, or is this more of a MAAS question?
<noise][> Anyone here working with logstash (ELK stack) charms? There seem to be a ton of different charms available but hard to tell what's current/recommended
#juju 2014-12-02
<lazyPower> noise][: I have
<lazyPower> noise][: are you just looking for a specific charm to get started with? cs:trusty/logstash cs:trusty/elasticsearch and cs:trusty/kibana should provide everything needed to provide the core infra.
<lazyPower> there's been some work on logstash-forwarder (the GO log forwarder) - or you can use logstash-agent if you dont mind taking the java memory footprint hit.
<noise][> lazyPower: probably missed you now, but yes, was looking at the diff between logstash proper (which can send direct to ES) vs forwarder (needing to have separate logstash indexers)
<LinStatSDR> hello
<LinStatSDR> hello
<LinStatSDR> getting a crazy bootstrap error saying invalid hardware information for node
<LinStatSDR> ERROR cannot start bootstrap instance: getInstanceNetworkInterfaces failed: invalid hardware information for node "/MAAS/api/1.0/nodes/node-db43f8cc-79f2-11e4-9246-90e6bad76540/"
<jackweirdy> Hey all.  Following the docs here: https://juju.ubuntu.com/docs/authors-charm-writing.html
<jackweirdy> $ juju charm create vanilla
<jackweirdy> ERROR unrecognized command: juju charm
<jackweirdy> Any ideas? Output of juju --version is 1.20.7-unknown-amd64
<rick_h_> jackweirdy: you need to install charm-tools to have that available
<jackweirdy> Thanks. I'm on macos - do you know if that is in homebrew? :)
<jackweirdy> oh, yeah it is (brew install charm-tools). Thanks :)
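To recap the exchange: the `juju charm` subcommands come from the separate charm-tools package, so the setup is:

```shell
# Install charm-tools, then scaffold a new charm skeleton.
sudo apt-get install charm-tools   # on Ubuntu
# brew install charm-tools         # on Mac OS X, as above
juju charm create vanilla          # creates ./vanilla with hook stubs
```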
<thebozz> Hey guys, is there any plan to support more advanced configurations in the ELK charms? I'd like to customize ES clusters better, and the Logstash charm is kind of lacking as it is right now.
<jackweirdy> Hmm, the command exists, but when I run the create command I get an OSError because a file is missing. Can't decipher it though https://gist.github.com/NotBobTheBuilder/e3f3e02796889692b3a4
<rick_h_> jackweirdy: hmm, can you file a bug https://github.com/juju/charm-tools ?
<jackweirdy> Sure can :)
<rick_h_> ty!
<lazyPower> thebozz: I'm kind of a maintainer of the ELK stack when i have time to work on it - pull requests are welcome for any features you want to add
<mwak> hi
<thebozz> lazyPower: alright! We're still setting up our MAAS/Juju cluster, but as soon as I can get my hands on it I'll start experimenting and committing stuff.
<lazyPower> thebozz: sounds great :)
<lazyPower> hello mwak
<noise][> lazyPower: is there a version of the logstash charm that has lumberjack input enabled, for use with logstash-forwarder?
<lazyPower> noise][: the current edition in the store only supports logstash-agent, and has a redis-backend
<noise][> everything I'm finding seems to be using redis to shuttle between logstash-agent and logstash-indexer
<noise][> ok, so I'll have to fork it then
<lazyPower> noise][: the other options haven't been developed as far as I know. However lumberjack + the logstash-forwarder would be a welcome feature
<noise][> ok, thanks for confirming
<lazyPower> noise][: yeah, branch it - implement a new relationship that implements reconfiguring for lumberjack and I'd happily review it
<noise][> lazyPower: i'm pretty new to charms so i might just hack it to include lumberjack input, take a port option, and leave config on the forwarder side manual (as it is in https://jujucharms.com/u/chris-gondolin/logstash-forwarder/trusty/6 currently)
<lazyPower> noise][: nobody benefits when you do it that way :) I'm here to help if you decide to commit changes back.
<noise][> lazyPower: agreed, let me see what I can do :)
<lazyPower> note to self, do not run bundletester and wonder why it's hanging when you're in an active debug-hooks session trapping hooks
<lazyPower> >.>
#juju 2014-12-03
<mwak> hi
<mwak> I have created a connector for Online Labs and Juju
<mwak> Online labs is atm in preview, if anyone want to try Juju on it, let me know
<mwak> (Online Labs is an IaaS service based on dedicated arm servers)
<marcoceppi> mwak: I'm interested!
<ezobn> Hi all. Is there any way to tell juju to create a KVM instance on a machine with specified constraints? e.g. like this: juju add-machine --constraints="root-disk=64G mem=8G" kvm:22
<ezobn> this is not working in my case
<ezobn> it just creates the default cpu-cores=1 mem=512M root-disk=8192M
<ezobn> so regardless of what I set with constraints, it creates the following: machine-22: 2014-12-03 14:38:28 INFO juju.provisioner.kvm kvm-broker.go:103 started kvm container for machineId: 22/kvm/3, juju-machine-22-kvm-3, arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
<mwak> marcoceppi: PM
<hazmat> mwak, awesome btw, and thanks for the invite, traveling today but will check it out and give some feedback
<mwak> hazmat: ?
<mwak> kapilt?
<hazmat> mwak, yup
<mwak> alright :)
<ejat> im having this "agent-state-info: 'hook failed: "install"'
<ejat> for limesurvey charm
<ejat> can someone help me ?
<tvansteenburgh> ejat, juju ssh limesurvey/0 and tail /var/log/juju/unit-lime*
<ejat> tvansteenburgh: http://paste.ubuntu.com/9355865/
<tvansteenburgh> ejat, looks like download location is no longer valid
<tvansteenburgh> :(
<ejat> old charm ?
<ejat> looking at the charm : LURL="http://download.limesurvey.org/Latest_stable_release/limesurvey200plus-build121115.tar.bz2"
<tvansteenburgh> right
<ejat> latest stable : http://www.limesurvey.org/en/stable-release/finish/25-latest-stable-release/1204-limesurvey205plus-build141126-zip
<tvansteenburgh> ejat, gotta step away for a while but i'll try to get this fixed
<ejat> ok thanks ... tvansteenburgh
<ejat> really much appreciated
<tvansteenburgh> in the meantime you can try updating the charm to use other DL url if you want
<tvansteenburgh> you'll need new hash too
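Swapping the download URL also means regenerating the checksum the install hook verifies. A minimal sketch of the hashing step, using a stand-in file (in practice you would hash the newly downloaded release tarball):

```shell
# Create a stand-in for the downloaded release and compute its checksum;
# the resulting hex digest is what the charm's expected-hash should become.
printf 'example release contents\n' > release.tar.bz2
sha256sum release.tar.bz2 | awk '{print $1}'
```

The printed digest then replaces the old hash next to `LURL` in the charm.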
<ejat> how to reinstall juju agent ?
<ejat> tvansteenburgh: the db hook also broke
<tvansteenburgh> ejat, thanks, just got back, i'll look for that
<ejat> tvansteenburgh: ok .. thanks
<ejat> brb
#juju 2014-12-04
<sebas5384> hey!
<thumper> o/
<sebas5384> hey thumper o/
<sebas5384> was looking for someone to talk to about the mac osx workflow
<sebas5384> the other day i was doing an experiment with vagrant, vbox and network bridges
<sebas5384> configuring the lxc to use them
<sebas5384> so the containers would appear directly in the host
<sebas5384> and it worked! hehehe
<sebas5384> when I say mac os x, I mean the whole vagrant workflow
<sebas5384> lazyPower: hi o/
<stub> tvansteenburgh: I've sorted lazr.authentication on pypi btw, so the scary pip options will no longer be needed.
<stub> tvansteenburgh: lazr.authentication should be correctly registered in pypi now
<tvansteenburgh> stub, that's great news, thanks! \o/
<lazyPower> stub: incoming beer vouchers
<stub> Just what I need, more virtual beers
<lazyPower> you can exchange them for virtual pizza vouchers as well
<lazyPower> Do you attend any of the cloud sprints?
<stub> lazyPower: If someone invites me ;)
<stub> I'm pretty much a team of one, so only go to other peoples sprints.
<arbrandes> jamespage, hey there!  I'm trying to set up highly available OpenStack with Juju, but without MaaS.  I believe I'm hitting https://bugs.launchpad.net/charms/+source/nova-cloud-controller/+bug/1391784 ( HA failure when no IP address is bound to the VIP interface).
<mup> Bug #1391784: HA failure when no IP address is bound to the VIP interface <openstack> <cinder (Juju Charms Collection):In Progress> <glance (Juju Charms Collection):In Progress> <keystone (Juju Charms Collection):In Progress> <neutron-api (Juju Charms Collection):In Progress> <nova-cloud-controller
<mup> (Juju Charms Collection):In Progress> <openstack-dashboard (Juju Charms Collection):In Progress> <percona-cluster (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1391784>
<arbrandes> Is there any documentation on how to properly set up what the charms require in terms of containers and network interfaces?
<arbrandes> I tried following https://wiki.ubuntu.com/ServerTeam/OpenStackHA with 23 nodes, and had no luck.  Ideally, there'd be a way to do this with containers on fewer nodes.
<arbrandes> Note: I'm trying to run this on OpenStack itself, so I have to deal with port security on the underlying "hardware".
<ezobn> Hi, Is it possible to set constraints when juju create KVM machine, using existing maas provisioned machine ? I have tried the --constraints option, but it is not working ...
<lazyPower> ezobn: juju should hand off those constraints to MAAS and MAAS will do its best to fulfill the request
<lazyPower> ezobn: what were your constraints? assuming typical ones like memory=2G?
<ezobn> yep - based on existing tags on physical machines
<ezobn> but I want to set mem=8G, not just the 512M default
<ezobn> lazyPower: creating machine-22: 2014-12-03 14:38:28 INFO juju.provisioner.kvm kvm-broker.go:103 started kvm container for machineId: 22/kvm/3, juju-machine-22-kvm-3, arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
<lazyPower> ezobn: which version of MAAS/Juju?
<ezobn> lazyPower: juju:1.18.4-trusty-amd64, maas:1.5.4+bzr2294-0ubuntu1.1
<lazyPower> ezobn: 1 moment while i check the release notes - i'm not sure that 1.18 supports maas tagging (But it may i'm not certain)
<ezobn> lazyPower: I am using tags on maas via juju, but can't use them when creating kvm VMs
<ezobn> lazyPower: so just wondering, is it possible to use them when creating KVM units on a physical machine? or is some other mechanism intended?
<lazyPower> ezobn: as i understand it you define the constraints when doing enlistment to maas
<lazyPower> ezobn: so i dont think any of those constraints will be handed off
<ezobn> lazyPower: those tags work fine. But I need some way to tell the juju worker to create the VM with more memory ;-)
<lazyPower> ezobn: juju doesn't tell maas to actually 'create' the vm
<lazyPower> ezobn: juju requests a machine from maas, and maas has a pool of vm's already enlisted and available in a pool. it just returns that vm from the pool to juju
<ezobn> lazyPower: yes, understood. But then juju uses libvirt to create KVM units on the added machine.
<ezobn> lazyPower: so by default only these constraints are used: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
<lazyPower> ahh so you're doing juju deploy --to 1:kvm  correct?
<lazyPower> as an example
<ezobn> lazyPower: yes
<lazyPower> ok my mistake, i thought this was a layer above in the request
<lazyPower> Good question, I haven't done that. I've only used lxc with deploy --to
<ezobn> lazyPower: juju add-machine --constraints="root-disk=64G mem=8G" --to kvm:22 f.e. ;-)
<lazyPower> ezobn: bootstrapping and investigating - give me a bit to look into it
<ezobn> lazyPower: yes, lxc is a good option, but it all works well with KVM too ... so just options ;-) I'm trying to get the metaswitch clearwater charms working... and they have some custom kernels, as I have learnt ...
<lazyPower> Indeed they do, i was on the early team working with them and their solutions
<lazyPower> the nice part about kvm containers is you have dedicated resources vs sharing with the lxc containers - and this *should* work
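For reference, the placement commands under discussion look like the following (a sketch assembled from this exchange; the service name and machine numbers are illustrative, and these need a bootstrapped juju environment to run):

```shell
# Deploy a unit into a new KVM container on an existing machine (machine 1 here):
juju deploy --to kvm:1 mysql

# Request a KVM container on machine 22 with explicit constraints, as ezobn tried;
# per this log, juju 1.18 ignored the constraints and the container came up with
# the defaults (arch=amd64 cpu-cores=1 mem=512M root-disk=8192M):
juju add-machine --constraints="root-disk=64G mem=8G" --to kvm:22
```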
<jamespage> arbrandes, I hope not - that was a very specific edge case causing that specific bug
<ezobn> lazyPower: will be glad to hear any advice ;-) Good to know that I am on the right track with the metaswitch charms ;-)
<arbrandes> jamespage, I hope so too.  I guess my question is: if I'm using the trusty juno charms in manual mode, deploying to VMs running on an OpenStack cloud, should I expect any trouble trying to get HA working for all services?
<arbrandes> In other words, no MaaS here.
<jamespage> arbrandes, most likely yes
<jamespage> arbrandes, we use openstack internally to test HA but we do funky things with neutron; specifically disabling all port level security in the cloud
<jamespage> which allows us to float IP's and have that just work
<jamespage> neutron security groups would by default just stop that from happening
<arbrandes> jamespage, that's what I feared. :)
<jamespage> ditto nova ones
<arbrandes> jamespage, I suppose you use containers for testing HA, which is why you need to disable port security.  What if I just deploy everything to "actual" nodes?  I actually did try this, btw, and the first roadblock I hit was the Keystone charm complaining it couldn't bind the VIP address.
<arbrandes> (Though in practice it was already bound).
<jamespage> arbrandes, hmm that's odd
<jamespage> I would expect the charms to still dtrt but the vips would just be inaccessible
<jamespage> arbrandes, the VIP must be in the same subnet as an existing configured network interface
<arbrandes> They're accessible because I used the allowed-address-pairs neutron extension.
<arbrandes> In other words, everything that the keystone + hacluster charms deploy works in practice, but the ha-relation-changed hook fails.
<arbrandes> Which sucks because then further actions fail.
<arbrandes> Anyway, what I'm trying to understand at this point is if I need anything else on a node that will receive HA Keystone besides a NIC configured on said subnet (there is one, btw - plus 2 more NICs for the data and external nets).
<darknet> hello guys, I've a problem with Juju, is there someone can help me? please
<marcoceppi> darknet: probably, its' best to just ask your question
<lazyPower> ezobn: my openstack provider is being pokey this morning - still investigating
<ezobn> lazyPower: Thank you ! I am using juju for my openstack to test :-)
<darknet> marcoceppi_:  do you have any idea to resolve that?
<lazyPower> darknet: unless i'm mistaken you didn't ask a question - what seems to be the trouble?
<lazyPower> ezobn: it appears you've uncovered a bug
<lazyPower> ezobn: i can reproduce the same behavior
<jamespage> arbrandes, sooooo
<jamespage> arbrandes, are you adding all your relations to deployed services at deployment time?
<arbrandes> jamespage, yes - basically all in one go.
<ezobn> lazyPower: I'm glad it's not just my setup ;-) Thank you for your help !
<jamespage> arbrandes, right, so this sucks atm and it's something we're focusing on this cycle
<arbrandes> jamespage, it's not a bundle.  I just have a script that does juju add-machine for all 23 nodes, then a series of juju deploy + juju relationship blocks.
<jamespage> arbrandes, but you'll need to do a phased deployment for ha right now
<arbrandes> jamespage, interesting!  How would that work?
<lazyPower> ezobn: np - sorry I didn't have a better message for you. If you could - would you mind filing a bug against juju-core?
<jamespage> arbrandes, I happen to be working on one of these right now let me dig it out
<ezobn> lazyPower: yep, I will do it
<arbrandes> jamespage, awesome, thanks!
<lazyPower> ezobn: brilliant. paste me the link when done so i can track it :)
<ezobn> lazyPower: OK
<roadmr> hm, we've had trouble trying to set all relations in one go, it seems like if the charm isn't race-condition-resistant then things fail
<darknet> marcoceppi, any idea?
<ezobn> lazyPower: just here ?
<roadmr> we have resorted to waiting until all units are up, then adding relations one by one with a X-second sleep interval between each
<jamespage> arbrandes, I've also not yet tested this; I was literally working on the bundles when I saw your ping
<jamespage> arbrandes, http://bazaar.launchpad.net/~canonical-server/+junk/serverstack/view/head:/deployment/serverstack5.yaml
<lazyPower> ezobn: that'll work
<ezobn> lazyPower: OK
<roadmr> darknet: what's the problem you're seeing? sorry, I missed it, but maybe I can help
<jamespage> arbrandes, I have some gaps right now due to the fact we are about to re-ip our networking and I don't have all the details yet
<jamespage> arbrandes, the idea is that you deploy 'serverstack-base' first, and let that deploy and settle (no hooks executing)
<jamespage> and then you do serverstack-relations
<darknet> anyone have any idea to resolve that error with juju?
<jamespage> arbrandes, there is quite a bit in that bundle which is MAAS specific; we make a lot of use of LXC which is not possible under openstack
<lazyPower> darknet: none of us have seen a link to a pastebin or reference to your error. Can you provide some insight as to where we should look?
<arbrandes> jamespage, I understand it's a WIP, but thanks anyway - I might be able to extract what I need from that bundle.
<jamespage> arbrandes, fwiw I am driving towards a single line deployment for HA this cycle; juju should be delivering a few new features to be able to help us achieve that
<jamespage> arbrandes, right now it's tricky for a unit of a service to know categorically how many peers it's going to have from the point of first execution, which makes electing a leader a bit clumsy and race-prone
<arbrandes> jamespage, that would be fantastic.  Just so you guys keep my use-case in mind when working on this: I'm trying to do Openstack-on-Openstack with Juju for the purposes of training.  In this case, I'm trying to demonstrate how the Juju charms do HA, and to do it, I fire up one Heat stack per student, a stack that contains a Juju bootstrap node and as many other nodes as needed.
<arbrandes> jamespage, I can't really use MaaS for this setup, so I need Juju (or a bundle) to do its thing without it, and for HA to work *with* port security enabled.
<arbrandes> jamespage, if doing a phased deployment works, it's good enough for me.  I'll just have students wait for a prerequisite deployment to settle.
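A minimal sketch of the phased deployment jamespage and roadmr describe, using plain juju commands (service names illustrative; "settled" means no hooks executing and all units started in `juju status`):

```shell
# Phase 1: deploy the services only, no relations, and wait for every unit
# to settle before moving on.
juju deploy keystone
juju deploy hacluster keystone-hacluster

# Phase 2: once everything is started and idle, add relations one at a time,
# optionally pausing between each (roadmr's workaround for racey charms).
juju add-relation keystone keystone-hacluster
sleep 30
# ...remaining add-relation calls...
```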
<jamespage> arbrandes, adding the extra allowed-address-pairs stuff will work OK for HA VIPs (I think)
<darknet> so, no one can help me to resolve that?
<jamespage> arbrandes, however you may come unstuck with the quantum-gateway charm
<jamespage> arbrandes, that really does need port security disabled to do its neutron networking foo for routers etc...
<jamespage> it generates new mac and other things that won't work with allowed-address-pairs
<roadmr> darknet: we can help but only if you show us the problem. So far you have only said you have a problem, you have not detailed what it is.
<arbrandes> jamespage, yes, the allowed-address-pairs thing does work for accessing the Pacemaker VIP.  quantum-gateway works fine as well.  My only problem is having the hacluster hook finish running
<darknet> I've already posted it
<jamespage> arbrandes, ok - do you have the exact error from the unit log?
<arbrandes> jamespage, I just tore down my stack and am in the process of rebuilding it.  I'll post the error as soon as I reach that point again.
<jamespage> arbrandes, great - fwiw the bundle I pointed you at is the cloud that we test openstack ontop of
<darknet> roadmr, anyway, I get an error when I try to run the command "juju add-machine vnode -e maas" on a maas environment. The vnode starts, but after a few seconds it goes down, reboots, and I receive an error. I've posted it on askubuntu (http://askubuntu.com/questions/556605/juju-ver-1-20-13-cannot-run-instances-gomaasapi-got-error-back-from-server). Both Juju and MaaS are installed via ppa stable
<arbrandes> jamespage, awesome.  I'll get two stacks up and on one reproduce the bug, and on the other try to fiddle with the phased deployment.
<roadmr> darknet: ah cool! thanks for that. By the way, it's the first time I see it since you first mentioned you had a problem (less than an hour ago), so maybe your first message got lost.
<roadmr> darknet: so you say you've done this before and it worked? was it the exact same procedure?
<darknet> same for me!!! I've installed them in the past, but it's the first time I've received this, after making the upgrade via ppa
<roadmr> darknet: I've never used manual provisioning, but it seems to me as if Node1 should already be up when you try to add it. Juju won't create it
<darknet> roadmr: it's already up and juju-gui has been deployed without prb,
<darknet> roadmr: I've a problem when add another vnode to environment
<roadmr> darknet: ahh ok, and are you 100% sure the name you're giving is correct? what juju is saying is it's unable to find the node by that name
<roadmr> CloudMaaSRCNode1.maas
<darknet> roadmr: yes
<roadmr> darknet: are you maybe missing an S? CloudMaaSSRCNode1.maas
<roadmr> (though the first node CloudMaaSRCNode0.maas looks like it has the correct name...)
<darknet> roadmr: it's strange, for CloudMaaSRCNode0.maas everything has worked well.
<roadmr> darknet: hey do you have the maas cli configured? you could do maas maas nodes list (or equivalent command, I'm assuming your profile is also named "maas") and post that or use that to double-check the nodes you're giving to juju are known to maas
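roadmr's suggested check, spelled out (assuming, as he does, that the CLI profile is literally named "maas"):

```shell
# List the nodes MAAS knows about, to confirm the exact hostnames before
# handing them to juju. The first "maas" is the CLI profile name.
maas maas nodes list
```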
<darknet> roadmr: the vnode keeps rebooting, and in juju status it shows as pending...
<darknet> all vnodes are present in MaaS and they are in ready status.
<roadmr> darknet: I believe you... that's quite weird
 * roadmr goes out for a bit, brb
<darknet> roadmr: It's the second time I've re-installed everything, and I receive the same error with juju... the environment has been installed on ubuntu 14.04
<darknet> roadmr: I've received this type of error after making the upgrade to MaaS 1.7 and Juju 1.20
<darknet> roadmr: do you have any idea?
<darknet> roadmr: I was thinking that maybe it's because I'm using ubuntu 14.04 with MaaS and Juju upgraded?
<darknet> roadmr: because I'm testing the previous release on ubuntu 14.04 without the upgrade and it works well!!!
<arbrandes> jamespage, I just managed to reproduce the error: "unit-keystone-0: 2014-12-04 15:46:40 INFO ha-relation-changed ValueError: Unable to resolve a suitable IP address based on charm state and configuration".  This is with keystone -n 2.  But if I SSH into the leader, I can see the VIP bound to the proper interface, and `crm status` looks good.
<jamespage> arbrandes, hmm
<arbrandes> jamespage, if you're interested, I can get you SSH access to that environment.
<jamespage> arbrandes, can you pastebin the full stacktrace?
<jamespage> oh ssh is good as well
<roadmr> darknet: hm, I suspect a name resolution issue, could you maybe ssh into the bootstrap node and see if it can ping cloudmaasrcnode1.maas?
<sebas5384> balloons: hey o/
<LinStatSDR> Hello all.
<balloons> sebas5384, hey! ;-)
<sebas5384> balloons: just confirming our meeting 17 UTC :)
<balloons> yep, I should be all ready
<lazyPower> o/ sebas5384
<sebas5384> hey lazyPower o/
<lazyPower> i was driving back yesterday when you pinged, so belated greetings.
<sebas5384> lazyPower: np! :)
<sebas5384> balloons: I was having a problem with a bug in the charm helpers
<balloons> sebas5384, ohh.. did you try and get a charm for it already?
<sebas5384> but i resolved getting an old version
<darknet> roadmr: already done it, I've tried ssh from host to vnode CloudMaaSRCNode0 and it works perfectly.
<sebas5384> https://bugs.launchpad.net/bugs/1397134
<mup> Bug #1397134: Python's Six dependency <oil> <Charm Helpers:In Progress by stub> <https://launchpad.net/bugs/1397134>
<roadmr> darknet: ok, and once you're in cloudmaasrcnode0, can you do something like "ping cloudmaasrcnode1.maas"?
<sebas5384> balloons: didn't understand your question, sorry :P
<balloons> sebas5384, no worries. See you in 30 mins
<sebas5384> balloons: great then!
<darknet> roadmr: it's impossible to do that, because cloudmaasrcnode1.maas doesn't finish booting. it starts and after a few seconds goes down.
<roadmr> darknet: true, sorry
<darknet> roadmr: will you be tomorrow here? I've to go out from office about 10 minutes
<roadmr> darknet: yes, I will be here, and there's other people who may be able to help too
<roadmr> darknet: just remember to put the URL for your askubuntu question, that's very well-explained, thanks for that!
<darknet> ok, thank you for your support, in that case see you tomorrow
<darknet> roadmr: have a nice day bye
<roadmr> darknet: enjoy!
<sebas5384> balloons: I updated a new version recently, so I'm waiting for the change to update in the charm store
<balloons> sebas5384, ok.
<balloons> so we ready?
<sebas5384> balloons: some things changed, and the charm wasn't expecting that
<sebas5384> yeah sure
<sebas5384> hangout ?
<balloons> sebas5384, https://plus.google.com/hangouts/_/canonical.com/nicholas-sebas?authuser=1
<sebas5384> balloons: let me try again
<sebas5384> hangout always is trolling me
<balloons> sebas5384, I have the same issue at times :-)
<sebas5384> i'm going to use another browser
<sebas5384> balloons: yeah if you can get it from the launchpad
<sebas5384> till it updates to the new revision
<sebas5384> :)
<balloons> shall we carry on via IRC sebas5384 ?
<sebas5384> balloons: i'm installing the plugin into the safari
<balloons> ohh, right, the dreadful plugin
<lazyPower> i thought hangouts went html5 last week
<lazyPower> boo @ the dreaded plugin
<balloons> sebas5384, https://github.com/nskaggs/isotracker.git
<mbruzek1> Hey marcoceppi, jose just pointed out the precise drupal6 charm is not owned by charmers!
<mbruzek1> $ charm get cs:precise/drupal6
<mbruzek1> Branching drupal6 (lp:~lynxman/charms/precise/drupal6/trunk) to /tmp/precise/drupal6
<mbruzek1> marcoceppi: What is wrong here and how can we fix that?
<marcoceppi> unpromulgate, bzr init ~charmers, push, promulgate
<jackweirdy> Hey all o/
<mbruzek1> thanks marcoceppi, I will work with jose to fix that.
<jose> cool, thanks marcoceppi and mbruzek1!
<jackweirdy> I have a machine set up with a neo4j charm I'm building - testing on localhost at the moment. when I `juju ssh` into the machine I can `telnet` and see the port is open, but from the host I don't seem to be able to. The right port is exposed and I'm connecting to the IP reported by juju. I think I'm missing something obvious here; any ideas?
<jackweirdy> Doh! I bet I know what it is. I don't think I set Neo4j to accept public connections. I'll put my dunce hat on :)
<balloons> sebas5384, http://iso.qa.ubuntu.com/
<lazyPower> jackweirdy: we've all done that to be sure :)
<jackweirdy> I get a bit carried away with some of the magic and forget I have to do some work myself xD
<lazyPower> jackweirdy: thats a byproduct of how awesome we all are when charming :) (shameless self plug there)
<jose> marcoceppi, mbruzek1: found another of those branches, this time it's for the juju charm. should I move it to ~charmers like I did with drupal6?
<marcoceppi> jose: which charm?
<marcoceppi> sorry, who owns it*?
<jose> marcoceppi: juju, owned by Marc Cluet
<marcoceppi> yeah
<jose> ack, doing that now
<jose> marcoceppi: changes pushed and charm promulgated, thanks!
<marcoceppi> o7
<jcastro> hey mbruzek1
<mbruzek1> yo
<jcastro> calendar says charm school tomorrow on fat charms
<mbruzek1> yes.
<mbruzek1> I still have to prepare for that
<jcastro> ok I'll send a reminder now
<mbruzek1> 3pm est?
<jcastro> yeah
<jcastro> hey, you know what would be cool, since the power machines are firewalled
<mbruzek1> OK will have to set aside some time to prepare
<jcastro> maybe do the thing on the power machines?
<mbruzek1> jcastro: great idea
<mbruzek1> Can you get a fresh system from smoser?
<jcastro> I can work it now
<mbruzek1> Please do
<mbruzek1> I would love to show something like that off, but we need a fresh system that does not have the proxies set up
<mbruzek1> I will work on preparations after our standup
<jcastro> ok, I'll go ask
<noise][1> anyone in here familiar with the apache-openid charm? (https://jujucharms.com/u/caio1982/apache-openid/trusty/4)
<noise][1> and/or if it's better to just hack it together manually in the apache2 vhost conf?
<lazyPower> noise][1: i haven't used it unfortunately - however it looks like a userspace charm so ymmv
<noise][1> lazyPower: so better to just hack up the vhost file for the main apache2 charm directly?
<lazyPower> noise][1: but just looking at the config it generates, it seems feasible
<lazyPower> i would test it in a sandbox before relying on it
<noise][1> :)
<lazyPower> it may have quirky behavior with different charms configurations. I dont like racey configs
<lazyPower> as this looks like its appending, and if the parent charm has a config-changed that updates the vhost - you're asking for loss of OpenID
<lazyPower> so that would be my litmus test, is ensuring the template updates aren't atomic
<noise][1> oh, interesting
<lazyPower> if they are, its a no-go
#juju 2014-12-05
<jacekn> hello. Is it possible for a subordinate charm to establish 2 relations to its master?
<jacekn> are there known problems with juju from ppa:juju/stable ? https://bugs.launchpad.net/juju-core/+bug/1399606
<mup> Bug #1399606: juju bootstrap of local environment failing <juju-core:New> <https://launchpad.net/bugs/1399606>
<ezobn> lazyPower: Didn't have time yesterday - just created https://bugs.launchpad.net/juju-core/+bug/1399613
<mup> Bug #1399613: juju-core not using constraints when creating KVM  unit on maas machine <juju-core:New> <https://launchpad.net/bugs/1399613>
<lazyPower> jacekn: its certainly possible to have 2 relationships with a sub to a service.
<jacekn> lazyPower: thanks, this is good news for me
<plars> is there a ?-relation-changed hook that should fire when the config on a subordinate charm is changed? I'm trying to do something like that and someone recommended container-relation-changed, but it's not working for me
<lazyPower> plars: only if the relationship name between the service and your sub is 'container'
<lazyPower> marcoceppi: do we allow juju-info-relation-changed hooks? i haven't actually tried to do this before.
<lazyPower> i'm iffy on if that'll work as juju-info is reserved
<marcoceppi> lazyPower: yes, if the relation name is called juju-info
<plars> lazyPower: ah, so we have to have an explicit relationship name for it?
<marcoceppi> but don't call the relation juju-info
<lazyPower> yeah
<lazyPower> so if your relationship is like such
<lazyPower> relation: foo
<marcoceppi> plars: you can use the juju-info interface
<lazyPower> interface: bar
<lazyPower> its foo-relation-changed
<plars> marcoceppi: how do you mean?
<marcoceppi> plars: http://paste.ubuntu.com/9383535/
<marcoceppi> juju-info, as an interface, is what you really need, but don't use "juju-info" as the relation name; instead name it something more useful
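What marcoceppi describes would look roughly like this in a subordinate's metadata.yaml (a sketch; "monitoring" is an illustrative relation name):

```yaml
# Use juju-info as the *interface*, but pick a descriptive *relation* name.
# Hooks are named after the relation, so this charm would implement
# hooks/monitoring-relation-changed, not juju-info-relation-changed.
subordinate: true
requires:
  monitoring:
    interface: juju-info
    scope: container
```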
<plars> thanks, I'll give it a try
<ejat> hi tvansteenburgh
<ejat> how r ya ?
<tvansteenburgh> hey ejat
<ejat> did u already have time to fix and update the limesurvey charm ?
 * ejat no worries .. just asking .. 
<tvansteenburgh> ejat, i got the install fixed but there is still a prob in the db-relation-changed hook
<ejat> thanks for the update.
<ejat> do u file the bugs ? maybe i can subscribe and follow the bugs :)
<tvansteenburgh> ejat, no i haven't
<ejat> maybe need to seek Clint Byrum help on this
<tvansteenburgh> ejat https://bugs.launchpad.net/charms/+source/limesurvey
<tvansteenburgh> looks like bugs already reported
<ejat> owh ..
<ejat> ok thanks tvansteenburgh
<ejat> install hook reported
<ejat> but not the db-relation-changed
<ejat> right ?
<tvansteenburgh> whoever fixes install will see that db- is broken
<ejat> :)
<ejat> hi ...
<ejat> im facing this for hpcloud
<ejat> http://paste.ubuntu.com/9384755/
<ejat> can someone help me?
<jose> ejat: are you sure your credentials are correct?
<jose> and all the settings are correctly put on the environments.yaml file?
<ejat> yups .. ive checked
<ejat> A common mistake is
<ejat> to specify the wrong tenant. Use the OpenStack "project" name
<ejat> for tenant-name in your environment configuration.
<ejat> need to create project name 1st?
<ejat> then put that name in .yaml??
<jcastro> marcoceppi, can you follow this one up? http://askubuntu.com/questions/554246/deploying-openstack-with-just-two-machines-with-juju
<marcoceppi> jcastro: following up
<ejat> can i use userpass instead of keypair ?
<ejat> or best recommend keypair?
<jcastro> I've used both but I think they recommend keypair
<jose> ejat: when I set up hpcloud a couple months ago, I followed the instructions at jujucharms.com/docs and found no errors
<jose> have you checked those?
<jcastro> there should be a screenshot pointing to where the exact tenant name is
 * ejat checking and syncing with the docs .. 
<ejat> jcastro: i already put same as it is
<ejat> should i create another project?
<jcastro> marcoceppi, I also tossed some bounties on the openstack and maas tags to motivate folks if you want to do the same
<marcoceppi> ack
<ejat> because xxx-default-tenant
<ejat> because mine was xxx-default-tenant
<ejat> because last time i use userpass ...
<ejat> :(
<jcastro> ok so  you want to switch it to userpass?
<jcastro> ejat, how about this? http://pastebin.ubuntu.com/9385015/
<ejat> i think ... im having problem while im using this as my identity https://region-a.geo-1.identity.hpcloudsvc.com:35357/v3/
<ejat> is there any problem if i use 2.0 and 3 ?
<ejat> jcastro: http://paste.ubuntu.com/9385155/
<designate> I am trying to bootstrap an environment using maas/juju (latest stable versions of both) but I'm getting the following error: "401 OK (Authorization Error: 'Expired timestamp: given 1417774874 and now 1417800053 has a greater difference than threshold 300')" despite the fact that I have configured an NTP server in MAAS that is reachable by all servers.
<marcoceppi> designate: is your local computer also ntpdated?
<jcastro> ejat, huh unsure on that one, have you seen that before on hpcloud sinzui?
<designate> marcoceppi: if by local computer, you mean the machine I'm running "juju bootstrap" from, then yes...it is also the maas box
<marcoceppi> designate: interesting
<designate> my "local" computer's time will in no way affect maas or juju as I am SSHd into the box.
<marcoceppi> personally, I've never seen that error, it's being generated by maas and not juju
<marcoceppi> is that error presented during the bootstrap phase or when you run juju status after bootstrap?
<marcoceppi> are you using fastpath installer?
<marcoceppi> what version of juju?
<designate>  1.20.13-0ubuntu1~14.04.1~juju1
<sinzui> ejat, I don't know about auth version, but i can tell you in a few minutes. I have been using v2
<designate> it happens during "juju bootstrap"
<designate> yes I am using fastpath installer
<ejat> sinzui: owh okie ...
<ejat> sinzui: http://paste.ubuntu.com/9385155/
<sinzui> ejat, my https://horizon.hpcloud.com/project/access_and_security/ doesn't let me use v3, it just permits v2
 * sinzui tries anyway
<sinzui> ejat, I cannot use v3, but that might be because HP doesn't list it as my identity endpoint
<ejat> brb
<jumpkick> I'm running on ubuntu 14.10; when I try to deploy my 3rd charm after juju-gui and another charm, it fails with "1 hook failed: install". what's the best way to troubleshoot this?
<jumpkick> (I'm a juju noob)
<jumpkick> I've already destroyed and recreated the environment with juju-quickstart twice
<jcastro> jose, yo yo
<jose> jcastro: hey!
<jcastro> mbruzek, 30 minute warning!
<jcastro> jose, ^^
<mbruzek> ack
<jose> I have to leave real quick (less than 10m) and I'll be back here
<jose> don't worry
<mbruzek> jose are you going to give me a link
<jose> mbruzek: I am, yes
<lazyPower> jumpkick: hey sorry about the latent reply
<lazyPower> jumpkick: there's a few things you can do: 1) investigate the log output - but that can be cumbersome, especially with an hour of log spam piled on top
<lazyPower> 2) use juju debug-hooks to attach to the unit, run juju resolved --retry in another terminal to enter the hook context and run the hook interactively to find the failure (and optionally correct the broken behavior)
<lazyPower> jumpkick: if you've got the time, we did an excellent charm school over this: https://www.youtube.com/watch?v=NjERFuBs2S8
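lazyPower's option 2 is a two-terminal workflow, roughly as follows (the unit name is illustrative; this needs a live environment with a unit in error state):

```shell
# Terminal 1: attach to the failing unit; juju opens a tmux session there and
# will trap the next hook execution in it.
juju debug-hooks mycharm/0

# Terminal 2: re-queue the failed hook so it runs inside that session.
juju resolved --retry mycharm/0

# Back in terminal 1, a new window opens in the hook context; run the hook by hand:
hooks/install
# ...inspect and fix, then `exit 0` to mark success (non-zero keeps the error state).
```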
<jcastro> jose, hangout URL?
<jose> I sent you an invite
<mbruzek> https://plus.google.com/hangouts/_/hoaevent/AP36tYegIWRyr0b1jvjaNiDlQDG_L-JKYd8WtjRoU-pWwUxGVqPDTw
<jcastro> ok fellas
<jcastro> https://plus.google.com/hangouts/_/hoaevent/AP36tYegIWRyr0b1jvjaNiDlQDG_L-JKYd8WtjRoU-pWwUxGVqPDTw
<jose> #ubuntu-on-air
<cory_fu> ubuntuonair.com still shows the Ubuntu Community Q&A video instead of the charm school
<skay> same here
<mbruzek> Jorge that is enough
<cory_fu> Video is here, though: https://www.youtube.com/watch?v=17FFAhhHPLI
<kwmonroe> :)
<mbruzek> I have all this stuff in my script
<mbruzek> you are stealing all my best material
<skay> cory_fu: thanks :)
<kwmonroe> page 2 man!  page 2!
<mbruzek> jcastro
<jose> huh, let's give it another update
<jcastro> is he supposed to be showing slides?
<jose> no, there's people saying ubuntuonair is not showing the right video
<jcastro> still the old video for me
<jose> jcastro: can you try refreshing now, please?
<jcastro> works!
<jose> \o/
<kwmonroe> :)  good lookin out for creds jose
<jose> :P
<kwmonroe> jcastro: you froze up 1/2 way through the "over arching question" intro
<jcastro> that's ok
<skay> uh oh. repeat the question when you come back
<jcastro> it's hard locked now
<jcastro> it has not been a good hangout week for me
<kwmonroe> excellent thumbnail.  you look so happy.
<jcastro> jose, QUESTION: Do we have any things in place to allow people to host their own internal charm store?
<jose> ack
<jose> will ask in a min
<jose> jcastro: invited you, try joining again
<jcastro> sec, it's a chromebook, trying to figure out how to reboot it
<jose> :P
<jose> take the battery out?
<jose> jcastro: just asked :P
<jumpkick> lazyPower: thanks I'll watch that video and give juju resolved --retry a go
<jumpkick> I ran juju debug-hooks on my failed charm...   byobu-tmux is going nuts spewing the statusbar all up the screen and "byobu-screen" tells me I can't use it because root doesn't own /home/ubuntu
<jumpkick> sigh, I wish gnome terminal would not have this problem
<jumpkick> holy crap, I just fixed it by setting the character set to UTF-8
<marcoceppi> jumpkick: we experienced that issue elsewhere, it may be worth while mentioning that in the docs
<jumpkick> lazyPower: Watching your debugging hooks youtube video - the reason your mediawiki test curl failed to pull up media wiki is because you needed to provide "-L" to follow redirects ...
<jumpkick> though I'm guessing somebody already mentioned it by now... lol
<lazyPower> jumpkick: i was super excited to be leaving to see my family - so i was going 90mph and not thinking straight - is what i've chalked it up to
<whit> anybody here ever wish they could embed a charm in a repository? sort of like how folks embed packaging metadata with the code for an application?
<lazyPower> i do
<lazyPower> having a metadata.yaml or charm.yaml that handles the details for me would be tops
<lazyPower> just point juju @ my repo and let it go
#juju 2014-12-06
<jrwren> whit: kinda. want, yes.
<jumpkick> hmm... I just destroyed and recreated my environment and now I have very few charms available...  :(
<jumpkick> maybe 20 all together
<jumpkick> maybe I'll just do it one more time to see if all the other ones come back
<jumpkick> there we go, they're back now
<jose> jumpkick: if you have any questions feel free to ask
<jose> mention my name - I'll probably be around for another hour or so
<jumpkick> Thanks jose.    If you're still around, I do have a question...   once I go into the debug-hooks install phase, is there a way I can set a config value (either in the hook session or from the cmd line)?    I tried juju set outside the session from the cmd line, but it didn't pick up
<jose> jumpkick: still around, yes :)
<jose> jumpkick: out of curiosity, in which hook is this?
<jumpkick> I'm debugging a failure in the OpenAM install hook
<jose> ok, what I suggest here is typing 'exit 1', which will give you an error code
<jose> the charm will enter an error state
<jose> when it's in error state, set the config value with juju ser
<jose> set*
<jumpkick> ah, okay cool
<jose> and then do juju resolved charmname/# --retry
<jumpkick> I did "exit" before and messed it all up...  that makes sense
<jumpkick> thanks jose!
<jose> no prob, let me know if you happen to face any other issues :)
<arbrandes> Hey guys.  Is it possible to juju deploy --to=<a pre-existing LXC container>?
<arbrandes> A pre-existing container that wasn't created by juju, that is.
<arbrandes> Alternatively, how do I get juju to use a custom LXC configuration?
<jumpkick> Is there a way to use someone's charm from a github repo in the GUI?
<jumpkick> The charm I was trying to deploy (openam) is badly broken, and it looks like the author has done some updates that haven't been published to the charm store (? - where the UI gets its charms from)...
<jumpkick> I want to give his github version a go (https://github.com/justme99/juju/tree/master/openam)
<jose> jumpkick: not entirely sure that can be done over the gui, but can definitely be done over the cli
<jumpkick> jose: is this (https://github.com/frankban/juju-git-deploy) what folks usually use to do that?
#juju 2014-12-07
<jrwren> jose: how can it be done in the CLI?
<jrwren> oh, juju-git-deploy. I had no idea!
<jumpkick> jrwren, if you are not an experienced python user, you'll need to install PIP3 on 14.10 - I wrote a quick ticket w/ the steps @ https://github.com/frankban/juju-git-deploy/issues/4
<jumpkick> ah shizzle...  Installing juju-git-deploy has messed up my juju-quickstart command
<jumpkick> ImportError: No module named parse
<jumpkick> :(
<jumpkick> mmm... purging juju and reinstalling didn't help
<jumpkick> #@!$
<jumpkick> I guess git-deploy needing pip3 to install means it needs Python3, while Juju apparently needs Python << 2.8...
<jumpkick> which leads to a messed-up install
<jumpkick> :(
<jumpkick> ah, okay... maybe juju/unstable ppa will sort this out
<jumpkick> nope
<jumpkick> looks like only quickstart is broken
<jumpkick> I'm having some trouble with my debug-hooks...  seems "exit 1" is not sending the unit back to error state, rather it's moving to the next stage.  If I put "exit 1" for all of them, at the end my unit ends up in started state
<jumpkick> how can I get my session to stop at the hook if "exit 1" doesn't work
<jumpkick> ?
<jose> jumpkick: try 'exit 225'
<jose> or something that's not 0
<jumpkick> higher than 1, okay I'll try that
<jumpkick> jose, will it continue to the next hook regardless of the error state while in debug hooks?
<jose> jumpkick: it shouldn't - should put the unit in error state
<jumpkick> that's what I figured
<jumpkick> ok cool
<jumpkick> jose: "exit 225" takes me to the next hook...   it's busted on my system
<jose> jumpkick: wat. let me bootstrap real quick and try this myself
<jumpkick> 1.20.13-utopic-amd64 is my juju version
<jose> ack
<jose> exit 1 does give an error state here, huh
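A hook's error state hinges on its exit status, which is what jose and jumpkick are probing here. A quick local sketch of the mechanism, with no Juju involved (pure shell, purely illustrative):

```shell
# A hook signals failure to Juju via a non-zero exit status. We can
# confirm what status a command actually exits with locally:
sh -c 'exit 1'; echo "status: $?"   # non-zero: Juju would mark the unit as error
sh -c 'exit 0'; echo "status: $?"   # zero: hook considered successful
```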
<jose> jumpkick: can I suggest moving that variable to the config-changed hook?
<jumpkick> jose: did I miss anything?  my X just died
<jose> jumpkick: I mentioned that it's working good here... I was suggesting moving that config variable setup to config-changed
<jose> or if unset, create a sentinel file
<jose> then, when config-changed is run, if it finds the sentinel file and the variable set, it configures and deletes the sentinel file
<jose> otherwise, it leaves the sentinel file untouched for the next go-round
<jumpkick> ok, I'll see if I can get that going
<jose> cool :)
<ejat> im facing this issue : ERROR cannot start bootstrap instance: index file has no data for cloud {az-1.region-a.geo-1 https://region-a.geo-1.identity.hpcloudsvc.com:35357/v2.0/} not found
<ejat> is it the same bugs 1330553
<mup> Bug #1330553: Align quickstart to the latest HP Cloud configuration options <juju-quickstart:Fix Released by frankban> <https://launchpad.net/bugs/1330553>
<ejat> ?
<jose> ejat: let me check
<jose> ejat: https://bugs.launchpad.net/juju-core/+bug/1374253
<mup> Bug #1374253: hpcloud: index file has no data for cloud on region-b.geo-1 <bootstrap> <hp-cloud> <juju-core:Triaged> <https://launchpad.net/bugs/1374253>
<jose> it's the same as that one, can you please try changing to another region?
<jose> otoh, I would personally recommend AWS vs HPCloud, since it's way faster
<jumpkick> jose: when I google "juju sentinel file" I get nothing...  where can I find docs for it?
<jose> jumpkick: in which language are you writing your charm?
<jumpkick> the charm is written in bash
<jose> ok, so you can do 'touch .sentinelnamehere'
<jose> and then, in the other hook, if [ -f .sentinelnamehere ]; then
<jose> not sure if you get it
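jose's fragments can be stitched into a minimal sketch of the sentinel pattern. The option name and echo messages are illustrative, and `config-get` is replaced by a plain variable so the snippet runs outside a hook environment:

```shell
# Sentinel pattern sketch: the install hook leaves a marker when config
# is missing; config-changed acts once the value arrives, then removes it.
CONFIG_VALUE=""                 # stand-in for: $(config-get my-option)

# --- install hook ---
touch .sentinelnamehere         # config not supplied yet

# --- config-changed hook ---
if [ -f .sentinelnamehere ] && [ -n "$CONFIG_VALUE" ]; then
    echo "configuring service"
    rm .sentinelnamehere        # sentinel consumed: configure exactly once
else
    echo "waiting for config"   # leave sentinel for the next go-round
fi
```

With the variable empty, as above, the sentinel survives to the next run; once a value is set, the configure branch fires once and the sentinel is gone.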
<jumpkick> hmm...  now I get what you were saying, but it isn't going to work...   once I exit all my debug-hooks I end up with a unit that's in "started" state (even though its unconfigured)...  juju won't let me do a "resolved -r" on a started unit to redo the config
<jose> jumpkick: if you do 'juju set <service> config=value', the config-changed hook will run
<jumpkick> jose: cool, how can I get the install hook to run again
<jose> jumpkick: call it explicitly from the config-changed hook
<jose> call 'hooks/install'
<jumpkick> ok, I guess that will work
<jumpkick> thanks, I'll just work around it like that for now
<jose> awesome
<jose> lemme know how it goes
#juju 2015-11-30
<jamespage> marcoceppi, hey - is it possible to re-trigger the review queue tests for https://code.launchpad.net/~cjwatson/charms/trusty/ubuntu-repository-cache/xz-indexes/+merge/277880 ?
<jamespage> cjwatson asked me to look on Friday, but the test failures had been dropped due to archiving of older results I think
<marcoceppi> jamespage: restarted - the results do disappear after a few days, but they are permanently recorded here: http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3650
<marcoceppi> jamespage: it looks like the tests need to be fixed to use the better format of amulet.sentry['service'][0] instead of hard-coding '/0'
<jamespage> marcoceppi,
<jamespage> ack
<stub> marcoceppi: Is all-machines.log or equivalent available anymore from CI runs? I can't see a link to them any more.
<marcoceppi> stub: they should be
<marcoceppi> stub: where are you looking?
<jose> arosales, marcoceppi: hey guys, wanted to check everything was going good wrt tasks?
<stub> marcoceppi: http://review.juju.solutions/review/2370 - just a link to jenkins logs
<marcoceppi> stub: yes, this is a known deficiency. The results still get logged to the vapour.ws site
<marcoceppi> stub: http://reports.vapour.ws/all-bundle-and-charm-results/cassandra
<marcoceppi> http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3785
<stub> marcoceppi: ahh, ta.
<roadmr> hey juju folks :) what's a good way to get the last X lines of log from a unit? juju ssh $UNIT_ID sudo cat /var/log/juju/unit-$UNIT_TAG.log feels kludgy, and juju debug-log -i $UNIT_ID --limit=100 -n 100 can block if there are less than 100 lines in the log file :(
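One non-blocking option for roadmr's case is to lean on `tail`, which happily returns fewer lines than requested; the Juju-side command would then be something like `juju ssh myservice/0 'sudo tail -n 100 /var/log/juju/unit-myservice-0.log'` (unit name illustrative). The tail behavior itself is easy to check locally:

```shell
# tail -n N returns at most N lines and exits immediately, even when the
# file is shorter - unlike `juju debug-log --limit`, which can block.
printf 'line1\nline2\nline3\n' > /tmp/short-unit.log
tail -n 100 /tmp/short-unit.log   # prints all 3 lines, no blocking
```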
<jrwren> charmers: any advice on these failures? http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1534/console from http://review.juju.solutions/review/2357  I recall something about broken test env on that day. If no advice, can you please rerun those tests, assuming env is fixed?
<bloodearnest> reactive question: where do I put my layer's unit tests, and do I run them in the layer, or in the built charm?
<lazypower> bloodearnest: unit tests belong in the unit_tests directory in the layer, and should be runnable by both the layer and the built charm for completeness
<tvansteenburgh> jrwren: i queued another test on lxc
<bloodearnest> lazypower, right. Is there a standard place to put test dependencies in the layer?
<lazypower> bloodearnest: so far i've seen a lot of use of tox to isolate all that in a venv
<bloodearnest> lazypower, right, that's what I would do for a normal charm. Unsure about how different layers would combine the setup/running of their unit_tests in the built charm
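A hypothetical minimal tox.ini along the lines lazypower describes - isolating the layer's test deps in a venv. The file names, env list, and test runner here are assumptions, not anything prescribed by charm-tools:

```ini
[tox]
envlist = py34
skipsdist = True

[testenv]
deps = -r{toxinidir}/test_requirements.txt
commands = python -m pytest unit_tests
```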
<lazypower> yeah, when merging, the top layer that generates the built charm wins - its files replace any overridden files from the lower layers
<bloodearnest> lazypower, where will <layer>/unit_tests/* end up in the built charm dir structure?
<lazypower> that dir will, yes
<lazypower> but if there is say a <layer>/unit_tests/01-thing.py  in both layers
<lazypower> the top most layer will override the 01-thing.py in the lower layer
<bloodearnest> right
<lazypower> bloodearnest - might also be worth the effort to ping the list with that question, and shake out some outliers that have thought but haven't asked yet
<jrwren> tvansteenburgh: thanks.
<bloodearnest> lazypower, by default, I assume charm build will merge all non-special directories in the final output?
<bloodearnest> where special I guess = reactive, maybe hooks?
<lazypower> bloodearnest, when merging, it still overwrites unless its metadata.yaml or config.yaml
<bloodearnest> lazypower, ack
<jose> marcoceppi: ping re: PSA
<marcoceppi> jose: yes
<jose> marcoceppi: is there currently a list of charms that are holding that hardcoded pattern? if so, is it public? I'd like to go in and fix some of those if possible
<jose> or do some MPs
<marcoceppi> not yet, we may be producing one for work with code-in
<jose> oh awesome
<asanjar> kwmonroe: sorry pal, i thought you had it
<kwmonroe> no worries asanjar -- i was building myself out of sparktc github.. pre-built bins are fine by me!
<tvansteenburgh> jrwren: almost http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3834
<jrwren> tvansteenburgh: thanks for the notice. *grumble* at it failing still :)
<stokachu> so for actions.yaml the example schema https://jujucharms.com/docs/1.25/authors-charm-actions shows the use of compression.type=gzip, how does juju know that those properties are mapped to kind and quality?
<stokachu> and what is the format for defining both the compression kind and quality, is it compression.type=gzip,0?
<marcoceppi> stokachu: that's example fail
<marcoceppi> stokachu: it should be compression.kind=gzip in the example
<stokachu> marcoceppi: ah!
<stokachu> now that makes sense
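A sketch of what the corrected actions.yaml example might look like, with `kind` replacing the erroneous `type` key under `properties`. This is a reconstruction from the discussion, not the published fix, and the action name and value ranges are illustrative:

```yaml
snapshot:
  description: Take a snapshot of the database.
  params:
    compression:
      type: object
      properties:
        kind:
          type: string
          enum: [gzip, bzip2]
        quality:
          type: integer
          minimum: 0
          maximum: 9
```

Both properties would then be supplied as separate dotted keys, e.g. `compression.kind=gzip compression.quality=5`, rather than a comma-separated value.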
<cory_fu> Hey, so I have a proposal for charm layers that I'd like to get feedback on.
<marcoceppi> cory_fu: shoot
<cory_fu> We'd been recommending base, runtime, etc. layers use yaml files for config options.  But this could lead to a lot of small yaml files with various names.  So I was thinking instead we could have an "options" section in layer.yaml
<cory_fu> So you'd have layer.yaml contain: includes: ['layer:apache-php']; options: apache: packages: [...]
<marcoceppi> cory_fu: oh, I like that, then what every key under option would be automatically prefixed?
<cory_fu> Instead of having a separate apache.yaml
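cory_fu's proposal, sketched as a hypothetical layer.yaml. Nothing here was implemented at the time of this discussion, and the option names/values are illustrative:

```yaml
includes: ['layer:apache-php']
options:
  apache-php:
    packages: [php5-gd, php5-curl]
```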
<stokachu> created a PR https://github.com/juju/docs/pull/751 for that doc fix
<cory_fu> Automatically prefixed?
<marcoceppi> cory_fu: well, my thought was layers that presented options would provide jsonschema in them so that it'd be explicit
<marcoceppi> ie, apache-php layer would define the schema which it supported
 * marcoceppi strawmans
<cory_fu> That sounds intriguing.  What would that look like?
<cory_fu> Charm-tools could do the validation, then, at build time, too
<marcoceppi> cory_fu: right
<stokachu> heres my ghost charm using the latest wheelhouse support https://jujucharms.com/u/adam-stokes/ghost/trusty/8
<marcoceppi> bleh, the apache-php layer doesn't quite fit the mold, but almost done
<stokachu> went from 334 files to 72 \o/
<marcoceppi> cory_fu: give me 10 min, otp
<stokachu> though im not sure why the revisions aren't being updated to match what is in bzr
<cory_fu> marcoceppi: Still otp?
<marcoceppi> yes
<cory_fu> kk
<cory_fu> stokachu: Revisions on jujucharms.com don't match bzr revisions because the charm revision tracks number of times ingested, which happens at periodic intervals and could include many bzr revs
<cory_fu> Once we have `juju publish`, the charm revs will make a lot more sense, since it will track how many times you ran `juju publish`
<marcoceppi> cory_fu: https://gist.github.com/marcoceppi/ca36655cb917b16d0681
<marcoceppi> something like that?
<marcoceppi> cory_fu: actually, just updated it
<cory_fu> marcoceppi: I like that a lot
<cory_fu> I also like that Alot
<marcoceppi> I hug Alot
<marcoceppi> cory_fu: the "parent" key would just be the layer name. I'm not sure how this would work for the sites key other than make sites an open-ended object
<marcoceppi> and maybe replace "provides" with "defines" as provides is used in the charm ecosystem
<marcoceppi> best not to conflate the two
<cory_fu> Agreed on the last bit
<adam_g> ERROR error checking if provisioned: subprocess encountered error code 1   <- is there any way to get debug info on the subprocess failure ?
<cory_fu> jsonschema doesn't have support for validated lists?
<marcoceppi> cory_fu: no idea
<marcoceppi> cory_fu: apparently it does
<marcoceppi> cory_fu: since we already use jsonschema for actions.yaml it'd be great to keep that pattern going with this and maybe convince core that config.yaml should move to it as well ;)
<cory_fu> Agreed
<marcoceppi> cory_fu: either way, I like the idea. It gives better visibility into the layer and provides a build time lint for validation, +1
<cory_fu> marcoceppi: Can you point me to an example of how jsonschema is used to use definitions in actions.yaml to validate params (or a similar example)?
<marcoceppi> cory_fu: you mean the golang libraries, or how it looks as an implementation?
<cory_fu> How it looks as an implementation
<marcoceppi> cory_fu: the apache-plugin charm would be a good start ;)
<cory_fu> Specifically, I'm wondering how the actions.yaml contents are used as / injected into a schema
<marcoceppi> cory_fu: they are the schema, each parent item defines a params and that params is the schema
<marcoceppi> we'd remove the parent key (action name) and just have a schema, more or less what's in the gist
<marcoceppi> cory_fu: O
<marcoceppi> cory_fu: I'd imagine this would come into play eventually, https://pypi.python.org/pypi/jsonschema but maybe I'm not understanding the question
<cory_fu> So, looking at the example in that last link, the contents of "properties" is what we'd be filling in, right?
<marcoceppi> cory_fu: no, not quite
<cory_fu> So I guess we can just wrap it in {'type': 'object', 'properties': schema}
<stokachu> cory_fu: ah ok
<marcoceppi> cory_fu: let me try to produce a quick py example
<cory_fu> marcoceppi: Alas, I have to run, but send me the pastebin when you have it done.  I think I'm seeing it now, but I'm not certain
<marcoceppi> cory_fu: I wrote my thoughts at the bottom of the gist: https://gist.github.com/marcoceppi/ad318d081e3b30857819
<marcoceppi> cory_fu: if you'd like I can open an issue on charm-tools to discuss further
#juju 2015-12-01
<adam_g> what is jujud serving on port 17070 and where does it log?
<adam_g> attempts to create an lxc container in a kvm hang waiting for a response for the lxc rootfs
<adam_g> oh, nvm. there it goes
<jacekn> is it possible to write a charm that can be deployed as principal but in other deployments as subordinate? Or do I need to write two separate charms?
<rick_h_> jacekn: since you have to change metadata.yaml you have to have two
<rick_h_> jacekn: I think with the new layers system you could reduce a ton of duplicate code across the two charms though
<rick_h_> jacekn: but you'd probably end up with two branches, one for the subordinate and one for the non-sub charm so you can merge/back and forth
<jacekn> rick_h_: hmm that's what I thought. It's a bit annoying because of two codebases to maintain
<jacekn> thanks
<rick_h_> jacekn: yes, the new publish workflow allowing you to keep the two charms in branches of the same repo will help some
<rick_h_> jacekn: but agree, annoying. What service is it if I can ask?
<jacekn> rick_h_: I'm trying to write a collectd charm that can work as subordinate but also as principal so that it can relate to graphite server
<jacekn> rick_h_: I want subordinate because it's much more convenient in bigger environments, deploy once and just add relations
<rick_h_> jacekn: gotcha
<sparkieg`> jacekn: if all you need is a relation to graphite, then you can keep it as "just" a subordinate?
<sparkieg`> jacekn: subordinates can relate to more than just the unit their co-deployed on
<jacekn> sparkieg`: oh I did not know that was possible. So how do I stop that subordinate service from being deployed on the graphite server when I add relation?
<sparkieg`> eugh, what kinda of hell is going on with my nick
<sparkieg`> brb
<sparkiegeek> jacekn: so in metadata.yaml specify the interface without the scope set to container
<sparkiegeek> jacekn: example: https://api.jujucharms.com/charmstore/v4/trusty/landscape-client-12/archive/metadata.yaml
<sparkiegeek> that might be a bit misleading because we only had an experimental server charm that implemented the other side of the relation, but AFAIK it Worked™
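sparkiegeek's point, sketched as a hypothetical subordinate metadata.yaml (relation names and interfaces illustrative): a container-scoped relation attaches the subordinate to its principal, while a relation left at the default (global) scope can reach services on other machines, e.g. a graphite server:

```yaml
subordinate: true
requires:
  general-info:
    interface: juju-info
    scope: container      # co-deploys with the principal unit
  metrics:
    interface: graphite   # default global scope: can relate across machines
```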
<jacekn> oh wow if that works it would be nice, thanks
<jacekn> I'll try this method
<bloodearnest> sparkiegeek, jacekn: that totally works, we do it with logstash-forwarder charm I think
<bloodearnest> we also relate subordinates together, which apparently was never supposed to work (I was surprised when it did), but is very useful
<jacekn> bloodearnest: relating subordinates kind of works but I found it unreliable
<bloodearnest> e.g. conn-check subordinate charm relates to nrpe subordinate charm in order to setup conn-check as a nrpe check
<jacekn> yeah but if you change config that propagates data through relations, that can cause problems; I saw many stale services where I had to recreate relations to get things fixed
<bloodearnest> jacekn, orly? How so? We haven't seen any problems (although upstream did break it in a dev release a while back, but they fixed it quickly enough)
<bloodearnest> jacekn, interesting. In the conn-check<->nrpe relation, we just use it for a) we know nrpe is installed/configured and b) grab the nagios_host_context for the nrpe script
<jacekn> bloodearnest: it was very visible with nrpe-external-master: change something in one subordinate that relates to n-e-m and checks are not always updated
<bloodearnest> it's not really 2 way relation
<bloodearnest> jacekn, interesting
<jacekn> bloodearnest: so if it works for you great, maybe you don't trigger conditions that cause problems
<bloodearnest> reactive question: I have a test_requirements.txt file in my later, which I use in tox to run install deps for my tests. But charm build does not like that I have this file, says it should be in a base layer
<bloodearnest> s/later/layer/
<bloodearnest> is there somewhere else it should be?
<bloodearnest> I don't really want to add test deps to wheelhouse.txt?
<marcoceppi> jacekn: that's how the current nrpe charms work, etc
<jacekn> marcoceppi: yes I'm just saying that it's unreliable, on multiple occasions things got stale or hooks were not triggered for me
<marcoceppi> jacekn: reactive framework should help with that
<bloodearnest> reactive tactics question: how do I specify a tactic to use? I'm trying to use the wheelhouse tactic from the latest charm-tools release, but I seem to be using copy tactic, afaics
<bloodearnest> s/reactive/build
<bloodearnest> actually, I am totally not sure which tactic I'm using, but no wheelhouse dir is being built, so I assume non wheelhouse
<lazypower> hmm
<lazypower> bloodearnest : lets follow up with cory_fu when he comes in. I haven't used the wheelhouse tactic yet
<lazypower> but i know he put some work into that last week
<bloodearnest> lazypower, it looks like it should auto-detect the tactic from the files present?
<lazypower> bloodearnest: indeed - there are also tactic overrides you can specify
<lazypower> but i'm not sure what the expected behavior is with wheelhousing deps. it sounded like it was automagic
<bloodearnest> lazypower, ah! I did an upgrade, but charm-tools 1.9.3 is kept back for some reason. Manually installing...
<lazypower> ah that'll od it
<lazypower> *do
<bloodearnest> lazypower, now we're cooking on gas
<lazypower> bam!
<cory_fu> bloodearnest: Did the update fix the "should be in a base layer" issue with test_requirements.txt?  You should be able to have that alongside wheelhouse.txt
<bloodearnest> cory_fu, I think so, yes
<cory_fu> Ok, good
<gnuoy> jamespage, do you have any feelings about https://code.launchpad.net/~gnuoy/charms/trusty/openvswitch-odl/tox/+merge/279126
<gnuoy> beisner, ^
<jamespage> gnuoy, that will do unit tests and lint by default
<beisner> gnuoy, so the test runner didn't recognize that make target name as a unit test, because it disregards make target name and searches for one that looks like the unit test instead.
<beisner> it is that way because make target names sometimes vary across charms, though we have now normalized at least the os-charms
<Prabakaran> Hi mbruzek, This is regarding the Platform Symphony Readme.md file, as per the recent comment received from you suggesting "Readme.md file should tell Juju users that they have to pay for the platform symphony software before they can download it"
<Prabakaran> So I have mentioned in the README.md file: "For details on Platform Symphony usage and its benefits please visit [Product Page.][product-page]  For information on purchasing, please visit [IBM Fix Central][fixcentral]. To acquire and download Platform Symphony, login to fixcentral as an entitled user. If the user doesn't have entitlement, they won't be able to see fixes in fixcentral."
<Prabakaran> Does that suffice, or should I mention more details on this?
<mbruzek> Hello Prabakaran.  I have an IBM account and tried to follow the steps in the README.  When I attempted to download Platform Symphony I was presented with an error message
<mbruzek> Prabakaran: so my account was not "enabled" to download the software, which if I were a user would be frustrating and I don't know how to solve that problem.
<mbruzek> Prabakaran: I got the error message: "No applicable IBM support agreement found for one or more of the products you selected."
<mbruzek> I don't know how to get an agreement or resolve that download error
<mbruzek> only by reading the readme.
<Prabakaran> Matt, if the user doesn't have entitlement, they won't be able to see fixes in fixcentral for the updates, so anyone who wants to try this out has to already be a paying Symphony customer
<Prabakaran>  if they are a paid symphony customer, they should be able to see the fp on fixcentral
<Prabakaran> So I have mentioned in the README.md file: "For details on Platform Symphony usage and its benefits please visit [Product Page.][product-page]  For information on purchasing, please visit [IBM Fix Central][fixcentral]. To acquire and download Platform Symphony, login to fixcentral as an entitled user. If the user doesn't have entitlement, they won't be able to see fixes in fixcentral."
<Prabakaran> Does that suffice?
<mbruzek> Prabakaran: But I have a user ID on fix central so I assumed that I could download that software, I could not and there was no clear path for me to get that software.  I don't think the current readme was sufficient
<mbruzek> Prabakaran: Someone using Juju and interested in this Platform Symphony would need clear instructions on how to get it.
<mbruzek> Prabakaran: my point is *I* was not able to figure it out and I don't think others will be able to.
<Prabakaran> k then i will include more information on how to get entitlement to download these fixpacks, for when users get an error while downloading the fp.
<Prabakaran> i will include these information in readme with required links
<mbruzek> Prabakaran: I would mention this earlier near the top, so the reader has a chance to understand what is required here.  Reading from the top down it was not clear.
<mbruzek> Prabakaran: the rendered README is here: https://jujucharms.com/u/ibmcharmers/ibm-platform-symphony
<mbruzek> Prabakaran: most of the charms either have the software included or they download the software for the user.  It should be clear that this one is a little bit different and requires more set up.
<mbruzek> Prabakaran: it was not accurate that a user will need to create an "apt repository" because the software needs to be hosted on a HTTP server such as apache.
<Prabakaran> Still i have not pushed my readme changes to the trunk branch; to clarify with you, i was asking whether it suffices. I will mention more detail on getting entitlement with the link as discussed
<Prabakaran> now i will be removing the apache config part as i am implementing juju action on this.
<mbruzek> a juju action to download the software?
<Prabakaran> ya
<lazypower> It seems like that should be charm config, not an action
<lazypower> consider the scenario
<mbruzek> Prabakaran: How will an action prevent the need for an http server?
<lazypower> you've deployed the service, and you get a message in juju status that you have to now `juju action do ibm-platform-symphony/0 download-software`
<lazypower> what's this doing? why isn't the charm doing this for me automatically? I've input all the requested configuration, the rest should 'just work' (tm)
<mbruzek> Prabakaran: I am not clear on the change to an action.
<lazypower> mbruzek: i agree, there are unresolved technical questions, and workflow reasons why that is not a good path forward.
<Prabakaran> I was referring to an SFTP implementation, using a download action to install packages from an SFTP server that can be set up in your network.  "juju action do <unit name> download username=<sftp user> password=<sftp password> packagedir=<full path to repository directory> host=<sftp host name>"
<Prabakaran> so i will be changing readme also accordingly
<mbruzek> Prabakaran: An action is a one time thing, and is not traditionally used to install the software.
<mbruzek> Prabakaran: My understanding is the user would still need to host the Platform Symphony software *somewhere* after they purchased and downloaded it.
<mbruzek> Prabakaran: and they would still need details like that in the charm.
<mbruzek> how to set up sftp server
<mbruzek> Prabakaran: I would keep that information in the configuration so the charm could be deployed from both the GUI and the command line.  Currently the GUI does not provide a way to run actions on a charm.
<Prabakaran> K ...i have one more question also... "set -e" was used as per your advice along with "|| true", but unfortunately I was still getting a non-existent-file error, so in some places I was forced to use set +e. Is there any alternative method to resolve this non-existent error?
<mbruzek> Prabakaran: the charm used "set +e" in the majority of the code, defeating the best practice for bash scripts.  I would isolate the error commands down a little more and only use "|| true" where absolutely necessary.  Check the commands for options to "force" or return without error (silent perhaps), but I understand some will not have the option.  If you can't isolate those commands use if, then, else to prevent the command from being called.  If [ f
<mbruzek> Prabakaran: this is a very common pattern in the charms.  The trick to writing good idempotent charms is to figure out what causes the error and use appropriate code blocks to avoid these errors/problem.
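mbruzek's advice can be sketched as follows (marker path and commands are illustrative): keep `set -e` enabled and handle the specific commands that may legitimately fail, instead of disabling error checking wholesale with `set +e`:

```shell
set -e                                   # keep strict error checking on

# guard a command so re-runs are harmless (idempotent)
if [ ! -f /tmp/charm-installed ]; then
    touch /tmp/charm-installed           # only happens on the first run
fi

# tolerate one specific, expected failure - not every failure
rm /tmp/might-not-exist 2>/dev/null || true

echo "hook finished"
```

Run twice, this exits 0 both times; an unexpected failure anywhere else still aborts the hook, which is the behavior `set +e` throws away.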
<Prabakaran> Can anyone help me how to deploy openjdk layer charm available in the link https://github.com/juju-solutions/layer-openjdk as i am not able to find the deployment steps in the README.md file.
<Prabakaran> Hi kwmonroe
<Prabakaran> how to deploy openjdk layer charm available in the link https://github.com/juju-solutions/layer-openjdk as i am not able to find the deployment steps in the README.md file.
<lazypower> Prabakaran: that's a layer, and is not necessarily deployable by itself
<lazypower> Prabakaran: that's intended to be a runtime layer, as in a lower layer in a layered charm. You would include it in a layers.yaml file in your charm layer and subscribe to the events raised by the openjdk-layer
<lazypower> Prabakaran: There's been a lot of info published about this recently, let me fetch you some links
<lazypower> https://jujucharms.com/docs/stable/authors-charm-building
<lazypower> https://insights.ubuntu.com/2015/10/29/now-youre-charming-with-layers/
<lazypower> http://blog.dasroot.net/2015-charming-2-point-oh.html
<lazypower> Prabakaran ^
<lazypower> all three are excellent reads on the subject
<Prabakaran> Thanks for the links lazypower .. i have already tried to deploy this openjdk using charm build command
<Prabakaran> i was not able to check the openjdk service up and running
<lazypower> Prabakaran: well its not a service, its just installing the JVM/JRE
<Prabakaran> wondering whether I followed the correct steps while deploying
<lazypower> Prabakaran: this isn't fully ready for release yet, i don't think. there's no README in the layer repository
<lazypower> i know this is being worked on by folks over at IBM, WildFly, and a couple others to provide a java abstraction layer
<lazypower> giving folks the ability to mix/match pick/choose their componentry when charming and target a JVM
<lazypower> kwmonroe is currently out of office, but should be returning later today
<Prabakaran> I am working on the IBM Java SDK for charming with layers. For implementation i referred to the OpenJDK layer; i want to deploy and check whether the IBM Java layer charm is working or not. So for my understanding i want to test how openjdk works. Could you pls help me on this?...
<mbruzek> Prabakaran: Do you get a jvm when you deploy the openjdk charm after building it?
<mbruzek> Prabakaran: it may not be *running* but is it on the system somewhere?
<Prabakaran> After deployment in the container i was able to see openjdk was up and running, but when i sshed into the container the java -version command is not working
<mbruzek> Prabakaran: Perhaps java is not set up for the user.  Search for the java binary file
<Prabakaran> k
<roadmr> heya folks! I'm playing with the leadership feature and hooks. Is there a way to force a leader change without destroying the current leader unit?
<roadmr> I don't care which unit ends up being leader, just want a new one to be elected
<rick_h_> fwereade: ^ any idea there please?
<fwereade> roadmr, you could ssh in and stop the unit that's currently leader (`sudo stop jujud-unit-foo-123`); wait for failover; and start it again?
<fwereade> roadmr, you can see when that's happened with `juju run --service foo is-leader`
<fwereade> roadmr, let me know if that helps
<roadmr> fwereade: sure thing, trying it right now
<roadmr> fwereade: so it mostly works with one exception: I wanted to test the current leader getting the leader-elected hook, so it can remove some files that should only exist on the leader
<roadmr> fwereade: if I stop the agent and restart it, the former leader comes back thinking it's still leader (I mean, is-leader says False, which is correct, but it never got the leader-changed event so my hook which manages the files never fired)
<fwereade> roadmr, maybe I misunderstand: shouldn't you do that on the leader-settings-changed hook? getting that should indicate that you're definitely not leader
<roadmr> fwereade: maybe; I'm just learning how/when these hooks fire :) so you're saying only the leader will get leader-elected?
<fwereade> roadmr, exactly
<fwereade> roadmr, https://jujucharms.com/docs/1.25/authors-charm-leadership might be helpful
<roadmr> fwereade: right, I probably misread that to  mean that all units would get leader-elected and should call is-leader accordingly to see if they are or not
<roadmr> fwereade: so in my scenario, when the former leader rejoins, it should get leader-settings-changed? are events queued for unresponsive units?
<fwereade> roadmr, yeah, leader-settings-changed will be one of the first hooks that executes
<roadmr> fwereade: great, that should do it then! ok, mangling my charm, give me a couple of mins...
<fwereade> roadmr, FWIW, if you're interested: all the queueing actually happens on the unit agent -- when it wakes up it asks the state server how the world should look, and then decides what hooks need to run to bring your unit up to date
<roadmr> fwereade: that sounds nicer than a queue of events in the state server
<fwereade> roadmr, so if a counterpart changes relation settings 10x while you were down, you'd just see a single change when you woke up
<fwereade> roadmr, yeah, exactly
<roadmr> haha, sounds like me slamming "juju set blah" :)
<geetha> Hi, I am trying to use HAProxy as reverseproxy for my charm, but my charm is using SSL and will automatically redirect to https, when I try to access my charm through HAProxy, its giving me HTTPS connection error.
<beisner> jamespage, gnuoy - i can't file a bug against https://bugs.launchpad.net/charms/+source/openvswitch-odl/+filebug -- probably due to ingestion/namespace thing.
<beisner> jamespage, gnuoy - openvswitch-odl test is failing with:  http://paste.ubuntu.com/13602264/
<beisner> jamespage, gnuoy - added as a card in place of a bug for now.
<gnuoy> beisner, I've seen that bug before
<gnuoy> maybe it was in another charm
<beisner> gnuoy, indeed, tcp:172.17.113.167:6633 tcp:172.17.113.167:6633 != tcp:172.17.113.167:6633
<beisner> i've not dug into it, just wanted to raise a bug to get it on the radar.  not sure if it's a test bug or a charm bug yet.
<mbruzek> geetha: It should be possible to set up haproxy with https
<mbruzek> geetha: I have not done this personally but I know there are configuration options for the ssl_key
<mbruzek> geetha: I would read up on haproxy and ssl or HTTPS for more information such as:  https://www.digitalocean.com/community/tutorials/how-to-implement-ssl-termination-with-haproxy-on-ubuntu-14-04
<mbruzek> geetha: If the charm does not do something you need you can branch it, fix the code and submit the improved code to help make the haproxy charm better, but I suspect it does work with https, just needs to be configured properly.
<geetha> mbruzek, there is no configuration option for https in the HAProxy charm. could you please suggest how to configure https in the HAProxy charm.
<mbruzek> geetha: As I said earlier, I have not done this before.  You will have to read up on the charm's readme, a quick glance shows ssl_key and ssl_cert configuration options.
<mbruzek> geetha: a Google search shows me another doc http://seanmcgary.com/posts/using-sslhttps-with-haproxy
<roadmr> fwereade: it works \o/ using leader-settings-changed was the missing link. On coming back, the former leader correctly updates its files to be just a follower :) Thanks so much for your help!
<lazypower> geetha: It sounds like we need to file a bug against the HAProxy charm
<lazypower> geetha: I don't believe the haproxy charm supports ssl out of the box in its current form
<lazypower> ah disregard, mbruzek is correct
<lazypower> it accepts a base64 encoded certificate / key in the haproxy charm configuration. Which will ssl terminate the proxy frontend.
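Concretely, a hypothetical sketch of that configuration (the ssl_cert/ssl_key option names come from the charm's README as mentioned above; the values here are placeholders, not a tested config):

```
haproxy:
  ssl_cert: <base64-encoded PEM certificate>   # e.g. base64 of cert.pem
  ssl_key: <base64-encoded PEM private key>    # e.g. base64 of key.pem
```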
<fwereade> roadmr, excellent, glad to help
<adam_g> using the local kvm provider, is it possible to use a locally modified image instead of one fetched from simplestreams?
<rick_h_> adam_g: 'local kvm provider'? you mean lxc?
<adam_g> rick_h_, no, i mean the local provider w/ 'container: kvm'
<rick_h_> adam_g: ah ok
<rick_h_> adam_g: no, it's not built in to do that atm
<adam_g> rick_h_, bummer
<rick_h_> adam_g: there's plans to use lxd and enable some of that for dev in the future but not here yet
<rick_h_> adam_g: and tied to lxd vs kvm
<adam_g> ah
<lazypower> rick_h_: correct me if i'm wrong, but if you're using MAAS, cant you specify one of the custom images?
<rick_h_> lazypower: if you build streams from that image list
<rick_h_> lazypower: see https://jujucharms.com/docs/1.24/howto-privatecloud
<lazypower> Yeah, i thought so. And thats right, gotta write streams for it
 * lazypower nods
<lazypower> ok, cool cool, thanks rick_h_  :)
<rick_h_> lazypower: np
#juju 2015-12-02
<Bofu2U> is there an easy way to just deploy a KVM VPS? with juju?
<Bofu2U> just with a stock centos image or something like that
<Laney> hi
<Laney> I defined a new action in my charm and then upgraded it (juju upgrade-charm), but I can't run it (status: failed, script never run)
<Laney> is this known to not work?
<bloodearnest> Laney, what's the error output when you try to run the action?
<Laney> bloodearnest: I don't see any output, where might it be?
<Laney> wait
<Laney> I do!
<Laney> 'action not implemented on unit ...'
<Laney> so you can't upgrade and get new actions?
<bloodearnest> Laney, what does this say: juju action defined <svc>
<Laney> bloodearnest: It's in there
<bloodearnest> Laney, and juju action do <svc> <action> gave you 'action not implemented on unit'
<bloodearnest> ?
<Laney> right
<bloodearnest> so if the action is defined, then the upgrade worked and it's available
<Laney> well it's <unit> in one case and <svc> in the other
<Laney> If I ssh in I can see that actions.yaml was indeed updated
<bloodearnest> Can you try invoke the action manually?
<bloodearnest> when ssh'd in?
<Laney> yes, but the first line is a juju-log which I never see in the debug-log
<Laney> so I suspect it's not running
<bloodearnest> Laney, what version of juju is your environment?
<Laney> 1.25.0
<bloodearnest> it's odd that it's defined, but then can't be executed
<bloodearnest> Laney, is actions/<your-action> executable?
<Laney> yup
<Laney> I can't run it manually, don't know how to set the environment up
<Laney> JUJU_CONTEXT_ID
<bloodearnest> Laney, I think you should be able to run it like so, from the unit: juju-run <unit> actions/<your-action>
<bloodearnest> Laney, if you want to rule out the upgrade-charm being an issue, simply add a new unit and see if the action works on that.
<Laney> bloodearnest: bleh, juju-run doesn't seem to ever do anything, is that because config-changed is still running?
<Laney> (the failure was from before this)
<bloodearnest> Laney, ah, we have a stuck unit
<Laney> it's meant to still be running
<bloodearnest> Laney, juju serializes hooks/actions on a unit. So if one is stuck, blocks all else. Any ideas why config-changed is still running?
<Laney> still grinding away
<bloodearnest> it's deliberate?
<Laney> but when I tried the action it was before I changed the config, nothing was running then
<Laney> just means I can't try juju-run now
<bloodearnest> Laney, ack
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/nova-compute/no-hugepages/+merge/279245 to unbreak nova-compute if you have a moment. The amulet fail is odd as I can't actually see a failed test
<bloodearnest> another charm layers question: I have a charm layer at ./layers/x509-cert. Its layer.yaml has: includes: ['layer:basic', 'interface:x509-paths']
<bloodearnest> I have interface defined at ./interfaces/x509-paths/
<bloodearnest> the charm builds fine
<bloodearnest> but in the built charm output, the hooks/relations/x509-paths directory contains the x509-cert *charm* code, not the interface code, as I was expecting
<bloodearnest> am I doing something wrong?
<bloodearnest> charm layer: http://bazaar.launchpad.net/~bloodearnest/charms/trusty/x509-cert/trunk/files
<bloodearnest> interface layer: http://bazaar.launchpad.net/~bloodearnest/+junk/x509-paths/files
<bloodearnest> output looks like: http://paste.ubuntu.com/13623707/
<JoseeAntonioR> marcoceppi: ping
<Laney> if I mess up an upgrade-charm hook how can I push a new version and retry?
<Laney> I tried upgrade-charm again and resolved --retry but it has the old version still
<Laney> I got it, upgrade-charm --force && resolved --retry
<asanjar> kwmonroe: hey ugly
<asanjar> kwmonroe: the latest spark-bench code should get uploaded to github
<asanjar> soon
<thomi> Hi, my juju deployment is stuck, and I see this error in 'juju debug-log' every few seconds:
<thomi> machine-0: 2015-12-02 22:56:15 ERROR juju.worker runner.go:223 exited "toolsversionchecker": cannot update tools information: cannot get latest version: cannot find available tools: no tools available
<thomi> any ideas?
<jrwren> chamers: requesting requeue: http://review.juju.solutions/review/2357
#juju 2015-12-03
<lazypower> thomi : which juju version are you using?
<thomi> lazypower: 1.25.0-0ubuntu1~15.10.1
<lazypower> thomi which substrate is this against? local, aws, openstack, et-al?
<thomi> lazypower: I destroyed the environment and re-created it. I no longer see that error (I see other errors instead)
<thomi> lazypower: this is all against local
<lazypower> well, thats progress
<lazypower> Are the new errors blocking you?
<thomi> lazypower: they are, but I'm at my EOD, so I'll attack them again tomorrow with a fresh brain
<thomi> thanks for your help tho
<lazypower> Sorry i wasn't more help. Cheers thomi
<jamespage> gnuoy, http://paste.ubuntu.com/13640557/
<jamespage> was the diff I have locally for generating a list of related unit private-addresses
<jamespage> it also switches the scope to SERVICE from GLOBAL
<jamespage> that said I think that the interface stub should provide methods that return primitives such as lists as a core - joining with ',' is really an openstack specific bit
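As a sketch of that split (hypothetical names, not the actual interface code), the interface returns a plain list and the OpenStack side does the joining:

```python
# Hypothetical sketch: the interface layer returns primitives (a list),
# while formatting like joining with ',' lives in the consuming layer.

class AMQPRequires:
    """Stand-in for an interface stub; real code would read relation data."""

    def __init__(self, unit_addresses):
        # In a real charm these would come from relation-get per unit.
        self._addresses = unit_addresses

    def private_addresses(self):
        # Core interface API: return a primitive (list), not a joined string.
        return list(self._addresses)


def openstack_rabbit_hosts(interface):
    # The ','-join is OpenStack-specific, so it belongs here, not in the
    # interface stub itself.
    return ','.join(interface.private_addresses())
```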
<gnuoy> jamespage, got a moment for a layers review? https://code.launchpad.net/~gnuoy/charms/+source/interface-rabbitmq/+git/interface-rabbitmq/+merge/279417
<dorea> How to study: http://qr.ae/RbCtvx
<dorea> Oops, wrong channel! bye.
<jamespage> gnuoy, reviewed - we should drop 'hostname' from the interface - its legacy and should not be used afaict
<gnuoy> jamespage, it opens an interesting question about what other aspects of the charm could be considered 'legacy'
<jrwren> charmers: I don't think that I can fix these errors, they seem to come from the test environment. Can someone give me suggestions? http://paste.ubuntu.com/13632412/
<marcoceppi> tvansteenburgh: ^?
<marcoceppi> tvansteenburgh: disregard
<marcoceppi> jrwren: that's an issue with OSCI, beisner is your best bet
<marcoceppi> We have two CI services running, one for general testing and one for OpenStack (OSCI) testing
<jrwren> beisner: help. :)
<beisner> hi jrwren - undercloud woes hit ya there.  serverstack is back in black, i can re-trigger that now.
<beisner> jrwren, can you link me to your MP?
<jrwren> beisner: oh please retrigger, thank you. https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
<beisner> jrwren, you have a passing result which ran after that failure
<jrwren> beisner: I do? I think I didn't get the email.
<beisner> jrwren, see comment chain @ https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191
 * beisner thought he already retriggered  this one ;-)
<jrwren> beisner: I see the comment. Sorry for the false request. Any reason it didn't email me the success message?
<beisner> jrwren, not sure.  our bot doesn't email you.  launchpad does.
<jrwren> beisner: ah! I guess I never realized those messages were just LP messages on the MP.  Thanks!
<beisner> jrwren, ps thanks for the work on beating those tests into shape
<jrwren> beisner: i'll be giving charmers lots of fun pokes of fake grief if I ever see 'em all IRL again :)
<beisner> ha!
<jrwren> I think next request is rerun these: http://review.juju.solutions/review/2357 ?
<gnuoy> jamespage, I have banished hostname
<beisner> marcoceppi, hrm, can't figure out how to retrigger tests @ http://review.juju.solutions/    it seems like there used to be a button.  it could just be not-enough-coffee..
<marcoceppi> beisner jrwren retriggered
<beisner> marcoceppi, woot thx man
<jrwren> marcoceppi: thank you.
<jamespage> gnuoy, \o/
<gnuoy> jamespage, what are we celebrating?
<jamespage> hostname banishment
<jamespage> gnuoy, fwiw I think we're too aggressive on removing the 'available' and 'connected' states - the handler is applicable for 'broken' but not 'departed', which would apply when a single unit exits the relationship
<jamespage> gnuoy, but please merge your proposal for now
<gnuoy> jamespage, good point
<gnuoy> jamespage, ack
<jrwren> tvansteenburgh: I think this is you from whom I need help.  lxc test env broken? http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1652/console  ERROR there was an issue examining the environment: cannot use 37017 as state port, already in use
<tvansteenburgh> jrwren: ugh. thanks for the heads-up, fixing...
<tvansteenburgh> jrwren: new test running http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1654/console
<jrwren> tvansteenburgh: thank you.
<TheJeff> hey #juju
<TheJeff> running up maas/juju/openstack first time ever
<TheJeff> following the canonical guide - got to the part where we're to be bootstrapping a node
<TheJeff> juju quickstart
<TheJeff> juju quickstart v1.3.1
<TheJeff> stuck at bootstrapping the maas environment (type: maas)
<TheJeff> :(
<TheJeff> little help?
<TheJeff> just sits there and does nothing
<lazypower> TheJeff: has it acquired any of the nodes in your maas cluster?
<TheJeff> yes it did
<lazypower> as in, do any of the nodes in maas have a powered on logo?
<TheJeff> says deployed
<lazypower> ok, so the curtin process can take a bit of time. has the node come online fully after curtin?
<TheJeff> with the owner as the maas
<TheJeff> curtin?
<TheJeff> and its been a solid 15 minutes of no action
<TheJeff> 4th attempt
<lazypower> 15 minutes does seem excessive
<TheJeff> let it wait 30 before
<lazypower> typically its done within  2 or 3 minutes in most cases
<lazypower> so theres 1 of 2 things happening
<lazypower> there's a networking issue
<lazypower> or we've found a bug, but i'm apt to point a finger at a networking issue
<lazypower> when you deploy a node in maas, can your workstation reach the instance that it deployed?
<TheJeff> no, but the maas box is the one running juju
<TheJeff> and it can hit ssh
<TheJeff> but actually i tested that and it just kicks back pub key nogo
<TheJeff> so the daemon is reachable
<TheJeff> could that be related?  do i need to get an ssh key in?
<lazypower> Did you put your pubkey in maas for the juju user?
<TheJeff> oo
<TheJeff> nope, doc did not mention that
<lazypower> cat ~/.ssh/id_rsa.pub and place that in your maas users ssh keys
<lazypower> also TheJeff - which version of juju? 1.25 i assume?
<TheJeff> 1.24.7-trusty-amd64
<lazypower> TheJeff: once that ssh key is added, tear down the unit that was in progress and retry. That ssh key will be added during the curtin process w/ cloud init
<lazypower> and you *should* be g2g from there
<TheJeff> yep all that makes sense
<TheJeff> doing now
<TheJeff> really really appreciate the tip
<lazypower> TheJeff: additionally which guide were you following? I'd like to file a bug against it if its missing the pubkey instructions.
<kwmonroe> coreycb: are there restrictions on what things should be called in the ./reactive subdir of a layered charm?  like does ./reactive/myStuff have to end in .py or .sh or anything?
<kwmonroe> er, cory_fu ^^
<TheJeff> lazypower: after searching it does mention the key
<TheJeff> but doesnt actually state 'do this'
<TheJeff> https://help.ubuntu.com/lts/clouddocs/en/Installing-Juju.html
<TheJeff> the only mention of it is This key will be associated with the user account you are running in at the time, so should be the same account you intend to run Juju from.
<TheJeff> doesnt explicitly mention to add it to the user
<cory_fu> kwmonroe: Anything with .py is imported as a Python source file.  Anything that's executable is considered an ExternalHandler.  Anything else should be ignored
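A rough sketch of that dispatch rule (an illustration of the behaviour cory_fu describes, not charms.reactive's actual code):

```python
def classify_reactive_file(filename, executable):
    """Mimic the rule for files under reactive/: *.py files are imported
    as Python handlers, executable files become ExternalHandlers, and
    anything else is ignored."""
    if filename.endswith('.py'):
        return 'python'
    if executable:
        return 'external'
    return 'ignored'
```

So a file named plain `myStuff` only runs if it's executable; otherwise it is silently skipped.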
<lazypower> TheJeff: ah - yeah. looks like we have a new feature here that allows you to inline the authorized key path for the maas provider - https://jujucharms.com/docs/stable/config-maas
<lazypower> i say new as its new to me, it may have existed for a while :)
<TheJeff> hm well I actually did that
<TheJeff> and the key wasnt present in the user profile
<lazypower> in the interest of one thing at a time, if that doesn't resolve your woes, we'll dive into deeper debugging. What may be helpful is to run juju bootstrap with the --debug flag so you get more verbose output
<TheJeff> neat thats a great tip too
<TheJeff> its still in a deploying state, fingers crossed
<lazypower> marcoceppi o/
<marcoceppi> \o lazypower
<lazypower> do you have a second to expand on charms.docker #3?
<lazypower> i'm not sure i understand what you were asking me
<lazypower> https://github.com/juju-solutions/charms.docker/pull/3
<marcoceppi> lazypower: I'm curious how this works. Do you still need to increment setup.py version when you tag a release or does travis just do that?
<lazypower> From what I understood in the docs, you still have to rev setup.py's version number. The GH tag doesn't necessarily have to reflect what the version is, but it will trigger travis to run the deploy routine.
<marcoceppi> lazypower: cool, thanks for the clarification
<lazypower> so, if i leave it as v0.0.1, push an update, tag v0.0.1-1, it'll still publish to pypi as 0.0.1 (assuming you can publish the same version multiple times, i have not tried that)
<marcoceppi> lazypower: you can't
<lazypower> ok, so the build should fail then :)
<marcoceppi> interesting
<lazypower> it'll try to deploy and the build should fail
<lazypower> wanna test it?
<lazypower> merge that and lets make some tags
<marcoceppi> but if you were to move the tag you'd have to force push on master
<marcoceppi> which is bad
<lazypower> since its semver, just increment the minor rev
<marcoceppi> wait, I'm not 100% on that actually
<marcoceppi> but it's annoying that you've burnt the 0.0.1-1 release
<lazypower> annoying i can deal with because thats problems for me
<lazypower> not for the consumers using pypi
<lazypower> i'm not sure that force updated tags will collide though
<lazypower> git push origin --tags --force doesn't seem to do anything that would collide, it just moves the hash reference
<lazypower> ah but i see what you're saying
<lazypower> the version thats in setup.py, and the now moved tag, would not reflect whats in pypi potentially.
<lazypower> yeah, i think i understand. seems annoying, and is worth investigating
<marcoceppi> well, also, if you force push on master, you potentially mess up other people's repos by rewriting history
<lazypower> not with tags
<lazypower> but code, yes
<marcoceppi> yeah, i realized as I said it earlier I wasn't 100% on that since a tag is just a ref
<marcoceppi> I don't have a problem with the merge, I was more curious how it worked
<lazypower> I'm not 100% on that either
<TheJeff> ok, so after trying again with the ssh key in there, and failing again I've run it with --debug
<lazypower> but wanted to investigate and possibly use this as a nicer method to keep it updated.
<TheJeff> lazypower: 2015-12-03 19:54:13 DEBUG juju.provider.common bootstrap.go:254 connection attempt for shy-yam.maas failed: ssh: Could not resolve hostname shy-yam.maas: Name or service not known
<TheJeff> looks like name resolution failing?
<TheJeff> how would it know that name?? do I need to create a host entry?
<marcoceppi> TheJeff: yes, this is a common problem
<lazypower> TheJeff: not at all
<marcoceppi> TheJeff: the one way I've gotten around this is to set my nameserver to the maas-master server
<lazypower> TheJeff: Maas  runs a bind server which updates w/ the hostnames as they are deployed and allocate addressing
<lazypower> TheJeff: so you'll need to add an entry to the resolv.conf templates, to point to itself. nameserver 127.0.0.1  \n search maas
<lazypower> and that should get you sorted w/ hostname resolution
<marcoceppi> lazypower: wait, where are you recommending that?
<marcoceppi> the issue is from the juju client -> dns, not from the nodes AFAIU
<lazypower> marcoceppi : TheJeff  is running juju on the maas master
<lazypower> on the region controller i believe
<TheJeff> yep
<TheJeff> it's all on a sanitized network...
<TheJeff> no extra hosts or workstation access
<lazypower> TheJeff: nslookup/dig will answer you where its looking for the hostname :)
<TheJeff> yep!
<marcoceppi> oh, oops, sorry. Didn't see that
<lazypower> i'm 90% certain that you just need to add the entry for looking up .maas tld on the region controller, and you should be sorted
<marcoceppi> yeah, add `search maas` to the bottom of /etc/resolv.conf to test. If that works, it won't survive reboot, but you can edit the templates like lazypower suggested and have it persist
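The resolv.conf fragment being described (added directly for a quick test, or to the resolvconf templates on the region controller to persist across reboots):

```
nameserver 127.0.0.1
search maas
```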
<TheJeff> yeah I can resolve the name now, will be a few minutes to reboot the box and redeploy
<TheJeff> (blasted server bioses are snails!)
<TheJeff> so many boot roms :(
<lazypower> TheJeff: i felt your pain yesterday standing my 2u back up after an 8 month resting phase :P
<TheJeff> 8 month not bad
<TheJeff> from what i hear, every time you reboot a linux server a kitten dies.
<lazypower> https://bugs.launchpad.net/maas/+bug/1522566
<mup> Bug #1522566: MAAS TLD names are not resolveable by maas-region-controller by default <MAAS:New> <https://launchpad.net/bugs/1522566>
<lazypower> marcoceppi ^
<marcoceppi> +1
<TheJeff> 2015-12-03 20:08:46 DEBUG juju.provider.common bootstrap.go:254 connection attempt for shy-yam.maas failed: Warning: Permanently added 'shy-yam.maas,10.7.1.102' (ECDSA) to the list of known hosts.
<TheJeff> /var/lib/juju/nonce.txt does not exist
<TheJeff> :(
<lazypower> getting trolled by a shy-yam, what a thursday adventure
<TheJeff> so i'm googling this
<TheJeff> i powered on via the host's console but it pxe booted from maas
<lazypower> TheJeff - seems like you're running into this https://bugs.launchpad.net/juju-core/+bug/1314682
<mup> Bug #1314682: Bootstrap fails, missing /var/lib/juju/nonce.txt (containing 'user-admin:bootstrap') <bootstrap> <juju> <maas-provider> <juju-core:Triaged> <https://launchpad.net/bugs/1314682>
<TheJeff> lazypower: totally
<lazypower> TheJeff: there's a few workarounds listed in there. Highly suggest that you click the "This bug affects me" link and give some of those a try. I'll ping @jcastro to bring it up as an item to look at in the next meeting, as there's a few users here that have hit this bug and it seems to be long-running
<jcastro> looking
<TheJeff> going to check it out - maybe I'm going about things wrong?  The hosts boot from PXE (but we kick the boot off via remote bios)
<TheJeff> afaik cloud-init generally working, as maas was able to spin these hosts up and make them nodes in the first place
<jcastro> I'll holler this over to the core team, seems to have plenty of data attached
<TheJeff> so whats the contents of this nonce.txt supposed to be?
<TheJeff> I can manually slap it in I suppose
<lazypower> TheJeff: we're getting out of my depth of knowledge here :(  Finding the bug report was due to googling and deep linking. I wish i were more help
<lazypower> Best I can offer at this juncture is to advise you to watch that bug and experiment
<TheJeff> yeah that's what I'm going to do
<TheJeff> seems it just contains (from comments) just one line:
<TheJeff> user-admin:bootstrap
<TheJeff> if anyone has a working juju bootstrapped box that can check /var/lib/juju/nonce.txt would greatly appreciate it
<TheJeff> is user-admin a real username?  or is that a placeholder by the poster?  are there other lines?
<TheJeff> or is it a key?
<lazypower> TheJeff bootstrapping now, will have an answer for you shortly
<TheJeff> lazypower: you are a hero among men (or women)
<lazypower> You're too kind :)
<lazypower> TheJeff: are you planning on attending the juju charmer summit? Best place to get your hands/feet wet with juju development
<TheJeff> where / when?
<lazypower> TheJeff: http://summit.juju.solutions
* lazypower changed the topic of #juju to: Welcome to Juju! || jujucharms.com back up || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP || Charmer Summit: http://summit.juju.solutions
* marcoceppi changed the topic of #juju to: Welcome to Juju! || Juju Charmer Summit: http://summit.juju.solutions || Docs: http://juju.ubuntu.com/docs || FAQ: http://goo.gl/MsNu4I || Review Queue: http://review.juju.solutions || Unanswered Questions: http://goo.gl/dNj8CP
<lazypower> derp
<lazypower> whoops, wrong window focus, nvm
<lazypower> TheJeff: something is going hinky on my end, this is taking longer than expected :/
<lazypower> TheJeff: it just has the string user-admin:bootstrap
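So the workaround file is literally one line; per the comments above, /var/lib/juju/nonce.txt contains:

```
user-admin:bootstrap
```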
<lazypower> jose: ping
<TheJeff> lazypower: had fires, trying now
<TheJeff> thank you for checking
<lazypower> np
<TheJeff> re: summit - Belgium is very far from Toronto!
<TheJeff> but we will see we will see
<TheJeff> lazypower: 2015-12-03 21:28:03 DEBUG juju.utils.ssh ssh.go:249 using OpenSSH ssh client
<TheJeff> Logging to /var/log/cloud-init-output.log on remote host
<TheJeff> Running apt-get update
<TheJeff> Running apt-get upgrade
<TheJeff> manually slapping that string in worked around fine
<TheJeff> +1 that bug
<lazypower> we got you unblocked! nice
 * lazypower fistpumps
<lazypower> sorry about the bug though. its on the list to be talked about at the next team standup
<thumper> o/ lazypower
<lazypower> yo yo thumper
<TheJeff> lazypower: my team lead will be at cfgmgmtcamp
<TheJeff> so that's pretty neat
<thumper> I wish we had the time to allocate a dev to just follow each eco person for a month and fix all the shitty niggly bugs that get in the way
<TheJeff> he was going to /j and sing praises earlier but i told him to hold off dox'ing us lol
<lazypower> thumper: we file bugs, and triage accordingly :)
<thumper> :)
<TheJeff> but now that this is working i think he can dox us just fine
<TheJeff> and thank you very very very very very much sir
<TheJeff> or madam
<marcoceppi> TheJeff: well, you can always have him join us at the summit (and convince him to take you along ;)
<lazypower> Anytime TheJeff
<kwmonroe> where should layered charm authors commit source in launchpad?  i know the resultant charm must live at lp:~user/charms/trusty/foo/trunk, but what about the source -- lp:~user/charms/layers/foo/trunk, or lp:~user/charms/trusty/foo/source?
<marcoceppi> kwmonroe: they can put it anywhere, there's no need for it to be in the charms project
<lazypower> kwmonroe: i heard the cool kids were using git on launchpad these days
<mbruzek> kwmonroe: lazypower's answer is the best
<TheJeff> lazypower: he just followed you on twitter
<TheJeff> lol
<Icey> can I set a config variable within a charm?
<marcoceppi> Icey: only a user can set a configuration value
<adam_g> thedac, you familiar with the neutron-api charm's templating system?
<thedac> adam_g: a bit, yes.
<thedac> What are you trying to do?
<adam_g> thedac, trying to get something set in /etc/neutron/plugins/ml2/ml2_conf.ini
<adam_g> it looks like the only time this file is ever managed by the charm is when its set in the legacy manage plugin mode
<adam_g> http://bazaar.launchpad.net/~openstack-charmers/charms/trusty/neutron-api/trunk/view/head:/hooks/neutron_api_utils.py#L286
<thedac> let me take a look
<adam_g> i can just push setting in via the subordinate's relation, but the principal charm isn't managing the file as a template so they'll never be injected
<adam_g> ... unless the subordinate API thing supports injecting config values into arbitrary config files not managed by the templating system
<thedac> Off the top of my head the subordinate api only affects neutron.conf. Let me verify.
<adam_g> yeah, thats what it looks like
<adam_g> i suppose i could just manage the ml2 conf directly from the subordinate
<thedac> I think that is the intention. If we are using vanilla ovs neutron-api handles it. If not the subordinate does so.
<thedac> adam_g: That is confirmed see lp:~openstack-charmers/charms/trusty/neutron-api-odl/vpp as an example
<adam_g> thedac, what does 'vanilla ovs' mean?
<thedac> If we are using the "default" openvswitch
<thedac> Without any external config changes
<TheJeff> ayy
<TheJeff> ok we're like ... 99.9% there I think
<TheJeff> after working around and working around, juju quickstart ran
<TheJeff> juju-gui/0 deployment is pending
<TheJeff> machine 0 is started
<TheJeff> ehhhh didnt wait long enough
<TheJeff> its up !
<lazypower> TheJeff: Seems like you're ready to model some deployments
<jrwren> charmers, can someone please poke this http://review.juju.solutions/review/2357
#juju 2015-12-04
<blr> is there a reason that debug-hooks can't wait for a unit's public address to be published?
<jose> lazypower: pong
<bloodearnest> sanity check: if I have $JUJU_REPOSITORY/interfaces/test/{interface.yaml,provides.py,requires.py}, a charm layer in $JUJU_REPOSITORY/layers/test-charm/ that has includes: ['interface:test'], and has the interface in the metadata.yaml, I should expect to see those provides/requires.py in the charm build output somewhere, right?
<jamespage> gnuoy, thedac, coreycb: what do you think of https://code.launchpad.net/~james-page/openstack-charm-layer-dev/+git/openstack-charm-layer-dev/+merge/279563 ?
<jamespage> it introduces 'adapters' and a object mechanism for aggregating them across presented interfaces
<gnuoy> argh, we're working on the same thing
<gnuoy> jamespage, I really like it
<jamespage> gnuoy, some of that needs interfaces/relations
<jamespage> interfaces is the type, rather than the actual named relation
<jamespage> gnuoy, I think we can add some base adapter mapping as well, and then override in the subclasses
<gnuoy> jamespage, so I proposed that the adapter code goes into a layer called openstack which places them in lib/openstack/adapters.py
<jamespage> gnuoy, that sounds good to me
<jamespage> gnuoy, sorry if I stepped on your toes...
<gnuoy> jamespage, not at all, you'd got much further than me
<jamespage> gnuoy, I prefer this class based approach to dealing with things rather than functions and global variables in a utils class
<gnuoy> jamespage, I'm happy to grab what you've done and do the layer munging unless you want to do that ?
<jamespage> gnuoy, please go ahead - we can then collab on getting the layer just right...
<gnuoy> kk
<jamespage> gnuoy, there are some bits of that I don't like right now
<jamespage> (I wrote one TODO)
<jamespage> gnuoy, I did a template mockup as well - http://paste.ubuntu.com/13665616/
<jamespage> untested
<gnuoy> kk
<jamespage> gnuoy, I'm wondering whether we can also take an object based approach to the handlers as well...
<jamespage> gnuoy, just making the adapaters classes behave like jinja2 wants
<gnuoy> jamespage, lp:~openstack-charmers-layers/charms/+source/reactive-openstack-layer
<jamespage> gnuoy, OK I'll push my change there
<jamespage> gnuoy, ok should work with jinja2 now
<jamespage> gnuoy, I'll refocus onto ceph now and stop stepping on your toes...
<gnuoy> I'm very grateful for the input, thanks
<jamespage> gnuoy, my thinking on 'interface_type' was to use that for status reporting
<jamespage> but not thought through just yet
<gnuoy> jamespage, thedac, coreycb: I've taken James' work and created a branch containing the fledgling openstack base layer. I've also updated the setup branch to use it.
 * gnuoy heads out
<lazypower> jose: hey there. Was looking for you lastnight to get you in touch with mattyw about ubuntu membership
<coreycb> jamespage, gnuoy: I like that adapter approach.  it took a bit to wrap my head around, but if you look at that code without the base classes in the picture it's really pretty simple to write/use an adapter.
<lazypower> coreycb: what are these adapters you speak of?
<coreycb> lazypower, https://code.launchpad.net/~james-page/openstack-charm-layer-dev/+git/openstack-charm-layer-dev/+merge/279563
<lazypower> interesting.. you can swap in/out interfaces using an adapter?
<coreycb> lazypower, basically it's a way for the openstack charms to add some logic around interface data, and render it more easily
<lazypower> i like it
<lazypower> it looks clean
<coreycb> lazypower, yeah, and it keeps any logic out of the interface layer, and lets the interface just return data
<lazypower> coreycb: thats *exactly* what its supposed to do
<lazypower> \o/
<coreycb> yup
<jamespage> coreycb, lazypower, thedac, gnuoy: I just pushed a bit of a rename I wanted to do into the openstack base layer
<jamespage> Interface->Relation
<jamespage> and I added some support for default adapter mappings into the Adapters base case
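A minimal sketch of the adapter idea as discussed here (hypothetical names; the real code is in the merge proposal linked above): an adapter wraps a relation/interface object and puts the charm-specific massaging of its data behind properties, so a Jinja2 template can just read attributes.

```python
class FakeAMQPRelation:
    """Stand-in for an interface object; real data comes from relation-get."""

    def __init__(self, data):
        self.data = data


class AMQPAdapter:
    """Hypothetical adapter: wraps the relation and adds charm-specific
    logic, keeping the interface itself a plain data source."""

    def __init__(self, relation):
        self.relation = relation

    @property
    def host(self):
        return self.relation.data.get('private-address')

    @property
    def vhost(self):
        # Default logic lives in the adapter, not the interface.
        return self.relation.data.get('vhost', 'openstack')


def template_context(adapters):
    # Aggregate adapters into the dict a Jinja2 template would receive,
    # e.g. {{ amqp.host }} in a template resolves via attribute access.
    return dict(adapters)
```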
<bloodearnest> is there a way to exclude directories in a layer when building?
<jamespage> I was about to ask the same thing
<jamespage> I'd like to exclude unit tests for a layer for example
<bloodearnest> same here
<lazypower> there is a way to declare that i do believe
<lazypower> Ah, according to the old repo's layer.yaml it only has strategy keys for deleting yaml key names
<lazypower> from config/metadata
<bloodearnest> oh wow - so charms.reactive does not use standard python import mechanisms, it does:
<bloodearnest> sys.modules[modname] = load_source(modname, realpath)
<bloodearnest> which means imports inside that source don't seem to work right
<bloodearnest> e.g. so if my reactive/ dir has 3 files ( foo.py, bar.py, __init__.py), I can not import foo from bar when running as a hook. I can when running unit tests.
<bloodearnest> oh my
<bloodearnest> (Pdb) __name__
<bloodearnest> '_var_lib_juju_agents_unit-x509-cert-0_charm_reactive_x509_cert_py'
<bloodearnest> (Pdb)
<bloodearnest> that would explain it
<bloodearnest> so, reactive/ modules can not depend on other local files.
<bloodearnest> ah no, that's just name mangling, ignore me
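The mangled name comes from charms.reactive registering each handler file via load_source under a module name derived from its path; roughly (a sketch of the observed behaviour above, not the library's exact code):

```python
def mangle_module_name(realpath):
    """Approximate the module name a reactive handler file gets registered
    under: the file path with '/' and '.' flattened to '_'."""
    return realpath.replace('/', '_').replace('.', '_')
```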
<jamespage> thedac, gnuoy, coreycb: added tox configurations to the layer and the rabbitmq interface - does lint only right now but set to go for unit testing as well
<jamespage> coreycb, wanna add that to the shared-db interface?
<coreycb> jamespage, +1 yep I'll do that today
<TheJeff> hello you brainy charmers you
<marcoceppi> o/
<TheJeff> quick q
<TheJeff> deploying openstack with juju
<TheJeff> ext-port in the openstack-config.yaml -- is that the existing connection?
<TheJeff> or a second one
<marcoceppi> TheJeff: I think it's existing? jamespage gnuoy beisner coreycb ^?
<TheJeff> like, the interface that connects to MAAS?  I guess not, although that seems to be the only active interface that's up on the juju host
<marcoceppi> TheJeff: I think it's for the external network
<TheJeff> alright... so eth0 will I suppose always default to the MAAS network
<TheJeff> then if i specify eth1 it'll bring that up on its own connection?
<marcoceppi> TheJeff: I'd wait to see what the OpenStack charmers have to say
<TheJeff> we have vlans for each physical interface.  I'm not sure from this config file and the way it's going to spin up which one will be which
<TheJeff> eh, gonna try eth1 (default) and see what happens.  if it breaks I'll do it again I suppose
<TheJeff> unless the governor phones before I hit enter
<coreycb> TheJeff, that's a second one
<coreycb> TheJeff, so I believe you need 2 ports for neutron-gateway
<TheJeff> ok cool
<coreycb> one for cloud services, one for external instance traffic
<TheJeff> cool
<TheJeff> got it
<TheJeff> additionally - in the same conf, osd-devices suggests sdb
<TheJeff> is a second physical block device required?
<TheJeff> we only have sda
<coreycb> TheJeff, one should be ok
<coreycb> TheJeff, well, in total I think you'd need at least 2
<coreycb> TheJeff, but a minimum of one to use as a ceph osd volume
<coreycb> if that makes sense, one for ceph to consume, one for your root partition
<TheJeff> hm
<jamespage> TheJeff, it's an additional port - eth0 will drop all connections to the server
<jamespage> TheJeff, you might find https://jujucharms.com/openstack-base/ informative
<jamespage> coreycb, thedac, gnuoy: idea - use ddt for interface testing
<jamespage> http://ddt.readthedocs.org/en/latest/example.html
<jamespage> although that might not do quite what we want
<coreycb> jamespage, that looks pretty neat
<jamespage> coreycb, yeah - but I'm not sure it can do anything super complex with regards to data read from file
<jamespage> coreycb, I was thinking we could spec relations as yaml/json
<coreycb> jamespage, could be nice for the base openstack layer to work off a yaml of key/value interface data
<jamespage> ?
<coreycb> jamespage, I'm thinking more of when an event becomes true, so maybe not for testing the base layer
<coreycb> jamespage, so for testcharm.py for example you could have a yaml of key/value interface data that you test and verify the adapters work.  I think you're talking about testing at the interface level though
<asanjar> arosales: kwmonroe cory_fu the ease and speed of building new hadoop/spark solutions (juju bundles) from existing charms has been impressive, even jaw-dropping to some.
<kwmonroe> w00t asanjar!  that's good to hear.
<arosales> asanjar: \o/
<jamespage> coreycb, no good, brain fried
<jamespage> 5pm on a friday is not a good time to think about data driven unit testing....
<coreycb> jamespage, mine or yours :)  yeah time for a wind down I bet
<jamespage> cory_fu, hey - was looking at your reactive test helper pull request - we're getting to a point where I don't want to write a whole lot more code in interfaces without some unit testing
<jamespage> cory_fu, for the openstack charms today, we have a way of plugging in a set of 'relation data' which gets plumbed into the relation_get/units/ids calls - what do you think of a similar approach for testing interfaces?
<jamespage> thinking that we write json/yaml to represent each interface state; that gets read and plumbed in so we can then poke the interface class and ensure it sets the right state etc...
<jamespage> hey asanjar
<asanjar> jamespage: how are you my friend..
<jamespage> asanjar, well thanks - how about you?
<asanjar> jamespage: I am well and as usual still causing trouble for arosales
<jamespage> I'd expect nothing less
<asanjar> jamespage: sign of good health
<jamespage> anyway I'm going to EOW for now - have fun everyone
 * jamespage goes for a beer
 * arosales relishes in the trouble asanjar creates
<arosales> jamespage: have a good weekend, enjoy the beer
<asanjar> jamespage: have a pint on me
<cory_fu> jamespage: Sorry, was on a call
<cory_fu> jamespage: I'm not sure what PR you're talking about, but we definitely want to have some framework / helpers around testing reactive charms.  One aspect to consider is that the relation data will need to evolve over time to model an evolving conversation, so a static set of data won't work.
<cory_fu> We might also want to consider where we cut the testing of interface layer implementations vs charm layer implementations.  As in, maybe in charm layer tests we want to have a system to easily "mock" interface classes based on their API and not the low-level data coming over the relation.  But that might be more difficult to manage and might not be worth it
<cory_fu> (Might even be counter productive, if we want full-stack testing)
<jcastro> hey jose
<lazypower> cory_fu: when you release a pypi package, you dont track the .egg-info bits or any of that in the repo do you?
<lazypower> i was looking at charms.reactive as a reference
<cory_fu> No, you shouldn't
<asanjar> kwmonroe: cory_fu: reactive charms ??!!! i smell another rewrite of big data charms.. third time's the charm
<cory_fu> asanjar: Well, you never liked the services framework, and it turned out that everyone else agreed with you.  ;)
<cory_fu> asanjar: Also, it's called "refactor" not "rewrite"  ;)
<asanjar> cory_fu: lol awesome
<lazypower> asanjar: how could you have missed all the reactive buzz? :P
<lazypower> we *Started* that in DC
<lazypower> well, the public buzz anyway
<mbruzek> asanjar: Welcome to the community.  I reject your immutable attitude
<asanjar> lazypower: I don't remember what I had for breakfast this morning .
<lazypower> asanjar: are you coming to the charmer summit in belgium?
<lazypower> asanjar: we need to go hit up that little hookah bar street side in brussels again. I'm hankering for some mojitos
<asanjar> mbruzek: how are you my friend, how is your MUCH better half
<mbruzek> asanjar: she is well.  Taking a half day today, to buy YOU a gift.
<mbruzek> asanjar: I don't think the post office will let us mail poo.
<asanjar> lazypower: I would love to go to the Charm summit, but seriously doubt IBM would pay.
<asanjar> mbruzek: mark it as white POO, and not BOOM
<lazypower> asanjar: i smell diversity class coming on
<asanjar> lazypower: I have that shit reset
<mbruzek> asanjar: seriously happy holidays to you and your kids.
<lazypower> asanjar: family friendly please sir :)
<asanjar> mbruzek: same to you buddy, have a wonderful xmas
<mbruzek> asanjar: ... I think their names were Cory and Kevin
<asanjar> mbruzek: lol lol
<jrwren> oh man, i forgot about that hookah bar.
<lazypower> jrwren: good times :)
<hatch> can you create multiple relationships between two services? What are the rules around this interaction?
<lazypower> hatch: you sure can
<lazypower> hatch: that the interfaces match on both ends. thats pretty much it.
<hatch> lazypower: so can you do two from one service to one endpoint on another?
<hatch> or do the two relations have to have unique endpoints?
<lazypower> its really only governed by the relationship name/interface.  Say you have a db relation that uses interface: mdb  and you have a db-admin relation that uses mdb interface.  Both are completely reasonable uses of the same "endpoint", and can consume any provided relation using the mdb interface
<hatch> ahh ok so an endpoint doesn't become 'locked' once it's active
<hatch> you can connect to that same endpoint multiple times
<lazypower> right, since they're 2 separate relation scopes on the requires side, they stay independent relations
<lazypower> they just share the same communication protocol
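lazypower's db / db-admin example would look roughly like this in a charm's metadata.yaml (relation and interface names are illustrative):

```yaml
# Illustrative metadata.yaml fragment: two relation endpoints on the
# requires side sharing one interface ("mdb"), as in the example above.
requires:
  db:
    interface: mdb
  db-admin:
    interface: mdb
```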
<hatch> great, now however you cannot create two identical relationships
<hatch> that would be a noop?
<lazypower> well, think of it like this
<lazypower> juju add-relation foo:bar bar:foo
<lazypower> when you type that in a second time
<lazypower> its going to error saying you've already added that relationship
<lazypower> or rather, that the relationship already exists
<hatch> lazypower: thanks for clarifying - I'll file a bug to add this information to the user docs
<lazypower> hatch: what you may want to do, is follow up with dimiter, as i know that network spaces were changing some of the rules here about this, as the network spaces bring into it the concept of binding service endpoints to a space, and you may be able to "overload" a relation due to the nature of a scoped network.
<hatch> ahh
<lazypower> where having 2 services in sep. network spaces use the same relation/interface across the different scopes
<lazypower> due to whatever crazy networking you're modelling
<lazypower> because dimiter loves crazy networking ;)
<hatch> lol
<lazypower> and meatloaf
<admcleod-> is there a constraints-type option for config.yaml
<lazypower> admcleod- So when you say constraints-type, what do you mean by that?
<admcleod-> if i config set, it must be one of a defined set
<lazypower> ah, there is not an ENUM type to speak of that i'm aware of
<lazypower> We've been working around that with status messaging, and an array of accepted values.
<lazypower> if its not one of the accepted values, status-set blocked, and message the user that they need to *read* what is acceptable
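A minimal sketch of the workaround described above, since config.yaml has no enum type. `validate_choice` and the accepted values are hypothetical names; in a real charm the returned tuple would be passed to status-set (e.g. charmhelpers' `hookenv.status_set`):

```python
# Sketch of the enum-less config workaround: validate the config value in
# the charm and go "blocked" with a message listing the accepted values.
# ACCEPTED and validate_choice are hypothetical names for illustration.
ACCEPTED = ('apache', 'nginx')

def validate_choice(value, accepted=ACCEPTED):
    """Return a (status, message) pair suitable for status-set."""
    if value in accepted:
        return ('active', 'ready')
    return ('blocked',
            'invalid value %r; must be one of: %s' % (value, ', '.join(accepted)))
```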
<admcleod-> ok thanks
<lazypower> admcleod- seems like a good idea to go poke this issue - https://bugs.launchpad.net/juju-core/+bug/918386
<mup> Bug #918386: config.yaml should have enum type  <charmers> <config> <pyjuju:Triaged> <juju-core:Triaged> <https://launchpad.net/bugs/918386>
<lazypower> hey cory_fu, so, i just got my layer setup for wheelhousing deps.
<lazypower> http://paste.ubuntu.com/13674509/ -- as confirmation
<lazypower> on charm deploy does it do the due diligence to setup the proper linking for $PATH and all that shenanigans?
<cory_fu> Ok, first of all, there is a new release of charm-tools coming that will fix the fact that those are platform specific wheels and will probably fail when you deploy them
<lazypower> ok, i have something i need to forward you then - as it appears we may have some overlap
<lazypower> if you check your mail, there's a stack there about being unable to find charms.reactive that i'm en-route to try and reproduce
<cory_fu> Second of all, the wheelhouse is now installed into the system and not the charm dir, so the path and imports should Just Work.  The base layer still supports lib/charms/* in the charm so that you can include small, charm-specific helpers libs in the charms.X namespace without packaging and pushing to pypi
<lazypower> is that the error you're referencing?
<cory_fu> lazypower: Hrm, no.  The error I would expect is that the pip install on the wheelhouse would fail because the platforms don't match
<cory_fu> So it would have failed earlier than that
<cory_fu> And the error would have specifically mentioned the platform
<lazypower> hmm
<lazypower> Ok, back to reproducing then
<lazypower> thanks for TAL
<cory_fu> np
<lazypower> ah yep
<lazypower> found it
<lazypower> wheel broke when installing PyYAML
<lazypower> well cory_fu  - on the bright side, osx brew released charm tools is now properly embedding dependencies (it didnt seem to in the 1.8.x releases), however its got the wheel bug as you pointed out :) so its progress in the right direction
<cory_fu> Again, I believe marcoceppi is currently working on the next release (including OS X brew) that should fix the wheel bug
<lazypower> Right on
<lazypower> marcoceppi: confirmation on the above?
<cory_fu> Well, work around it, since we can't actually use wheels any more.  (Which makes the "wheelhouse" a bit of a misnomer.)
<marcoceppi> lazypower cory_fu otp, but afterwards 1.9.4 is being released
<lazypower> whee \o/
<marcoceppi> lazypower cory_fu https://github.com/juju/charm-tools/milestones
<cory_fu> Thanks for the hard work, marcoceppi!
<mbruzek> cory_fu: Does the reactive base work with the leadership hooks at this time?
<mbruzek> cory_fu:  In other words can I use @hook('leadership-settings-changed')  to decorate my python function?
<cory_fu> mbruzek: No, but I believe that can be handled outside of the charm-tools release cycle (unlike actions)
<lazypower> i believe its missing the storage hooks, and the leadership hooks.  https://github.com/juju-solutions/reactive-base-layer/issues/4
<cory_fu> lazypower: storage hooks will need a charm-tools update as well
<cory_fu> Since the hook names are not fixed
<lazypower> ah, true
<lazypower> it needs to generate them much like the interface layers are handled right?
<cory_fu> Yep
<cory_fu> Same w/ actions, and I'm still a little unsure as to how / whether we should handle actions
<mbruzek> cory_fu: so your advice for leader-settings-changed is to make an old style hook named "leader-settings-changed" in this case?
<cory_fu> mbruzek: Or create a PR against the base layer.  I think that fix at least should be easy
<cory_fu> I'm happy to review and merge
<mbruzek> ack
<bdx> hey what up everyone?
<bdx> has anyone here ever performed an instance resize?
<bdx> :-) :-) :-)
<lazypower> whattup bdx
<mbruzek> Hi bdx
<bdx> hey guys, we need to add the "allow_resize_to_same_host=true" and "allow_migrate_to_same_host=true" to ALL nova confs
<bdx> otherwise resize functionality is broken
<lazypower> bdx: would be good to get that as a bug against the requisite charms
<bdx> lazypower, will do.
<bdx> lazypower, mbruzek: I have a few questions concerning charming with layers....
<lazypower> fire when ready
<bdx> I want to create a puppetserver and puppetagent charm
<bdx> so
<mbruzek> cory_fu: https://github.com/juju-solutions/reactive-base-layer/pull/12
<bdx> from what I can tell, I would need to create a puppet-agent interface and a puppet-server interface
<bdx> on top of that, I would need to create 2 layers, puppet-server-layer, and puppet-agent-layer
<lazypower> bdx: i'm not sure why you would need two interfaces. afaics the puppet agent / server comms would use the same interface. perhaps just interface: puppet - as an interface layer so the agents can self-register.
<bdx> then, 2 charms, a puppet-server and a puppet-agent, each of which would provide the interface for itself, and require the interface for the other
<lazypower> the other interface would be consuming the typical HTTP layer as puppet master is a REST API, so expose that if required.
<lazypower> bdx: i think you've got relation and interface confused :)
<lazypower> bdx: interfaces are just the communciation happening between the two units. same interface, 2 relations - one for each charm.
<bdx> entirely
<lazypower> bdx: are you familiar with statically typed languages implementation of interfaces?
<bdx> somewhat
<lazypower> its just a data contract, that say you can be / do whatever you want. But you will implement "these things".
<bdx> totally
<bdx> ok
<lazypower> in charms, its very similar. We're saying this interface will *always* expect these data points, and communicate in ['global'|'service'|'unit'] .
<bdx> totally, tracking
<lazypower> which is why the interface programmer typically writes the provides/requires side of the interface, that way consumers just point at it, and the metadata susses out the details. All you need to ensure you do is grab the proper side of the interface, and then handle it in your reactive bits
<lazypower> bdx: did you watch the UOS session over charming with layers?
<bdx> yes
<lazypower> cory did a great job of explaining this in that session. highly suggest you grab a re-watch of that session
<bdx> I've been reading and watching, and following the interface and layer repos you all have created
<bdx> so
<cory_fu> mbruzek: Should we go ahead and add update-status while we're at it?
<lazypower> cory_fu: you had a pretty legitimate concern there
<cory_fu> There was some discussion on https://github.com/juju-solutions/reactive-base-layer/issues/4 and I was trying to play the devil's advocate, but I'm generally pro-update-status
<lazypower> cory_fu: lots of churn on the unit during update-status if the hook is treated as a reactive handler.
<cory_fu> Yeah, maybe.  Though there are several reasons why it would be very useful
<cory_fu> And, it's entirely possible to code around that churn and is really something we should be encouraging anyway
<lazypower> oh indeed :) I see my name on that issue comment stream for being pro update-status, i just wanted to play the devils advocate once :D
<cory_fu> ha
<lazypower> because "reasons"
<cory_fu> mbruzek: Merged
<cory_fu> I'm really tempted to Friday afternoon cowboy in the update-status hook. :p
<marcoceppi> cory_fu: do it
<marcoceppi> cory_fu: do you need a charm-tools change for that?
 * marcoceppi hasn't cut the release yet, it won't go out until late tonight
<cory_fu> marcoceppi: Nope
<cory_fu> storage support would, though if we want to get that in real quick
<marcoceppi> cory_fu: I'm down
<marcoceppi> I can hold release off until weekend too
<marcoceppi> ligaf
<jcsackett> hey all; i'm managing 2 servers using manual provider, and on the second machine the agent is always lost when i deploy a service and agent-state ends up failing. functionally i can't deploy anything to it.
<lazypower> marcoceppi: i urge you to not hold it as the current revision is utterly broken
<marcoceppi> lazypower: that's fine. it's going out probably around 5pm EST
<lazypower> marcoceppi: nothing i build coming from c-t 1.9.3 in brew will work
<lazypower> i'm testing now w/ the vagrant box to see if thats a valid work around
<cory_fu> mbruzek, lazypower, marcoceppi: https://github.com/juju-solutions/reactive-base-layer/pull/13
<mbruzek> cory_fu: How does one release a new version of a reactive layer?  Do we need to do something on interfaces web site?
<cory_fu> Nope.  Anything merged to master is immediately released
<mbruzek> cory_fu: OK. I did not touch the update-status hook in my contribution because I saw the issue where you were worried about that one hook.
<cory_fu> mbruzek: Thanks for the README catch
<mbruzek> cory_fu:  no problem, while in the README I wondered why you used so many backticks
<cory_fu> mbruzek: Also, it's less that I'm worried about the hook, and more that I'm worried about people writing naive charms that end up churning on the machine
<cory_fu> mbruzek: Too much reST writing in the pydocs
<lazypower> cory_fu: what's the version in the interfaces site in reference to then? I thought that had to be incremented when you update a layer...
<cory_fu> I think it's intended to auto-update, like the charm revision, but it's not implemented yet
<lazypower> ah ok
<cory_fu> Also, I wonder if using semver there might be better
<lazypower> i <3 semver so i'm an automatic +1 to that
<lazypower> could use a quick pair of eyes on this to get landed, it works in non x-platform cases, which is good enough for me for now - https://github.com/juju-solutions/layer-docker/pull/20
<cory_fu> lazypower: Should the "## States" header actually be "### States"?
<lazypower> Yeah, good call
<cory_fu> Otherwise +1
<mbruzek> ninjaed
<lazypower> daww i merged it too fast?
<lazypower> sorry mbruzek, i'm so used to our workflow :P
<mbruzek> all good
<cory_fu> marcoceppi, mbruzek: https://github.com/juju/charm-tools/pull/67
<marcoceppi> cory_fu: I'm not sure this works
<cory_fu> Why do you say that?
<marcoceppi> cory_fu: commented on mp
<marcoceppi> I also don't see any tests for this?
<cory_fu> Gah, yeah, so much for me paying attention to my copypasta
<marcoceppi> mmm pasta
<cory_fu> marcoceppi: Updated
<marcoceppi> cory_fu: I'll land this, but it'll be a 1.10.0 bump
<cory_fu> That's fine
<marcoceppi> cory_fu: https://github.com/juju/charm-tools/pull/55 is now conflicting
<cory_fu> marcoceppi: I'm confused, are you now looking at doing a 1.10.0 release?  I'm fine with the storage changes not landing now, but I do need the other, smaller changes (dist.yaml and resources.yaml)
<marcoceppi> cory_fu: well I just landed the storage stuff, so I figured I'd do a 1.10.0 bump
<marcoceppi> but maybe that was a bit...zealous
<cory_fu> Oh, well, if you're ok with doing that at EOD on a Friday
<marcoceppi> I always am
<marcoceppi> cory_fu: I was saying, if you wanted to update the py3 changes, I'll land that now as well
<cory_fu> Ok.  You ok with a pull --rebase for that branch, or should I just do a merge?
<marcoceppi> I was just holding off to group it with a bigger minor feature release
<cory_fu> (To update it to master)
<marcoceppi> cory_fu: it's a feature branch do whatever you'd like :)
<cory_fu> I'm just not sure how github handles PRs being rebased out from under it
<marcoceppi> though I am a fan of rebase
<marcoceppi> cory_fu: it does the right thing
<marcoceppi> you just have to force push your feature branch
<cory_fu> marcoceppi: Updated.  CI is running, but tests passed for me
<cory_fu> I'm also a fan of rebasing feature branches
<marcoceppi> cory_fu: cool, ta!
<mbruzek> have a good weekend guys
 * mbruzek waves the eow flag
<cory_fu> You, too, mbruzek
<cory_fu> Damn
#juju 2016-12-05
<kjackal> good morning juju world!
<kjackal> Hi everyone, a member/partner from scaledb cannot register with the juju list. Who is the juju list owner?
<BlackDex> Hello there, i'm looking for the minimum requirements for a juju "node"/vm but i can't seem to find it
<kjackal> BlackDex: What do you mean by minimum requirements?
<kjackal> BlackDex: A base charm is the ubuntu charms
<kjackal> BlackDex: That deploys ubuntu and it should run wherever ubuntu runs
<BlackDex> I mean the juju bootstrap node
<BlackDex> So, how much mem, disk, cpu etc.. is the minimum requirement for a juju-bootstrap node
<BlackDex> For maas it is located here: http://docs.ubuntu.com/maas/2.1/en/#minimum-requirements
<BlackDex> For JuJu i couldn't find it
<cjwatson> Is there any equivalent of optional_resources from jujuresources in Juju 2.0 resources?
<cjwatson> I'm trying to write a charm that installs packages from resources if they're provided that way, but otherwise installs from a PPA
<cjwatson> Simply not attaching the resources and having a bit of conditional behaviour in the charm worked OK until I tried to do "charm release", at which point I got "cannot release charm or bundle: bad request: charm published with incorrect resources: resources are missing from publish request: ..."
<cjwatson> I've thought about building metadata.yaml conditionally, but that's fiddly, and it would mean two different versions of the charm, one with resources and one without
<cjwatson> Do I just have to fall back to bundling files in the charm instead?  Seems a shame - resources are a nicer UX in general
<kjackal> Optional resources....
<kjackal> you can have dummy 0-sized resources
<cjwatson> True; feels extremely hacky
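Concretely, kjackal's zero-byte workaround keeps the resource declared as usual in metadata.yaml; the "optional" behaviour lives in the charm, which falls back to the PPA when the attached file is empty (resource name and description here are illustrative):

```yaml
# Illustrative metadata.yaml fragment: the resource is declared normally;
# attaching a zero-byte file satisfies "charm release", and the charm
# installs from the PPA when the file is empty.
resources:
  install-deb:
    type: file
    filename: install.deb
    description: >
      Optional .deb to install; if empty, the charm installs from the PPA.
```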
<kjackal> cjwatson: there is an email thread that is somewhat related
<cjwatson> Is that something that other charms do?
<kjackal> perhaps you could give some feedback
<kjackal> yes, there is no other option for now
<kjackal> but your feedback/usecase needs to be heard
<kjackal> please send some email to the list/or say something about your needs in that email thread
<kjackal> cjwatson: title "Hide resources from showing on the store"
 * cjwatson finds https://lists.ubuntu.com/archives/juju/2016-December/008265.html
<cjwatson> kjackal: do you happen to have the Message-Id of that message?  That way I can synthesise a reply - I'm not subscribed to that list
<cjwatson> kjackal: ah, never mind that, I can extract it from the HTML source
<kjackal> sorry cjwatson. I am back now
<shruthima> Hi all, I'm trying to push a multiseries charm to the charm store. I have successfully run charm push but charm publish is giving the following error http://paste.ubuntu.com/23583217/, can anyone please suggest?
<kjackal> hi shruthima I thought the publish was replaced by charm release
<kjackal> i get a "ERROR unrecognized command: charm publish"
<kjackal> shruthima: is it possible you are running some old version of charm tools?
<shruthima> ya, on some machines I hit that error too
<shruthima> I'm using charm-tools 2.1.2
<shruthima> kjackal: any document related to charm release?
<kjackal> should be the same as charm publish
<shruthima> oh ok, I'll give it a try
<shruthima> kjackal: ERROR unrecognized command: charm release
<shruthima> in which version of charm-tools will it work?
<kjackal> I couldnt find docs on the web for charm release, but here is the --help output: http://pastebin.ubuntu.com/23583338/
<kjackal> shruthima: Regarding the "access denied" error you are getting I would start by looking at the permissions you have on launchpad
<kjackal> shruthima: are you in the right group(s)?
<shruthima> kjackal: ya i have pushed 3-4 charms with same launchpad id
<kjackal> so it is only that one you cannot release?
<kjackal> you can release other charms in that group?
<shruthima> almost everyone has been facing this issue since last friday
<kjackal> In that case, use charm show <charm-name> to compare the permissions set
<kjackal> last friday...
<kjackal> strange..
<kjackal> let me try to release a charm myself.
<shruthima> kjackal: no, I'm unable to publish other charms now too
<shruthima> ok
<kjackal> shruthima: I am sorry I cannot reproduce the error. Releasing charms works here: http://pastebin.ubuntu.com/23583368/
<kjackal> shruthima: can you try the publish command with the --debug option
<shruthima> kjackal: almost everyone from ibmcharmers is facing the same error http://paste.ubuntu.com/23583370/
<shruthima> ok
<marcoceppi> shruthima: you have an old version of the charm command, if you run `charm version` you should get charm with at least 2.2.0
<shruthima> marcoceppi: how to update the charm version
<marcoceppi> `sudo add-apt-repository -y ppa:juju/stable; sudo apt update; sudo apt dist-upgrade`
<shruthima> kjackal: here is the output with debug option http://paste.ubuntu.com/23583387/
<kjackal> shruthima: can you please try the upgrade marcoceppi suggested and then do a charm logout, login, release ?
<shruthima> kjackal: ya, I'm trying to upgrade now
<kjackal> shruthima: thanks
<shruthima> marcoceppi: kjackal : i have updated the charm version http://pastebin.ubuntu.com/23583438/ , still I'm facing the same ERROR cannot release charm or bundle: unauthorized: access denied for user "salmavar"
<marcoceppi> shruthima: which charm are you trying to release?
<shruthima> ibm-im
<shruthima> multiseries charm
<marcoceppi> shruthima: you can't release to the stable channel for that charm, it's an approved charm.
<marcoceppi> https://jujucharms.com/ibm-im/
<marcoceppi> instead, once you push, enter the URL you get back from push into http://review.jujucharms.com/ and it'll follow a review cycle then be released once reviewed
<shruthima> marcoceppi: oh ok so just i have to create a new request for review with new URL right ?
<marcoceppi> yes
<shruthima> marcoceppi: thanku :)
<shruthima> marcoceppi: any updates on this issue https://github.com/juju-solutions/interface-http/issues/5
<marcoceppi> shruthima: no, not at the moment
<shruthima> marcoceppi: ok when can we expect that to be resolved?
<admcleod> is there a way, when adding a machine manually with 2.x, to define what kind of lxc networking you'd like to use by default?
<deanman> marcoceppi: Hi, i recently joined the aws charmers program, I was wondering if i do have access to the AWS web console as well.
<marcoceppi> deanman: you do not
<marcoceppi> deanman: is there something you need in the web console?
<deanman> marcoceppi: i was controlling a deployment yesterday from home and now from the office (and behind a proxy) it seems my client cannot talk to AWS. I'm trying to debug whether it's due to the controller dying or some other network issue.
<marcoceppi> deanman: ah, do you know the instance id for your controller?
<deanman> marcoceppi: does that help ? http://paste.ubuntu.com/23583601/
<marcoceppi> deanman: ah, can you get the controller UUID from `juju show-controller`
<deanman> marcoceppi: hmm it seems that i can't http://paste.ubuntu.com/23583609/
<marcoceppi> deanman: $HOME/.local/share/juju/controllers.yaml should have that cached
<deanman> marcoceppi: http://paste.ubuntu.com/23583631/
<deanman> uuid: d1a79bb1-a362-4551-81ec-d51613d31a1f
<marcoceppi> deanman: it is alive still
<marcoceppi> bootstrapped about 22 hours ago
<deanman> marcoceppi: how does the local juju client talk to the controller ? ssh or other protocol ? SSH is blocked and needs some tsocks configuration but everything else should go through the proxy server.
<marcoceppi> deanman: I'm under the impression that it's directly with a websocket
<deanman> ah websockets
<deanman> ok thanks, ill try troubleshoot it
<bildz> goood morning
<bildz> are there any docs at running a juju controller in HA?
<bildz> im designing an HA MAAS solution and haven't been able to find any docs yet
<marcoceppi> bildz: let me take a look
<bildz> marcoceppi: thanks!
<marcoceppi> bildz: in short, `juju enable-ha` is the command you want (juju help enable-ha for more info). I've filed a bug on the docs to document this better https://github.com/juju/docs/issues/1556
<Ankammarao> Hi Kevin
<Ankammarao> Hi, i have a firefox browser issue on an ubuntu z machine, can anyone help me
<marcoceppi> Ankammarao: you mean, a problem with Firefox on Z or problem with Juju GUI on Firefox (on Z)?
<Ankammarao> when running firefox command on vnc server ,getting the error like "Error: GDK_BACKEND does not match available displays"
<Ankammarao> marcoceppi ,i want to open firefox browser on vnc session for ubuntu z machine
<marcoceppi> Ankammarao: this sounds like a general Ubuntu question. http://askubuntu.com or #ubuntu on irc
<marcoceppi> Ankammarao: those are probably the best source for help
<marcoceppi> Ankammarao: I'd help, but I've not encountered that problem :(
<Ankammarao> marcoceppi, can we install other browsers like "firefox" on ubuntu z machine
<marcoceppi> Ankammarao: probably, sure. It's the same ubuntu as you'd expect on any other architecture. So long as there's a package for that architecture
<marcoceppi> bildz: https://jujucharms.com/docs/2.0/controllers-ha
<marcoceppi> petevg: hey, are you in DC?
<petevg> marcoceppi that I am.
<petevg> Just for the day.
<magicaltrout> petevg: you visiting the whitehouse before it turns gold?
<magicaltrout> personally, I think it'll look nice gold...
<petevg> magicaltrout: heh. I thought about doing that.
<petevg> I have just enough time to wander through the National Mall, though. Then I have to hit the conference.
<magicaltrout> i was down there the other day.... huge blisters
<magicaltrout> I never learn
<petevg> Outside > inside. I'll have to see the Whitehouse after Trump's successor restores it :-)
<magicaltrout> I reckon he'll turn it into a downtown golf course
<magicaltrout> he'd make a lot of money.....
<petevg> Ew.
<magicaltrout> think about it
<magicaltrout> teeing off from the roof, with the hole being the washington monument
<magicaltrout> it doesn't get any better than that
<petevg> Sigh.
<magicaltrout> you could zipwire from the roof
<petevg> The problem is that I can see it.
<magicaltrout> then, tee #2 could be the lincoln memorial
<magicaltrout> it would be the best golf course ever
<magicaltrout> i need to figure out how to incorporate capitol hill
<magicaltrout> I also happen to think the roof of the kennedy centre would make a great green
<magicaltrout> plenty of room, but testing enough  because if you go off the edge, you're gonna have a tricky chip to get back onto the green from 4 stories below
<magicaltrout> maybe a short par 3 from the Watergate hotel
<bdx> marcoceppi: can you link me to anything you have done for datadog-agent?
<cory_fu> marcoceppi: https://github.com/marcoceppi/svg.juju.solutions/pull/8
<marcoceppi> bdx: give me 10 mins to clean up this iteration
<marcoceppi> bdx: and we can chat
<bdx> marcoceppi: ok
<marcoceppi> cory_fu: thanks for the contribution!
<marcoceppi> cory_fu: I want to merge post/get, since it's a lot of duplication
<marcoceppi> maybe one day
<bdx> rick_h: you were talking about how you specified the default aws region in clouds.yaml, how exactly are you specifying that?
<marcoceppi> bdx: don't listen to rick, use juju set-default-region ;)
<marcoceppi> cory_fu: I've got a crazy idea, need sanity check
<marcoceppi> cory_fu: got a min?
<cory_fu> marcoceppi: Sure
<bdx> marcoceppi: thx
<marcoceppi> cory_fu: https://appear.in/lets-talk-about-interfaces-charms-and-reactive
<rick_h> bdx: yea right , that Rick guy is nuts. Don't listen to him.
<lucacome> hey guys, I'm from NGINX and working on the official charm, does someone have time to take a look at it and see if I'm going in the right direction?
<marcoceppi> lucacome: yeah, totally
<lucacome> marcoceppi, I've started from layer-nginx and continued from that https://github.com/lucacome/juju-layer-nginx
<marcoceppi> lucacome: taking a look now!
<lucacome> thank you
<bdx> lucacome: cert/key might be better specified as config options
<lucacome> bdx, can I pass files with config options?
<bdx> lucacome: I don't think so, you can pass the cert/key as strings <- this might not be the preferred way either tho
<bdx> lucacome: https://github.com/jamesbeedy/charm-datadog-agent/blob/master/config.yaml#L22
<bdx> lucacome: others have made charms and layers to help facilitate SSL/TLS, see https://jujucharms.com/u/containers/easyrsa/3, and https://github.com/cmars/layer-lets-encrypt
<bdx> lucacome: ^ only cover self signed and LE certs/keys though ... so to accommodate a user-specified signed cert/key, I would think config options would do it, but I'll let the pros chime in here
<lucacome> bdx, they are not self signed cert/keys, they are handed out by us to install the NGINX Plus, our commercial version
<bdx> lucacome, I see that now
<lucacome> but I'll take a look in to it to see how others are doing dealing with certs
<marcoceppi> cory_fu: I've backed myself into a corner
<bdx> lucacome: I would lean towards config options for that .... @marcoceppi should be able to confirm whether that is the best route or not
<marcoceppi> lucacome bdx I don't think resources is a bad way, but if you had a "nginx-cert" resource that was a tar.gz of a .key and .crt file it'd be easier for users
<marcoceppi> I'd go either way, config or resources - both are valid
<bdx> marcoceppi: in his charm the resource is optional though
<marcoceppi> I do have a few other concerns, still working through it
<marcoceppi> bdx: yeah, and that's fine, as long as there's a 0 byte blank resource
<bdx> ahhh
<marcoceppi> neither config nor resources is ideal for this
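An editor's sketch of the tarball-resource approach marcoceppi describes above: a single `nginx-cert` resource packed as a tar.gz containing the .key and .crt files, with a zero-byte file standing in for "resource not attached". The function name is hypothetical; in a real charm the path would come from charmhelpers' `resource_get('nginx-cert')`.

```python
import os
import tarfile


def extract_cert_bundle(resource_path, dest_dir):
    """Unpack a .key/.crt pair from a tar.gz resource into dest_dir.

    resource_path would typically come from resource_get('nginx-cert');
    a missing path or zero-byte file means the (optional) resource was
    never attached, matching the "0 byte blank resource" trick above.
    """
    if not resource_path or os.path.getsize(resource_path) == 0:
        return []  # optional resource not supplied
    extracted = []
    with tarfile.open(resource_path, "r:gz") as tar:
        for member in tar.getmembers():
            if member.isfile() and member.name.endswith((".key", ".crt")):
                tar.extract(member, dest_dir)
                extracted.append(os.path.join(dest_dir, member.name))
    return extracted
```

Shipping both files in one resource keeps the user-facing step to a single `juju attach`, which is the UX argument made above.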
<marcoceppi> cory_fu: can I do something like this?
<marcoceppi> bdx: haven't forgotten about you, but I have some code I think you'll like
<marcoceppi> bdx: for datadog
<lucacome> marcoceppi, is there a better way to upload files?
<bdx> nicccee
<marcoceppi> lucacome: configuration, resources, and scp are the three methods. With the last one not being recommended at all given the UX around it
<marcoceppi> resources is preferable, because it doesn't expose the key and crt files in plain text anywhere
<marcoceppi> unless you want the configuration options for nginxplus to be exposed
<marcoceppi> in exports
 * bdx eats his words
<marcoceppi> well, configuration isn't bad bdx, there are cases you'd want the crt to be in a bundle export
<marcoceppi> but I'm not sure that's the case here
<marcoceppi> but I also can't tell, those are the two paths ultimately
<marcoceppi> cory_fu: http://paste.ubuntu.com/23584586/ line 13, can I use getattr on the class for autoaccessors?
<marcoceppi> cory_fu: or is there an easier way?
<cory_fu> marcoceppi: I think that's the best way, though "rel_data.k()" won't work
<marcoceppi> cory_fu: right, so will getattr work?
<marcoceppi> or is there a better way?
<cory_fu> marcoceppi: Actually, what about standardizing the interfaces so that they share a common method name.  Something like, "get_configuration()"?
<marcoceppi> cory_fu: not a bad idea
<marcoceppi> cory_fu: I want to ultimately standardize these stats interfaces
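A sketch of the standardization cory_fu suggests: every stats relation class exposes the same `get_configuration()` method, so the consuming layer dispatches via `getattr` without knowing the concrete class. The class names and returned keys here are hypothetical, not real interface code.

```python
# Hypothetical stats relation classes sharing one method name.
class PhpFpmStats:
    def get_configuration(self):
        return {"status_url": "http://localhost/status"}


class NginxStats:
    def get_configuration(self):
        return {"stub_status": "http://localhost/nginx_status"}


def collect(relations):
    """Gather config from any relation exposing the shared method.

    getattr with a default keeps this tolerant of relations that have
    not adopted the convention yet -- the fallback marcoceppi was
    reaching for with getattr on autoaccessors.
    """
    configs = []
    for rel in relations:
        getter = getattr(rel, "get_configuration", None)
        if callable(getter):
            configs.append(getter())
    return configs
```

With a shared method name, adding a new integration means writing one class, not touching the dispatch code.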
<bdx> is there a debug method of sorts that will list all flags/states that are set for a charm?
<skay_> how do I get tests to run against a live deployment? I can't only use containers because I'm using a snap and snaps have a problem installing in containers at the moment
<skay_> so my one test at the moment makes a deployment and checks that my charm is a blocking state while it waits for a snap resource
<skay_> I can verify that
<marcoceppi> bdx: `charms.reactive get_states` I think
<marcoceppi> bdx: I am finally ready to talk about datadog
<bdx> marcoceppi: all ears
<marcoceppi> bdx: so, this is what I came up with
<marcoceppi> https://github.com/silph-io/layer-datadog/blob/master/reactive/datadog.py#L52
<marcoceppi> I'm testing it now, but basically it means you only need to touch metadata.yaml and layer.yaml to add new integrations
<marcoceppi> that was the biggest concern when talking to ilan at data dog, about overhead of maintenance and boilerplating
<marcoceppi> bdx: I'd like to collaborate going forward, and get this repo into upstream
<bdx> marcoceppi: I'm trying to get something official going for it before I start putting it out everywhere for sure
<marcoceppi> bdx: yeah, we've been in touch with datadog about this, we have an internal business relationship being developed
<bdx> marcoceppi: so why list integrations in metadata?
<marcoceppi> bdx: well, interfaces should drive the entire configuration for everything. So if the nginx and php-fpm (and kubernetes, mysql, etc) charms and layers include a unique stat interface, which supplies the keys required by datadog, or any other stat generation/configuration tool, you get a fully automated experience
<marcoceppi> we can jump on a quick call to explain more if you'd like
<bdx> marcoceppi: ok
<marcoceppi> bdx: https://appear.in/datadog
<marcoceppi> bdx: I found a few foul ups, updating the repo in a min with some patches
<marcoceppi> lots of missing imports
<xplatform12> I have the latest juju and MAAS. When trying to "conjure-up openstack" and running the OpenStack option, I use the "Connect to an existing MAAS" option. For the maas server, I am entering "http://x.x.x.x/MAAS/api/2.0/" and entering my maas-auth key in that field. When I confirm the options, it shows that it is trying to bootstrap the environment and then almost immediately fails with "error: flag provided but not defined: --upload-tools". Does anyone know what is causing this issue?
<marcoceppi> xplatform12: to start, you should only need to enter "http://x.x.x.x/MAAS/"
<marcoceppi> xplatform12: what version of juju do you have installed?
<marcoceppi> xplatform12: `juju version`
<marcoceppi> cory_fu: I hit a snag
<cory_fu> marcoceppi: ?
<marcoceppi> cory_fu: `RelationBase.from_state('php-fpm')` returns None despite these states: {'apt.installed.apt-transport-https': None, 'datadog.configured': None, 'apt.installed.datadog-agent': None, 'php-fpm.available': {'conversations': ['reactive.conversations.php-fpm:3.silph-web/0'], 'relation': 'php-fpm'}}
<cory_fu> marcoceppi: The states have to match exactly.  You'll want RelationBase.from_state('%s.available' % relation_name)
<marcoceppi> cory_fu: oh, boss, even easier
<marcoceppi> this removes my entire `relation_name.available` state check
<marcoceppi> bleh
<cory_fu> ?
<marcoceppi> cory_fu: http://paste.ubuntu.com/23585512/
<marcoceppi> these are set to scope.UNIT
<xplatform12> marcoceppi: I tried with just http://x.x.x.x/MAAS/ and my key and got the same error. I am using juju Version: 1:2.0.1-0ubuntu1~16.04.4~juju1
<marcoceppi> xplatform12: huh, yeah, conjure-up should work just fine. stokachu ^ ?
<cory_fu> marcoceppi: Yeah, if it's set to scope UNIT, that means there will be a separate conversation per remote unit, so you have to iterate over self.conversations() instead of assuming a single self.conversation()
<marcoceppi> cory_fu: yeah, it's a subordinate relation, so, single conversation makes sense
<xplatform12> I also checked in /var/log/openstack.log and it has the following: "Dec  5 22:28:27 juju openstack: [ERROR] ['error: flag provided but not defined: --upload-tools']" I can't seem to find any info about this issue online.
<cory_fu> marcoceppi: So if it's always going to be subordinate, you can just change it to GLOBAL scope
<marcoceppi> xplatform12: --upload-tools was a flag for juju 1.x; conjure-up thinks that's your version, for whatever reason
<marcoceppi> cory_fu: ack, just did
<marcoceppi> time to re-deploy
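A toy illustration (not the real charms.reactive API) of the scope point cory_fu makes above: with scope UNIT there is one conversation per remote unit, so code that assumes a single conversation is only safe with GLOBAL scope or a subordinate relation.

```python
class Relation:
    """Simplified stand-in for a reactive relation's conversation scoping."""

    def __init__(self, scope, remote_units):
        self.scope = scope
        if scope == "GLOBAL":
            # All remote units share a single conversation.
            self._convs = [tuple(remote_units)]
        else:  # "UNIT"
            # One conversation per remote unit.
            self._convs = [(u,) for u in remote_units]

    def conversations(self):
        return list(self._convs)

    def conversation(self):
        # Mirrors the single-conversation accessor: only safe when
        # exactly one conversation exists.
        assert len(self._convs) == 1, "multiple conversations; iterate instead"
        return self._convs[0]
```

The practical upshot is the one in the chat: iterate `conversations()` for UNIT scope, or switch the interface to GLOBAL when the relation is always one-to-one.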
<xplatform12> marcoceppi: This was upgraded from juju 1.x. Let me rebuild the juju system and go right to 2.0 without an upgrade. I hope that helps and thanks.
<xplatform12> marcoceppi: I rebuilt and made sure I had the correct repositories when installing juju and now conjure-up is past that issue and bootstrapping. Thanks for your assistance.
<marcoceppi> xplatform12: no worries! sorry you got caught up in that transition
<marcoceppi> bdx: alright, finally got configuration files writing
<bdx> marcoceppi: niceee
<marcoceppi> bdx: my latest work was just pushed to the repo
<bdx> marcoceppi: looking
<marcoceppi> bdx: bunch of awful little papercuts
#juju 2016-12-06
<marcoceppi> oy, magicaltrout, you're in dc and you didn't tell me! :)
<magicaltrout> not any more marcoceppi sadly
<magicaltrout> flew home last friday
<magicaltrout> out there helping track down human traffickers and gun smugglers on the dark web... as one does
<petevg> magicaltrout: we both managed to snub marcoceppi.
<petevg> Whoops.
<magicaltrout> yeah mine involved minimal social activity, and lots of crazy shit i didn't understand
<magicaltrout> i have since enrolled myself on a deep learning course
<marcoceppi> cool, well if you're ever back in DC hit me up
<magicaltrout> i shall be marcoceppi
<magicaltrout> probably march
<magicaltrout> i'll prod you when i find out
<marcoceppi> magicaltrout: cool, hopefully I'll also be in town ;)
<bdx> marcoceppi: should we be including a reference to datadog in the stat interface names?
<bdx> e.g. interface-php-fpm-stats
<bdx> should we make it a convention to possibly have 'interface-datadog-<integration-name>' for all of the integrations?
<bdx> since there will be alot of them
<bdx> I feel like it will be confusing if we don't signify datadog in there somehow ... bc the names of the majority of the integrations are also the names of juju charms
<marcoceppi> bdx: other than datadog being a reference implementation, there's really nothing datadog specific
<bdx> marcoceppi: nice, that makes sense.... so then these interfaces could be applicable to other use cases hopefully
<marcoceppi> bdx: that's the plan
<marcoceppi> bdx: I hope to do a sysdig implementation using a few of these
<marcoceppi> as a proof of concept
<bdx> ahh ... Sysdig looks like they are blasting native container support
<marcoceppi> bdx: yeah, I'm interested in it for kubernetes
<bdx> I figured as much ... from a high level ... it looks like they rolled a bunch of cli monitoring tools into one place .. cool ... no gui tho?
<bdx> sysdig looks really cool ... I'll have to dig into it
<bdx> I see now, sysdig doesn't log data really, it more or less gives you a realtime (and for some things historical) view into the system through a centralized ui - very useful
<marcoceppi> bdx: yeah, you can then tap that into sysdig cloud, their saas
<lazyPower> sysdig cloud is *intense*
<Ankammarao> Hi i am facing browser issue with z ubuntu , getting the error like "Segmentation fault (core dumped)"
<kjackal> Good morning Juju world
<deanman> Good morning !
<Ankammarao> kwmonroe,Hi Good morning
<Ankammarao> kwmonroe i have some firefox browser issue, we are charming IBM products
<Ankammarao> kwmonroe can you help me regarding the browser issue on ubuntu z machine
<deanman> j #EliteBNC
<magicaltrout> indeed!
<deanman> sticky fingers
<kjackal> magicaltrout: are you still in the EU?
<kjackal> magicaltrout: How is the car project going :)
<kjackal> Ankammarao: kwmonroe will be available later today. If there is anything I can help you with let me know
<Ankammarao> kjackal Hi
<Ankammarao> kjackal, having issue with firefox in ubuntu z terminal
<magicaltrout> yeah kjackal I'm sat in the frozen heartlands.... mostly known as the east of england
<Ankammarao> kjackal getting error when run firefox in the terminal "Segmentation fault (core dumped)"
<kjackal> Ankammarao: is this the case with other browsers as well?
<kjackal> Ankammarao: does not seem to be a juju related issue. Is it?
<Ankammarao> kjackal we have tried with chromium browser to install but getting error
<Ankammarao> kjackal yeah its related to ubuntu machine i guess
<kjackal> Ankammarao: I do not know much about the firefox build on Z
<kjackal> Ankammarao: however I would recommend you ask in #ubuntu-powerpc
<Ankammarao> kjackal thank you
<kjackal> Ankammarao: here is a list of IRC channels we have. You might find a better match https://wiki.ubuntu.com/IRC/ChannelList
<Ankammarao> kjackal i will check it thanks
<kjackal> magicaltrout: Ahhh very nice. The alternative would have been a life-threatening desert (LA) where coconuts would ruin your brand new tesla :)
<magicaltrout> this is true kjackal
<marcoceppi> stub: will this work?
<marcoceppi> stub: http://paste.ubuntu.com/23588297/
<marcoceppi> stub: mainly, can I set access to 'pg_stat_database' and is that how you would interact with the results
<stub> marcoceppi: That looks correct
<marcoceppi> \o/ about to test, so we'll see ;)
<stub> marcoceppi: oh... pg_stat_database is a view inside every database, not a database itself.
<stub> marcoceppi: but it should work... it just creates an empty database called pg_stat_database for your client to connect to.
<stub> marcoceppi: And you probably need to pass in the database name to configure_integration (psql.master.dbname, which in this case will be 'pg_stat_database' because that is what you asked for)
<marcoceppi> stub: according to the setup guide: create user datadog with password '<bleh>'; grant SELECT ON pg_stat_database to datadog
<marcoceppi> trying to replicate this using the pgsql interface, but maybe that's not possible?
<stub> marcoceppi: You would do that after connecting to the database you want to use
<marcoceppi> stub: well, I want all databases
<marcoceppi> stub: the documentation has no parameter for "database"
<stub> marcoceppi: datadog needs to connect to a database and run selects on the pg_stat_database table. In this case, it doesn't matter what dbname you use.
<stub> marcoceppi: So...
<stub> marcoceppi: In configure_db, I'd set_database('datadog') since that is unlikely to conflict with anything else
<stub> marcoceppi: And ...
<stub> marcoceppi: in configure_postgresql_integration pass 'dbname': pgsql.master.dbname
<stub> marcoceppi: For the charm to do the grant, it needs an administrative connection. And if you give it an administrative connection, it doesn't need the grant run.
<stub> marcoceppi: So just relate it to db-admin instead of db and you might be done
<stub> (I should have called the set_database method set_dbname to avoid confusion. The setting on the relation is 'database', which I inherited. But in the PostgreSQL world it is usually referred to as dbname)
<marcoceppi> no worries
<marcoceppi> trying with db-admin
<marcoceppi> thanks!
<stub> np.
<stub> marcoceppi: if you get that working, try it with a non-administrative connection. The grant should be unnecessary, at least with modern versions of PostgreSQL. I'm able to query it as an unprivileged user here
<stub> (some views are protected, like pg_stat_activity, but pg_stat_database doesn't contain any sensitive information and seems to be public)
<marcoceppi> stub: cool, will give it a go
<slrocketchat> marcoceppi:  used the layer-mongodb in our chat charm - but the default 2.6.10 instance needs an `rs.initiate()` from mongo shell before being functional ... is there anyway to set that by default?
<marcoceppi> slrocketchat: mongodb charm is missing replicaset support, it's the last thing needed
<slrocketchat> marcoceppi:  thanks!
<marcoceppi> slrocketchat: I've got a stub for it, but I need help completing it, happy to collab on it
<marcoceppi> slrocketchat: I started mapping that out here https://github.com/marcoceppi/layer-mongodb/blob/master/lib/charms/layer/mongodb.py#L101
<marcoceppi> I just never was able to get replicasets to come online in a stable fashion
<slrocketchat> marcoceppi: happy to take a look. all the docker solutions i see use tricky, unreliable time-delay strategies. But we only need the single instance as a primary (because Rocket.Chat acts as secondary and parses oplog) so our requirements are non-typical.
<marcoceppi> slrocketchat: well, Juju as a third party, does a "leadership" election. I'm happy to pilot a lot of the code, I just need someone with better mongo experience than me to help me figure out the pitfalls
<magicaltrout> multicontainer docker stuff, still makes me sad, so i'm not surprised its flaky! ;)
<slrocketchat> marcoceppi: we can definitely help to put the layer through the grinder .. once we complete the charm it is expected to be widely deployed in varied configurations :)
<marcoceppi> slrocketchat: well I love rocket chat, so happy to work with you all :)
 * magicaltrout googles rocket chat.. i clearly live in a cave
<slrocketchat> marcoceppi:  goes both ways - we enjoy having gurus around us, solving problems that we hit in seconds with no run arounds :+1:
<deanman> Hi, quick question, might have stumbled on this before but don't recall. Shouldn't newly created models be inheriting same configuration of default?
<lazyPower> deanman: only if you juju set-model-defaults
<lazyPower> deanman: otherwise juju set-model-config only applies to the currently selected model
<deanman> lazyPower, during bootstrapping I configured http proxy and that was passed to default. When I created a new one it wasn't inherited. Should I manually configure every new model or is there a better way?
<lazyPower> deanman: i'm fairly certain the bootstrap constraints only apply at time of bootstrap in juju2. marcoceppi can you fact check my statement above? ^
<deanman> No worries, just want to make sure whether there is a quick way to copy defaults from one model to another.
<lazyPower> deanman: i'm pretty certain you want to set-model-defaults then
<deanman> `juju set-model-defaults` does not exist on my 2.0.1
<deanman> newer command addition ?
<deanman> got it, `model-defaults`
<deanman> no way to pass a file that has pre-configured the key-value configurations?
<marcoceppi> deanman: you can
<marcoceppi> deanman: you should be able to
<lazyPower> deanman: doesn't look like it has support for feeding a file. That looks like a bug/feature-request to me.
<deanman> ok thank you, I'll file two feature requests then, one to be able to carry over configuration passed during bootstrap and a second to be able to load configuration to a model using a file.
<SimonKLB> trying to co-locate two charms on the same machine but i'm getting into trouble with mis-matching python libs - would it be a good idea to try to run charms in separate virtual envs?
<marcoceppi> SimonKLB: which charms?
<SimonKLB> marcoceppi: my own and cs:~chris.macnaughton/influxdb-4
<marcoceppi> SimonKLB: if they're reactive charms, you can turn on the "enable_venv" option during build time so that their libs (or at least one of their libs) is in a venv
<SimonKLB> marcoceppi: neat! :)
<marcoceppi> SimonKLB: https://github.com/juju-solutions/layer-basic#layer-configuration
<SimonKLB> thanks!
<marcoceppi> SimonKLB: cheers
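For reference, a sketch of the layer.yaml fragment being discussed above. The chat calls the option "enable_venv"; the linked layer-basic README documents it as `use_venv` under the basic layer's options, so verify the exact spelling there before relying on this.

```yaml
# layer.yaml for a reactive charm, opting its Python deps into a venv
# so co-located charms don't fight over system site-packages.
includes: ['layer:basic']
options:
  basic:
    use_venv: true
```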
<deanman> lazyPower, got them filed as bugs but its the same process for features right?
<lazyPower> deanman: correct
<lazyPower> SimonKLB: did you file https://bugs.launchpad.net/charms/+source/elasticsearch/+bug/1646904 ?
<mup> Bug #1646904: ElasticSearch fails at peer-relation-joined <elasticsearch (Juju Charms Collection):New> <https://launchpad.net/bugs/1646904>
<SimonKLB> lazyPower: thats not me
<lazyPower> SimonKLB: k, i didnt think so but was just checking
<skay_> hey any mojo person around? I'm building a spec, and the secrets phase seems to have deposited my secrets as expected, but my charm didn't get configured with the values in the secrets file
<petevg> cory_fu: https://github.com/juju-solutions/matrix/issues/36
<cory_fu> petevg: Replied.  But testing bundles from the charm store is no longer supported at all, and you're attempting to test the current directory as a bundle.  We should probably improve that error message
<petevg> cory_fu: I didn't realize that we were officially dropping support. That's fine .. we just need to revise the tests to reflect.
<magicaltrout> get with the times petevg !!!
<petevg> magicaltrout: I'm so behind.
<magicaltrout> me too.... =/
 * petevg sheds a single tear
<magicaltrout> on the plus side, I have one of my guys working on charms now, so i might actually get some stuff with tests, reviewed etc
<magicaltrout> stranger things have happened
<magicaltrout> although today I am writing a test suite to test our docker containers......
 * magicaltrout also sheds a tear
<petevg> One step forward ...
<petevg> Though hooray for testing, regardless :-)
<magicaltrout> lol, as long as I don't have to write... tests....
<magicaltrout> he's working on drill though, I want to get some authentication running in it, but once its done, we'll get some tests in and it reviewed then hopefully you guys can utilising it in your big data bundles
<magicaltrout> -ing +e
<cory_fu> bcsaller: If the deploy task encounters a critical error, how can it signal that the test should abort?  Returning False makes it hang on the other rules
<petevg> magicaltrout: I look forward to playing around with it :-)
<magicaltrout> liar
<cory_fu> petevg, bcsaller: https://github.com/juju-solutions/matrix/pull/37
<petevg> cory_fu: code looks good to me.
<petevg> Thank you :-)
<petevg> Merged.
<cory_fu> petevg: Thanks
<petevg> np
<petevg> cory_fu, bcsaller: I experimented with integrating the "subordinate okay" stuff more organically into the typing, but I wound up just making monstrosities.
<cory_fu> bcsaller: I went with pre-validating the path instead of aborting more cleanly when the deploy failed, but I'm still curious how we would perform a "clean" abort if we needed to.
<bcsaller> cory_fu: I think i have some outstanding code for that. let me see if it still works
<cory_fu> bcsaller: That also came up when I was working on the ubuntui changes.  I was going to add a Cancel button but I couldn't figure out how to make it actually abort
<petevg> cory_fu, bcsaller: shouldn't it be something like loop.stop(), model.disconnect(), loop.close()? (I guess you may not have access to the model to disconnect it, and the disconnect method may not work after you call loop.stop ...)
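An editor's sketch of the "clean abort" being discussed: rather than calling loop.stop() with work in flight, cancel the outstanding task and let it unwind. Pure asyncio; the matrix/libjuju specifics (e.g. model.disconnect()) are not modeled here.

```python
import asyncio


async def worker():
    try:
        await asyncio.sleep(3600)  # stands in for a long-running rule
        return "finished"
    except asyncio.CancelledError:
        # Cleanup (disconnects, flushing logs, etc.) belongs here.
        return "aborted"


async def run_with_abort():
    task = asyncio.create_task(worker())
    await asyncio.sleep(0)  # let the worker start and suspend
    task.cancel()           # the abort signal (e.g. from a Cancel button)
    # The worker catches the cancellation and returns normally,
    # so awaiting it yields its cleanup result instead of raising.
    return await task


result = asyncio.run(run_with_abort())
```

Because the coroutine intercepts CancelledError and returns, the task completes cleanly; calling `loop.stop()` instead would leave the work half-done with no chance to disconnect.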
#juju 2016-12-07
<Ankammarao> Hi Kevin
<Ankammarao> Hello kwmonroe
<magicaltrout> Ankammarao: you're about 8 hours too early
<skay_> mojo spec question: I have a charm that transitions to blocked until a resource is attached and until a db relation is added. mojo fails on an error when it detects blocked. is my only choice not to use a blocked state, even though blocked is accurate?
<mthaddon> skay_: that sounds like https://bugs.launchpad.net/mojo/+bug/1645784
<mup> Bug #1645784: deploy phase should support custom ready states for juju status <mojo:New> <https://launchpad.net/bugs/1645784>
<skay_> thanks! I'll subscribe to that
<petevg> @cory_fu, @bcsaller: Updated my "fixes hadoop" PR. Collapsed the decorators into one decorator, that works with no extra parens (inspired by cory_fu's hack): https://github.com/juju-solutions/matrix/pull/34
<petevg> @cory_fu, @bcsaller: I considered replacing the tags with things like `units: Union[Unit, SubordinateUnit]` in the args for an action, but that felt like lying in the type checking, and made things more complex, rather than less complex. I think that passing flags via tags or args is the right thing to do for now.
<petevg> For the record, though, Union[Type1, Type2, ...] appears to be the Right Way to specify functions that can take multiple types.
<cory_fu> petevg: Is SubordinateUnit a subclass of Unit?  I don't see that in libjuju
<magicaltrout> aww petevg, SaMnCo thanked me for joining you at ApacheCon.....
<petevg> cory_fu: SubordinateUnit was something that I was going to make up to fake out the type checking.
<magicaltrout> oh wait... mass mail out....
<petevg> magicaltrout: Yay! ApacheCon was fun.
 * magicaltrout presses some buttons to respond nonsensically to the spam
<petevg> cory_fu: basically, I was going to do all sorts of evil to get rid of the tags/args/flags ... and then I thought better of it and didn't :-)
<cory_fu> petevg: Yeah.  It would be better if we could use the existing type checking in a clean way, but if not, it's better to use a clean alternative than shoehorn in enforce in a bad way
<cory_fu> I am happy about only having one decorator, though.  Thanks. :)
<Ankammarao> Hi kwmonroe
<cory_fu> bcsaller, petevg: So, it looks like matrix is running, it's just taking forever and I'm not sure why.  I think glitch needs to be more chatty about what it's doing
<petevg> cory_fu: it should spit out a "GLITCHING blah" message for every glitch action that it runs. Some of the actions have debugging statements in them.
<petevg> cory_fu: you do need to run with "-l DEBUG". We could maybe change that "GLITCHING blah" message to an info message ...
<petevg> cory_fu: It'll also spit out "Starting glitch" when it first starts glitching, followed by "Writing glitch plan to ..."
<petevg> cory_fu: if you don't see that second message, it got hung up on building the plan.
<cory_fu> petevg: Yeah.  Health spits out a message every 5s on INFO just to let you know that stuff is still happening.  We should have some sort of message, at least
<petevg> cory_fu: and if you don't see the first one, then it isn't glitch's fault that you're stuck :-)
<cory_fu> petevg: No, I got both, but nothing after that
<petevg> cory_fu: interesting. Do you see exceptions in the log?
<petevg> cory_fu: ... and do you see a new glitch_plan.yaml written to disk?
<cory_fu> petevg: No, and yes
<petevg> cory_fu: if I were troubleshooting, I'd add some debugging statements to the actions that the glitch plan is calling, and then re-run with that plan.
<petevg> I have a feeling that something else might be broken in your case, though ...
<petevg> cory_fu: ... because if something is broken in the glitch actions, you should still see that GLITCHING message.
<petevg> Huh.
<cory_fu> petevg: Didn't run it with -lDEBUG, tho
<petevg> cory_fu: Ah. In that case, I'd change that to an "info" message, and/or re-run with debug :-)
<petevg> In principle, that stuff should get written to the log as frequently as the health notifications.
<petevg> Glitch actions shouldn't take a long time to run.
<cory_fu> petevg: Yep, re-running now
<petevg> Cool.
<cory_fu> Odd thing, tho.  I should be getting the raw skin output from BT on a line-by-line basis, and I'm seeing nothing
<petevg> Weird.
<cory_fu> petevg: http://pastebin.ubuntu.com/23595172/
<cory_fu> It seems to have stopped during or after the second kill_juju_agent
<cory_fu> petevg: Full plan: http://pastebin.ubuntu.com/23595175/
<cory_fu> petevg: Units: http://pastebin.ubuntu.com/23595178/
<petevg> cory_fu: interesting. Killing the juju agent is inherently buggy, because you don't get a response back due to the agent dying. It might be worth adding a debug logging statement to the "except AttributeError" maybe it's failing in a way that we don't expect.
<petevg> (That's in actions.kill_juju_agent)
<petevg> cory_fu: it could also be a bug in python-libjuju/juju/unit.py:Unit.run. Possibly wait_for_action is getting hung up.
<petevg> cory_fu: is there anything useful in the juju debug logs?
<beisner> hi all - is there a work-around for https://launchpad.net/bugs/1633788 where juju tries to talk to the default network gateway as if it's a lxd host?
<mup> Bug #1633788: juju 2.0.0 bootstrap to lxd fails (connect to wrong "remote" IP address) <canonical-is> <juju> <lxd> <lxd-provider> <uosci> <juju:Triaged by rharding> <https://launchpad.net/bugs/1633788>
<cory_fu> petevg: So I think what is happening is that sometimes `await unit.run('sudo pkill jujud')` is neither returning nor raising an AttributeError.  :(
<petevg> cory_fu: that wouldn't shock me :-(
<petevg> cory_fu: not sure what to do about it :-/ Killing the juju agent is definitely the sort of thing that glitch wants to do. But it's also exactly the sort of thing that is going to break the api.
<cory_fu> petevg: Maybe we shouldn't be using run() for that, and use ssh() instead
<cory_fu> That shouldn't go through the agent
<cory_fu> tvansteenburgh: Can you chime in on that: ^
<petevg> cory_fu: on the one hand, yes. On the other hand, you could argue that the websocket api freaking out because the agent on one of the machines went away is exactly the sort of bug that we're trying to uncover.
<brenopolanski> hey guys. I created a simple script for installing Juju tools on Ubuntu environment: https://github.com/brenopolanski/juju-setup
<cory_fu> petevg: I don't think it's the websocket API that's hanging.  I think it's the async code in libjuju
<magicaltrout> ah brenopolanski you turned up
<cory_fu> I think it's waiting for a response from the agent that it won't get
<petevg> cory_fu: that's better ... but also the sort of bug that we'd want to fix :-)
<cory_fu> Indeed
<magicaltrout> brenopolanski: just to point you to some useful guys, petevg is a big data guy, so the drill stuff, if you get stuck ask him
<magicaltrout> cory_fu technically I think is big data, but is a juju internals guru, so if you get stuck, prod him
<magicaltrout> and buy him a hat and he'll love you forever
<cory_fu> :)
<magicaltrout> brenopolanski works with me and is making my charms stable
 * petevg waves at brenopolanski
<magicaltrout> watch out brenopolanski, petevg is overly nice
<magicaltrout> too nice, some might say
<petevg> Yeah. I must be hiding something.
<petevg> :-p
<petevg> cory_fu: it might make sense to code up a general timeout for glitch actions. Like, each time that GLITCHING message gets written, we reset the timeout, and if a glitch action takes too long to run, we call it a failed test.
<cory_fu> petevg: Probably a good idea, yeah
<cory_fu> petevg: There is a timeout flag on unit.run()
<petevg> cory_fu: I was just taking a look at the docs for asyncio, related to timeouts. Nice to see that python-libjuju has a passthrough for them :-)
<petevg> cory_fu: We could set a really short timeout on the kill juju agent command, since we expect it to break things, anyway.
<cory_fu> petevg: Well, as of now, the timeout is just given to the Juju API, so I don't think it would actually help in this case.  But we could pass it through to wait_for_action as well
<petevg> cory_fu I'm curious to see if it would work when passed to the API. It's facilitating our conversation with the agent, so it might do the right thing.
<cory_fu> petevg: I still think unit.ssh() makes more sense for kill-agent, but it looks like that's not implemented yet anyway
<cory_fu> petevg: I'll give it a try
<petevg> cory_fu: I wouldn't be sad to see an implementation of .ssh, either. But if the timeout works, that would be happy.
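A sketch of the per-action timeout petevg proposes above: wrap a possibly hanging coroutine in asyncio.wait_for and treat the timeout as a failed action instead of letting the whole run wedge. The hanging `kill_agent` here is a stand-in for `unit.run('sudo pkill jujud')`, whose reply may never arrive once the agent dies.

```python
import asyncio


async def kill_agent():
    # Simulates an action whose response never comes back.
    await asyncio.sleep(3600)
    return "agent killed"


async def run_action(action, timeout):
    """Run one glitch-style action with a hard deadline."""
    try:
        return await asyncio.wait_for(action(), timeout)
    except asyncio.TimeoutError:
        # In the real harness this would mark the test failed
        # rather than hang waiting on a dead agent.
        return "action timed out"


result = asyncio.run(run_action(kill_agent, timeout=0.01))
```

wait_for cancels the wrapped coroutine on timeout, so the stuck call is also cleaned up rather than leaked.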
<cory_fu> brenopolanski: That's interesting.  I actually hadn't heard of zenity before.  Also, have you taken a look at http://conjure-up.io/?  It can also assist with installation of Juju, as well as getting an environment set up and deploying a bundle with interactive configuration.
<tvansteenburgh> cory_fu: i'm not sure how to chime in on that other than to say that kill jujud via the api will never end well
<cory_fu> brenopolanski (and magicaltrout if you haven't seen it): You might also be interested in signing up for the beta program on https://jujucharms.com/ for hosted controllers, which lets you deploy charms and bundles to your own cloud account without needing to bootstrap or even install Juju at all
<cory_fu> tvansteenburgh: Well, ostensibly, the agent should be restarted immediately, but it does seem that we lose the delta for the result of the run "action"
<brenopolanski> cory_fu: `conjure-up` is very good. I did not know about it
<cory_fu> brenopolanski, magicaltrout: Actually, that beta might be restricted, I'm not sure.  But it's linked on the site, anyway
<brenopolanski> cory_fu: okay :)
<magicaltrout> wondered what that beta link was for
<cory_fu> magicaltrout: If you click on it, it gives details as well as the option to request to join.  I'm not sure how permissive the signup process is, but it doesn't hurt to apply. :)
<magicaltrout> yeah
<magicaltrout> but then you lot can spy on all your users.......
<magicaltrout>  ;)
<cory_fu> It's pretty cool stuff, though.
<cory_fu> ha
<petevg> cory_fu, bcsaller: are we sure that task args are working correctly right now?
<cory_fu> Yep
<petevg> cory_fu, bcsaller: The .yaml snippet here should lead me to have task.args['plan'] when I run glitch, right? http://paste.ubuntu.com/23595592/
<cory_fu> Hrm.  Yeah, it should
<petevg> cory_fu: I'm poking at it in ipdb, and task.args is {} :-/
<petevg> cory_fu: ... and it's getting other stuff from the .yaml. I'm running a "rolling_restart", like the .yaml specifies.
<petevg> cory_fu, bcsaller: full .yaml, for reference. Either of you see anything obviously wrong with it?
<cory_fu> petevg: You gonna provide a link with that last one?
<petevg> cory_fu, bcsaller. Whoops: http://paste.ubuntu.com/23595603/
<cory_fu> petevg: Seems fine to me.  I might try quoting the path, but I doubt that's the issue
<petevg> cory_fu: darn. I just added a breakpoint to deploy, and it has the same problem -- it isn't picking up that "version" arg.
<cory_fu> petevg: I've definitely seen it work, and recently
<petevg> Interesting ...
<cory_fu> petevg: I'm testing it on my run
<cory_fu> petevg: Yeah, it worked for me: http://pastebin.ubuntu.com/23595653/
<cory_fu> petevg: Code: http://pastebin.ubuntu.com/23595658/
<petevg> cory_fu: I'll try rebasing from master. Thx.
<cory_fu> petevg: I just realized that I'm not up to date on master myself
<petevg> cory_fu: my args are broken even after a rebase. Uh-oh.
<cory_fu> petevg: I think I was only missing the ubuntui changes.  But I'm kicking it off again
<petevg> cory_fu: yeah. If I set a breakpoint in .from_v1 in rules.py, I have a data object that doesn't have the args in it.
<petevg> That's Test.from_v1 (line 55, for me)
<petevg> I have no idea what's calling that, though. :-/
<petevg> Ah. Here it is ...
<petevg> cory_fu: I think that utils.merge_spec is doing broken things. Now I feel bad for not spending more time thinking through the logic when I did the PR (I remember thinking that I should slow down and think about it ...)
<cory_fu> petevg: It's still working fine for me.  Did you modify merge_spec in your local copy?
<petevg> cory_fu: nope.
<cory_fu> petevg: Well, it's working fine for me off of master.
<petevg> cory_fu: are you running a custom matrix.yaml?
<petevg> cory_fu: I just switched to master, and still get the same error (deploy doesn't have a version arg.)
<cory_fu> petevg: Only that one change (the arg)
<cory_fu> petevg: The deploy task *doesn't* have a version arg
<petevg> cory_fu: what I mean is, have you edited the matrix.yaml in matrix, or are you pointing matrix at a separate .yaml, like I am?
<cory_fu> petevg: I'm using the built-in matrix.yaml.  But the deploy task code doesn't look for a "version" arg
<petevg> cory_fu: I think the args may be broken if I override stuff with a custom matrix.yaml, like I'm doing with Zookeeper. (deploy does have a version: current in test_2.matrix, which is what I used as a template, so my deploy does have an arg that I can check for ...)
<bcsaller> petevg: some of the things in test are not real. cory_fu implemented deploy, I think you have to take his word for it ;)
<petevg> cory_fu: oh. I see what's happening. And I should know, because we talked about it. I'm looking for my args too early in the process. It's still doing the default tests, before it gets to my arg tests.
<petevg> bcsaller ^
<cory_fu> petevg: Yeah, if you are only adding a new test case, then the built-in ones will not have any args
<petevg> cory_fu, bcsaller: We should probably blow up test_2.matrix. There's all sorts of outdated stuff in it, like "version" args for deploy :-)
<petevg> cory_fu, bcsaller: it's basically my bad. Sorry to waste people's time.
<petevg> cory_fu, bcsaller: though I now do have a real blocker. reset consistently doesn't work, so I can't actually get matrix to run a custom test that comes after our default tests :-/
<cory_fu> petevg: I'm hitting reset not working pretty consistently after a glitch as well.  I need to add in the direct machines kill, at least as a fall-back.  I was hoping to do it more cleanly, but meh
<petevg> cory_fu: I was going to directly kill the machines manually. I think that's the solution, unless you want to blow up the whole model and recreate it.
<cory_fu> petevg: I'd probably prefer to only kill the newly added machines, but maybe we should just move toward each test running in its own fresh model, like I think bcsaller mentioned
<petevg> +1 to that.
<cory_fu> petevg, bcsaller: My concerns with having Matrix add and remove models are: 1) what about permissions? 2) it doesn't give us any re-use with the model that bundletester has already spent time deploying
<petevg> cory_fu, bcsaller: made an issue, that partially addresses cory_fu's concerns in the description: https://github.com/juju-solutions/matrix/issues/38
<petevg> cory_fu: In any case, I can work around for now with -D. Thank you for coding up that option. :-)
<bcsaller> cory_fu: permissions should be at the controller level, no? so model add/remove should be seen as a cheap op. For point 2 we might want to enable a signal to BT that allows better control over if the current model can be reused at the end of the test (and when no it could make a new one), what do you think about that?
<cory_fu> bcsaller: Well, regardless of whether we create a new model or not, matrix is destructive so really needs to run at the end of BT.  So it's more a question of matrix re-using BT's deployed model to save some time (though only for the first test, I suppose).  I guess it's cleaner all-around to just spin up new models for matrix
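The "fresh model per test" idea sketches naturally as an async context manager. This is an editorial illustration only: `FakeController` is a stand-in that merely records calls, and the real python-libjuju `Controller` is only mirrored, not imported, so the teardown ordering can be shown without a live controller.

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def fresh_model(controller, name):
    """Give each test its own model and tear it down afterwards.
    `controller` only needs add_model()/destroy_model() coroutines."""
    model = await controller.add_model(name)
    try:
        yield model
    finally:
        # Destroying the model cleans up every machine the test added,
        # sidestepping flaky per-unit resets.
        await controller.destroy_model(name)

class FakeController:
    """Stand-in that records calls instead of talking to Juju."""
    def __init__(self):
        self.calls = []
    async def add_model(self, name):
        self.calls.append(("add", name))
        return name
    async def destroy_model(self, name):
        self.calls.append(("destroy", name))

async def demo():
    ctrl = FakeController()
    async with fresh_model(ctrl, "matrix-test-1") as model:
        assert model == "matrix-test-1"
    return ctrl.calls

calls = asyncio.run(demo())
print(calls)  # [('add', 'matrix-test-1'), ('destroy', 'matrix-test-1')]
```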
<bcsaller> cory_fu: supposing there are no other BT tests in the bundle it could skip the default deploy as part of matrix support but that might be too much
#juju 2016-12-08
<petevg> cory_fu: do you save off the dir that you found the bundle in anywhere in the code that you wrote related to finding bundles?
<cory_fu> petevg: What?
<cory_fu> I didn't write any code related to finding bundles
<petevg> cory_fu: I mean, when you fall back to running in the bundle in the current dir ...
<petevg> cory_fu: I can just read the code.
<cory_fu> petevg: I'm not sure what you mean.  That's just the default value of the argparse option
<petevg> cory_fu: by which you mean, you do save it into the path config value. It's just late, and I'm not typing clearly :-)
<petevg> ... or thinking clearly, obv
<kjackal> Good morning Juju world!
<jacekn> is it possible to run xenial services on xenial machines but with trusty bootstrap node?
<rick_h> jacekn: completely
<rick_h> jacekn: just deploy those using the trusty series and you'll be peachy
<rick_h> jacekn: we even do where the controller is ubuntu but then you run applications on machines in centos/windows
<jacekn> rick_h: ack, thanks
<jacekn> rick_h: I'm assuming that the series within one service needs to match, correct? (so it's impossible to upgrade units one by one?)
<rick_h> jacekn: so upgrading a charm isn't upgrading the series
<rick_h> jacekn: so that's a bit different
<rick_h> jacekn: but yes, each application needs to match
<rick_h> jacekn: so you can deploy a charm twice, with a different application name, and they can be of different series
<jacekn> ok thanks, that's what I thought
<jacekn> rick_h: it's not possible to rename services right?
<rick_h> jacekn: no, cannot currently
<jacekn> alright
<jamespage> hmm I put https://review.jujucharms.com/reviews/49 up - but then think I might have actually pushed that to edge instead
<stub> Is it still supported to change the advertised IP address on a relation to some other IP address? IIRC it was blessed as a way of implementing proxies.
<stub> I'll test soon, but even if it works I'm not sure if it is something I should be doing.
<petevg> bcsaller, cory_fu: would either of you mind if I just merged https://github.com/juju-solutions/matrix/pull/34  Having my work on other stuff depend on that branch is making my life complicated :-/
<magicaltrout> i mind
<cory_fu> petevg: :)  I was just now giving it a last look and was going to merge it before sync
<petevg> cory_fu: cool. Thx.
<petevg> I guess I don't need to encourage magicaltrout to rebel against the tyranny of his own mind by dropping a +1 on my PR ...
<magicaltrout> i do like this matrix stuff. I piss the guys off at JPL by just randomly shutting down docker containers on some of our stuff occasionally
<petevg> ... and my PR allows you to do that to services that you couldn't do it to before :-)
<cory_fu> petevg: Merged.  Sorry that took so long.  Thanks for putting up with my nit picking, but I am quite happy with the result.  :)
<petevg> cory_fu: thx for merging. Nit picking is okay -- we want this to be code that we're all proud of :-)
<magicaltrout> haha
<petevg> at least you're entertained :-p
<magicaltrout> okay enough trolling petevg for now, I should probably do some.... work
<magicaltrout> as such, i'll delay that by fetching coffee
<skay_> mthaddon: how long until this hits? https://code.launchpad.net/~mthaddon/mojo/additional-ready-states/+merge/312791
<mthaddon> skay_: hard to say - depends on the review
<skay_> mthaddon: my new charm is in review, and I have feedback to change blocked to maintenance or waiting due to how mojo handles things currently. but, this looks like it will go in soon
<mthaddon> skay_: I would imagine this would be merged/released within a week or so if that helps
<skay_> mthaddon: ok
<skay_> I'll mention it. we're not close to ready for production, so I am pretty sure the mojo change will be out first
<skay_> (though, I think I could change it to waiting. if a human is around, they'll see the message about what it is waiting for. (the bug mentions it is debatable))
<bladernr> Question about service/application config.  I have a charm and I have defined some config options and can set and change them.  What I haven't been able to find an example of, or understandable documentation on, is how to write a config-changed hook that will take the contents of each option and turn them into an environment variable on the machine.
<bladernr> so for example, if I do juju set servicename target-ip="192.168.100.100", I want the config-changed hook to recognize that and do something like "export TARGET_IP=$target-ip"
<mgz> bladernr: I responded on your bug
<mgz> (unrelated to anything)
<mgz> bladernr: to your question here, just export probably doesn't do what you want, you'd need to restart any services in the new context, or set globally in an rc file or something
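A hook along those lines might look like the sketch below. Everything here is hypothetical (the option names, the rc-file destination, the helper names): the point, per mgz's comment, is that `export` inside the hook only affects the hook's own process, so the hook should render the values into an rc file (e.g. something under /etc/profile.d/) and restart the services that need them.

```python
# Hypothetical helper for a config-changed hook: turn charm config
# options into shell-style environment assignments that can be
# persisted in an rc file.

def to_env_line(option, value):
    # "target-ip" -> TARGET_IP (shell variable names can't contain '-')
    name = option.upper().replace("-", "_")
    return '{}="{}"'.format(name, value)

def render_env(config):
    return "\n".join(
        to_env_line(k, v) for k, v in sorted(config.items())
    ) + "\n"

# In a real hook, config would come from `config-get`, and the result
# would be written to an rc file rather than printed.
config = {"target-ip": "192.168.100.100", "target-port": "8080"}
print(render_env(config), end="")
```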
<kwmonroe> Ankammarao_: i understand you're having trouble with sshuttle?  there are a few of us here that have used sshuttle to tunnel traffic to a remote network.  what's the issue?
<Ankammarao_> i tried to log in to the s390x machine from an x86_64 machine (firefox working fine)
<Ankammarao_> could you tell me exactly how to use sshuttle
<kwmonroe> Ankammarao_:  sshuttle -r <user>@<remote-machine>  <juju-subnet>
<kwmonroe> Ankammarao_: sshuttle -r root@z10lnx04 10.0.3.0/24
<kwmonroe> ^ as an example
<Ankammarao_> kwmonroe, Z10LNX06.in.ibm.com this is the machine we are using
<Ankammarao_> here i have instaled the sshuttle
<kwmonroe> ok, you need to install sshuttle on your local machine.  is z10lnx06 local, or is that the remote server?
<Ankammarao_> local machine only
<kwmonroe> ok, what's the name of your remote machine?
<Ankammarao_> we have deployed the charm here only
<Ankammarao_> so we are using this server to run firefox
<Ankammarao_> how can we know the juju lxd subnet
<kwmonroe> Ankammarao_: if you're local on the machine, you probably don't need sshuttle.  just browse to the ip address of the lxd machine.
<kwmonroe> Ankammarao_: you can test that you have access to the lxd subnet from you local machine by trying to ping one of the lxd machines.  if that works, then try 'curl <url>', if that works, then any browser should work too.
<Ankammarao_> kwmonroe, are there any other browsers which support ubuntu
<Ankammarao_> kwmonroe, i am able to access the lxd machine and ping it, but unable to browse
<kwmonroe> Ankammarao_: let's back up a minute.  are you typing on a laptop or desktop right now?
<Ankammarao_> typing on laptop
<kwmonroe> Ankammarao_: what OS is your laptop running?
<Ankammarao_> windows 7
<kwmonroe> ok.. so.  Ankammarao_.  here's where the confusion started.  your *local* machine is your windows7 laptop.  your remote machine is Z10LNX06.
<kwmonroe> and when you attempt to run a browser like firefox on your remote machine, it will likely fail because there is no window manager to handle firefox's gui from within a terminal.
<Ankammarao_> kwmonroe, i am running the firefox on vnc server
<Ankammarao_> kwmonroe, its working fine for x86_64 machine
<Ankammarao_> but we are facing issue with s390x machine
<kwmonroe> Ankammarao_: so you're running vnc on your laptop and connecting to the destop of Z10LNX06.  running firefox from there is what is crashing.  is that right?
<Ankammarao_> kwmonroe, yes exactly
<kwmonroe> Ankammarao_: ok.  sshuttle won't work for you because i do not believe there is a windows client available.  Ankammarao_ what version of ubuntu is running on Z10LNX06?
<Ankammarao_> kwmonroe, Ubuntu 16.04.1 LTS
<kwmonroe> Ankammarao_: you should be able to 'sudo apt-get install chromium-browser' on Z10LNX06 and try that browser.  if that crashes as well, you'll need to follow up on #ubuntu or perhaps #ubuntu-server.  we're way outside of juju's world at this point..
<Ankammarao_> kwmonroe, we have tried with chromium-browser too, it's not getting installed
<Ankammarao_> kwmonroe, can you suggest me the steps to try with sshuttle
<kwmonroe> Ankammarao_: sshuttle is not an option for you.  there is no sshuttle for windows 7.
<Ankammarao_> kwmonroe, okay
<Ankammarao_> kwmonroe, so what could be the solution for the browser issue on s390x ubuntu? can you redirect me to some contacts or channels
<kwmonroe> Ankammarao_: join #ubuntu-s390x and tell them you're having trouble launching firefox on your s390x machine when connecting through vnc.
<Ankammarao_> kwmonroe, thanks
<kwmonroe> np - good luck Ankammarao_!
<bdx> getting this error after creating a new vpc http://paste.ubuntu.com/23599855/
<bdx> I've ensured it has been provisioned identically to the other vpc's I'm using
<bdx> my controller never seems to finish bootstraping ....
<bdx> going to let the bootstrap timeout and see what gives there
<bdx> ahhh, had to add a route to my igw in my routing table
<vagarwal> i deployed openstack novalxd using conjure-up. it sets lxdbr0 as 10.4.57.1 and conjureup0 as 10.99.0.1, where the new instance on ext-net interface is assigned 10.99.0.14
<vagarwal> i cannot ping to this ip or ssh to it. it gives a destination unreachable error.
<vagarwal> is this expected behaviour and what am i missing?
<hml> vargarwal: did you assign a floating-ip to the instance?
<derekcat> Hey, does anyone know if there's a problem with running a maas-region-controller and a maas-rack-controller each in LXDs on the same host?
<zeestrat> derekcat: You might want to check in #maas
<derekcat> Ok, wasn't sure where to ask first since I'm deploying maas to the LXDs with juju
<petevg> cory_fu: thinking aloud about matrix issue#39 ... I think that the problem is that glitch can run a select after a remove_unit command has been issued, but before that unit has been removed from the model. Glitch can then go and try to run an action on that unit as it is disappearing, or after it has disappeared. I guess the question is, is it glitch's
<petevg> responsibility to make sure that I don't try to reboot a nonexistent unit? Or is it the api's responsibility to fail sensibly when I act on outdated info that I've gotten from the model?
<petevg> cory_fu: on the one hand, I kind of want to rein in glitch. On the other hand, we agreed earlier that I probably shouldn't -- that we were trying to unearth exactly these sorts of broken situations, and make sure that our APIs and tools dealt with them appropriately.
<petevg> cory_fu: did you happen to keep any of the glitch_plan files that caused your crash?
<cory_fu> petevg: I didn't keep any, no, sorry.  I don't feel like this is a problem with the API, though.  I think it is reasonable for glitch to block until the unit is actually gone when it removes the unit, and I think it's also reasonable to ensure the plan doesn't include more remove_units than can possibly succeed.
<cory_fu> The API can't do anything other than throw an error if you attempt to remove a non-existent unit.
<cholcombe> is something going on with amazon?  all my units i deploy are in the down state and never seem to boot
<petevg> cory_fu: right. The API should throw an error. And then glitch should catch it and decide to either fail the test or ignore it and continue. The behavior that you saw, where it stalls out and gets stuck, isn't correct.
<cory_fu> petevg: Yep, fair enough
<petevg> cory_fu: I thought that I had thrown a try: except around the glitch actions, though. It looks like I haven't. That's maybe the best thing to do (if I add in the sleep, then glitch will never test things like attempting to reboot a unit as you're removing it, which is the sort of edge case that we want to test.)
<cory_fu> petevg: Fair enough.  As long as it doesn't get in to an unstable state, I'm happy.  Though I do think that at some point we're going to have *some* smarts to the plan generation.  Having a plan with a bunch of steps that are essentially no-ops seems counter-productive
<cory_fu> But one or two, which would be the most common case, should be fine for now, as long as it doesn't go unstable because of it
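The agreed behavior (catch the expected failure, record it, keep the run stable) might look like the sketch below. The names here are invented for illustration: `JujuAPIErrorStandIn` stands in for the real libjuju error class, and `run_glitch_action` is not actual matrix code.

```python
import asyncio

class JujuAPIErrorStandIn(Exception):
    """Stand-in for libjuju's real API error class, for this sketch only."""

async def run_glitch_action(action, *, expected=(JujuAPIErrorStandIn,)):
    """Run one chaos action; an error from acting on a unit that has
    already vanished is an expected outcome, not a hung test."""
    try:
        await action()
        return "ok"
    except expected as e:
        return "skipped: {}".format(e)

async def reboot_missing_unit():
    # Simulates acting on stale model info: the unit is already gone.
    raise JujuAPIErrorStandIn("unit zookeeper/2 not found")

result = asyncio.run(run_glitch_action(reboot_missing_unit))
print(result)  # skipped: unit zookeeper/2 not found
```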
<petevg> cory_fu: maybe we do something like make it less likely that glitch will act on a single unit more than twice. (Just thinking aloud.)
<petevg> cory_fu, bcsaller: PR to help fix some of cory_fu's headaches: https://github.com/juju-solutions/matrix/pull/42
<petevg> cory_fu, bcsaller: the weird thing is that I distinctly remember adding something like this earlier. Maybe I backed off of it because I didn't like the generic catch. I think that it's something we actually want to do here, though.
<cory_fu> petevg: I still don't like the generic catch.  What's wrong with catching JujuAPIError?
<petevg> cory_fu: I believe the error that we're seeing for the reboot is not a JujuAPIError. (Running against to double check.)
<petevg> cory_fu: I mean AttributeError. Regardless, it's not a neat JujuAPIError.
<cory_fu> petevg: Regardless, we should definitely be specific about which exceptions we're catching.  Otherwise, you'll catch the control-flow exceptions that are used to shut down the UI
<cory_fu> Among other things
<petevg> cory_fu: except Exception doesn't catch control flow exceptions. That's why I used it rather than just "except:"
<petevg> cory_fu: "except Exception" only catches stuff that inherits from the Exception class. Unless I missed a change between Python 2 and 3, the control flow commands don't inherit from that class.
<petevg> It's the "safe" way of catching a generic exception.
<cory_fu> petevg: Yes, I'm aware.  But we might have application-defined control-flow exceptions which do inherit from Exception raised, since we're async.
<cory_fu> petevg: Specifically, bcsaller's branch adds some
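Both halves of this disagreement are demonstrably true in plain Python: `except Exception` lets interpreter control-flow exceptions (BaseException subclasses such as KeyboardInterrupt) propagate, yet it does swallow any application-defined exception that inherits from Exception, which is cory_fu's concern. `AppControlFlow` below is a made-up example class.

```python
# `except Exception` vs. control-flow exceptions, in miniature.

class AppControlFlow(Exception):
    """Hypothetical app-defined control-flow exception (inherits Exception)."""

def handle(exc):
    try:
        raise exc
    except Exception:
        # Catches anything derived from Exception -- including
        # application control-flow classes like AppControlFlow.
        return "caught"

print(handle(AppControlFlow()))   # caught

try:
    # KeyboardInterrupt derives from BaseException, not Exception,
    # so it escapes handle() untouched.
    handle(KeyboardInterrupt())
    passed_through = False
except KeyboardInterrupt:
    passed_through = True
print(passed_through)             # True
```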
<petevg> cory_fu: that's bad. That probably isn't the solution, then.
<cory_fu> Maybe I'm being paranoid and asyncio wouldn't cause the exception to come up through that code path.
<cory_fu> petevg: ^
<petevg> cory_fu: I don't know. For now, I'm on deck to make dinner (I'm solo parenting for the next few days), and I'll be out tomorrow. I'm not attached to that code, and won't be offended if anyone rewrites it.
<cory_fu> I'm not familiar enough with async to say for sure
<petevg> cory_fu: but I have to sign off for now.
<cory_fu> kk
<cory_fu> petevg: Have a good evening!
<petevg> cory_fu: you, too :-)
<petevg_afk> have a good weekend, too :-)
<magicaltrout> nooo petevg_afk don't leave!
<petevg_afk> magicaltrout: but I must. If I don't, then we will have burned shrimp tempura for dinner, and the kidlet and the cats may join in revolt.
<magicaltrout> argh not burned shrimp!
<magicaltrout> goooooo
<magicaltrout> be at one with the fish
 * petevg_afk nom nom noms
#juju 2016-12-09
<brenopolanski> is someone translating the Juju documentation to Portuguese (Brazil)? (https://github.com/juju/docs)
<magicaltrout> hey marcoceppi my fellow charmer brenopolanski is interested in committing some PT/BR translations, do you have any way of accommodating that stuff yet?
<kjackal> Good morning Juju world!
<dougi> Hi. I'm trying to install openstack... I'm using the latest conjure-up on MAAS ubuntu 16.04 LTS. All the documentation says I need 1 MAAS server, 1 juju node, and 3 compute nodes for openstack. I run conjure-up openstack and bootstrapping the juju node goes fine, but while the nodes are deploying, juju status shows it expects 4 machines?? why? and is it possible to run it on 3 compute nodes?
<kjackal> hi dougi, you could ask at #openstack-charms
<marcoceppi> magicaltrout: email the list, we've not really talked much about translations. I'd like to use Launchpad for it, but I don't run the docs
<magicaltrout> fair enough
<admcleod> magicaltrout: HELLO!
<magicaltrout> jesus
<dougi> kjackal: ok thanks :)
<admcleod> magicaltrout: HOW R U?
<admcleod> lolz
<magicaltrout> you a bit bored admcleod ? :P
<admcleod> magicaltrout: no, just expressing the joy which you bring to my world
 * magicaltrout detects cynicism
<admcleod> i typed that without even a homeopathic degree of anything negative
<bladernr> Stupid question, but hook tools can't be run from the shell on a unit, can they?  Isn't there some way to get into the charm's environment to run the hook tools like config-get manually?
<bladernr> I want to test some of them out and then write a hook to take advantage of them.
<bladernr> ^^ IOW, I can't just juju-ssh to a unit and run config-get, there's something more that needs to be done to get into the Juju environment on the deployed unit?
<jrwren> bladernr: juju run for a unit runs in a hook context. you can invoke that way.
<bladernr> jrwren, perfect! that's exactly what I was looking for
<bladernr> jrwren, thanks a ton!
<jrwren> bladernr: you are welcome. If you can suggest a place where you would have found this in the docs, maybe we could update the docs to mention it.
<bladernr> jrwren, maybe just a mention here: https://jujucharms.com/docs/2.0/reference-hook-tools
<bladernr> perhaps in one of the Note boxes at the top of the page as just a hint for testing the hook helpers
<bladernr> sprint over, time go to.
<jrwren> bladernr: thanks!
<cory_fu> tvansteenburgh: https://github.com/juju/python-libjuju/commit/6ba2856fecf224ae3fd589331e889a6587e8153b#commitcomment-20136048
<cory_fu> tvansteenburgh: I was using add_model in matrix already before your change and just passing None for both of those, so I think we can just pass those through.  (Though your "tag.cloud(cloud_name)" change will probably need an if / else wrapper.)
<tvansteenburgh> cory_fu: passing None for both of what?
<cory_fu> cloud_name and credential_name
<tvansteenburgh> oh i see
<tvansteenburgh> well that's great news!
<tvansteenburgh> didn't realize that
<tvansteenburgh> cory_fu: will refactor, thanks for the tip
<cory_fu> tvansteenburgh: No problem.  And thanks for the merge conflicts.  ;)
<tvansteenburgh> any time!!
<cory_fu> =D
<skay_> stub: hi, I'd like for the snap layer to support private/unpublished snaps. how would I go about adding that functionality?
<cholcombe> bdx, could you publish your vault, consul and consul-agent charms?
<mbruzek> bergtwvd ping
<tvansteenburgh> if i run `juju config ...` and then run `juju run ...`, am i guaranteed that the config-changed hook has finished before the `juju run` command is executed?
<tvansteenburgh> iotw, are `juju run` commands queued along with regular hook invocations, or run out-of-band?
<tvansteenburgh> i thought it was the latter but i can't find any docs that say either way
<rick_h> tvansteenburgh: yes
<tvansteenburgh> rick_h: yes to which question?
<tvansteenburgh> rick_h: yes, config-changed will def finish before the juju run command executes?
<rick_h> Yes, it will go in order
<rick_h> The events are sync though they occur async
#juju 2016-12-10
<vagarwal> has anyone tested conjure novalxd setup? does it work out of the box or requires one to make changes to openstack networking?
<vagarwal> i cannot ping any instance. security groups are fine and i don't know where to look. please help
<deanman> vagarwal, does `lxc list` bring anything up ?
<vagarwal> deanman: lxc list is showing several containers with juju prefix with ip assigned to them
<vagarwal> i can access openstack horizon using one of the container ip address (as provided by the installer)
<deanman> I think to be able to ping openstack instances you have to enable ICMP rules
<vagarwal> i can launch an instance and use "ext-net" that is infact the conjureup0 subnet (10.99.0.xxx)
<vagarwal> deanman: that was what i thought and i can clearly see the security group has allowed everything
<vagarwal> i can ping 10.99.0.3 (router ip) but i cannot ping the instance 10.99.0.14
<deanman> Hmm can you try to ssh into that machine ?
<vagarwal> it looks like the default router setting is not correct in the conjure novalxd setup
<vagarwal> i cannot ssh either
<vagarwal> i did a tcpdump on the host machine and i can see there is no reply coming for the icmp echo
<vagarwal> possibly missing some nat rules? i've got no idea how ovs works, hence clueless here
<deanman> To be honest i haven't used conjure myself so i don't know how exactly it works, e.g. are VMs deployed with openstack seen as LXD instances on the host, or are they nested?
<vagarwal> openstack is using lxd as well
<deanman> so you can see extra lxc instances created on `lxc list` when using OS
<deanman> ?
<deanman> Maybe you could then use `lxc exec <lxd name> bash` to gain shell access inside that and check network configuration ?
<deanman> Have you seen this ? https://www.stgraber.org/2016/10/26/lxd-2-0-lxd-and-openstack-1112/
<kildurin> Can the openstack-base charm be installed behind a proxy?
<kildurin> It uses "git://" addresses in the charm which I can't see a way to route to the proxy
<marcoceppi> kildurin: openstack-base should only really use the packages unless you're installing from upstream
<marcoceppi> kildurin: try setting your proxy settings in your model
<vagarwal> deanman: i will look into ovs and read about neutron
<vagarwal> i will even give a try to stgraber's method that involves manually spinning out the containers
<deanman> vagarwal: well I was intrigued and bootstrapped stgraber's openstack just before leaving home. At least I could have a look on mine and share whether I see the same problem.
#juju 2017-12-04
<Ravenpi> Trying to set up IPv6 endpoints for our cloud.  Have assigned 2005::100 to eth0 on my Keystone.  When I try to enable IPv6, though, I get "Interface 'eth0' does not have a scope global non-temporary ipv6 address." Ideas?
<Ravenpi> (2005::100 shows as globally scoped via ifconfig.)
<derekcat> Hey Everyone, quick question about the percona-mysql charm - is there a Juju way to change my.cnf options for the cluster?  For example, we need to set: ft_min_word_len=2
<jose-phillips> hi derekcat
<jose-phillips> try juju config mysql
<derekcat> Hey jose-philips!  Ok, give me a second, trying to get back into the controller.  >_<  Juju is hanging on the controller for this model...
<stokachu> ryebot: gun1x  is having issues connecting to grafana
<stokachu> ryebot: says it's asking for a password
<gun1x> not only grafana ... all the services from these links ask for user/pass: https://bpaste.net/show/a43461bbec45
<rick_h> gun1x: yea, grafana needs a password. The readme has instructions for getting it or you can change it via config
<rick_h> gun1x: assume you mean for the webui?
<gun1x> rick_h: i mean all the links from the paste
<gun1x> none work
<gun1x> if you have a readme, please share it
<gun1x> this happened after deploying kubernetes from canonical, using conjure-up
<rick_h> gun1x: https://jujucharms.com/grafana/
<rick_h> gun1x: yea, try changing the password on grafana: juju config grafana admin_password=testing
<rick_h> gun1x: looking at the paste for the other links
<gun1x> gunix@juju-k8s:~$  juju config grafana admin_password=testing
<gun1x> ERROR application "grafana" not found (not found)
<rick_h> gun1x: oh! ok so these are services running in the k8 cluster, not the juju charms themselves
<gun1x> yes.
<rick_h> tvansteenburgh: or kwmonroe any ideas on the default proxy setup that gun1x needs? ^
<gun1x> found it ... gunix@juju-k8s:~$ cat .kube/config.conjure-canonical-kubern-04a
<gun1x> well ... now that i get pass auth ... grafana is not working, influxdb is not working and heapster is not working. they are all returning errors. omg
<gun1x> heapster returns 404 page not found
<gun1x> grafana returns
<gun1x> {{alert.title}}
<gun1x> and influxdb returns:   "status": "Failure",
<gun1x>   "message": "no endpoints available for service \"monitoring-influxdb\"",
<gun1x>   "reason": "ServiceUnavailable",
<gun1x>   "code": 503
<gun1x> i am confused how this failed. it is recommended by kubernetes and it is promoted on the ubuntu website.
<jose-phillips> exist a way
<jose-phillips> to allow perform changes inside of the container?
<jose-phillips> and dont get deleted after reboot
<jose-phillips> ?
<stokachu> gun1x: think you need to run kubectl proxy
<stokachu> gun1x: before accessing those addons
<gun1x> stokachu: from what i see kubectl proxy is working. is there any command to prove this ?
<stokachu> gun1x: not sure unfortunately
<stokachu> ryebot: ^
<gun1x> :)
<stokachu> gun1x: trying to get you some additional resources
<stokachu> Cynerva: gun1x having trouble getting to grafana and other addons after running kubectl proxy
<stokachu> https://www.irccloud.com/pastebin/4CYh6F4F/
<gun1x> yep, that's it. but the kubernetes dashboard works
<gun1x> and it's behind the same proxy
<stokachu> so just grafana then?
<Cynerva> do metrics appear in the dashboard? that's where i'm used to looking for them
<Cynerva> never tried hitting the services directly
<gun1x> Cynerva: you mean grafana metrics in the kubernetes dashboard?
<Cynerva> gun1x: pod metrics and such that come from heapster
<gun1x> Cynerva: under workloads/pods within kubernetes-dashboard i get the following fields when clicking on a pod: details, containers, conditions, created by
<gun1x> the pod list has name, namespace, node, status, restarts, age
<gun1x> if something else is supposed to be here, i don't know
<gun1x> i will try to destroy this deployment and do conjure-up again, but this time with calico instead of flannel
<Cynerva> gun1x: hard to describe but it's usually pretty prominent, graphs and such on the pod overview, so they're probably missing
<gun1x> yea, no graphs here
<Cynerva> gun1x: if you haven't redeployed already, can you check `kubectl get po --all-namespaces` and make sure there's a influxdb-grafana-* pod and that it's in a Running state?
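For reference, the check Cynerva suggests can be scripted. This is a minimal sketch that parses the text output of `kubectl get po --all-namespaces` (the sample rows below are made up) rather than calling kubectl directly:

```python
# Sketch: verify an influxdb-grafana-* pod exists and is in the Running state,
# given the tabular output of `kubectl get po --all-namespaces`.
def find_pod(kubectl_output, name_prefix):
    for line in kubectl_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name, status = fields[1], fields[3]
        if name.startswith(name_prefix):
            return name, status
    return None, None

sample = """\
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   monitoring-influxdb-grafana-v4-hmt8j   2/2     Running   0          2h
kube-system   heapster-v1.4.3-abcde                  1/1     Running   0          2h
"""

name, status = find_pod(sample, "monitoring-influxdb-grafana")
```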
<Cynerva> i've got a deployment coming up so i can get familiar with how it works again xD
<gun1x> Cynerva: https://bpaste.net/show/9f1b4091fe3b
<Cynerva> okay, that looks normal
<knobby> maybe check the logs from the pods?
<gun1x> knobby: lol Can't access the Grafana dashboard. Error: Get http://admin:admin@localhost:3000/api/org: dial tcp: i/o timeout. Retrying after 5 seconds...
<gun1x> that is not the password
<gun1x> however this log has no timestamp
<gun1x> nvm found timestamp button
<gun1x> knobby:  logs are from 2017-12-04T20:55:21.375298866Z ... now it's 22:57, 2 hours later
<knobby> I meant for the pod itself: kubectl logs monitoring-influxdb-grafana-v4-hmt8j for example
<gun1x> knobby:  those are logs from the pod :)
<knobby> so the logs just say "Error: Get ..."?
<knobby> gun1x, ^
<gun1x> wait a sec, i need to google commands because kubectl has no autocompletion
<gun1x> knobby: https://bpaste.net/show/0775416c87b3
<gun1x> this is really confusing. i have no idea why it still tries with admin:admin
<gun1x> i tried from another browser and i get 2017-12-04T21:06:40.259719634Z t=2017-12-04T21:06:40+0000 lvl=info msg="Request Completed" logger=context userId=0 orgId=1 uname= method=GET path=/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/img/fav32.png status=404 remote_addr=192.168.122.1 time_ms=2ns size=23
<gun1x> but still {{alert.title}}
<gun1x> without any grafana content
<gun1x> maybe it's redirecting badly and i don't get some of the javascript or css content
<knobby> gun1x: my logs for my cluster show the same admin:admin error, but further down say "connected to the Grafana dashboard."
<gun1x> i will create a pod with novnc inside the deployment, maybe that works
<knobby> gun1x: is your influxdb log ok? mine is full of 204's
<gun1x> knobby: you mean this?
<gun1x> 2017-12-04T21:11:05.087279702Z [httpd] 10.1.93.0 - root [04/Dec/2017:21:11:05 +0000] "POST /write?consistency=&db=k8s&precision=&rp=default HTTP/1.1" 204 0 "-" "heapster/v1.4.3" a3b1eb77-d937-11e7-8190-000000000000 203
<knobby> same as mine
<gun1x> you just did a VM with ubuntu 16.04 and did conjure-up ?
<knobby> gun1x: bare metal and MaaS, but yes
<Cynerva> gun1x: okay, i get the same results you do when i try to hit the services via kubectl proxy - 404 from heapster, weird {{alert.title}} page from grafana, no endpoint found for influxdb
<Cynerva> for grafana, this bit from the browser console is fun: Loading failed for the <script> with source "http://localhost:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/public/app/boot.85c49108.js".
<gun1x> Cynerva: ...
<Cynerva> gun1x: i'm able to get metrics from heapster at least, with a little url exploration - can you try something like http://localhost:8001/api/v1/namespaces/kube-system/services/heapster/proxy/api/v1/model/metrics/ ?
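The URL Cynerva is exploring follows the apiserver proxy path convention (`/api/v1/namespaces/<ns>/services/<svc>/proxy/<path>`). A small helper (hypothetical, just to make the pattern explicit) shows how such URLs are built:

```python
# Build a kubectl-proxy URL for a service, following the pattern
# http://<proxy>/api/v1/namespaces/<ns>/services/<svc>/proxy/<path>
def proxy_url(service, path="", namespace="kube-system", proxy="localhost:8001"):
    base = f"http://{proxy}/api/v1/namespaces/{namespace}/services/{service}/proxy"
    return f"{base}/{path}" if path else base

# The heapster metrics URL from the discussion above:
heapster_metrics = proxy_url("heapster", "api/v1/model/metrics/")
```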
<gun1x> Cynerva: i will try to deploy this manually without juju
<gun1x> or with another solution, maybe openshift, i don't know
<Cynerva> okay
<stokachu> heh
#juju 2017-12-05
<bdx> @marcoceppi_ that was the most comprehensive and well executed talk and demo on k8s/CDK I've heard yet - by 20 minutes you had demoed and explained a good majority of the important top level information that people need to know
<bdx> it was clear, concise, and understandable
<bdx> you are getting good at this :)
<bdx> awesome news about rancher 2.0 too
<bdx> really cool
<derekcat> Hey @jose-philips!  Figured out my controller issue [the KVM that ran it died somehow!]
<derekcat> So I looked at both juju config percona-cluster and juju config mysql-hacluster, but I don't see a reference to general options in my.cnf?
<derekcat> Or if anyone else knows about the percona-mysql charm - is there a Juju way to change my.cnf options for the cluster?  For example, we need to set: ft_min_word_len=2
<bdx> derekcat: sup homie
<bdx> :)
#juju 2017-12-06
<gnuoy> Does anyone have any pointers on how I can further debug Bug #1736207 ?
<mup> Bug #1736207: Unknown OS for series bionic <NTP Charm:New> <https://launchpad.net/bugs/1736207>
<gnuoy> I don't seem to be able to deploy a charm that references bionic in its metadata
<gnuoy> (I'm not trying to deploy to bionic fwiw)
<rick_h> gnuoy: hmm, so the logic is supposed to be the order of the series called out in the metadata.yaml (first one is preferred) so I'd check that first
<rick_h> gnuoy: there is a default series config I believe, have to look that up.
<gnuoy> Even if I explicitly set the series in the juju cmd line it still fails with the same error
<rick_h> oh, in the bug I thought it worked
<rick_h> I see, it was the older charm my bad
<gnuoy> No, the bug alters the version of the charm. In both cases it's explicit about the series I think
<gnuoy> â« juju deploy --series=xenial ntp
<gnuoy> Located charm "cs:ntp-24".
<gnuoy> Deploying charm "cs:ntp-24".
<gnuoy> ERROR cannot add application "ntp": unknown OS for series: "bionic"
<rick_h> gnuoy: can you run juju model-config and check "default-series                default  xenial"
<gnuoy> â« juju model-config | grep default-series
<gnuoy> default-series                controller  xenial
<rick_h> gnuoy: honestly it sounds like a bug in the series selector not accepting the cli defined value but figure check the boxes
<rick_h> gnuoy: hmm, the same values are in some test charm bits in juju https://github.com/juju/juju/blob/698a34c22f2176a6de8cd24c5ea4aa5b11637069/acceptancetests/repository/charms/ubuntu/metadata.yaml
<gnuoy> It must be some config issue on my side.
<rick_h> gnuoy: I guess maybe check the cloud config? but really the cli is the boss and should be the last word on that.
<gnuoy> ack
<rick_h> gnuoy: I guess one other test is to download the charm and try it locally
<rick_h> gnuoy: at least see if there's any fishy-ness with charmstore bits. I don't think so but would be glad to narrow it down I guess.
<gnuoy> rick_h, seems to work locally http://paste.ubuntu.com/26125569/
<rick_h> gnuoy: hmm, ok. well that's different.
<gnuoy> haha, yeah. I have this concern that it's going to turn out I'm accidentally using py juju or something
<gnuoy> I see no logging around this on the controller. I pasted output with --debug in the bug too
<rick_h> gnuoy: yea, nothing fishy in the debug log output. Unfortunately no output showing why it thinks bionic is the great choice
<gnuoy> so you think it's picking bionic, then going wtf is bionic ?
<rick_h> gnuoy: well I just mean that something in there is deciding it needs bionic. At first I was wondering if it just didn't recognize the series and was blowing up unable to resolve what bionic was, but now I'm thinking there's some faulty logic somewhere and it's picking bionic to deploy with but no idea why
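The intended precedence rick_h describes can be sketched. This is a simplified model of how the series should be selected (not Juju's actual code; names are invented for illustration):

```python
# Simplified model of series selection: the CLI flag should win,
# then the model's default-series, then the charm's first listed series.
def select_series(cli_series, model_default, charm_series):
    if cli_series:
        if cli_series not in charm_series:
            raise ValueError(f"charm does not support series {cli_series!r}")
        return cli_series
    if model_default and model_default in charm_series:
        return model_default
    return charm_series[0]  # first series in metadata.yaml is preferred

# gnuoy's case: --series=xenial against a charm that lists bionic first
chosen = select_series("xenial", "xenial", ["bionic", "xenial", "trusty"])
```

Under this model the CLI value wins, so gnuoy's deploy should have chosen xenial; the bug is that something in the real selector still resolved to bionic.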
<rick_h> 30 mins until the juju show! Get your coffee refills on.
<rick_h> folks wanting to chat on the Juju show hop in https://hangouts.google.com/hangouts/_/yoyg3gi2xrg35alcdlnyhpjx3ue
<rick_h> and if you want to watch: https://www.youtube.com/watch?v=1LD05nnsq7E
<rick_h> kwmonroe: aisrael bdx cory_fu magicaltrout stokachu and anyone else ^
<Cynerva> is there a way to deploy a charm to aws with network spaces? i tried `juju deploy ubuntu --constraints spaces=public0,private0` but the resulting machine just gets a single network interface on the public0 subnet
<bdx> Cynerva: using the AWS provider, juju doesn't support adding a secondary interface (second space); only a single space constraint is supported
<Cynerva> ack, thanks bdx
<bdx> Cynerva: there should probably be a message to the user if the user tries to configure multiple spaces where it's not possible
<bdx> np
<bdx> I've granted "read" to everyone on a few charms and a few bundles, but only seeing the elasticsearch charm  https://jujucharms.com/u/omnivector/
<bdx> kind of odd
<bdx> hmmm, also, not all of the relations populate in the gui it seems https://jujucharms.com/u/omnivector/elasticsearch/8
#juju 2017-12-07
<bdx> nm^^, user error
<bdx> derekcat: I added a few bits to the percona-cluster charm, an extra option for "custom-config"
<bdx> derekcat: `juju deploy cs:~omnivector/percona-cluster-0 --config custom-config="mykey=myval,mykey2=myval2"`
<bdx> derekcat: or add the `custom-config` option to your charm config yaml file
<bdx> derekcat: then the config you have comma-delimited in custom-config will make it into your my.cnf
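Assuming bdx's comma-delimited format is plain `key=value` pairs, the expansion into my.cnf lines would look roughly like this (a sketch, not the charm's actual code):

```python
# Sketch: turn a custom-config string like "mykey=myval,mykey2=myval2"
# into [mysqld]-section lines for my.cnf.
def custom_config_to_mycnf(custom_config):
    lines = ["[mysqld]"]
    for pair in custom_config.split(","):
        key, _, value = pair.strip().partition("=")
        lines.append(f"{key} = {value}")
    return "\n".join(lines)

# derekcat's example option, ft_min_word_len=2:
snippet = custom_config_to_mycnf("ft_min_word_len=2,max_connections=500")
```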
<bdx> derekcat: it needed a small fix, use `juju deploy cs:~omnivector/percona-cluster-1 --config custom-config="mykey=myval,mykey2=myval2"`
<derekcat> bdx: That's awesome! :D  Thank you for doing that!
<bdx> derekcat: np
<bdx> derekcat: I'm kind of locking you in though
<derekcat> ?
<bdx> derekcat: lets still get a bug filed on that
<derekcat> Okey
<bdx> derekcat: because I have created a dependency on this snowflake charm that I crudely forked and modded
<bdx> and you probably don't want to be locked in to the rev of that charm that I forked and modded for you
<bdx> I mean, its not so bad
<bdx> but like, we just want that custom-config in the upstream charm anyway I think
<bdx> lets see if others agree
<bdx> derekcat: https://bugs.launchpad.net/charm-percona-cluster/+bug/1737022
<mup> Bug #1737022: support "custom-config" option <OpenStack percona-cluster charm:New> <https://launchpad.net/bugs/1737022>
<bdx> just click the "this bug affects me" button and you will have done your good deed for the day
<derekcat> bdx: Done!
<bdx> sweet!
<h00pz> wondering if there is a way to deploy gnocchi without storage-ceph ?
<h00pz> it seems the charm has a storage-ceph: ceph-client relation requirement (I'm not running ceph)
#juju 2017-12-10
<andrew> Is there a way to give a service a hostname in MAAS? Like, instead of juju-6150e5-0-lxd-0.maas something like openstack-dashboard.maas?
#juju 2019-12-02
<hpidcock> wallyworld: was there any more information on these py libjuju tasks? do you remember if any bugs were filed?
<wallyworld> hpidcock: pretty sure there's a bug for the storage one, i'll see if i can find it
<hpidcock> danke
<wallyworld> hpidcock: https://github.com/juju/python-libjuju/issues/361
<wallyworld> i can talk through the issue if you need clarification
<hpidcock> no that helps a lot thanks
<wallyworld> hpidcock: with the constraints bug, it's just a case of the api in the libjuju client not being implemented
<hpidcock> wallyworld: yeah I think I should be able to figure this one out, I'll probably hit you up if I have any questions
<manadart> Minor acceptance test fix: https://github.com/juju/juju/pull/10974
<manadart> achilleasa: Quick audit. The only higher order usage of device/address provider ID is AllProviderInterfaceInfos, which itself is unutilised.
<manadart> Except in tests.
<nammn_de1> manadart: do you have some time to talk about this https://github.com/juju/juju/pull/10967 and which edge cases you see in general? I added mine to the PR description
<nammn_de1> stickupkid: i added those 'edge'-cases i had in mind into the description. If you have some in mind feel free to add/comment them as well. I don't have that many in mind right now
<stickupkid> nammn_de1, will do
<nammn_de1> because right now I have 2 and the code only catches one, But there are probably more which i just cannot see
<manadart> nammn_de1: Give me a few.
<stickupkid> CR anyone https://github.com/juju/juju/pull/10975
<nammn_de1> manadart: sure just ping me if you have time
<achilleasa> manadart: so the solution we discussed shouldn't impact anything then, right?
<manadart> achilleasa: No current ops. We're OK for now.
<manadart> nammn_de1: Do you want to HO?
<nammn_de1> manadart: sure!
<manadart> nammn_de1: I'm in Daily.
<stickupkid> this is interesting - get a failure attempting to output juju status tabular with branches https://github.com/juju/juju/runs/329443547
<nammn_de1> anyone got 10 min time to compare vpn settings again? :D I just got out of a call with the support and we did not come to a conclusion :D
<stickupkid> nammn_de1, yap - 2 secs
<nammn_de1> while at it, does anyone know of any canonical service which can only be reached with vpn, excluding our own jenkins stuff (10.125.0.203)
<stickupkid> nammn_de1, in daily
<hml> stickupkid: looking
<nammn_de1> damn i keep forgetting that i should change the acceptance tests for 2.7 first and merge on develop..
<nammn_de1> Here is the backport for 2.7 https://github.com/juju/juju/pull/10976
<nammn_de1> manadart ^
<hml> stickupkid: approved
<timClicks> what do providers need to do to support multiple instance types? We allow openstack users to provide their own image metadata and have received requests to allow vSphere users to do the same..
<babbageclunk> timClicks: is instance type what you mean here? I think instance types are less flexible than picking CPUs/ram/disk directly, aren't they?
<timClicks> am not 100% sure. if I were to provide my own OVF, for example.. would I register that as an instance type?
<babbageclunk> just looking at what's in the ovf...
<timClicks> the end game would be to able to support centos7 images, for example
<babbageclunk> Right, I think that's definitely image rather than instance type
<babbageclunk> It doesn't seem like instances are constrained to only have the cpus/ram/disk specified in the ovf
<babbageclunk> but there's not an obvious way to use a different disk image.
<timClicks> no, because we customise that base template later
<babbageclunk> the vmdk is pulled from cloud-images.ubuntu.com - I guess if we could swap that out, it should allow substituting the image
<babbageclunk> We do have something called CustomImageMetadata in instance config - I'm trying to trace that down
<babbageclunk> timClicks: It seems like we *might* be able to make a substitute image metadata dir using `juju-metadata generate-image` that contains a different ova file? But I haven't played around with it
<babbageclunk> I'm not at all clear how to run it
<timClicks> btw I have updated the default theme in discourse
<timClicks> the theme is now more... on theme
#juju 2019-12-03
<kelvinliu> wallyworld: the review tool change is not ready yet. `confinement 'classic' not allowed with plugs/slots lint-snap-v2_confinement_classic_with_interfaces`
<wallyworld> kelvinliu: ah, they are behind schedule. i'll email them to ask
<kelvinliu> wallyworld: ok thx
<manadart> Need a forward merge review: https://github.com/juju/juju/pull/10978
<nammn_de1> manadart stickupkid as you guys already gave it a quick look, its ready for a review. Mostly little cleanup and updated the go doc  https://github.com/juju/juju/pull/10967
<nammn_de1> btw. manadart: there is no way for a controller to access another controllers collection, right?
<nammn_de1> at least not without a lot of overhead (?)
<stickupkid> nammn_de1, nope
<nammn_de1> stickupkid: thanksss
<stickupkid> nammn_de1, you'd need to write a LOT of api server/client apis
<nammn_de1> stickupkid: is there a quick way to tell from the package naming which packages run on the controller and which run on the (non-controller) units? Or something similar?
<stickupkid> nammn_de1, https://github.com/juju/juju/tree/develop/apiserver/facades
<nammn_de1> stickupkid: I see, somehow overlooked this 😭
<hallback> Interesting to hear about the CentOS + VMware issue, I know some people that are struggling with that, or at least would like to use it. I've used MAAS + my own CentOS images to be able to use Juju with CentOS until I figured out how to do it with LXD instead, but VMware would provide more use cases
<achilleasa> manadart: this is the final instancepoller PR https://github.com/juju/juju/pull/10979. I have also noted my findings about the dups in the linklayer devices/addresses collections
<manadart> achilleasa: OK; will look in a bit.
<achilleasa> manadart: take your time... it's a big one :-(
<nammn_de1> manadart did you mean that by renaming in your pr review? https://github.com/juju/juju/blob/4ad19fc6fc3105d0d94404cedfd9be33db4f34dc/state/modelmigration.go#L861 Wasn't 100% sure
<nammn_de1> wanted to make sure before merging
<nammn_de1> rick_h: what would be the ideal output of this bug be? https://bugs.launchpad.net/juju/+bug/1853529
<nammn_de1> possible options: https://pastebin.canonical.com/p/fVh89BcHQ9/
<mup> Bug #1853529: The lack of a controller is not an error <bitesize> <papercut> <ux> <juju:Triaged> <https://launchpad.net/bugs/1853529>
<stickupkid> manadart, when you get 5 minutes https://github.com/juju/description/pull/67
<stickupkid> i'm going to start on external controller
<manadart> achilleasa: Still NFI what is up with that. This fixes the error https://pastebin.canonical.com/p/kqVJGHFjJS/ but the test still times out waiting for the address.
<rick_h> nammn_de1:  I think option 3, just "No controllers registered"
<nammn_de1> rick_h: but this would mean for all the other commands then as well? E.g.
<nammn_de1>   juju offers, juju status, juju models are all returning the same message. Do we want to change them as well?
<manadart> nammn_de1: Yes, and rename LatestRemovedModelMigration as well. That can call CompletedMigrationForModel.
<stickupkid> manadart, external controllers done, I'm now going to wire them up, before landing the PRs https://github.com/juju/description/pull/68
<stickupkid> manadart, I'm unsure how relation id plays into this whole thing, so I may end up removing it before landing #67
<achilleasa> manadart: that's odd... I will spend some time in the morn commenting out things from the linked commit to see if I can find the block that makes it fail in the first place
<hml> timClicks__: did you get the provider instance types answer yesterday?
<timClicks> hml: not really, but I don't think that we have a lot of capacity available to implement what erik/scania would like
<timClicks> which is supporting centos7 images on vsphere
<hml> timClicks:  I only know a few, and not how it works for vsphere.
<hml> timClicks:  with vsphere it's in the config file we setup i believe.
<hml> timClicks:  the other is aws, where we pull down the data, and parse it for our use.  usually check when we update AZ
<timClicks> yeah, we build a custom base template image
<hml> timClicks: rackspace would be the same as openstack
<timClicks> sure
<hml> timClicks: at a very high level, if the vsphere has centos7, why couldnât we use it in the config file?
<timClicks> I believe we could, which is why I'm probing around asking questions.
<hml> timClicks:  :-D
<hml> timClicks:  related to the bug, the providers do add model config keys, at least the openstack provider does.  does vsphere not add the ones in your bug?
<timClicks> no, it doesn't
<timClicks> or at least, this is what users see when they try to set up vsphere
<hml> timClicks: hrm.. the values are there… but only if you have bootstrapped vsphere.
<hml> timClicks: https://github.com/juju/juju/blob/develop/provider/vsphere/config.go#L17
<hml> timClicks: juju show-cloud --include-config <vsphere-cloud>
<hml> wonder why they aren't seen?
<hml> i donât have a vsphere cloud defined locally to check.
<rick_h> hml:  timClicks what do we need? I've got one now :)
<timClicks> rick_h: I just filed a bug
<hml> rick_h: run the juju command above please
<hml> timClicks: were you bootstrapped to vsphere when you ran the commands?
<timClicks> i didn't, I copied that output from a discourse post from someone else
<rick_h> hml:  timClicks ok, ran the command...what am I looking for?
<hml> rick_h: what does "The available config options specific to vsphere clouds are:"
<hml> list
<rick_h> oh bummer I don't get to do that since I'm on JAAS
<hml> timClicks: okay, i'll check disposable2
<rick_h> so it doesn't give me cloud config in that way
<hml> rick_h: hrm, not cool, can i get a pastebin of what you do see?
<hml> I meant I'll check discourse
#juju 2019-12-04
<wallyworld> hpidcock: this looks bigger than it is - small code changes but significant test boilerplate, deletion of unused code etc https://github.com/juju/juju/pull/10982
<hpidcock> on it
<wallyworld> ty, sorry in advance
<wallyworld> it was a bitch to write
<hpidcock> wallyworld: LGTM, tried it out on gce with no issues
<wallyworld> hpidcock: awesome, ty
<wallyworld> still got 3 packages to go, won't be as hard
<wallyworld> i tested with k8s as well
<wallyworld> as they use resources etc
<timClicks> could someone please take a look at a discourse answer I've just given and check that it makes sense? I should really log off https://discourse.jujucharms.com/t/how-to-deploy-charms-to-specific-clusters-on-vsphere-cloud/2396/8
<manadart> stickupkid or nammn_de1: https://github.com/juju/juju/pull/10983
<stickupkid> love it when this happens https://paste.ubuntu.com/p/ccsyVWTC5r/
<achilleasa> stickupkid: you can just claim that one of the two contains a zero-width unicode character :D
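Zero-width characters really can make two strings that print identically compare unequal, which is why achilleasa's joke lands. A quick demonstration:

```python
# Two strings that render identically but differ by a hidden zero-width space.
plain = "status"
sneaky = "sta\u200btus"  # U+200B ZERO WIDTH SPACE hidden inside

def strip_zero_width(s):
    # Remove the common zero-width characters: space, non-joiner, joiner.
    for ch in ("\u200b", "\u200c", "\u200d"):
        s = s.replace(ch, "")
    return s

equal_before = plain == sneaky
equal_after = plain == strip_zero_width(sneaky)
```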
<manadart> achilleasa: Did you have any success with that test in your patch?
<achilleasa> manadart: still chasing it down :-(
<achilleasa> I am down to sprinkling fmt.Printf statements everywhere...
 * achilleasa needs stronger coffee...
<gnuoy> kwmonroe, hi there, a workaround that was put into layer-basic some time ago (https://github.com/juju-solutions/layer-basic/pull/51/files) seems to be no longer valid (https://github.com/juju-solutions/layer-basic/issues/149). I'd like to put up a PR to remove the workaround but am concerned I might set the world on fire. Do you have any thoughts on the issue ?
<gnuoy> fwiw just removing the allow_hosts line works for me but am I guessing that was needed in the original work around
<gnuoy> s/but am I/but I'm/
<achilleasa> manadart: hmmm... I still need to add a "NewScopedProviderAddressInSpace" c-tor though
<manadart> achilleasa: Yeah, that's OK for now.
<nammn_de1> rick_h and anyone else regarding the "not returning an error if no controller is found". I would love to have some opinions on this one https://github.com/juju/juju/pull/10985/files#diff-bcdb02984a8543f1187ed21cb4812a8fL206 Just opened a draft to see how one could handle it. Wrote more on the gh PR
<rick_h> nammn_de1:  cool otp but will take a peek after lunch ty
<nammn_de1> rick_h: sure its just initial draft to discuss possible solutions, as this would require changes on multiple places
<achilleasa> manadart: I think I found the issue that broke that test...
<manadart> achilleasa: nice.
<achilleasa> manadart: when I added the new shims for testing the link layer device update path I accidentally made the StateMachine interface in the instancepoller test code incompatible with what FindEntity returned...
<achilleasa> obviously it worked with the mocked state in the instancepoller package but exploded for that particular agent test
<kwmonroe> hey gnuoy, i've read the upstream issues once per hour since you pinged me, and i still don't have a good feeling about the ramifications of removing allow_hosts.  punting to cory_fu because he loves issues like https://github.com/juju-solutions/layer-basic/issues/149
<gnuoy> haha, sounds good to me
<cory_fu> kwmonroe, gnuoy: Since we have a pre-bootstrap step to update pip & setuptools, as long as those get the new enough version to have the "never use easy_install" behavior for packages using install_requires, I think it should probably be ok.  Definitely should get plenty of testing, though.
<cory_fu> Oh, it probably depends more on the version of pip & setuptools used by the build process, actually, rather than at deploy time.
<cory_fu> Should make it easier to test, at least.
<cory_fu> Oh, nm, that bit is in fact to ensure that the bootstrap phase doesn't hit pypi for some packages.  So it does require a deploy test, and it requires one in a network-restricted environment (or at least blocking pypi, or maybe checking the logs for indications that pypi is being used)
<cory_fu> gnuoy: So, testing will be a pain, but I'm cautiously optimistic that it will work fine.
<cory_fu> I really really hope we can come up with a better method for dependency management for charms soon (new framework, then maybe backported to reactive).  I'm hopeful that the ESM stuff can inform us on better patterns
<babbageclunk> wallyworld: I'm updating the CrossModelRelations client to be able to talk to either v2 or v1 of the server API (expanding the v1 events into v2 ones if needed). I don't think I need to do that for the RemoteRelations, since that's inside one controller - the API version will always be >= than the "client" (actually a worker in the controller agent anyway).
<babbageclunk> wallyworld: does that make sense to you?
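The compatibility approach babbageclunk describes (talk v2 natively, expand v1 events into the v2 shape when connected to an older server) can be sketched like this. The field names here are invented for illustration, not the actual CrossModelRelations wire format:

```python
# Sketch: normalise a v1-shaped event into the v2 shape.
# Hypothetical fields: v1 carries a single "address" string,
# v2 carries a list of address records under "addresses".
def expand_event(event, server_version):
    if server_version >= 2:
        return event  # already in the v2 shape, pass through
    expanded = {k: v for k, v in event.items() if k != "address"}
    expanded["addresses"] = [{"value": event.get("address")}]
    return expanded

v1_event = {"relation": "db", "address": "10.0.0.7"}
v2_event = expand_event(v1_event, server_version=1)
```

The key point is the one babbageclunk makes: only the cross-controller client needs this shim, because within one controller the API version is always >= the worker's.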
#juju 2019-12-05
<lucidone> Hi, when running juju add-k8s $cluster; juju bootstrap $cluster; against a k8s cluster in openstack. The controller service comes up with a ClusterIP. As such, it fails when it tries to contact the juju controller / api server because the address is inaccessible. I'm probably blind / doing something wrong,  but is there a config setting to make it use a spec.type=LoadBalancer instead?
<wallyworld> lucidone: there's a new 2.7 feature https://discourse.jujucharms.com/t/whats-new-in-juju-k8s-for-2-7/2300
<wallyworld> you can specify a bootstrap option
<wallyworld> hpidcock: the final PR to remove charmstore.v5 https://github.com/juju/juju/pull/10988
<hpidcock> ok
<lucidone> wallyworld: ah, magic! thanks
<lucidone> --config controller-service-type=loadbalancer worked a charm ;)
<thumper> I'd love a review of my ratelimit PR: https://github.com/juju/juju/pull/10987
<hpidcock> thumper: having a look
<thumper> hpidcock: thanks
<achilleasa> manadart: I fixed the broken test (https://github.com/juju/juju/pull/10979/commits/dcd69a17088c7d19bf5bcf95e7e025b661440dcc) and I think I got all your suggested changes in 10979; can you take another look?
<manadart> achilleasa: Yep.
<achilleasa> manadart: also, I got rid of some not needed code in the instancepoller facade which triggered the original type-casting error we were seeing (this line: https://github.com/juju/juju/pull/10979/commits/2393a6d3d1ce273564080359a62d46090b952ceb#diff-d177f61f881e38ac7ccb857814927e3fR118)
<nammn_de1> manadart stickupkid up for a small review? https://github.com/juju/juju/pull/10985
<nammn_de1> rick_h: regarding the patch we were talking about yesterday
<nammn_de1> ^
<rick_h> nammn_de1:  did you get the feedback from thumper?
<rick_h> I mentioned it on a call with him and he was going to take a peek
<nammn_de1> rick_h: yes I did, I tried to work his feedback in, but needed to adjust a few places different
<nammn_de1> I need make the err silent as manadart has mentioned
<rick_h> nammn_de1:  gotcha, sorry yea just catching up on email this morning
<nammn_de1> manadart: on your comment: https://github.com/juju/juju/pull/10985#issuecomment-562123107
<nammn_de1> That are the options I see, any else in your mind?
<stickupkid> hml, can you check this one, this is the one that breaks all the tests :( https://github.com/juju/juju/pull/10990/files
<hml> stickupkid: looking
<hml> stickupkid: is there face to palm ascii art?
<hml> for me?
<stickupkid> generally Patrick Stewart isn't it?
<stickupkid> https://i.kym-cdn.com/entries/icons/original/000/000/554/picard-facepalm.jpg
<hml> errrâ¦ donât know that one
<hml> ah, i was thinking ascii, not jpg
<stickupkid> interestingly I've met him on multiple occasions, depends if you're a trekkie or not
<hml> doh
<stickupkid> https://www.google.com/search?q=facepalm+ascii&sxsrf=ACYBGNQ_Sdi-AUddeexe3IF_Ujw4YkKalw:1575556212640&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiH0oKj3J7mAhXgURUIHfSOCC8Q_AUoAXoECA0QAw&biw=1272&bih=635#imgrc=-uejGHWQzbfg2M:
<hml> stickupkid: seen a lot of it.
<hml> wow
<hml> stickupkid: posted 2 comments
<rick_h> hml:  can I bug you back in daily around the error message for the neutron thingy please?
<stickupkid> hml, done https://github.com/juju/juju/pull/10990
<hml> stickupkid: approved
<stickupkid> ta
<hml> anyone know if there's a juju command to get the <commands> allowed with juju-run?  or is it just in docs?
<hml> found it
<timClicks> we received a 35x increase in anonymous pageviews on our discourse yesterday
<rick_h> timClicks:  :/
<timClicks> not sure what the significance is; but it does throw everything out
<rick_h> yea, I bet
<timClicks> blog post about juju on the ubuntu blog!  https://admin.insights.ubuntu.com/2019/12/05/web-application-development-with-juju-charms/
<thumper> hml: here's a quick PR for you https://github.com/juju/juju/pull/10992
<thumper> timClicks: can we catch up now?
<thumper> timClicks: Friday meetings have been all screwed up since daylight savings
<timClicks> thumper: sure, 2 mins
<hml> thumper: looking
<thumper> hml: came across them trying to land my other branch
<hml> thumper: we seem to be hitting more intermittent failures again recently.  nice to see a fix for one.
<hml> or two
<wallyworld> hml: tiny https://github.com/juju/bundlechanges/pull/58 pretty please
<hml> wallyworld: looking
<hml> wallyworld:  juju:master?
<wallyworld> hml: parse error?
<hml> wallyworld:  looks like your pr is trying to merge into master instead of develop
<wallyworld> this is bundlechanges
<wallyworld> a different repo
<hml> wallyworld: ah
<wallyworld> we need to use bakery.v2 in juju and it's a friggin rabbit warren to get all the dependencies sorted
<hml> wallyworld:  definitely!  we've all(?) been down that path and run screaming at one time or another.  :-D
<wallyworld> i'm living it :-( for the past week
<hml> wallyworld: :-(
<wallyworld> finally got charmstore.v5 all removed
<wallyworld> now hitting issues with go dep
<wallyworld> ty fore review
#juju 2019-12-06
<hpidcock> wallyworld: is there a way to tell juju client to log out facade calls?
<wallyworld> hpidcock: what's the context?
<hpidcock> ahh just want to see exactly what add-k8s is doing
<hpidcock> so I can verify it with python-libjuju
<wallyworld> oh, right. you can turn on DEBUG or TRACE for the apiserver package
<wallyworld> trace gets you content of the calls, or else they are redacted
<wallyworld> if you bootstrap with --debug you will see most of what you want
<wallyworld> i forget the exact package name to trace
<wallyworld> starting with --debug will show what you might then want to set to trace
<hpidcock> wallyworld: danke, I just set TRACE on everything
<hpidcock> got what I want thanks
<nammn_de1> stickupkid: around for quick ho?
<stickupkid> yeah, give me 2 minutes
<stickupkid> nammn_de1, in daily
<nammn_de1> rick_h: can you give me a ping when you around? stickupkid and I have a small question for you
<nammn_de1> stickupkid: ahh you wanted to give some input here: https://github.com/juju/juju/pull/10985 Last comment are my ideas, any others in mind?
<nammn_de1> I prefer the second option from that comment
<stickupkid> nammn_de1, let me get back to you on that, I'm knee deep in ACLs :D
<nammn_de1> stickupkid: no worries
<rick_h> nammn_de1:  what's up?
<nammn_de1> rick_h: was talking with stickupkid about how we want to return formatting directives on error cases.
<nammn_de1> https://pastebin.canonical.com/p/p25K6HzSjN/
<nammn_de1> and wanted your input on that
<nammn_de1> manadart stickupkid: https://github.com/juju/cmd/pull/70 wdyt?
<nammn_de1> Did I get it correct: does the "allwatcher" watch collections in mongodb and, on change, push those into a channel?
<stickupkid> nammn_de1, it uses the oplog, it's worth reading about that
<nammn_de1> stickupkid: thanks, ah okay, it uses the oplog to keep track of changes and notifies accordingly, as described above..
<nammn_de1> right?
<stickupkid> yeah
<nammn_de1> stickupkid: thanks!
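The pattern nammn_de1 is describing — a watcher tails a change log and pushes notifications onto a channel — can be modelled in miniature. This is a toy, not Juju's actual allwatcher or MongoDB's oplog:

```python
import queue

# Toy watcher: entries appended to an oplog-like list are delivered
# to subscribers through a queue (standing in for a Go channel).
class ToyWatcher:
    def __init__(self):
        self.oplog = []
        self.changes = queue.Queue()

    def record(self, collection, doc_id):
        entry = {"collection": collection, "id": doc_id}
        self.oplog.append(entry)   # append to the change log
        self.changes.put(entry)    # notify watchers

w = ToyWatcher()
w.record("applications", "grafana/0")
first_change = w.changes.get_nowait()
```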
<rick_h> bdx:  what crazy thing are you trying to do now :P
#juju 2019-12-07
<bdx> haha
<bdx> rick_h: just an idea that came up in conversation this morning
<bdx> while on charm config
<bdx> do we have a way of excluding lower layer's config from ending up in the built charm?
