#juju 2011-11-14
<fwereade> hazmat, niemeyer: been trying to figure out the details of the unit agent for upstartification
<niemeyer> fwereade: Cool
<fwereade> hazmat, niemeyer: there's talk of in-memory state we need to persist; is that *just* UnitLifecycle._relations, which is all I can obviously see, or do you know of more things I should be worrying about?
<fwereade> niemeyer, I don't really expect you to reread all the code tbh, I'm just hoping for a quick yea/nay or just a "figure it out yourself :p"
<niemeyer> fwereade: One of the most interesting bits is in HookScheduler
<niemeyer> fwereade: You're right that the relation is a problem as well, though
<fwereade> niemeyer, hm, I'd missed that, I'll take a look
<fwereade> thanks :)
<niemeyer> fwereade: Due to the use of ephemeral nodes
<fwereade> niemeyer, ahhh
<niemeyer> fwereade: When we restart the agent for whatever reason, ideally the underlying connectivity shouldn't go away
<niemeyer> fwereade: Which raises further questions about how to handle it
<niemeyer> fwereade: The timeout should probably be moved into application logic rather than relying on ephemeral nodes for that
<niemeyer> fwereade: That said, it's probably a step you can care about at a later point
<niemeyer> fwereade: The HookScheduler I'm mentioning will simply yield weird faults if not handled properly
<niemeyer> fwereade: Which is worse than having a relation restarted
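[editor's note: niemeyer's concern about ephemeral nodes can be illustrated with a toy model. This is not real ZooKeeper or juju code — all names are hypothetical — it only shows why state tied to an ephemeral node vanishes when the owning session closes, which is the agent-restart hazard being discussed.]

```python
# Toy model (not the real ZooKeeper API): a session that deletes its
# ephemeral nodes on close, the way ZooKeeper does on session expiry.

class Session:
    def __init__(self, store):
        self.store = store          # shared node map standing in for ZK
        self.ephemerals = set()     # nodes owned by this session

    def create(self, path, ephemeral=False):
        self.store[path] = b""
        if ephemeral:
            self.ephemerals.add(path)

    def close(self):
        # On session close/expiry, ZooKeeper deletes ephemeral nodes.
        for path in self.ephemerals:
            del self.store[path]
        self.ephemerals.clear()

store = {}
agent = Session(store)
agent.create("/relations/r-1/settings", ephemeral=False)
agent.create("/relations/r-1/unit-0", ephemeral=True)  # presence node

agent.close()  # agent restart without keeping the session alive
# The presence node is gone; peers observe a spurious departure,
# while durable state survives.
assert "/relations/r-1/unit-0" not in store
assert "/relations/r-1/settings" in store
```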
<fwereade> niemeyer, not entirely following you, but I think I just need to spend a bit more time on the code
<fwereade> niemeyer, haven't formed a really good model of how it all fits together yet
<niemeyer> fwereade: What part?
<niemeyer> fwereade: I mean, which part aren't you following? Can I explain something else, or would you rather spend some time looking at the code?
<fwereade> niemeyer, all the parts I've looked at make sense; I just don't have instant recall of how it all fits together
<fwereade> niemeyer, I think I'll learn best by just diving in and breaking stuff
<fwereade> niemeyer, I'll certainly hassle you if I need help with something specific
<niemeyer> fwereade: I easily get lost as well, to be honest, which is something I'd like to fix
<fwereade> niemeyer, I'm not sure whether that's reassuring or alarming :p
<fwereade> a bit of both maybe :)
<niemeyer> fwereade: ;)
<hazmat> fwereade, g'morning
 * hazmat has been watching lbox manipulate the rietveld queue
<fwereade> heya hazmat
<hazmat> fwereade, re persistence for the unit agent and upstart, i think they're different problems
<fwereade> hazmat, hmm; I got the impression that UA persistence was a prereq
<hazmat> fwiw workflow states are recorded to zk, the memory state is the seen nodes / watches that input into the hook queue / scheduler
<fwereade> hazmat, I'd figured that much out at least :)
<niemeyer> hazmat: Hm?
<hazmat> fwereade, its not a pre-req, they're both problems, solving the restart problem fully means doing both
<fwereade> hazmat, indeed, I'd seen "upstartify" as meaning "upstartify and ensure it's actually useful"
<hazmat> fwereade, upstartification applies to lots of things, all the agents, and likely additional things like local provider processes
<niemeyer> hazmat: How's lbox manipulating the rietveld queue?
<hazmat> fwereade, its useful by itself
<hazmat> niemeyer, oh.. i guess you were using it by hand?
<niemeyer> hazmat: I don't know.. what are you referring to? :-)
<hazmat> niemeyer, http://codereview.appspot.com/5372097/
<niemeyer> hazmat: This was created after your initial comment, which makes me wonder where is this showing?
<niemeyer> hazmat: This is the rietveld Go package.. almost there
<hazmat> niemeyer, i saw one a few hrs ago (2.5)
<niemeyer> hazmat: Where?
<hazmat> niemeyer, same url same content
<niemeyer> hazmat: I mean.. is there a feed somewhere that you're watching?
<hazmat> well maybe not same url, but same content
<hazmat> niemeyer, no just hitting the front page
<niemeyer> hazmat: Hah :-)
<fwereade> hazmat, I have the impression that upstartifying the UA without persistence is... of very limited utility, if not actively more harmful than the current state
<fwereade> MA/PA I don't see issues with upstartifying, agreed
<niemeyer> hazmat, fwereade, rog: http://goneat.org/lp/goetveld/rietveld
<niemeyer> Now, to tweak lbox so it sends the delta on propose
<hazmat> fwereade, unit agent sans persistence will invoke relation hooks redundantly, that's not the end of the world
<fwereade> hazmat, ah, I had a feeling it might be able to miss them as well
 * hazmat checks the codz
<hazmat> fwereade, it should just restart the relation.RelationUnitWatcher which will re-feed to the hook queue based on current state
<hazmat> fwereade, a change event while down will get mutated into a joined event
<hazmat> actually it still gets a changed event since we have expansion of joined to (joined and changed)
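[editor's note: the expansion hazmat describes can be sketched as follows. Function names are hypothetical, not the actual HookScheduler code; the point is that a "joined" membership event is expanded so the charm also gets a "changed" hook, which is why a settings change missed while the agent was down still surfaces once the watcher restarts and re-reports the unit as joined.]

```python
# Hypothetical sketch of the joined -> (joined, changed) expansion.

def expand(event):
    if event == "joined":
        # A newly-seen unit always gets a changed hook after joined.
        return ["joined", "changed"]
    return [event]

def schedule(events):
    queue = []
    for ev in events:
        queue.extend(expand(ev))
    return queue

# A unit first seen after an agent restart yields both hooks:
assert schedule(["joined"]) == ["joined", "changed"]
# A plain settings change passes through unchanged:
assert schedule(["changed"]) == ["changed"]
```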
<fwereade> hazmat, hmm, ok, shall I just look at upstartification for now then?
<fwereade> hazmat, hopefully the UA stuff will bed into my brain for a bit while I work on something else ;)
<hazmat> fwereade, sounds good
<fwereade> hazmat, ok, cool :)
<fwereade> cheers
<hazmat> fwereade, incidentally there was some concern about lost events, in the case that the session was active and we re-established the process hooked up to the same session, but the recent tests in txzk demonstrate that watches are still delivered when the client reconnects to the extant session
<fwereade> hazmat, ah cool, perhaps that was what I was thinking of
<fwereade> ta
<rog> niemeyer: cool!
<rog> niemeyer: is there anything rietveld-specific about rietveld.Auth ?
<niemeyer> rog: Not sure about what's the underlying question
<rog> niemeyer: just wondered if it would be worth putting in its own package, so it can be used by other people talking to google apps
<niemeyer> rog: Maybe..
<niemeyer> rog: But I'm happy with it for now
<rog> niemeyer: just a thought
<niemeyer> rog: Yeah, I understand.. just saying the actual use case right now is Rietveld
<niemeyer> rog: People can freely factor it out, though
<rog> niemeyer: that was the reason for my original question - because it's in the rietveld package, it's not clear whether it has rietveld dependencies. i don't mind - it was just my first reaction on seeing it.
<niemeyer> rog: I understand.. :-)
<uksysadmin> hi all
<uksysadmin> got a quick q - my bzr setup seems to have gone screwy... think I'm ok now but I've got a situation where in my charms dir, where I have all my charms checked out using "charm getall oneiric", I get the following:
<uksysadmin> mr update: /home/kevinj/cloud/charms/hadoop-slave
<uksysadmin> bzr: ERROR: No location specified or remembered
<uksysadmin> mr update: command failed
<uksysadmin> etc
<hazmat> uksysadmin, i'd try just blowing away the hadoop-slave charm dir.. and let it refetch, mr can get wedged if things change on it
<uksysadmin> np - will have another go
<uksysadmin> hazmat, that's done it
<uksysadmin> ta
<hazmat> np
<niemeyer> Phew..
<mcclurmc> hi all
<mcclurmc> i've heard that there is an openstack service provider for juju, but it's not in the main source repository. can anyone point me to it?
<hazmat> mcclurmc, the ec2 provider works with openstack
<hazmat> niemeyer, got a moment?
<mcclurmc> hazmat: ah, of course. thanks
<mcclurmc> hazmat: actually, this might be a better question for #openstack, then, but how does it handle storage? is there an S3 api for openstack as well?
<hazmat> mcclurmc, there is, the juju ec2 provider has been tested with the simple s3 objectstore that's distributed with openstack/nova, it should work with swift if it's got the s3 middleware enabled but that hasn't been tested by the juju team yet
<mcclurmc> hazmat: great, thanks
<hazmat> jcastro, ping
<_mup_> juju/local-repo-log-broken-charm r414 committed by kapil.thangavelu@canonical.com
<_mup_> unify metadata/config error handling, distinguish config definition errors from value errors, repository find notes errors and location/path on charms.
<mpl> hazmat: thx for the reply on the list.
<hazmat> mpl, np
<mpl> hazmat: the reason I'm not working from local containers is I intend to play with the go code, and gustavo said I'd be better off directly on an ec2 for that.
<hazmat> mpl, not sure why that would be the case, is the code something likely to overwhelm a single machine.. you're working with camelstore?
<hazmat> mpl, local provider isn't perfect, its got some known issues (doesn't survive restarts or suspends)
<jcastro> hazmat: pong
<hazmat> but as long as what you're building isn't going to demand more than the underlying machine can handle it should be fine.. if you're looking for large memory instances or large cpu.. then going native in ec2 from the start makes sense
<hazmat> niemeyer, ^?
<mpl> hazmat: iiuc, the go port atm is only working with ec2 and focusing on that. that may be the reason.
<hazmat> jcastro, hi.. i wanted to catch up on charm stuff
<jcastro> hazmat: yeah sure, gimme 5?
<hazmat> jcastro, sounds good
<niemeyer> hazmat: Yo
<niemeyer> Sorry, I'm just up from a nice, long, restful nap..
<niemeyer> mpl: That's right
<niemeyer> mpl, hazmat: That was indeed the reason..
<rog> niemeyer, fwereade: any response to my thoughts on william's review of the initial ec2 stuff? particularly point [1].
<niemeyer> rog: Sorry, I haven't been watching reviews much in the hopes we can move them over to Rietveld ASAP, but let me check that one
<rog> niemeyer: that's fine, i thought as much. this is a central (and interesting) point though, and worth thinking about.
<niemeyer> rog: I agree with fwereade in general
<niemeyer> rog: The Machine should probably be an interface rather than a concrete type
<niemeyer> rog: and it should definitely not expose a ec2 instance in its interface
<rog> niemeyer: it is already an interface
<niemeyer> 694	+type Machine struct {
<niemeyer> 695	+ MachineId string
<niemeyer> 696	+ *ec2.Instance
<niemeyer> 697	+ Reservation *ec2.Reservation
<niemeyer> 698	+}
<niemeyer> rog: That's not an interface
<rog> niemeyer: it's just an implementation of juju.Machine
<rog> niemeyer: i could avoid exporting it though
<niemeyer> rog: Right, that's what I meant
<rog> niemeyer: the renaming i'm proposing is to rename juju.Machine to juju.Instance
<niemeyer> rog: juju.Machine sounds good
<rog> niemeyer: but the problem is that, as fwereade points out, there's not necessarily a direct correspondence between instances and machines
<rog> niemeyer: at least, i *think* that's his point.
<niemeyer> rog: It's not
<niemeyer> rog: The point is that the instance underneath can change
<rog> niemeyer: doesn't that imply what i just said?
<fwereade> niemeyer, rog: that said, juju.Instance might be a better name than juju.Machine, because it's less likely to occupy the same mental register as "machine" (as in machine state)
<rog> niemeyer: i.e. a machine can correspond to different instances over time
<rog> fwereade: that's what i'm thinking
<fwereade> niemeyer, rog: OTOH it's quite ec2-specific
<niemeyer> rog: It can, but there's a direct correspondence between a machine and an instance, which is what you implied to not be the case
<rog> niemeyer: hmm. i think i'm probably missing something. how does it happen that an instance underneath can change?
<niemeyer> fwereade, rog: Hmm.. maybe.. let me see the code again, please
<niemeyer> rog: I'll let fwereade explain while I have a quick look at the code
<niemeyer> brb
 * rog is all big flappy ears
<fwereade> niemeyer, rog: I *think* you actually agree but are using incompatible words :)
<rog> wouldn't be the first time!
<rog> lol
<niemeyer> fwereade: I certainly don't disagree with anything right now.. :-)
<niemeyer> Just trying to sort out what we're after..
<TheMue> niemeyer: Did you follow my mail correspondence with Robbie?
<rog> but it was a genuine question: how *can* an instance change when the juju machine itself remains the same?
<fwereade> niemeyer, rog: I would be happy with a juju.Instance, so long as it didn't have any *machine*-specific data tacked on
<fwereade> rog: if I were to go and terminate-instance, juju would notice the instance was gone and bring up a new one
<fwereade> rog: that's ec2 terminate-instance, that juju is unaware of
<rog> fwereade: that's what i thought. but it needs to be possible, i think, to tag a machine (as ec2 does with the security group)
<fwereade> rog: what are we missing if we don't have that?
<niemeyer> fwereade, rog: I think you guys are onto something for sure.. just want to see the code to ensure I'm getting it too
<rog> fwereade: good question.
<rog> fwereade: i just saw that that's what the code does, and haven't seen how it's actually used.
<niemeyer> rog: I think it's not part of the actual ProviderMachine interface, though
<fwereade> rog: as I see it, if an instance is associated with a machine, we can find that out from ZK; we might want to interact with i-foobar because it corresponds to machine/27
<niemeyer> rog: So this again holds up your/William's suggestion that an Instance would make more sense
<fwereade> rog: but I can't think of a case where we only have i-foobar and need to find out what juju machine it corresponds to
<rog> niemeyer: what isn't? the tag?
<niemeyer> rog: The information used to tag a machine isn't part of the ProviderMachine interface
<rog> niemeyer: you mean the juju.Machine interface?
<rog> (as currently, or juju.Instance as proposed)
<niemeyer> rog: I mean the ProviderMachine interface
<fwereade> guys, sorry, dinner is ready and apparently it's important that it be eaten hot
<niemeyer> rog: We're still defining what it looks like in juju, so that might be ambiguous
<fwereade> I'll pop back on when I can
<niemeyer> rog: I can correctly state that about the current code, though
<niemeyer> rog: Sorry, still defining in the Go port
<rog> fwereade: okeydokey
<niemeyer> fwereade: Cheers man
<niemeyer> rog: Ok, in general, I'm happy with yours/William's suggestion of using juju.Instance
<niemeyer> rog: Where today we have ProviderMachine
<rog> niemeyer: by "ProviderMachine" you mean some as-yet-to-be-defined thing?
<rog> ah, the python code!
<niemeyer> rog: Yeah, that forgotten piece of code that happens to be the critical reference!
<rog> doh
<rog> sorry, living in two worlds
<niemeyer> rog: So, for the docs of juju.Instance: it needs to be Machine agnostic, and Provider agnostic, which is interesting
<rog> niemeyer: yeah. that seems ok to me.
<rog> niemeyer: it's already close to that. except that it mentions machine id
<niemeyer> rog: So juju.Instance is the internal representation of that thing that a provider calls a machine :-)
<niemeyer> rog: Yeah, that's the bit worth cleaning up
<rog> niemeyer: definitely. that's good.
<niemeyer> rog: Awesome, thanks for the suggestion.. will be a nice mental clean up :)
<rog> niemeyer: that's kind of how i thought of it before, but i hadn't realised the loose relationship between instance and machine.
<rog> right, Machine -> Instance it is. and instance id becomes defined by Instance rather than being given to StartMachine.
<rog> niemeyer: and for now, i won't provide the ability to tag a machine on startup. that can be added back later.
<niemeyer> rog: I think there may be some confusion taking place still
<rog> niemeyer: quite probably!
<niemeyer> rog: start_machine takes a machine id, and it needs it
<rog> niemeyer: ah. why's that?
<niemeyer> rog: It should probably be called start_instance, though
<TheMue> beside the wiki, do we have some diagrams showing how the software is organized and how the components play together?
<niemeyer> rog: But it needs a machine id still, not an instance id
<niemeyer> TheMue: Sorry, I missed your earlier message
<TheMue> niemeyer: nopro
<niemeyer> TheMue: I've followed a few threads, but I'm not sure we're referring to the same one
<TheMue> niemeyer: regarding a talk on the parallel 2012 in May here in Germany
<niemeyer> rog: I suggest following through in the Python code so you understand more of that and see the separation of concepts that already exists
<rog> niemeyer: how does the machine id get used by the provider?
<TheMue> niemeyer: should not conflict with the next summit
<niemeyer> rog: Despite the name, MachineProvider is already not the same as a state machine
<niemeyer> TheMue: Ah, yeah
<rog> niemeyer: ?
<niemeyer> TheMue: It should be fine for sure
<niemeyer> rog: !
<niemeyer> See, that's my job.. ;-D
<rog> niemeyer: did i somehow imply that it was?
<niemeyer> rog: We're just evolving an idea.. not every comment has to be a disagreement :-)
<niemeyer> rog: Check the code out.. it'll be enlightening
<rog> niemeyer: eek! i wasn't disagreeing! i was trying to find out what you understood by what i said
<TheMue> niemeyer: maybe we'll find a good topic matching how we use the concurrency of go in juju.
<niemeyer> TheMue: Sounds good.. even though I hope our use of concurrency is simplistic enough to not be a fantastic highlight :-)
 * hazmat finishes lunch
<hazmat> TheMue, greetings
<TheMue> niemeyer: my other topic would be concurrent design with message based communication as a more natural approach for software architectures
<TheMue> hazmat: hi
<hazmat> that reminds me of the start of the project :-)
<niemeyer> TheMue: My first reaction to the topic is that it feels a little bit overcovered already
<rog> niemeyer: AFAICS it looks like the machine-specific security group is just used for opening ports. am i missing something?
<TheMue> niemeyer: ok, maybe from a too high level. i like the idea of event-driven architectures.
<hazmat> TheMue, state based approaches have some advantages over message systems, esp. as one considers fault scenarios
<hazmat> TheMue, i'm a fan as well.. but i think the state based approach is a feature of ensemble and makes reasoning about the system much more succinct
<niemeyer> rog: No, that's right.. the machine id in start_machine/instance is used for bootstrapping the node too, though
<niemeyer> rog: So we can't get rid of it
<rog> niemeyer: ah, i must have missed that bit
<niemeyer> hazmat: TheMue is referring to internal architectures
<niemeyer> rog: Otherwise the starting machine has no idea of its purpose in the world (poor guy :-)
<TheMue> hazmat: event-driven and state based are not opposites for me
<rog> niemeyer: ah, i thought it knew from its boot arguments.
<niemeyer> rog: It does!
<niemeyer> rog: Where do the boot arguments come from? ;-)
<rog> niemeyer: argument to StartInstances?
<niemeyer> rog: Right
<rog> niemeyer: so, i must be missing something. where does the machine id come in there?
<rog> BTW i have to go in 1 minute!
<niemeyer> rog: Check the MachineProvider in the ec2 backend
<niemeyer> rog: start_machine method
<hazmat> TheMue, i'm missing the context then of what you mean by event driven architecture
<rog> niemeyer: ah yes. i guess it could be some other unique string but then we wouldn't know how to remove it.
<hazmat> the state system is observable and the observations could be decomposed into an event based dispatch driving application logic, so they are compatible, but it's hard to reason about the usefulness in the abstract; the observation pattern against state is a non-concurrent one
<niemeyer> rog: Btw, the _machine_ id is a number
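[editor's note: the machine/instance split being settled here can be sketched with a toy model. This is not juju's actual code and the names are hypothetical; it only illustrates the agreed points: the machine id is a stable small number that start_machine/start_instance needs so the new node knows its purpose, while the provider-assigned instance behind it can be replaced, e.g. after an out-of-band terminate-instance.]

```python
import itertools

# Hypothetical provider-assigned instance ids, in ec2-ish style.
_instance_ids = (f"i-{n:05x}" for n in itertools.count(0xf00))

def start_instance(machine_id):
    # The juju machine id is an input (used to bootstrap the node);
    # the instance id is an output chosen by the provider.
    return {"machine_id": machine_id, "instance_id": next(_instance_ids)}

machine_id = 27
first = start_instance(machine_id)
# Someone runs terminate-instance behind juju's back; juju notices
# and starts a replacement instance for the *same* machine.
second = start_instance(machine_id)

assert first["machine_id"] == second["machine_id"] == 27
assert first["instance_id"] != second["instance_id"]
```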
<niemeyer> hazmat: TheMue was really talking about concurrent programming within a process, rather than across processes, I believe
<TheMue> back again, had to bring my daughter to driving school
<TheMue> hazmat: i've learned about eda in large systems, with cep for a large number of events in realtime
<hazmat> TheMue, nice.. custom cep engine? or something like esper?
<TheMue> hazmat: langs like go and erlang allow you to create the same inside one process
<TheMue> hazmat: would have liked to learn more about esper, only read about it. we tried to establish it as a part of a soa. our part has been done in smalltalk, for a telco
<hazmat> TheMue, effectively in process eda using coroutine pools  for concurrency for a given event dispatch?
<TheMue> hazmat: my own Tideland ECA will be a framework for go internal event-processing, but not publish/subscribe based. more like connected cells with inputs and outputs
<TheMue> hazmat: no explicit concurrency controlled by us, only by GemStone/S as the backend
<TheMue> hazmat: thankfully no high load yet, project is still in progress. but shall be used later for the analysis of outages
<TheMue> so, have to stop here for a moment and finish my (print) article on Dart
<niemeyer> hazmat: ping
<raphink> hi there :-)
<raphink> how is juju different from zookeeper (or other similar products)?
<marcoceppi> raphink: Well, Juju uses zookeeper - so that's something to think about
<raphink> ah ok
<hazmat> raphink, greetings.. zookeeper is just a coordination storage layer.. it doesn't really do much by itself.. it's what the application chooses to do with it that's interesting.. juju is an application that uses zk (mostly as an implementation detail) to provide a language neutral mechanism for defining a service and its dependencies in a reusable fashion. it's like apt for the clouds.. solving dependencies on the network
<hazmat> juju differs from most of the similar configuration management products in that it focuses on orchestration and encapsulation/reuse of a service definition.. ie. its focused on service orchestration not configuration management
<raphink> hazmat: and it makes use of other configuration management products (like puppet/cfengine/chef) under the hood, right?
<hazmat> raphink, actually it allows a charm author (a charm being the reusable bit) to use whatever other products they're comfortable with, in whatever language they're comfortable with; juju provides communication channels (exposed via clis) and guarantees on when things are called (orchestration).. juju core doesn't have an internal dependency on other CM tools
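[editor's note: a minimal, hypothetical metadata.yaml fragment illustrating the language-neutral service definition hazmat describes. The names and interfaces below are illustrative, not from a real charm; the implementation (shell, puppet, chef, ...) lives in the charm's hooks, not in juju core.]

```yaml
# Hypothetical charm metadata sketch: defines the service and its
# relations; says nothing about which tool implements the hooks.
name: myblog
summary: example blog service
description: |
  Deploys a blog; the hooks may be written in any language or drive
  any configuration management tool.
provides:
  website:
    interface: http
requires:
  database:
    interface: mysql
```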
<raphink> I see
<raphink> just wondering
<raphink> why was the product renamed?
<hazmat> raphink, for that i have no good answer ;-).. it suffices to say it was done
<raphink> hehe I guess it was haakon_
<raphink> sorry, hazmat
<raphink> :S
<hazmat> niemeyer, is there any reason the functional tests can't also be run against the local provider?
<_mup_> Bug #890449 was filed: metadata.yaml needs contact fields <juju:New> <juju Charms Collection:New> < https://launchpad.net/bugs/890449 >
<_mup_> Bug #890453 was filed: metadata.yaml needs a category / classifier field <juju:New> < https://launchpad.net/bugs/890453 >
<niemeyer> hazmat: No intrinsic reasons
<niemeyer> hazmat: I've not tried to set them up against the local provider, but as long as we can put the system back on a clean state I guess it should work
#juju 2011-11-15
<hazmat> niemeyer, hmm.. so destroy-environment cleans out state and containers, but there is one caveat that would need to be addressed by a tear down, namely updating the lxc cache image, and perhaps cleaning out the apt-proxy cache
<hazmat> the precise lxc ubuntu script now has this update-cache functionality incorporated
<_mup_> juju/robust-zk-connect r415 committed by jim.baker@canonical.com
<_mup_> Updated tests
<_mup_> juju/robust-zk-connect r416 committed by jim.baker@canonical.com
<_mup_> Docstrings, PEP8, PyFlakes
<niemeyer> Good morning all
<raphink> hi niemeyer
<niemeyer> Or good afternoon, to some :)
<mpl> yep. good morning to you.
<hazmat> Testing
<hazmat> g'morning
<rog> niemeyer: good afternoon :-)
<hazmat> jimbaker, could you send an email to the list describing the high level cli changes for scp/ssh/num-units?
<hazmat> niemeyer, you're still against the change in lp:~hazmat/juju/assume-local-ns-if-local-repo.. ie assume local ns if --repository is specified on the cli?
<hazmat> fwereade, the visible instance stuff looks good, its a +1 with the fix for local provider
<fwereade> hazmat, cool, let me take a quick look
<niemeyer> hazmat: Yeah
<hazmat> niemeyer, it looks like the dns entry for the store is in.. so people are getting client hangs, is it possible to put up a static page there that will cause a more immediate fault?
<hazmat> ie. if you forget to add a 'local:' the deploy just hangs
<niemeyer> hazmat: Yeah, should be possible
<hazmat> niemeyer, so i still think it's rather confusing to have a --repository not imply local.. i think the best thing to avoid confusion is to just have resolve explicitly log which repo it's using, that will at least give the user some indication of what's going on
 * hazmat notes its time to go to the dentist.. drill baby drill ;-)
<niemeyer> hazmat: I find it confusing in the other direction, and problematic for several reasons
<niemeyer> hazmat: --repository to me simply implies "look for charms here as well"
<niemeyer> hazmat: having it affecting the resolution of urls sounds pretty surprising
<hazmat> niemeyer, right.. but the common case for specifying --repository is to deploy a local charm.. not deploying from the cs.. the dep lookup algorithm goes by first found dep, so i don't see that algorithm being affected, its just a matter of resolving the first item
<hazmat> you're the taste master ;-) but i'd like to switch that branch out to log the repo being utilized to avoid confusion in this case, as i think it's common
<hazmat> and is a nice practice in general
<niemeyer> hazmat: --repository is called like that because it enables any kind of repository
<niemeyer> hazmat: It's not the same as --local
<hazmat> niemeyer, so the goal is that it could enable a private remote repo?
<niemeyer> hazmat: Yes, that's what we discussed
<hazmat> currently that logic is hardcoded .... remote == 'store.juju."
<niemeyer> hazmat: implementation vs. public API
<hazmat> niemeyer, does that imply a different namespace prefix? .. could the user switch the default lookup to a private repo?
<niemeyer> hazmat: The user can do whatever he pleases.. the command line option is unrelated to that
<niemeyer> hazmat: We can change the logic, but then let's rename the option as well
<niemeyer> hazmat: and change it globally, and _fail_ in case the charm isn't found locally anywhere
<hazmat> niemeyer, i meant the user doing so sans code modification.. if they have a private repo, that they want to use for all their charms.. an additional environments.yaml repo config setting seems appropriate
<niemeyer> hazmat: So for --local, help="Resolve all charm references to the given local repository"
<hazmat> ic
<hazmat> argh.. have to come back to this later.. late for my appt
<niemeyer> hazmat: "Enjoy" :)
 * hazmat lol
<hazmat> probably not
<marcoceppi> I've got a question about checking cryptographic signatures from 3rd party sources
<marcoceppi> The third party source doesn't provide a hash to check against, so I've created a bzr repo in LP that has the proper checksum. The install hook branches this and performs the check - that way I can continually update the checksum without having to always update the charm
<marcoceppi> Best practice for this? Or is this okay?
<jcastro> m_3: the hadoop email reminds me about automated testing of charms
<jcastro> know anything since the sprint about that?
<m_3> jcastro: nope, haven't been following the conversation on it
<jcastro> marcoceppi: hey so, sorry to sound annoying, but Minecraft?
<marcoceppi> Yes, it's in relation to minecraft :)
<marcoceppi> My question is at least
<marcoceppi> It's kind of the last thing I need to do
<jcastro> ah ok
<jcastro> hazmat: SpamapS: m_3: what do you guys think about marco's suggestion for checking the checksum?
<m_3> marcoceppi: sounds like this at least tells you if it's changed or not.. next best thing to 3rd party hash.  doesn't tell if you were using a compromised binary to begin with
<fwereade> need to pop out for a bit, back later
<marcoceppi> m_3: I know the binary I have isn't compromised, if that helps. I guess I'll make a plea to Mojang to include checksums
<m_3> marcoceppi: what you've done is a great workaround... seems like the same amount of work as if you added the hash directly into the charm, but either way works
<m_3> i.e., when new binary comes out, you update hash... either in your repo _or_ the charm
<marcoceppi> Right, but with the alternative I was looking at, the charm won't have to be updated nearly as often
<marcoceppi> I'll push up what I have now, then I think it'll be ready for review
<m_3> marcoceppi: in general, you're signing and saying that the upstream binary is clean... I'd keep asking Mojang for hashes
<marcoceppi> yeah, I'll start bugging him on twitter
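[editor's note: marcoceppi's workaround can be sketched as below. The helper names are hypothetical and the branch fetch is stubbed out; the only grounded idea is that the expected checksum lives in a separately-updatable location, so the charm itself needn't change when upstream ships a new binary.]

```python
import hashlib
import os
import tempfile

def sha256sum(path):
    # Stream the file so large binaries don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(binary_path, expected_hex):
    # expected_hex would be read from the checked-out checksum branch.
    return sha256sum(binary_path) == expected_hex

# Demo with a temp file standing in for the downloaded server binary:
fd, path = tempfile.mkstemp()
os.write(fd, b"minecraft_server.jar contents")
os.close(fd)
expected = hashlib.sha256(b"minecraft_server.jar contents").hexdigest()
assert verify(path, expected)
assert not verify(path, "0" * 64)
os.remove(path)
```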
<jcastro> Jorg
<jcastro> whoops
<robbiew> m_3: uh oh....saw the video....now people will know your face...you're so screwed
<robbiew> lol
<jimbaker> hazmat, yes, i will send the emails describing these and other proposed cli changes
<m_3> robbiew: no way... didn't know it was out... ugh
<bloodearnest> hey all - any way I can get "juju ssh" to use a unit's ip address rather than dns to attempt to connect?
<marcoceppi> bloodearnest: juju ssh service_name/unit_number ?
<bloodearnest> marcoceppi, that's what I used, but the dns on the unit instances is a .local, and resolution fails
<robbiew> m_3: http://cloud.ubuntu.com/2011/11/hadoop-world-ubuntu-hadoop-and-juju/
<robbiew> :D
<bloodearnest> the command "juju ssh <n>" works, but "juju ssh <service/unit>" fails with a dns error
<hazmat> bloodearnest, that's odd.. which provider are you using?
<hazmat> marcoceppi, i guess separating the checksum from the download is better than no checksum at all
<SpamapS> Don't want to be a downer, but Juju doesn't always get positive mentions.. http://www.youtube.com/watch?v=oebqlzblfyo&feature=youtu.be .. whole thing is good, but juju pops up at 3:30 or so
<SpamapS> Luckily its just a WTF, not a specific argument against it.
<marcoceppi> SpamapS: This man is very angry
<jcastro> this is awesome
<SpamapS> and VERY smart
<jcastro> this guy did that SSD talk right?
<jcastro> at the US velocity?
<SpamapS> Dunno
<jcastro> ah yeah, it is him
<SpamapS> AWS -- Is shit, * Openstack -- more shit
<bloodearnest> hazmat, canonistack
<rog> i'm off for the day, see y'all tomorrow.
<fwereade> cya rog
<fwereade> later all
<niemeyer> fwereade: Cheers!
<niemeyer> rog: Have a good evening rog
 * niemeyer on lbox propose now
<niemeyer> Is Launchpad read-only?
<niemeyer> Seems to be back..
<niemeyer> Seems to be down again.. something funky
<niemeyer> bzr: ERROR: Connection closed: Unexpected end of message. Please check connectivity and permissions, and report a bug if problems persist.
<niemeyer> hazmat: Btw, feel free to talk to IS if you'd like to have dummy server deployed sooner rather than later
<jcastro> jamespage: is the etherpad-lite charm still good to go? Or are you planning any surgery on it?
<jamespage> I think its still good - I'm not planning to do any more work on it
<jamespage> until they next make a release
 * jcastro nods
<_mup_> juju/local-repo-log-broken-charm r415 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<hazmat> hmm
<jimbaker> (looks like this didn't get sent) hazmat, i think it makes sense to add timeout support when making connections more robust. but maybe this should be done in two phases: robust (but consequently never times out!); then add a --timeout option to juju (or maybe better --timeout-zk)
<jimbaker> the advantage of doing this vs an external timeout is that we can timeout just the connection to zk setup
<hazmat> jimbaker, agreed in principle... we already have a timeout, but there's a post host up, pre timeout error afaicr as well that needs handling
<jimbaker> hazmat, sounds good
<hazmat> jimbaker, afaics the end behavior is the same, not sure i understand the distinction with an external timeout
<jimbaker> hazmat, the difference is that one may be inadvertently interrupting some ZK modification
<jimbaker> i suppose the cli could be more nuanced in terms of handling signals however
<hazmat> jimbaker, there is a double wait, connection to zk, and zk initialized, and then cli done, the cli can be interrupted at any point though
<jimbaker> hazmat, correct. i'm only referring to timing out the waiting up to the point of checking for /initialized
<jimbaker> in actual usage, any subsequent waiting is minimal of course
<hazmat> hmm.. well it could be substantial.. and there is a wait-timeout firing race against initialized.. oh.. right, i forget, it's not bootstrap waiting.. which simplifies this considerably; it's everything else.
<jimbaker> hazmat, in this model, bootstrap doesn't wait, and in practice it actually works well
<jimbaker> hazmat, right now i'm just hunting down why we are seeing tx connection timeouts leaking out of the retry connect loop (no errback set up on them of course...)
<jimbaker> we see them in http://wtf.labix.org/413/ec2-wordpress.out for example ("Unhandled error in Deferred")
<hazmat> one scenario i was thinking of in the context of the local provider, an odd behavior is that there is a long async op for the debootstrap of the master template and first unit; it might be nice to do that directly in bootstrap, so the user has feedback when it's done and deploys/add-units take fairly normal times.. but that's orthogonal i guess.
<jimbaker> hazmat, my feeling is that the normal thing to do in this case is juju bootstrap && juju status, and that gets what you want
<jimbaker> maybe a hypothetical juju --waitfor bootstrap is just that, i don't know
<jimbaker> hazmat, actually in that case, it's really juju bootstrap && some activity && poll on juju status
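jimbaker's two-phase idea (keep the connect loop robust, but bound the wait with an application-level deadline rather than an external timeout) can be sketched generically in shell. This is only an illustrative stand-in, not juju's actual connect code; the command passed in ("$@") plays the role of the zk connection attempt:

```shell
# Sketch: retry a command until it succeeds or a deadline passes.
# The command ("$@") stands in for the zk connection attempt.
retry_with_deadline() {
    deadline=$1; shift
    start=$(date +%s)
    until "$@"; do
        # Give up once the deadline has elapsed, instead of retrying forever.
        if [ $(( $(date +%s) - start )) -ge "$deadline" ]; then
            echo "timed out after ${deadline}s" >&2
            return 1
        fi
        sleep 1
    done
}
```

The point of scoping the deadline like this (vs. killing the whole client externally) is the distinction jimbaker draws above: only the connection phase is bounded, so a timeout can't interrupt a ZK modification already in flight.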
<hazmat> jimbaker, that timeout exception from txzookeeper is odd, afaics that's caught by sshclient.connect
<jimbaker> hazmat, i know, i've traced it through and i believe that's the case, except for the fact that it appears like that on stderr ;)
<hazmat> jimbaker, indeed, reality is rather contrary in this case ;-)
<hazmat> jimbaker, it looks like _cb_tunnel_established is the one raising the error, but it looks like that has an errback
<jimbaker> hazmat, yes, all the deferreds have errbacks. i do wonder about the chain_deferred variant however
<hazmat> jimbaker, me too
<jimbaker> hazmat, brb
<hazmat> jimbaker, it consumes the error (ie handles it) and propagates the error down the client connect deferred which is yielded on
<hazmat> it looks right..
<marcoceppi> So, config-changed. It gets run one or multiple times depending on how many config options there are in the install?
<marcoceppi> The problem I have: a service needs to be restarted each time a configuration option is updated, but it's pretty slow to restart.
<hazmat> marcoceppi, config-changed gets run once prior to start, and then once each time the config is changed
<hazmat> marcoceppi, multiple values can be set in a single command line, ie  juju set a=b x=z
<marcoceppi> hazmat: so when you do juju set for multiple values, is config-changed run each time for those updated keys? or just once per set?
<hazmat> marcoceppi, the config-changed hook may run up to once per juju set invocation (regardless of how many values are in a single set), but the guarantee is that it will be called at least once with the latest config values
<hazmat> marcoceppi, short answer to the question, once per multi-value set
<marcoceppi> hazmat that's what I needed to know, thanks!
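Given those semantics, marcoceppi's slow-restart concern can be handled inside the hook itself: since config-changed may fire several times, the hook can checksum its rendered config file and only restart when it actually changed. A minimal sketch, with hypothetical paths and an echo standing in for the real (slow) service restart:

```shell
#!/bin/sh
# Sketch: restart only when the rendered config actually changed.
# The conf/stamp paths are illustrative, not from any real charm.

restart_if_changed() {
    conf="$1"       # rendered config file
    stamp="$2"      # stores the checksum from the last restart
    new=$(md5sum "$conf" | cut -d' ' -f1)
    old=$(cat "$stamp" 2>/dev/null || true)
    if [ "$new" != "$old" ]; then
        echo "$new" > "$stamp"
        echo "restarting"        # stand-in for: service myapp restart
    else
        echo "unchanged"         # skip the slow restart
    fi
}
```

With this guard, a run of config-changed invocations that leaves the rendered file identical costs nothing beyond a checksum.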
#juju 2011-11-16
<_mup_> juju/local-repo-log-broken-charm r416 committed by kapil.thangavelu@canonical.com
<_mup_> deploy uses serviceconfigvalue error
<_mup_> juju/trunk r414 committed by kapil.thangavelu@canonical.com
<_mup_> merge local-repo-log-broken-charm [r=fwereade,bcsaller][f=885515]
<_mup_>  - Local repository find logs charm error information with location information.
<_mup_>  - Unify metadata and config error handling. Structural errors are preserved and raised.
<_mup_>  - Config definition errors are distinct from value errors.
<niemeyer> Woohay
<niemeyer> https://code.launchpad.net/~niemeyer/lbox/test-branch/+merge/82239
<niemeyer> Almost there..
<fwereade> hazmat: you around?
<rog> fwereade: hiya
<fwereade> heya rog
<rog> fwereade: someone cut through the phone line this morning
<fwereade> wtf?
<rog> fwereade: so hoping i don't reach my mobile phone download limit...
<fwereade> rog, overly casual use of a digger or something?
<fwereade> rog crikey
<rog> fwereade: yeah. lots of water mains replacement going on
<rog> fwereade: we've had the street dug up for about 5 weeks now!
<fwereade> rog: unhelpful ;)
<rog> fwereade: indeed.
<rog> fwereade: at least my phone can act as a wifi hotspot, very handy in a pinch
<fwereade> rog, yeah, very handy (sorry I missed you)
<fwereade> rog I have a nearby cafe for emergencies :)
<rog> fwereade: no cafes with wifi here in darkest Tyneside :-)
<fwereade> rog, some parts of malta are more civilised than some parts of the uk, at least, then
<rog> actually, i *say* that, but i should probably explore a bit
<fwereade> rog, although I guess that statement doesn't say much :p
<rog> lol
<rog> fwereade: depends whether your definition of "civilised" equates to "has wifi"
<fwereade> rog, there's another definition?
<rog> fwereade: "has running water" does it for me. having been without it yesterday...
<fwereade> rog, ouch :(
<fwereade> rog, just going to collect laura from nursery
<rog> fwereade: to be honest i don't mind. i lived without plumbing for almost a decade :-)
<rog> k
<fwereade> rog, I must say that perception-of-civilisation has a sort of ratchet effect built in
<rog> fwereade: definitely. always good to remember the bottom line... food and shelter :-)
<fwereade> rog, yeah
<fwereade> rog, I'm trying to remember a nice statement of the general problem in a book I read recently
<fwereade> rog: ah, in excession
<fwereade> That was the Dependency Principle; that you could never forget where your Off switches were located, even if it was somewhere tiresome.
<rog> fwereade: speaking of books, just started Kraken and am having difficulty in avoiding just sitting down for the day and consuming it.
<rog> fwereade: i'd forgotten that.
<fwereade> rog, yeah, I remember really enjoying it while I was reading it
<rog> fwereade: i misremember, i thought you hadn't read it. anyway, ahem, we digess.
<fwereade> rog: I got distacted half way through
<fwereade> rog: been meaning to go back to it for more than a year now :/
<hazmat> g'morning folks
<hazmat> krakens and water mains indeed ;-)..
<rog> fwereade: ah! carmen did the same thing. my fault - i took it off her to go to orlando, citing greater need and monetary outlay.
<rog> hazmat: hiya
 * hazmat had a hill disappear directly behind the house and turn into a parking lot yesterday
 * rog hates it when they do that
<mpl> wifi coverage is strongly motivated by tourism, so malta having a good one is not so much of a surprise to me
<mpl> it's pretty amazing when you arrive in San Pedro of Atacama, village lost in the middle of the desert, and you discover you get a wifi signal pretty much everywhere in the main "streets".
<niemeyer> Hello all!
<rog> niemeyer: hiya!
<hazmat> g'morning
<mpl> 'lo
<hazmat> fwereade, you pinged?
<fwereade> hazmat, hey
<fwereade> hazmat, I just wanted to check there wasn't a reason for the -n arg to the agents to be weird
<hazmat> fwereade, it's not weird at all, in fact the upstart files for the unit agent in local do that
<hazmat> fwereade, i was thinking it would be done more as a class like what was done with the cloud init rather than some gestalt formatting function
<fwereade> hazmat, agent -n ends up meaning "run as a daemon" instead of "run in foreground", as one might expect on a casual reading
<hazmat> fwereade, the primary reason -n isn't used with these processes is that they're not being supervised.. so they daemonize..
<hazmat> fwereade, really? -n should be no daemon
<fwereade> hazmat, quite so
<fwereade> I spent most of a day assuming that -n meant no-daemon until I thought to look at that bit of code ;)
<fwereade> hazmat, anyway, I've fixed that now and it's getting there
<fwereade> hazmat, the remaining weirdness is almost certainly my fault ;)
<hazmat> fwereade, yeah.. it is a bit confusing looking at control/options.py and twistd -h
<hazmat> hmm. ic but yeah it appears to be the opposite
<hazmat> -n implies daemon
<fwereade> hazmat, indeed
<fwereade> hazmat, ah, that was it, I wanted to check that that wasn't really an API change in any meaningful sense
<hazmat> i might have reversed that from the original twisted semantics
<fwereade> hazmat, ah, was there a reason you can recall?
 * hazmat checks the twisted source
<hazmat> fwereade, nothing comes to mind
<hazmat> fwereade, i think you were right btw regarding the HA story, not collapsing the juju services into a single juju service; there will be others in the future and the collapse makes even less sense as that number grows
<fwereade> hazmat: cool :)
<hazmat> although it would make sense to continue the placement of provisioning agent and storage i think
<hazmat> that's relatively minor and paranoid as well ;-)
<fwereade> hazmat: potentially -- I'm mainly waiting to see what my brain thinks is a good idea when we come to actually implement it
<fwereade> hazmat: for all I know I'll be firmly advocating something entirely different :p
<hazmat> fwereade, ;-) i think the separate service notion gets better once we have service namespaces/hierarchies
<fwereade> hazmat, definitely
<hazmat> but that's not on the roadmap for 12.04.. so perhaps irrelevant
<fwereade> yep, understood :)
<raphink> Just going through the juju docs
<raphink> I'm wondering... Juju makes it very easy to deploy and scale services quickly
<raphink> but how about maintaining services for a long time, where conffiles need to be adapted over time without adding/removing nodes?
<raphink> the typical case where you'd use cfengine/puppet to maintain long-term services
<raphink> how does juju play along in this case?
<m_3> jcastro: yo
<jcastro> hi
<jcastro> m_3: man, you know every one.
<raphink> hi every one :-)
<jcastro> hi raphink!
<raphink> sup jcastro ?
<m_3> jcastro: yeah, small devops world
<m_3> jcastro: about a mongosv charmschool... is it worth a byobu-classroom type setup?
<m_3> that seems like a lot of extra distraction
<jcastro> m_3: I think keeping it simple is key
<m_3> otherwise how do we handle other clients?
<m_3> I've never tried it on a mac
 * m_3 adds to the todo list
<jcastro> looking at the attendees they're not going to be dumb people there. I think opening up the hadoop charm and going through the sections will do the trick
<m_3> jcastro: would this be more of a charming presentation (sorry)
<m_3> than charmschool?
<m_3> lead time for wifi setup is a concern too
<jcastro> raphink: there's facilities to upgrade charms, but I don't know enough about it to answer your question intelligently, perhaps ask on the mailing list?
<raphink> thanks jcastro
<jcastro> m_3: yeah, so if it ends up being more of a one-to-many but more in depth than a normal presentation, I think that will be fine
<m_3> gotcha... cool... I'll reply with that... thanks!
<jcastro> I mean, if they're having it anyway, and Juan's going to be there ...
<raphink> jcastro: from what I understand, upgrading a charm will allow you to re-run the installation, but will it update a machine's configuration without re-installing it?
<jcastro> I am not sure
<raphink> ok
<_mup_> juju/preserve-unit-for-external-gc r415 committed by kapil.thangavelu@canonical.com
<_mup_> defer removal of unit state to a garbage collector
<niemeyer> Lunch time!
<_mup_> juju/preserve-unit-for-external-gc r416 committed by kapil.thangavelu@canonical.com
<_mup_> preserve service state for out-of-band garbage collection, to provide for a currently executing hook retrieving service config.
<hazmat> raphink, changing the service's configuration will invoke the charm's config-changed hook and can update local config files... for additional configuration management, we're working on a feature called co-located/subordinate charms that will allow for deploying puppet, logging, etc alongside/within other service units
<raphink> that's what I meant
<hazmat> upgrade-charm is also present, though it's more for changes to the functionality of the charm than as a general means of ongoing long-term management
<raphink> thanks hazmat , that sounds interesting
<raphink> hazmat: so the "juju master" (how do you call machine 0?) would become a puppet master and potentially a log server, too?
<raphink> or would you make charms to deploy puppet masters and log servers?
<hazmat> i wouldn't call it a juju master, it's just the first machine in the environment, running some internal juju services; we've discussed an ha story on list that turns it into just another machine
<raphink> ah right
<hazmat> raphink, but to the question.. you'd deploy a puppet-master charm, or a munin aggregator charm as a separate service (the head) and then relate it to munin-node or puppet subordinate/co-located service
<raphink> ok
<hazmat> the subordinate services will be associated with existing services via add/remove-relation
<raphink> and then deploying a new service would be pretty much simply relating a new machine to the puppet service
<hazmat> ie. the tails that connect back to the head.
<raphink> sorry, deploying a new _service unit_
<hazmat> raphink, well you wouldn't be associating the machine per se.. you'd be associating puppet with another service.. say hadoop
<hazmat> er.. hdfs data node
<hazmat> the containers running those service units would all be populated with units of the puppet agent service
<hazmat> ie. the association isn't between a machine and a puppet service but just a relation between services
<raphink> right
<hazmat> the effect of having it everywhere.. ie all machines is achieved by setting up some default co-location services for the environment
<raphink> you only deploy puppet client services
<raphink> and you register these nodes to the puppet master to tag them properly for deployment
<raphink> right?
<hazmat> raphink, basically.. the puppet client service and puppet master service have a relation like any other relation, whose hooks effect that registration.
<hazmat> so yes
<raphink> ok
<raphink> and then it's the puppet code that instantiates the functional relations (like db/frontend for example) ?
<hazmat> at the moment this feature is at the level of consenting adults.. a co-located/subordinate charm can do all sorts of interference to the parent/master service unit it's deployed alongside, none of which is prevented by juju. And at the moment the external system has minimal knowledge of the service unit it's deployed with (just identity).
<hazmat> there's some mail on the list going through some additional things that could be done like exposing additional cli api to provide for populating the puppet master with more relevant facts/tags.
<raphink> right
<raphink> I remember when I read about mcollective
<raphink> the guy talked about how machine names didn't really matter anymore
<raphink> since he could use tags to deploy/control them
<raphink> I guess the idea behind juju is also to automate this kind of relation between machine name and functional names (like wordpress/1)
<hazmat> raphink, well it's to obviate the machine name
<raphink> yes
<hazmat> not to beat a dead horse, but say i've got a logging or metrics aggregator, i don't want it to record the machine name but the unit name, which could have migrated away from the given machine.. more importantly i don't care about the machine identity, but the service-level aggregate among its units.
<raphink> right
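The head/tail wiring hazmat describes could look roughly like this once subordinates land; note this is a pseudo-CLI sketch of a feature that was still unimplemented at the time, and every charm name in it is hypothetical:

```
# Deploy the aggregation 'head' as an ordinary service:
juju deploy puppet-master

# Deploy the subordinate 'tail' service; its units would be placed
# inside the containers of whatever service it is related to:
juju deploy puppet-agent

# The relation, not a machine name, drives registration -- relating
# puppet-agent to hdfs-datanode would put a puppet unit beside each
# datanode unit:
juju deploy hdfs-datanode
juju add-relation puppet-agent puppet-master
juju add-relation puppet-agent hdfs-datanode
```

The machine names never appear: the relation between services is the whole addressing scheme, which is the point hazmat makes above.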
<niemeyer> What's this Jim Jammer guy doing?
<SpamapS> Hm
<m_3> windows phone?  I'm imagining a private conversation between loved ones simply routed to the wrong place
<jimbaker> yes, every list will eventually get spam, vacation responders, and miscellaneous other cruft
<jimbaker> not to mention the occasional troll
<hazmat> jcastro, just to clarify, you were serious about needing metrics reports this week?
<jcastro> hazmat: yes please.
<hazmat> jcastro, ok.. in progress
<jcastro> hazmat: I'll clarify with jono today during our call how important they are to him to have right away
<hazmat> jcastro, awesome, thanks
<SpamapS> where are the WTF test results?
<SpamapS> they used to be in the topic
<m_3> SpamapS: is it easy to transfer ownership of upstream charmers teams in lp?  I prefer to create the teams beforehand, but I'd rather it not show me as the team owner for all time
<SpamapS> m_3: the branch owner team should be owned by charmers, and have charmers and the upstream managed team as its members.
<SpamapS> m_3: you can give away ownership of a team to another team that you are an administrator of. I think I'll make you a charmers administrator
<rog> niemeyer, fwereade: PTAL at the go-juju-initial-ec2
<SpamapS> m_3: actually you're in ~juju so you are already an admin
<rog> niemeyer, fwereade: i think it should be good to go
<niemeyer> rog: Cheers
<m_3> SpamapS: I thought we were going to create new team and add charmers as members of that team
<m_3> ah, yes... sorry read above wrong
<SpamapS> m_3: so like, for voltdb, I'd call it voltdb-charmers
<m_3> I did this last week... lemme find the team
<m_3> crud, I removed it
<m_3> created "Test Charmers" with a testcharm to test out the lp:charm/testcharm aliases and ownership
<m_3> SpamapS: ok, so looking back on it, that process should work fine if I can transfer ownership... cool thanks!
<marcoceppi> Is there any documentation on the other supported environments for Juju?
<SpamapS> marcoceppi: the orchestra provider is probably the hardest to find docs on
<SpamapS> marcoceppi: but that is coming
<SpamapS> marcoceppi: ec2 and local are well documented https://juju.ubuntu.com/docs/
<marcoceppi> Right, I thought there were other environments. Maybe I'm confused with what Orchestra is?
<marcoceppi> Wikipedia says many things, but nothing that looks like the Orchestra you've referred to
<hazmat> SpamapS, wtf.labix.org
<hazmat> marcoceppi, https://wiki.ubuntu.com/ServerTeam/Orchestra
<marcoceppi> Oh god Orchestra looks awesome
<SpamapS> hazmat: thanks
<SpamapS> So, I'm thinking I'll upload 416 to precise.
<hazmat> SpamapS, sounds good
<jcastro> hey SpamapS
<jcastro> so, marcoceppi's blog post is ready for the minecraft charm
<jcastro> but he's pulling it from his personal branch
<jcastro> it would be nice for that part of the blog to just say "charm get minecraft"
<jcastro> <SpamaspS> So in other words, I should totally review the charm right now and get that bad boy in the store, so we don't miss the surge of interest around Minecraft's release
<jcastro> SpamapS: correct!
<SpamapS> jcastro: will review shortly. :)
<SpamapS> marcoceppi: please make sure you have your asbestos suit handy.. ;)
 * marcoceppi suits up
<SpamapS> marcoceppi: btw, as a member of charmers, you can make it official yourself, though I have found at least one thing that needs fixing before it is importable.
<marcoceppi> SpamapS: I didn't want to be presumptuous and say "oh, this is done" without some more eyes on it.
<marcoceppi> What did you find?
<jcastro> SpamapS: boo, bad best practice, everyone should check, even if you're a pro
<SpamapS> marcoceppi: yeah the optional review is something we should all make use of until we're sure about the process and requirements. :)
<SpamapS> I suppose we could enforce the NEW queue like Debian/Ubuntu do.. but for now, lets just make it a convention
<SpamapS> marcoceppi: commented in bug 857654
<_mup_> Bug #857654: Charm needed: Minecraft Server <new-charm> <juju Charms Collection:In Progress by marcoceppi> < https://launchpad.net/bugs/857654 >
<SpamapS> marcoceppi: basically you need to set the license info right for opt/minecraft
<marcoceppi> SpamapS: Ack, missed that. The rest looks pretty easy to fix. I'll try to find a state that I can check in the minecraft setup to remove the sleep
<SpamapS> marcoceppi: again tho, those are just bugs, I'd say just import it once you fix the license info
<jcastro> \o/
 * marcoceppi hates legal stuff
<SpamapS> marcoceppi: everybody except me seems to share that, so please, defer all legal stuff to me... keep illegal stuff for jcastro.
<marcoceppi> It appears that the startup script is actually not from Mojang, but the Curse wiki
<marcoceppi> So I'm just going to link back to the source and say it's released under the CC-BY-NC-SA 3.0
<SpamapS> marcoceppi: sounds good, the CC-BY-NC-SA specifically mentions that you must attribute the author.
<SpamapS> marcoceppi: which is annoying, because the author of said script is not clear
<SpamapS> http://www.minecraftwiki.net/wiki/User:Hount
<marcoceppi> Right, so I just throw back a link to the source, which should answer anyone curious about the author.
<SpamapS> created the page
<SpamapS> really, CC-BY-NC-SA is a *horrible* software license.
<jcastro> well, people don't think about that when they toss random scripts on their CC licensed-by-default wikis
<SpamapS> Yeah
<SpamapS> people don't consider licensing at all
<SpamapS> It's one of those things that will likely never actually bite anybody in the arse, but when it does, it will likely be a very expensive bite.
<SpamapS> marcoceppi: I think a link to the page and mention of the CC-BY-NC-SA is as good as you can get, since the attribution on the page is even unclear.
<SpamapS> marcoceppi: technically though, you have to respect the copyrights you're informed of, and in this case, Mojang AB and Curse Inc. are claiming copyright.
<SpamapS> well
<SpamapS> Curse Inc. is actually, not Mojang.
<SpamapS> http://www.curse.com/legacycontent/curse-network-terms-of-service
<SpamapS> two pages on IP
<SpamapS> "By submitting, posting or displaying User Submissions on, to, or through Curse Websites (or its successors and affiliates), you grant Curse, Inc. a worldwide, non-exclusive, transferrable, royalty-free right to use, reproduce, distribute, display, perform, make derivative work"
<SpamapS> I'd say its Curse's
<SpamapS> and CC-BY-NC-SA would be very clear, that you need to attribute it to them.
<jcastro> man, this due diligence thing is a buzzkill
<SpamapS> jcastro: a bigger buzzkill would be Curse suing all users of the minecraft charm. :-/
<marcoceppi> Yeah, I pushed up what I have now
<jcastro> SpamapS: yeah I know I know
<marcoceppi> opt/minecraft is released under Attribution-NonCommercial-ShareAlike 3.0 Unported.
<marcoceppi> (CC BY-NC-SA 3.0) http://creativecommons.org/licenses/by-nc-sa/3.0/ made
<marcoceppi> available from (http://www.minecraftwiki.net/wiki/Server_startup_script) by Curse
<marcoceppi> need to add Inc.
<SpamapS> marcoceppi: yeah, add Inc. and thats good enough for me.
<SpamapS> interesting though
<SpamapS> with the NC bit.. nobody can charge for access to the minecraft servers they're deploying
<marcoceppi> Curse runs the wiki and the forums
<marcoceppi> Oh.
<marcoceppi> what?
<SpamapS> marcoceppi: what would you say to supplanting that script with an upstart job?
<marcoceppi> Because of the init script
<marcoceppi> SpamapS: I'd be all for it
<SpamapS> marcoceppi: right, if you are charging for that server access, you are now using the script for commercial purposes
<marcoceppi> Shouldn't be _too_ difficult
<SpamapS> It should be dead simple
<marcoceppi> I'll read up on upstarts
<SpamapS> tho you'd lose the cool update/backup/etc. bits
<jcastro> yeah but we'd be jameshunt approved!
<SpamapS> marcoceppi: given that, I think we actually need to keep CC-BY-NC-SA out of the official charm repo
<marcoceppi> Well, with the upstart it won't be needed
<SpamapS> right
<marcoceppi> give me a min to whip something together
<SpamapS> w00t
<jcastro> marcoceppi: might want to send them a mail and be like "hi, we're trying to make it easy for people to deploy minecraft servers, can you clarify the license on this script?"
<SpamapS> Yeah
<jcastro> they probably don't know/care that it's a problem
<SpamapS> The original submitter probably didn't realize they gave away all IP rights when submitting stuff to that wiki
<jcastro> ^^ right
<SpamapS> and Curse probably does not want the rights to that script
<marcoceppi> SpamapS:  there are alternatives to that script, linked at the bottom
<SpamapS> yeah, those have licenses.. nice
<SpamapS> mcwrapper is MIT
<SpamapS> Though done *completely* wrong
<SpamapS> (has to be in the software..heh)
<jcastro> well, this is one of the first ones
<SpamapS> mcshellscript has no license
<jcastro> I think we should set the standard, and do it the upstarty way
<SpamapS> jcastro: I think so too
<marcoceppi> jcastro: I agree
<jcastro> even if it means another day or whatever
<marcoceppi> It looks like I can finally ditch screen too
<SpamapS> The whole point of having this distro of charms is that you can use them without thinking about it
<SpamapS> minecraft-init has the most clear license, btw
<SpamapS> # (c) 2010-2011 Dagmar d'Surreal
<SpamapS> # Released under the terms of the GNU GPL 2.0.
<SpamapS> # Run this script with the argument 'licence' to read the licence.
 * SpamapS adds a question to the Discuss page of the original script about its licensing
 * marcoceppi needs to figure out how to test the upstart script
<SpamapS> http://www.minecraftwiki.net/wiki/Talk:Server_startup_script#Licensing_of_this_script.3F
<SpamapS> marcoceppi: service minecraft start
<SpamapS> marcoceppi: want to pastebin it? You're in luck, I'm not just the charm expert, I'm also an upstart expert. ;)
<marcoceppi> Where do I put the upstart script?
<marcoceppi>  /etc/init.d/ ?
<SpamapS> /etc/init/minecraft.conf
<marcoceppi> oh god, the error messages!
<SpamapS> can't you just send them out to a logfile?
<SpamapS> exec java -foo minecraft.jar server.whatever >> /var/log/minecraft/minecraft.log 2> /var/log/minecraft/minecraft.err
<marcoceppi> http://paste.ubuntu.com/740560/
<SpamapS> are you *sure* it forks?
<SpamapS> oh and don't emit anything ;)
<marcoceppi> So, no forking, no emitting?
<SpamapS> 'started minecraft' is automatically emitted
<SpamapS> marcoceppi: since you ran it in screen before, I'm guessing there was no forking
<SpamapS> marcoceppi: also description can just be "Minecraft server" .. it will show in bootup as 'Starting: Minecraft server' and 'Started: Minecraft server'
<marcoceppi> Stupid blog post from 2005 led me astray
<SpamapS> ;)
<SpamapS> yeah trust nothing before 2010 regarding upstart
<marcoceppi> Everytime I try to "start minecraft" I get this
<marcoceppi> start: Rejected send message, 1 matched rules; type="method_call", sender=":1.173" (uid=1000 pid=6044 comm="start minecraft ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")
<SpamapS> thats because you're not root. :)
<marcoceppi> UGH, duh.
<SpamapS> make *sure* you removed the 'expect fork' btw
<marcoceppi> wow, that actually worked
<SpamapS> there's a bug, if you say 'expect fork' and it doesn't.. the job will be stuck forever.
<marcoceppi> cool, I removed it, it's all working now \o/
<SpamapS> also you don't need script
<SpamapS> chdir is a stanza
<SpamapS> so you can just do 'chdir /opt/minecraft' and then exec
<SpamapS> marcoceppi: \o/
<marcoceppi> sudo stop minecraft yields this:
<marcoceppi> stop: Unknown instance:
<marcoceppi> Here's the updated script
<marcoceppi> http://paste.ubuntu.com/740563/
<SpamapS> marcoceppi: is it still running tho?
<marcoceppi> yup
<SpamapS> so maybe it does fork. ;)
<SpamapS> just.. doesn't let go of stdout/stderr? hrm
<marcoceppi> it does fork.
<SpamapS> marcoceppi: so, to make it easier to read, maybe consider moving all the java -X args into their own env?
<marcoceppi> hey, expect fork - all is good now
<marcoceppi> one ENV for each, or one ENV to rule them all?
<SpamapS> oh
<SpamapS> no its forking because you left the &
<marcoceppi> Oh, so I don't need to push it to the background then?
<SpamapS> marcoceppi: http://paste.ubuntu.com/740568/
<SpamapS> that might work
<SpamapS> marcoceppi: no, upstart does that for you. :)
<marcoceppi> perfect
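Pulling SpamapS's corrections together (java kept in the foreground with no trailing '&' and no 'expect fork', 'chdir' as a stanza instead of a script block, output redirected to log files), a minimal /etc/init/minecraft.conf might look like the following; the memory flag, jar name, and log paths are illustrative guesses, not the charm's actual values:

```
description "Minecraft server"

start on runlevel [2345]
stop on runlevel [!2345]

# chdir is a stanza, so no script block is needed.
chdir /opt/minecraft

# Keep java in the foreground (no '&', no 'expect fork') so upstart
# supervises the right pid; send console noise to log files.
exec java -Xmx1024M -jar minecraft_server.jar nogui >> /var/log/minecraft/minecraft.log 2>> /var/log/minecraft/minecraft.err
```

Then `sudo start minecraft` / `sudo stop minecraft` manage it, as in the session above.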
<TheMue> Hi, today I've got an off-topic prob.
<TheMue> May anybody help me with unity ?
<SpamapS> TheMue: #ubuntu maybe?
<TheMue> Yep, sounds logical. *smile* But I'm starting #juju by default, so I used it for my first question. *lol*
<SpamapS> TheMue: by all means, be the first to create a unity charm! ;-) (using vnc server? ;-)
<TheMue> hehe, will start creating charms after Dec 1st. but this time my unity settings are messed up.
<TheMue> So, reset done, try a restart.
<marcoceppi> SpamapS: For the interface, do I need to do anything special with that?
<marcoceppi> charm proof gives me a warning, since I don't have relation hooks
<marcoceppi> Otherwise I've pushed up all the other changes, with the exception of unpack detection, to the bzr
<marcoceppi> I think I've found a way to settle the problem with detecting unpacking
<marcoceppi> SpamapS jcastro I think minecraft is ready for final review
<jcastro> whoa, that was fast!
<SpamapS> marcoceppi: bzr pulling now
<SpamapS> marcoceppi: nice. So do you want to do the official "promulgating" ?
<marcoceppi> Sure, might as well learn now
<SpamapS> marcoceppi: bzr push --remember lp:~charmers/charm/oneiric/minecraft/trunk && charm promulgate ... that should do it.
<marcoceppi> Will I always have to push to charmers/charm going forward?
<jcastro> SpamapS: marcoceppi: off to a lug meeting, I'll check in in a few
<marcoceppi> jcastro: \o
<jcastro> marcoceppi: just ping me when the blog post is ready/updated with the charm get
<marcoceppi> will do
<jcastro> and I'll suck it onto cloud.ubuntu.com
<robbiew> jcastro: that ThinkUp stuff is pretty sweet...would be sweet to get a charm for it
<jcastro> yeah
<jcastro> I put it on the list right away
<marcoceppi> Okay, just confirmed all the new upstart stuff works properly, promulgating
<SpamapS> marcoceppi: *saweet*
<marcoceppi> Awesome, pushed and working
<SpamapS> $ charm list|grep minecraft
<SpamapS> lp:charm/minecraft
<SpamapS> woot
<robbiew> awesomely unbelievable
 * marcoceppi pelvic-thrusts around the office
<robbiew> lol
<marcoceppi> Shouldn't charm get take a --repository flag?
<SpamapS> marcoceppi: no it only cares about official lp:charm charms
<SpamapS> marcoceppi: otherwise, just use bzr branch
<marcoceppi> It's just odd that it always branches into CWD; most every other charm command allows you to specify a directory
<SpamapS> ahhhh that
<SpamapS> good point
<marcoceppi> I'll throw up a bug against it, if you want
<SpamapS> almost done
<marcoceppi> cool! :D
<SpamapS> marcoceppi: truth be told, charm get is kind of.. not something we need.. just seemed missing, which is I think why negronjl wrote it
<marcoceppi> It's basically a wrapper for bzr branch lp:charm/<charm> right?
<SpamapS> marcoceppi: r82 pushed with repo name for charm get :)
<SpamapS> marcoceppi: right
<marcoceppi> <3
<SpamapS> should arrive as an update from the PPA tomorrow
<SpamapS> which reminds me, I need to upload charm-tools to Ubuntu
<m_3> marcoceppi: props!
<marcoceppi> m_3: Thanks! 1 down, many more to go :)
<SpamapS> looks like more errors building on precise..
<SpamapS> https://launchpadlibrarian.net/85303628/buildlog_ubuntu-precise-i386.juju_0.5%2Bbzr416-1juju1~precise1_FAILEDTOBUILD.txt.gz
<_mup_> Bug #891419 was filed: Juju fails test suite when building on precise <juju:New> < https://launchpad.net/bugs/891419 >
 * SpamapS goes on a bug triage rampage
<SpamapS> bcsaller: hey are you working on colocation?
<SpamapS> bcsaller: I'm thinking bug 805585 should be assigned to you as part of that
<_mup_> Bug #805585: Break the one service-unit per instance assumption <juju:New> < https://launchpad.net/bugs/805585 >
<bcsaller> SpamapS: I am working on it, called "subordinate services" now, less ambiguous
 * SpamapS imagines services lining up in neat rows being inspected by commanders. ;)
<SpamapS> bcsaller: is there another bug report already open? otherwise I'll morph this one into subordinates
<bcsaller> SpamapS: I don't have one for a devel branch, I can map the existing one to it when I'm ready for the first push
<SpamapS> Ok, basically re-wrote the bug.. bug 805585 is yours now ;)
<_mup_> Bug #805585: Policy charms should be able to be deployed along side other charms inside the same machine/container. <juju:In Progress by bcsaller> < https://launchpad.net/bugs/805585 >
 * m_3 wishes there was a way to give the buildd a little kick in the butt... tick tock
<SpamapS> m_3: there is, his name is lamont.. #launchpad .. ;)
<SpamapS> m_3: the buildd's are all really busy right now finishing a massive transition to perl 5.14 in precise.. that should be done soon tho
<m_3> SpamapS: ah, gotcha... thanks for the info
<SpamapS> m_3: building voltdb?
<m_3> no, fixing a stupid ganglia bug... I only tested the last patch in the openstack demo scenario and it wasn't general enough... doh!
#juju 2011-11-17
<niemeyer> Aaaaand, that's it.. lbox is ready for inline reviews on Launchpad.
<niemeyer> Now, package and docs..
<hazmat> niemeyer, nice
<hazmat> SpamapS, thanks for doing the bug weeding
<noodles775> Hi, any reason why I'd start seeing the following on juju units? http://paste.ubuntu.com/741073/
 * noodles775 checks the mail list for api changes.
<noodles775> seems JUJU_AGENT_SOCKET is no longer set?
<hazmat> noodles775, the juju cli api is only setup for automatic usage from a hook or a debug hook terminal.. other uses need to setup variables to connect to the remote end
<hazmat> we should probably default these though to their common known values
<fwereade> hazmat, may I borrow you for a quick pre-review?
<fwereade> hazmat, http://paste.ubuntu.com/741112/ has the changes I've made for upstartification; they're verified but not yet tested and I wanted to check if there were any serious issues before I go too much further
<fwereade> hazmat, the first 270 lines are the important ones, the rest are consequential changes to test data
<fwereade> bbs lunch
<rog> niemeyer: mornin'
<mainerror> Hello.
<hazmat> fwereade, functionally it looks fine, except the stop handling needs work; i'm still wondering if it wouldn't be better served by a more complete abstraction of upstart config
<hazmat> er .. stop needs error handling
<fwereade> hazmat, good point re stop
<hazmat> some of the use cases are pretty general purpose outside of existing agents, starting containers, zookeeper, webservers.. etc.
<fwereade> hazmat, I'm reluctant to do that until those use cases are actually in play
<fwereade> hazmat, although that makes me think
<hazmat> fwereade, those cases exist now, and could be simplified via upstart
<fwereade> hazmat, actually no it doesn't
<hazmat> fwereade, much of the local provider code is really just process management stuff that could go away
<fwereade> hazmat, ah, ok, I haven't really dug very deep into that
<fwereade> hazmat, I'll poke around for further opportunities to use it then, better abstraction should come from that quite naturally
<fwereade> hazmat, thanks
<hazmat> fwereade, a straight conversion of http://upstart.ubuntu.com/cookbook/#id119 would look like the bootstrap, with a **kw constructor for convenience
<hazmat> its not a direct 1-1
<hazmat> but it would cover all our use cases
<hazmat> maybe
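A rough sketch of what such a conversion could produce — the kind of upstart job an "upstartified" agent might install. The job name, description, and exec line here are hypothetical (the real template would live in juju itself), and it is written to the current directory rather than /etc/init so the sketch is runnable without root:

```shell
#!/bin/sh
# Hypothetical upstart job for a unit agent; a real deployment would
# write this to /etc/init/ instead of the current directory.
cat > ./juju-unit-agent.conf <<'EOF'
description "juju unit agent (sketch)"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/bin/python -m juju.agents.unit --nodaemon
EOF
```

The `respawn` stanza is what makes upstart attractive here: the agent process is restarted automatically on arbitrary termination, which is exactly the case discussed below around workflow transitions.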
<niemeyer> rog: Yo
<hazmat> well definitely.. but not clear we really need it
<niemeyer> hazmat, fwereade: Mornings as well
<hazmat> niemeyer, greetings
<niemeyer> fwereade: Well, aft. for you :)
<fwereade> hi niemeyer
<fwereade> (details, details)
<niemeyer> rog: Thanks for the log.. I think there's much better heuristic I can use to find out the proper URLs
<niemeyer> rog: I'll try that now
<rog> niemeyer: cool.
<jcastro> hazmat: nice graphs, <3
 * niemeyer => lunch
 * mpl <- cookies
<mpl> :)
<fwereade> hey guys, cath's stuck in the lift and laura is somewhat distraught, back later
<niemeyer> fwereade: Ouch
<SpamapS> https://blueprints.launchpad.net/ubuntu/+spec/servercloud-p-juju-charm-testing
<SpamapS> This has the juju team as the drafter..
<SpamapS> I went ahead and added a few work items that I think will be part of it, but it would be good for somebody from the juju team to review them and change the Definition from Discussion to Review once that is done.
<niemeyer> rog: Alright, I think I'm done with lbox..
<rog> niemeyer: yay!
<niemeyer> Unfortunately codereview seems down for maintenance.. *now* from all times
<rog> niemeyer: damn, i remember it was announced.
<niemeyer> rog: The heuristics are much better now
<niemeyer> Simpler too..
<rog> niemeyer: good. nothing like someone actually using some heuristics to sort 'em out...
<niemeyer> rog: Yeah, definitely
<rog> niemeyer: BTW currently i'm trying to get one of the simplest tests in ec2 to run against my test server.
<niemeyer> rog: Cool, that's awesome
<rog> niemeyer: it's by no means awesome yet :-)
<niemeyer> rog: Ugh :)
<rog> niemeyer: and the code is grungy
<rog> niemeyer: here's a current snapshot of the server code: http://paste.ubuntu.com/741436/
<niemeyer> rog: This is quite involved already
<rog> niemeyer: all it does currently is allow a client to induce a failure (FailNow) and to inspect the set of actions afterwards.
<rog> niemeyer: i know, i'm not entirely happy
<rog> niemeyer: i can't think of a way of making it simpler though, while keeping our current approach
<niemeyer> rog: I mean, even in terms of features
<niemeyer> rog: We should be able to have a simpler test case that doesn't involve all of those calls
<niemeyer> rog: To get started
<niemeyer> rog: E.g. whatever call reads the machine state
<rog> niemeyer: most of those calls are currently unimplemented
<rog> niemeyer: i was after getting instance starting and stopping working
<niemeyer> rog: The overall direction looks interesting, though
<rog> niemeyer: i need DescribeInstances and then i think i'll have enough for a juju machine
<niemeyer> rog: // Address returns the URI of the server.
<niemeyer> rog: I wonder about this one
<rog> niemeyer: is that a problem?
<niemeyer> rog: It feels useful for sure
<rog> niemeyer: i don't want to clash with other potential services
<niemeyer> rog: But we can't expose that to the tests, otherwise we're binding them to a particular implementation
<jcastro> m_3: ~45 minutes until that call
<niemeyer> rog: The backend semantics may need multiple URLs, etc
<rog> niemeyer: i don't think so - we just make an environments.yaml that includes that string
<niemeyer> rog: Right, something like that should be cool
<rog> actually, we allow registration of a new region
<rog> which includes the url
<niemeyer> rog: FailNext may not be enough..
<niemeyer> rog: interactions will generally take several roundtrips
<rog> niemeyer: yeah, it's just my placeholder for whatever is needed
<niemeyer> rog: The first one may not be interesting to fail
<rog> niemeyer: it's enough for my first test
<niemeyer> rog: Cool
<niemeyer> rog: +1
<niemeyer> rog: Looks nice overall
<rog> niemeyer: but in general it's an interesting question
<rog> niemeyer: the testing code can't know how many requests will be generated
<niemeyer> rog: Some nastiness will go away when Go is fixed
<niemeyer> Go's xml, that is
<niemeyer> rog: That's right
<niemeyer> rog: We'll need to think through that one a bit
<rog> niemeyer: yeah, i think xml could do with a thorough rethink
<niemeyer> rog: It sounds like failure scenarios are too bound to the backend
<rog> niemeyer: yes
<niemeyer> rog: We'll probably need a richer interface on the fake server, and provider-specific tests
<niemeyer> rog: Otherwise we'll be spending years on that framework creating something extremely involved
<rog> niemeyer: for example ec2 generates errors in a particular form
<niemeyer> rog: yeah, and on particular calls, etc
<rog> niemeyer: hmm. a possible idea:
<rog> niemeyer: given that hardly any code reacts to the actual content of an error
<rog> niemeyer: and that most of the stuff we're testing will be single threaded and therefore deterministic
<niemeyer> rog: Both of these assumptions don't seem necessarily valid, FWIW
<rog> niemeyer: you could have a scheme where you run a test once, record the number of operations, then run it again several times failing at a different place each time
<rog> true..
<niemeyer> rog: See the "spending years" comment above.. :)
<rog> yeah
<rog> hmm. anyway,  i think the important thing is the constructive tests.
<rog> we can do provider specific tests for provider specific error scenarios
<jcastro> yep
<niemeyer> rog: Agreed
<m_3> petecheslock: great talking to you guys again... I'll get portertech some examples
<petecheslock> m_3: sounds great - we're excited to bring sensu to more automation platforms.
<hazmat> this is cool http://news.opensuse.org/wp-content/uploads/2011/11/snapper.png
<hazmat> btrfs snapshot visualization
<hazmat> imagine hooks doing btrfs snapshots, with introspectable  diffs
<hazmat> not quite dry run, but a different level of introspection
<jrgifford> i do **not** need to do `sudo juju bootstrap`, correct?
<jrgifford> just a normal juju boostrap, from my normal user?
<m_3> jrgifford: correct, ordinary user
<marcoceppi> jrgifford o/
<jrgifford> m_3: thanks
<jrgifford> marcoceppi: ?
<mainerror> He meant that there is no such thing as a normal user since it would imply the existence of an abnormal user which doesn't make sense. :)
<portertech> due to being a ruby whore, I require ruby to be present in order to write my hooks in ruby
<portertech> is there a common solution in this case, for charms?
<portertech> check if installed, if not, install, a ruby charm out in the wild?
<portertech> just do an apt-get install then? ;)
<brunopereira81> hello, I need some help setting up a install script for a juju charm
<jrgifford> portertech: hey, another ruby charmer. :P
<jrgifford> i was actually wondering the same thing...
<fwereade> portertech, an apt-get install in your install hook should be fine
<fwereade> brunopereira81, I'll help if I can, but I know a lot more about the code than about the charms
<portertech> jrgifford fwereade: yeah, doing a quick which & $? to determine if it's present, installing if not, then go have a ruby party
<fwereade> portertech, perfect
<jrgifford> portertech: thanks for the suggestion
<brunopereira81> fwereade, I think I got it, thx
<fwereade> brunopereira81, cool
<portertech> tricky way: [[ ! `which ruby` ]] && apt-get -y install ruby
<portertech> so is the install hook the default? what's the standard way to call another hook from it?
<brunopereira81> I have a config.yaml file in my charm folder, how can I read an key from it to use in my install hook?
<portertech> fwereade: would I "$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"/ruby_hook from within the install hook?
<portertech> must be a sexy way of doing it
<fwereade> sorry guys, distracted
<fwereade> you should be able to get your config stuff via relation-get
<fwereade> which is available to all relation-related hooks
<brunopereira81> so for instance if I want to get a token string from my config.yaml i just need call 'relation-get token'?
<fwereade> brunopereira81, sorry, needed to refamiliarise myself
<fwereade> that should be config-get
<fwereade> and that should be available in all your hooks including start and install (which aren't anything to do with relations)
<brunopereira81> fwereade, thx
<fwereade> sorry brainfart :)
<brunopereira81> lol, np
<fwereade> portertech, my bash-fu is weak :)
<fwereade> portertech, as a suggestion, you *probably* don't need to do a great deal of work in the install hook; and if the install hook is just a bunch of "apt-get install"s, subsequent hooks (like start) can just be ruby
<portertech> yeah, since install only runs once
<portertech> nothing special
<fwereade> portertech, indeed, you'll get an "install" and a "start" and then it's all about the relations
<fwereade> oh, I think there's a config-changed one as well, let me check
<fwereade> yep, it's "config-changed"
<portertech> does anyone use facter or ohai w/ there charms? :P
<portertech> where a large amount of system info is required
<fwereade> portertech, we're very deliberately agnostic about what tools you use within the hooks
<fwereade> portertech, I don't know for sure what people are using though
<portertech> lots of freedom
<portertech> i dig it
<fwereade> :D
 * hazmat catches up
<SpamapS> portertech: yes there are charms that use facter already
<brunopereira81> after creating a charm what is the easiest way to test it?
<SpamapS> portertech: btw, for the "run another hook" case I use this
<SpamapS> home=`dirname $0`
<SpamapS> $home/hookname
<hazmat> $CHARM_DIR is defined as well
<portertech> SpamapS: cool
 * SpamapS is all about self reliance :)
 * SpamapS heads off to do some of the wife's bidding
<brunopereira81> (nvm, found how)
<hazmat> most of these formulas use facter.. http://charms.kapilt.com/~negronjl
<hazmat> albeit just as a k/v store, which could be done with just about any tool..
<_mup_> Bug #891868 was filed: juju cli api should be invokable outside of units  <juju:New> < https://launchpad.net/bugs/891868 >
<portertech> hazmat: is $CHARM_DIR the current charm dir absolute path?
<portertech> or just a user specified var
<m_3> portertech: that's the charm path (/var/lib/juju/units/<unit-name>/charm/ on the deployed nodes)
<portertech> m_3: ah, ok, thanks
<m_3> and CWD during hook exec
<m_3> I usually just use relative
<m_3> portertech: incidentally when debugging... /var/lib/juju/units/<unit-name>/charm.log is useful (along with /var/log/juju/unit-agent.log)
<m_3> (from memory, but something like that)
<portertech> so if i call another script in the current dir from within install, i could do $CHARM_DIR/script?
<portertech> or ./
<m_3> I guess I use [[ -f "$(dirname $0)/common.sh" ]] && source "$(dirname $0)/common.sh"
<m_3> but that's probably easier with CHARM_DIR now
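Putting the two patterns above together — a sketch of a hook preamble that works both under the unit agent (where CHARM_DIR is set) and when a hook is run by hand:

```shell
#!/bin/sh
# Prefer $CHARM_DIR/hooks when the unit agent provides it; otherwise
# fall back to the directory this hook lives in.
hook_dir=${CHARM_DIR:+$CHARM_DIR/hooks}
hook_dir=${hook_dir:-$(dirname "$0")}

# source shared helpers if present
if [ -f "$hook_dir/common.sh" ]; then
    . "$hook_dir/common.sh"
fi

# run a sibling hook from here, e.g.:
#   "$hook_dir/config-changed"
```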
#juju 2011-11-18
<m_3> portertech: or in a ruby example hook http://paste.ubuntu.com/741757/
<m_3> (where charm-tools.rb is in the hooks dir)
<portertech> require 'charm-tools' <-- how does this get installed?
<portertech> its not part of the standard charm tools is it?
 * portertech will bbiab
<m_3> no, not yet, it's a file in the /hooks dir
<portertech> ah, ok
<portertech> looks cool
<m_3> still in progress :)
<m_3> idioms, best-practices, tools for charms are evolving
<m_3> we've let them run rampant for now... consolidating into charm-tools as we can
<jcastro> jrgifford: hey are you working on a charm or just helping bruce?
<jrgifford> jcastro: both
<jrgifford> trying to get rvm to play nice with charms
<jrgifford> so we can have a ruby/rails charm that is sane. :P
 * jcastro nods
<jimbaker> now i know why my desktop has been locking up, somehow my puppy does this by brushing by it just so. maybe fur + static electricity + ...
<m_3> jrgifford: do you have a repo to follow on that?  I'd love to see rvm play nicely
<jrgifford> m_3: not right now.
<jrgifford> but once i'm happy with it, yes, there will be one.
<m_3> jrgifford: awesome
<jcastro> hey so here's the general ruby bug: https://bugs.launchpad.net/charm/+bug/799879
<_mup_> Bug #799879: Charm needed: rails (framework) <developers> <new-charm> <juju Charms Collection:In Progress by mark-mims> < https://launchpad.net/bugs/799879 >
<jcastro> do we need another one?
<jrgifford> i don't think so
<m_3> just tack stuff onto that one
<m_3> that charm uses package ruby(s) and then bundler for rails
<jrgifford> i marked that as affecting me, and subscribed.
<m_3> it'd be great to see that use rvm instead... been on the todo, just lower on the list
<jrgifford> i'm hacking railsready to start with...
<jimbaker> hazmat, i just saw the bug you posted (bug 891868). i wonder if it would be better if this were done by requesting an agent to run a hook however
<_mup_> Bug #891868: juju cli api should be invokable outside of units  <juju:Confirmed> < https://launchpad.net/bugs/891868 >
<m_3> feel free to branch and update that at will
<jcastro> jrgifford: is there any rails stuff you deploy that you think we should charm up? (thinking ahead)
<jrgifford> jcastro: redmine.
<m_3> nice
<jcastro> oh right
<jrgifford> that's about the only thing that people normally do.
<jrgifford> other than that, everything would be custom
<jcastro> ok I'm going to file a bug on redmine
<m_3> tried to make that a config parameter (i.e., repo to pull your app from)
<m_3> but that's a lot of work to maintain in a general way
<m_3> bundler helps that tons!
<m_3> rvm woudl add to it too
 * jrgifford gets off IRC to go hack on rvm and charms
<jrgifford> if i run into problems, i'll be back. :P
<jcastro> <3
<jrgifford> jcastro: ok, i
<jrgifford> i'm at a point where i can test it
<jrgifford> where can i find the command to deploy from the repo?
<jcastro> you push it to your branch
<jrgifford> ok.
<jcastro> and then just run it with --repository <where it is on disk>
<jcastro> https://juju.ubuntu.com/Charms
<m_3> jrgifford: there's a step-by-step http://paste.ubuntu.com/741778/ too
<m_3> it's for another stack of charms, but the basics are the same
<jcastro> I found a cool ruby thing I used to use, I filed a bug on it: http://getontracks.org/
<jcastro> m_3: we should schedule a charm school in here for next week
<jcastro> for the new people from today
<m_3> jcastro: they're gonna be finished with their charms over the weekend though!
<jrgifford> ok, juju deploy is complaining about too few arguments
<m_3> jcastro: wonder if they'll be willing to walk others through the charms they create this weekend?
<jcastro> yeah so things like that. Or improving existing charms
<jcastro> or, have them review each other's charms
<jcastro> jrgifford: what exactly are you typing?
<m_3> jrgifford: juju deploy --repository <somedir> local:<charmname>
<m_3> (the "local" is a namespace implying the charm's coming from "<somedir>/oneiric" on your filesystem)
<jrgifford> tried m_3's suggestion and got something different
<jrgifford> now i get :
<jrgifford> $ juju deploy --repository ~/code/charms/oneiric/ local:rails
<jrgifford> Charm 'local:oneiric/rails' not found in repository /home/jrg/code/charms/oneiric
<jrgifford> 2011-11-17 19:26:52,573 ERROR Charm 'local:oneiric/rails' not found in repository /home/jrg/code/charms/oneiric
<jrgifford> $
<m_3> juju deploy --repository ~/code/charms local:rails
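The layout that makes that invocation work, as a runnable sketch (the real path from the session was ~/code/charms; a temp dir stands in for it here, and the juju command itself is shown as a comment since it needs a bootstrapped environment):

```shell
#!/bin/sh
# --repository points at the directory *above* the series directory;
# the charm name follows the "local:" namespace.
repo=$(mktemp -d)              # stand-in for ~/code/charms
mkdir -p "$repo/oneiric/rails/hooks"

# then:
#   juju deploy --repository "$repo" local:rails
```

This is exactly the mistake above: passing the series directory (~/code/charms/oneiric) as the repository makes juju look for "oneiric/rails" inside it and fail.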
<jrgifford> oh lord that worked.
<m_3> !!
<jrgifford> thanks!
<m_3> np
<jrgifford> ok, so how do i figure out if it compiles successfully?
<jrgifford> is there a juju log i can watch?
<m_3> juju debug-log
<m_3> just 'juju' should give you a list of subcommands... there're lots of useful ones... debug-hooks for instance rocks!
<jrgifford> awesome, i'll look into those next
<jcastro> yeah debug-hooks is basically awesome
<mainerror> Is it a problem if the service I'm trying to create a charm for is not GPL licensed? This is its license. http://github.com/pyrocms/pyrocms/zipball/v1.3.2
<mainerror> What I actually mean is does this license just go into the copyright file in my charm?
<jcastro> I think so, but I'm not an expert
<jcastro> we just talked about licensing this morning
<jcastro> SpamapS: ^^^^
<m_3> mainerror: short story, yes... it'll just affect the tags your charm gets in the repo
<mainerror> Alright. Thanks. :)
<jcastro> pyro looks pretty awesome
<m_3> mainerror: have to look at the license in detail though... of course if you can't distribute it, we can't add it to the repo
<m_3> but you can still charm anything (even binary blobs)
<jrgifford> ooooohhh....
<jcastro> https://github.com/pyrocms/pyrocms/blob/2.0/develop/LICENSE
<jrgifford> so charming is different from packaging in that you can do binaries... interesting...
<hazmat> jimbaker, its more the case of either invoking the cli api from an ssh connection or a co-located unit; the latter has its own unit agent though.. the general problem is a unified api with different targets depending on context, and the default is no context.. documenting how as a faq is sufficient as well; a client id isn't justifiable atm (constant)
<jcastro> jrgifford: you can do whatever you want, if you have local charms that toss out unholy binary blobs everywhere, then sure.
<jrgifford> lol
<m_3> jrgifford: yup... and pull from repos if you want
<jcastro> jrgifford: though of course we want to start out with awesome OSS stuff for people to build on
<jrgifford> naturally
<m_3> jrgifford: we'll classify (and tag) charms according to the license for the charm itself and the bits that it includes
<m_3> possibly different licenses
<jimbaker> hazmat, sounds good if we simply doc this as not valid. i think it made sense in our earlier thinking, but restricting these commands to only run in hooks seems to be the right way imho
 * hazmat prefers a cms with presentation tier as management tier
<jrgifford> ok, question - does juju deploy as a user with root privs?
<jimbaker> but i do think there's a need to trigger hook execution, as i mention in my comment on bug 891868
<_mup_> Bug #891868: juju cli api should be invokable outside of units  <juju:Confirmed> < https://launchpad.net/bugs/891868 >
<hazmat> jimbaker, i don't see why we want to restrict it, its useful for integrations
<hazmat> and the cli api is juju's api
<m_3> jrgifford: hooks run as root if that's what you're asking... everything on your client runs as a regular user
<hazmat> jimbaker, hooks are readily executable
<jrgifford> m_3: yes, that was what i was asking. thanks.
<hazmat> hook contexts not so much
<jimbaker> hazmat, as i understand it, there's some usage out there that basically requires the use of juju set to trigger execution. this seems like an abuse of the mgmt api
<hazmat> redefining framework defined hooks for arbitrary usage is questionable imo, it violates any usage/invocation semantics around hooks
<jimbaker> don't want to have to do the integration on a client machine
<hazmat> jimbaker, custom hooks would be a better way to solve that
<hazmat> rather than allowing arbitrary usage association to hooks with well defined semantics
<jimbaker> hazmat, agreed
<mainerror> m_3: You can distribute it freely if you keep the license file "intact".
<jimbaker> so call it "external" or "trigger" or something like that
<hazmat> jimbaker, the hook cli api as a universal though seems pretty useful.. it has a well-defined semantic to retrieve information, if the context is not hook specific, then it should default to being usable anywhere.
<m_3> mainerror: awesome
<hazmat> i don't see security concerns per se, there's still standard unix permissions on the socket (ie root)
 * m_3 food
<jimbaker> hazmat, so long as we preserve support for ACLs at the ZK level, it's fine with  me
<jimbaker> and i guess that is assumed
<mainerror> I'm using the Wordpress charm as a reference but I noticed that the "Writing a charm" wiki page differs from the Wordpress charm install script. Basically in the tutorial the hook installs Apache and all necessary packages but the Wordpress charm doesn't. Did that change?
<jrgifford> mainerror: i think the wordpress charm installs the dpkg. (not sure though)
<george_e> It's charming time! Where do I begin? I have 11.10 32-bit installed in a VM.
<mainerror> jrgifford: You mean the wordpress package has the other packages as a dependency?
<jrgifford> i think so.
<hazmat> jimbaker, indeed. i've been trying to keep an eye out for acl issues; the biggest thing is just maintaining the principal hierarchy for unit creation, and not creating too much ambiguity around ownership or usage of node paths.. nothing's really changed so far
<mainerror> Makes sense.
<jrgifford> george_e: here - https://juju.ubuntu.com/Charms
<george_e> Thanks.
<jrgifford> or, just wait for jcastro  to point you to something even more awesome. :P
<jcastro> oh awesome, welcome george_e
<george_e> Hello!
<george_e> I love the Sphinx-generated documentation.
<jcastro> yeah, hot action
<jcastro> ok so you need to follow the first part of the docs
<jcastro> and set it up for local development
<george_e> Okay, I'll start there.
 * george_e switches to VM...
<SpamapS> mainerror: fyi, that license looks quite free, its close to the PHP license.
<jcastro> george_e: http://askubuntu.com/questions/65359/how-do-i-configure-juju-for-local-usage
<jcastro> hazmat: you need to steal that and put it in the docs ^ since right now it only has EC2 in there
 * george_e waits for packages to install.
<jcastro> george_e: the next steps are here: https://juju.ubuntu.com/Charms
<jcastro> you can start reading that
<george_e> Okay - it will take some time to install everything - my connection is slow and the VM is slow :P
<jcastro> and for someone like you, probably just showing you the code is the trick: http://bazaar.launchpad.net/~charmers/charm/oneiric/mediawiki/trunk/files
<jcastro> browse that
<george_e> Sure.
<jcastro> you'll basically do a charm create
<jcastro> and modify to your needs.
<jcastro> george_e: brunopereira81 and jrgifford are new today too, so we're swimming with new questions etc, so feel free to ask everything
<george_e> I will... and it's on-topic to ask charm-creation questions on AU, right?
<jrgifford> george_e: yup!
<george_e> Maybe when I'm done, this should be another tutorial for 2buntu :)
<george_e> (Hey, that's how we got the manpage and makefile tutorial.)
<jrgifford> yes please! :D
<george_e> We'll see how it goes.
<george_e> If things go well, maybe I'll create a charm for JetHTTP
<brunopereira81> how do you test if everything is ready to run with your charm?
<jcastro> check with with "charm proof"
<jcastro> and then just deploy it locally
<jcastro> jrgifford: heh, we should start snagging the questions from the past few hours and just put them in the charm tag
<jrgifford> hey! free rep! :D
<jcastro> until we started the docs seemed complete to me
<jrgifford> er, "free" :P
<jcastro> now it's like "crap, we're missing a bunch of stuff"
<jrgifford> yeah
<mainerror> First one is already up. http://askubuntu.com/questions/80323/how-can-i-test-a-juju-charm
<mainerror> We are going to hear that one quite often I think.
<brunopereira81> I get " W: relation server has no hooks" on charm proof
<jrgifford> mainerror: you have two answers. :P
 * george_e is still waiting for packages to install...
 * george_e twiddles thumbs...
 * jrgifford twiddles thumbs with george_e while he waits for his charm to work....
 * george_e is watching a whole bunch of Debian certificates get installed...
<jrgifford> george_e: you're almost done, that's like the last thing that gets installed
<george_e> Yay! Time for rebooting.
<george_e> Would a PHP library (used by other apps) be packaged as a charm?
<jrgifford> if you can use it, you should charm it.
<george_e> Lol... I'll add Stack.PHP to my list of charms to create then.
<brunopereira81> I get " W: relation server has no hooks" on charm proof, help plz :S
<jrgifford> i think we're all looking at it trying to figure out what it means brunopereira81
<brunopereira81> when I run "charm proof <path_to_charm_folder>" it returns that error
<mainerror> Here is the second question. http://askubuntu.com/questions/80327/charm-proof-issues-a-warning-w-relation-server-has-no-hooks
<george_e> Gotta restart here...
<m_3> brunopereira81: I'd guess that means that you have a relation specified in metadata.yaml but no hooks defined for that relation
<brunopereira81> can you please have a look, im looking at the rest of packages but not seeing what is wrong http://bazaar.launchpad.net/~brunopereira81/+junk/teamspeak/files
<m_3> brunopereira81: it's ok... just a warning
<m_3> brunopereira81: charm proof requires a relation in metadata.yaml, but there's no need to implement hooks for this particular relation
<m_3> brunopereira81: just like minecraft
<brunopereira81> I have tried deploying to local I can see the in status that it was deployed but no activity and debug window is empty, how to move from here?
<jrgifford> brunopereira81: wait
<jrgifford> mine didn't kick in for about 5 minutes
<jrgifford> (well, it didn't show up in the logs)
<m_3> local can take a _long_ time to come up the first time
<m_3> (it's caching an image)
<jcastro> (but afterwards it has the debs in a cache so you'll be good to go)
<mainerror> m_3: If you want you can answer that server relation warning question AU as well. http://askubuntu.com/questions/80327/charm-proof-issues-a-relation-server-has-no-hooks-warning
<fwereade> morning niemeyer
<wrtp> morning fwereade
<niemeyer> fwereade, wrtp: Mornings!
<niemeyer> I'm heading off to bus/airport.. have a fantastic weekend and week folks
<fwereade> have fun niemeyer, see you soon
<wrtp> niemeyer: dammit! https://codereview.appspot.com/5417045/
<wrtp> niemeyer: have a wonderful holiday
<niemeyer> wrtp: How amazing! 8)
<wrtp> niemeyer: you are my hero
<niemeyer> fwereade, wrtp: Will have rudimentary connectivity meanwhile, but no laptops :-)
<niemeyer> (should make me more honest ;-)
<wrtp> niemeyer: all my net connectivity is through my mobile phone right now
<niemeyer> Cheers folks!
<wrtp> niemeyer: the cable to the entire street has been broken and then buried 6 feet deep!
<wrtp> i was amazed that my mobile bandwidth was sufficient for me to watch a tv programme over it last night
<wrtp> fwereade: so there it is - inline comments requested!
<fwereade> wrtp, cool, I'll try to get to it soon
<fwereade> wrtp, but I've broken something that should be trivial and I'm not quite done banging my head against it
<wrtp> fwereade: i hate it when that happens
<fwereade> wrtp, for some reason this whole feature has been one of those after another :/
<wrtp> fwereade: which feature are you on?
<fwereade> wrtp, upstartification of agents
<wrtp> fwereade: funny, it *does* seem like that would be fairly independent and not too hard
<wrtp> (not that i know anything at all about upstart)
<fwereade> wrtp, indeed, there isn't much that's actually difficult in any way
<m_3> fwereade: did everybody get out of the lift yesterday?
<fwereade> m_3, yeah, it was cath alone in there, but there for 45 mins or so :/
<m_3> fwereade: sounds like excitement
<fwereade> m_3, they came and had a proper look at it today, as near as I can tell
<fwereade> m_3, yeah :/
<m_3> hope there's not too many stairs to deal with for a while :)
<fwereade> m_3, heh, 6th floor, was fun taking water up there this morning
<fwereade> fixed now though
<marcoceppi> Is it preferred to use the deb for an install, or the latest stable from upstream?
<m_3> marcoceppi: deb's preferred, just because it's been vetted by security team and tested... but one of the cool features of charms is the option to just use the latest stable from upstream
<m_3> marcoceppi: give me enough rope... :)
<marcoceppi> heh
<jcastro> didn't we decide on a best practice on this?
<jcastro> like a config flag to use a package or just pull from upstream?
<marcoceppi> That would be easy enough?
<jcastro> I could have sworn that's what we talked about at UDS
<jcastro> and then by default we use the packaged one
<jcastro> brunopereira81_: ok, I can test it
<brunopereira81_> https://code.launchpad.net/~brunopereira81/+junk/teamspeak
<brunopereira81_> ;) thx
<jcastro> can someone help brunopereira81_ figure out what's wrong with his local instance?
<jcastro> I'll start testing the charm on my box
<jcastro> brunopereira81_: oh, one thing to check
<jcastro> try with a known working charm first
<brunopereira81_> will do as soon as I have the chance
<jcastro> in the dir you specified for it to keep the local stuff in environments.yaml there is a log
<jcastro> mine is in: ~/juju/jorge-sample/units/teamspeak-0
<jcastro> ok
<jcastro> found some errors
<jcastro> http://pastebin.ubuntu.com/742363/
<brunopereira81_> that's normal :D I'll sort it out but haven't been able to actually run it till now
<jcastro> you want apt-get install -y sqlite3 or whatever
<jcastro> so that it just installs it without prompting you for all that other crap
<brunopereira81_> and wget is missing aparently
<jcastro> heh
<brunopereira81_> will fix it tonight (3 - 4 hours from now here) and try to get this running in another computer, after its done I'll let you know, if possible maybe we can do this then. thx!
<jcastro> woo
<jcastro> we'll be around!
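The non-interactive install fix jcastro suggests can be sketched as a minimal install hook. This is illustrative only: the package list comes from the errors discussed above, not from the actual teamspeak charm.

```shell
#!/bin/sh
# Minimal install-hook sketch: install dependencies without any prompts.
# Package names are illustrative, based on the errors discussed above.
set -e

# -y auto-confirms the install; DEBIAN_FRONTEND=noninteractive
# suppresses debconf questions ("all that other crap").
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install -y sqlite3 wget
```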
<_mup_> Bug #892236 was filed: Don't ship example charms in /usr/share/doc <juju:New> < https://launchpad.net/bugs/892236 >
<fwereade> hazmat, do you recall the "restart" Transition in UnitWorkflow?
<fwereade> hazmat, because it goes from "stop" to "start", and I naively believe that perhaps it ought to be from "stopped" to "started"
<hazmat> fwereade, hmm.
<hazmat> indeed
<hazmat> ugh.. missing tests
<hazmat> fwereade, indeed it should be started to stopped
<fwereade> hazmat, heh, I'm just trying to get this all to work, I'll be slathering all my changes in tests as soon as I have something that actually does what I want ;)
<fwereade> hazmat, cool, thanks
<hazmat> fwereade, for an upstart case and arbitrary process termination.. i don't think we can assume the graph is in any place.. ie. you'll need a started->started transition as well to execute the start hook
<hazmat> ie. if the graph is in a non error state then resume
<fwereade> hazmat, yeah, I'm taking baby steps still, I kinda want to get everything working in just one case first
<fwereade> hazmat, if I try to make everything work before I can handle "just restart the machine" I'll take forever
<hazmat> fwereade, fair enough
<noodles775> Hi! I'm developing a charm using lxc, and was surprised to receive the following during db-relation-changed (when related to a postgresql service):
<noodles775> {u'private-address': u'192.168.122.9', u'host': u'localhost', u'password': u'dRqKZi4xNjCk0*****', u'user': u'apache-django-wsgi', u'database': u'apache-django-wsgi'}
<noodles775> The postgresql readme doesn't mention private-address, nor why host would be localhost (even for an lxc container, I assumed it'd be 192.168.122.9 - the IP of the unit?)
<_mup_> Bug #892254 was filed: SSHClient does not properly handle txzookeeper connection errors <juju:New> < https://launchpad.net/bugs/892254 >
<jimbaker> (i've pulled this bug out because its solution has so far eluded me and while annoying, doesn't actually impact on making a robust connection to ZK)
<hazmat> noodles775, private-address is provided by juju automatically
<hazmat> noodles775, that version of the postgresql charm isn't able to reliably resolve the container hostname it seems
<m_3> noodles775, hazmat: that charm is still out of date
<m_3> I'm updating nfs and varnish at the moment... I'll clean up pgsql next week
<m_3> noodles775: please try the latest version and let me know if that resolves correctly in an lxc container
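For reference, the settings noodles775 pasted are what a db-relation-changed hook reads via relation-get. A hedged sketch — the key names are taken from the pasted dict, everything else is illustrative:

```shell
#!/bin/sh
# db-relation-changed sketch: read the settings the remote unit exposed.
# private-address is set automatically by juju; the other keys come from
# the postgresql charm, as in the dict pasted above.
set -e

host=$(relation-get private-address)
user=$(relation-get user)
password=$(relation-get password)
database=$(relation-get database)

juju-log "configuring app for db $database on $host as $user"
```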
<matrix3000> hey guys
<matrix3000> juju doesn't need ubuntu-orchestra does it?
<raphink> matrix3000: iirc it does for bare-metal deployments
#juju 2011-11-19
<SpamapS> matrix3000: raphink is correct, orchestra is for building your own cloud basically. If you have access to an openstack deployment, or you want to pay amazon, you can just use the ec2 provider.
<SpamapS> matrix3000: in theory it will work with Eucalyptus too
<marcoceppi> So, building a charm and since co-location is in the works I'm adding a configuration option to switch between using mysql interface and local sqlite
<marcoceppi> Is this feasible
<_mup_> juju/robust-zk-connect r417 committed by jim.baker@canonical.com
<_mup_> Test normal (fast path) connection to ZK
<_mup_> juju/robust-zk-connect r418 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/robust-zk-connect r419 committed by jim.baker@canonical.com
<_mup_> Removed redundant tests for EC2 connect (in common anyway)
<_mup_> juju/robust-zk-connect r420 committed by jim.baker@canonical.com
<_mup_> Removed redundant orchestra connection tests
<_mup_> juju/robust-zk-connect r421 committed by jim.baker@canonical.com
<_mup_> PEP8, PyFlakes
<george_e> I'm in the middle of writing a charm and I can't get juju to start.
<george_e> Running 'juju bootstrap' yields: "Environment already bootstrapped"
<george_e> But 'juju status' yields: "ERROR could not connect before timeout".
<george_e> What does that mean?
<marcoceppi> george_e Local or EC2?
<george_e> I got it running now.
<george_e> I don't know what happened though.
<marcoceppi> Nice, what was it?
<george_e> Yeah, it was local.
<george_e> I just ran 'juju destroy-environment' and started it again.
<george_e> ...and it worked.
<marcoceppi> I know local messes up on bootstrap occasionally, I can't remember the reason. Someone explained it to me earlier
<george_e> Well, at least it's working.
<george_e> I'm starting to get a feel for how powerful Juju really is.
<marcoceppi> it is AWESOME
<george_e> I believe you.
<george_e> ...and I am sooo gonna write more of these when this one is done.
<marcoceppi> Oh, apparently charms can contribute to charm-tools - I opened a merge proposal for a small bug, can I just...merge it? Or would it be better to keep doing merge requests for you guys
<SpamapS> marcoceppi: If its trivial, just go ahead and merge/push. If its something that makes a large change, reviews are a nice way to ensure the tools have a high quality.
<marcoceppi> Okay, yeah, these are just language changes for the bugs about charm create
<SpamapS> marcoceppi: is Ondina, LLC your corporate alter-ego ?
<marcoceppi> SpamapS: It's the company I started, I still can't figure out WHY it shows Ondina as the one committing when it's actually me and my personal key
<marcoceppi> SpamapS: Thanks for the update on the bug, I misunderstood the context of the bug. Would adding a config key of "autogenerated" to meta.yaml work, or a file that simply says "removebeforeflight"?
<_mup_> Bug #892548 was filed: missing "us-west-2" in schema definition <juju:New> < https://launchpad.net/bugs/892548 >
<_mup_> Bug #892552 was filed: juju does not extract system ssh fingerprints <juju:New> < https://launchpad.net/bugs/892552 >
<marcoceppi> I'm trying to pass config values on the bootstrap with juju deploy --config "foo=bar" [...] local:charm but it's expecting a file. Does this kind of functionality work?
<marcoceppi> Furthermore, if it doesn't - should it work as such?
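To answer the first question: at this point --config expects a YAML file keyed by service name, not inline key=value pairs. A sketch ("mycharm" and "foo" are placeholders):

```shell
# --config takes a YAML file mapping the service name to its options.
# The charm name and option here are placeholders.
cat > myconfig.yaml <<'EOF'
mycharm:
  foo: bar
EOF

juju deploy --config myconfig.yaml local:mycharm
```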
#juju 2011-11-20
<george_e_> I'm trying to run some charms locally and I'm running into a bandwidth problem:
<george_e_> Is there any way to have Juju cache the packages it retrieves for individual services?
<wckd> you can run an apt-cacher
<george_e_> wckd: So how would that work with each unit?
<george_e_> Would it (the unit) automatically know to fetch the packages from the cache?
<george_e_> ...and I assume I would install that on the host?
<george_e_> My charm started successfully!
<george_e_> Yay!
<george_e_> When running Juju locally, do I need to have a lot of RAM?
<george_e_> I assigned the VM running Juju about 800MB but increased it to 1GB.
<george_e_> Is that enough to run a MySQL and a small web application charm?
<hazmat> juju local provider runs an apt-cacher-ng for the local units
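Outside the local provider (e.g. george_e_'s VM setup), units can be pointed at a shared cache by hand. A sketch assuming an apt-cacher-ng instance reachable from the units; 192.168.122.1 is the usual libvirt bridge address and 3142 is apt-cacher-ng's default port — adjust both to your setup:

```shell
# Point apt inside a unit at an apt-cacher-ng running on the host.
# The address (libvirt default bridge) and port 3142 are assumptions.
echo 'Acquire::http::Proxy "http://192.168.122.1:3142";' \
    | sudo tee /etc/apt/apt.conf.d/01proxy

sudo apt-get update   # subsequent package fetches go through the cache
```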
#juju 2013-11-11
<freeflying> is it possible to destroy a subordinate hacluster unit in dying state
<freeflying> thumper, if we want to destroy a service properly, and it has relations with others, what shall we do exactly
<freeflying> remove relation firstly and then destroy service?
<thumper> freeflying: I think with the fix that landed on friday, just destroying the service should work, but for now, best to be safe and remove the relations first
<thumper> freeflying: the fix won't be released until 1.17 i think
<freeflying> thumper, so remove-relation works in 1.16.0?
<thumper> freeflying: I hope so... seems pretty fundamental
<freeflying> thumper, :)
<sbbrtn> what is an easy way to test charms?  I have a charm written but now need to debug it.  Is there a tool to do this?  Or do you just have to deploy it and see what happens?
<thumper-afk> sbbrtn: you can use the local provider and deploy there, it uses lxc locally
<sbbrtn> I'm not able to ssh into the local provider
<freeflying> sbbrtn, do you have ssh key pair
<sbbrtn> yeah.  I created an ssh key like the tutorial said
<sbbrtn> maybe I am running the wrong command...
<sbbrtn> i run "juju ssh 1" and it gives me an error.
<thumper-afk> sbbrtn: that should work
<thumper-afk> sorry
<thumper-afk> no it won't
<thumper-afk> use 'juju ssh unit-name/n'
<thumper-afk> it has to do with limitations I've not yet fixed
<thumper-afk> alternatively
<thumper-afk> all the log files are local
<thumper-afk> by default in ~/.juju/local/log
 * thumper goes to do a few dishes
<sbbrtn> ahh.  thanks!
<sbbrtn> that works
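Summarising thumper-afk's tips for the local provider (the unit name and log file name below are illustrative):

```shell
# ssh by unit name, not machine number ("juju ssh 1" fails on the
# local provider for now):
juju ssh mycharm/0

# or skip ssh entirely; the local provider keeps logs on your own machine:
tail -f ~/.juju/local/log/unit-mycharm-0.log
```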
<freeflying> I destroy-service, is it normal that the machine hasn't been released from maas?
<thumper> freeflying: yes, the machine isn't destroyed automatically (yet)
<thumper> freeflying: we are looking to change the default behaviour back to what pyjuju did
<thumper> freeflying: right now, when the last unit is removed from a machine, the machine is still around
<thumper> freeflying: we want to change this (soonish)
<freeflying> thumper, I just did it :)
<thumper> freeflying: just did what?
<freeflying> thumper, destroy services and then destroy machines
<freeflying> then get the machine released from maas
 * thumper goes to roll his calves
<freeflying> :)
<AskUbuntu> Ceph did not deploy to machine 0 why? | http://askubuntu.com/q/375321
<DanChapman> Hey when i run 'juju bootstrap -e azure'. My memory completely fills (16GB + 8 GB swap) to the point my box crashes. I need to file a bug for this, is there any specific logs you would need included with the bug report?
<axw> DanChapman: I can't think of anything off the top of my head. I'd say just file it, and when someone looks at it they may ask for more info.
<DanChapman> axw, thanks I will do that then :-)
<axw> DanChapman: do you have any non-default parameters in your environments.yaml? apart from the credentials
<axw> perhaps list them
<axw> DanChapman: also, the version of juju
<DanChapman> axw all I changed were the credential: subscription-id, storage-account, cert path and location. I've included the juju version I'll add some info about the .yaml file
<axw> cool, thanks!
<freeflying> after destroy a service, some relation stays in dying, how can I get them removed?
<axw> freeflying: can you please pastebin a "juju status" for me?
<bloodearnest> heya all, 2 simple questions:
<bloodearnest> 1) is there a standard way to get a file-path-friendly version of JUJU_REMOTE_UNIT? I see a lot of charms implementing their own sanitizing functions
<bloodearnest> (am using charm-helpers, but can't see one in there)
<bloodearnest> 2) is there a standard solution to subordinate charms possibly fighting with their main charm over apt locks if their install hook executions overlap? Does it indeed overlap? Is hook execution serialized per-unit?
<stub> bloodearnest: no, everyone has implemented their own. It is probably worth adding to charm-helpers if you can think of a good name for it.
<stub> bloodearnest: 2) is interesting - could be worth adding locking to the charm-helpers helper?
<stub> bloodearnest: I don't know what the current serialisation guarantees are with hooks
<bloodearnest> stub: I'm seeing code in the gunicorn subordinate charm to retry in a loop, which I've gone with. Just thought it was error prone if subordinate charm authors have to remember to do it
<stub> bloodearnest: yeah, that sounds like a hack to me.
<bloodearnest> as for names: urlsafe_juju_unit?
<stub> I guess. Or sanitized_juju_unit if it is generic enough
<stub> I mainly needed it for filesystem safe names
<stub> oh - and sql safe for database names, usernames etc.
<bloodearnest> good point
<stub> (think the last version I coded allowed only a-zA-Z0-9_ ?)
<stub> (And now I think of it, it probably explodes if the service name starts with a number)
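A minimal sketch of the sanitizer stub describes: keep only a-zA-Z0-9_, and guard against a service name starting with a digit (which breaks SQL identifiers). The function name is hypothetical.

```shell
# Unit-name sanitizer sketch: replace anything outside a-zA-Z0-9_ with
# an underscore, then prefix with "_" if the result starts with a digit.
sanitized_unit_name() {
    name=$(printf '%s' "$1" | tr -c 'a-zA-Z0-9_' '_')
    case "$name" in
        [0-9]*) name="_$name" ;;
    esac
    printf '%s\n' "$name"
}

sanitized_unit_name "gunicorn/0"   # prints gunicorn_0
sanitized_unit_name "9lives/2"     # prints _9lives_2
```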
<AskUbuntu> How to make my nodes online in maas? | http://askubuntu.com/q/375412
<jamespage> bloodearnest, you should not hit 2) - within a container hook execution is serialized
<jamespage> if you have that's a bug
<bloodearnest> jamespage: ack, good to know. Not hit it, just working on a charm that has code to handle it, wasn't sure it was needed
<bloodearnest> jamespage: I'll remove it
<jamespage> bloodearnest, it used to be a problem
<jamespage> but then someone saw sense in that juju was much better positioned to resolve that sort of thing
<bloodearnest> yep
<marcoceppi> jamespage: adam_g would you guys be opposed to me assigning reviews for openstack charms to ~openstack-charmers and having ~charmers abstain? In order to get them out of the queue?
<marcoceppi> Or do you guys use the queue?
<marcoceppi> I did one like that a few nights ago, then thought I should ask you all first
<marcoceppi> sbbrtn: local provider caveats have been documented https://juju.ubuntu.com/docs/config-local.html#caveats
<jamespage> marcoceppi, should we move the official branches over to that team as well?
<marcoceppi> jamespage: you're more than welcome to, it makes no difference to us at this point, though it might break a lot of existing deployer files. I know CTS explicitly uses lp:~charmers branches for their deployers
<jamespage> ok
<marcoceppi> I just didn't want to break your review workflow
<marcoceppi> and I didn't want new charmers to start reviewing stuff since it might be in flux with the openstack charms
<ahasenack> hi, could I interest someone in taking a look at a one-liner MP against the apache2 charm? https://code.launchpad.net/~ahasenack/charms/precise/apache2/apache2-no-failing-juju-log/+merge/194403
<context> MP ?
<context> must be a bzr/lp term
<teknico> MP means Merge Proposal
<AskUbuntu> Juju with GUI Installation | http://askubuntu.com/q/375570
<AskUbuntu> Unable to install juju-gui locally? | http://askubuntu.com/q/375635
<rick_h_> bac: lol at dual post answering
<rick_h_> and <3 jcastro for editing
#juju 2013-11-12
<hazmat> context, equivalent to gh pr
<AskUbuntu> ceph deploy has not created ceph.conf files | http://askubuntu.com/q/375760
<sbbrtn> what would be a string unique to each charm?  Say I have two charms on the same machine and I want to put each in its own separate folder.  Is the unit name a good unique string?
<davecheney> sbbrtn: i'm not sure I understand the question
<davecheney> each charm has to have a unique name
<davecheney> and each charm is a directory
<sbbrtn> davecheney: I have a charm that downloads a repo into a folder.  I want to make sure that if I deploy two charms on the same machine they don't use the same folder.
<sbbrtn> So, I need a string to append to the folder name to make it unique between charms so they don't conflict
<davecheney> sbbrtn: simple solution
<davecheney> don't deploy two charms to the same machine
<davecheney> we don't even let you do that by default
<sbbrtn> you mean two of the same charms? you can use the --to directive to deploy multiple on same machine, right?
<stub> sbbrtn: You can munge the unit name if you want a unique key, but note that --to installs to a new container on a particular machine so the units are isolated.
<sbbrtn> stub: thanks.  I was under the impression that they are working towards isolation between charms but they aren't isolated yet.  If that is the case, I don't need a unique folder name then.
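For completeness, if a per-charm folder is still wanted, deriving it from the unit name is enough. A tiny sketch — the helper name and /srv base path are made up:

```shell
# Turn a unit name like "myservice/0" into a filesystem-safe directory
# path by replacing the slash. Function name and base path are illustrative.
unit_dir() {
    base=$1
    unit=$2
    printf '%s/%s\n' "$base" "$(printf '%s' "$unit" | tr '/' '-')"
}

unit_dir /srv "myservice/0"    # prints /srv/myservice-0
# in a hook: mkdir -p "$(unit_dir /srv "$JUJU_UNIT_NAME")"
```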
<gnuoy> Hi, I just noticed that there seems to be no preinstall hook in the ceph or keystone charms
<gnuoy> the other charms use execd_preinstall from charmhelpers but I see keystone doesn't ship with charmhelpers
<gnuoy> shall I use charm-helper-sync to get minimal charmhelpers into keystone ?
<jamespage> gnuoy, I know adam_g is working on keystone/charmhelper update
<jamespage> if you wanted to do a minimal sync to make pre-install work that's good with me in the interim
<jamespage> (adam_g is off for a couple of weeks...)
<gnuoy> jamespage, will do, thanks
<jamespage> gnuoy, if you want to see what's still pending landing - http://pad.ubuntu.com/charm-branches-to-land
<gnuoy> jamespage, useful, thanks
<jamespage> smoser, pushed your apt-install updates for charm-helpers - thanks!
<rbasak> jcastro: on jujucharms.com I searched for ceph and got what I think is an old charm as the first hit. Is this a known thing?
<rbasak> (the one I wanted was second; I'm just concerned that people won't find the right thing by default)
<rick_h_> rbasak: which charm did you expect to show first? Did you mean in the quicksearch results or after you hit enter and got the real results on the page?
<rbasak> rick_h_: quicksearch results
<rbasak> rick_h_: trying it now, hitting enter gives me stuff in the order I'd expect
<rick_h_> rbasak: there's a bug on the quicksearch results that they're not ordered/cleaned properly, they're too raw a set of results right now. Once you hit the page, the 'recommended' charm should show first and right up top
<rbasak> rick_h_: but I selected the first result in the quicksearch
<rbasak> rick_h_: ah, that'll be the bug, then
<rick_h_> rbasak: ok cool, yea we've got a bug filed on quicksearch to work on building those results better with weights/etc
<rbasak> rick_h_: IMHO that's quite important. At least I generally choose the quicksearch result instead of hitting enter first. I wonder what others do.
<rick_h_> rbasak: yes, and it'll get more so once jujucharms.com gets updated with the latest release where the deploy button in quicksearch results is available
<rbasak> rick_h_: (since I can differentiate once I get the result - seeing oneiric is obviously wrong to me - but others may not)
<rick_h_> rbasak: however, it's not been brought up much and we hoped that the official icon only being on the recommended charms helps shepherd users a bit
<rbasak> I am unable to differentiate the icons (neither are familiar to me)
<rick_h_> rbasak: yes, we've put the work into filtering/sorting the main results and that has to be duped for quicksearch.
<rbasak> rick_h_: great to know it's in hand - thanks :)
<rick_h_> rbasak: gotcha, the colored omega looking one is the official ceph icon http://ceph.com/ceph-storage/
<rbasak> I've just been told I need to deploy ceph. I know relatively little about it, and just want juju to do its magic for me :-)
<AskUbuntu> Juju GUI Network Interface | http://askubuntu.com/q/375923
<marcoceppi> stub sbbrtn --to does a hulk smash, there is no separation unless you provide the lxc syntax
<marcoceppi> and the lxc container support is still premature as of yet
<mthaddon> marcoceppi: any ideas why https://code.launchpad.net/~gnuoy/charms/precise/quantum-gateway/external-nets/+merge/194153 isn't showing up in http://manage.jujucharms.com/tools/review-queue ?
<marcoceppi> mthaddon: only things assigned to ~charmers show up in the review-queue. Since charmers don't review openstack stuff, I've abstained charmers from review and assigned to openstack-charmers
<mthaddon> marcoceppi: ok, thanks - jamespage, is that one on your queue some other way? ^
<jamespage> mthaddon, yup
<jamespage> I have a backlog of stuff to review and until our QA environment is back up I can't test anything.
<jamespage> so a bit stalled for this week now - I'm out until Monday and adam_g is in Japan on hols
 * mthaddon nods - thx
<X-warrior> Hey guys, I just bootstrapped a machine on amazon, then I tried to deploy a postgresql to the same machine as bootstrap (already did this in the past and it worked), but now it stays as pending forever, and if I check the logs inside the machine there is a log of "/bin/sh: 1: exec: /var/lib/juju/tools/unit-postgresql-0/jujud: not found" messages
<AskUbuntu> Install juju on the main network interface | http://askubuntu.com/q/376087
#juju 2013-11-13
<gnuoy> Hi, I've used the openstack charms to deploy swift and joined swift-proxy to keystone, and as the admin user I can post objects etc without a problem. But as any other user I'm getting a 403. I can't see any likely looking roles that need granting. As far as I can see keystone returns a valid token but the swift proxy rejects it with a 403
<gnuoy> nm, I didn't appreciate the admin role was a prereq for creating objects
<stub> gnuoy: Do you know what I'm doing wrong with https://bugs.launchpad.net/charms/+source/swift-proxy/+bug/1238660 ?
<_mup_> Bug #1238660: Default installation fails <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1238660>
<stub> Hmm... no keystone...
<gnuoy> stub, it looks like you're not explicitly setting any config for swift-storage, is that right /
<gnuoy> ?
<gnuoy> If you're using a single swift storage node I think you'll want replicas to be 1
<stub> gnuoy: yeah, was going to use defaults
<stub> You are saying things that sound like I need to read some Swift documentation.
<gnuoy> stub, defaults assume 3 swift storage nodes each with region set differently. Unless you use zone-assignment=auto, but even then the number of replicas needs to be in tune with the number of storage nodes
<gnuoy> stub, the README in the swift-proxy charm is good
<stub> charm store has really low google juice :-(
<gnuoy> jamespage, is there a workaround for Bug #1241674  ?
<_mup_> Bug #1241674: juju-core broken with OpenStack Havana for tenants with multiple networks <cts-cloud-review> <openstack> <juju-core:Triaged> <https://launchpad.net/bugs/1241674>
<stub> # losetup --find /etc/swift/storagedev1.img
<stub> losetup: Could not find any loop device. Maybe this kernel does not know
<stub>        about the loop device? (If so, recompile or `modprobe loop'.)
<stub> # modprobe loop
<stub> FATAL: Could not load /lib/modules/3.11.0-13-generic/modules.dep: No such file or directory
<stub> Is this just going to fail with lxc?
<freeflying> is it possible to force rerun hooks
<stub> yer, looks like you need custom lxc templates to get loop mounts (required for swift)
<stub> freeflying: 'juju resolved --retry' can rerun failed hooks. Apart from that, no.
<freeflying> stub, we have a node in error, retried several times, but can't resolve it
<freeflying> and now it even refuses to retry :)
<stub> You will need to be more specific, maybe a pastebin of juju status output. Someone here might be able to identify the problem.
<X-warrior> Hey guys, I just bootstrapped a machine on amazon, then I tried to deploy a postgresql to the same machine as bootstrap (already did this in the past and it worked), but now it stays as pending forever, and if I check the logs inside the machine there is a log of "/bin/sh: 1: exec: /var/lib/juju/tools/unit-postgresql-0/jujud: not found" messages
<X-warrior> Ideas? :S
<marcoceppi> X-warrior: oh, that's interesting
<marcoceppi> X-warrior: if you try to deploy to a machine other than bootstrap, does it work?
<marcoceppi> can you also install and run tree on /var/lib/juju/tools and pastebin it?
<X-warrior> marcoceppi: let me run this tests
<X-warrior> just a sec
<X-warrior> marcoceppi: you want me to run tree on bootstrap + postgresql machine right?
<marcoceppi> X-warrior: yeah
<marcoceppi> just run tree /var/lib/juju/tools and pastebinit
<X-warrior> marcoceppi: sorry, I'm recreating it x)
<X-warrior> marcoceppi: http://pastebin.com/prp1ttWj
<marcoceppi> X-warrior: okay, so something is failing during install
<marcoceppi> X-warrior: do you have a unit-postgresql-0.log in /var/log/juju ?
<X-warrior> yes
<X-warrior> http://pastebin.com/ZQYFCxQi
<marcoceppi> awesome, X-warrior, what does machine-0.log show?
 * marcoceppi notes that this is def a bug
<X-warrior> http://pastebin.com/BFuhLuX3
<X-warrior> Could it be a problem because I'm using juju 1.12 on my localhost? I'm thinking that maybe, when I do a bootstrap it download latest juju version, which could not be compatible with mine.
<X-warrior> marcoceppi: thoughts?
<marcoceppi> X-warrior: it's quite possible that is the issue
<marcoceppi> especially when using the --to flag
<X-warrior> marcoceppi, well that doesn't seem very nice imo... I have a production environment that was deployed with juju 1.12; if I update my local version, maybe I could have problems with the production environment. Maybe, when bootstrapping, juju should send the local version to the server, and it downloads the same one I'm using, to guarantee compatibility?
<marcoceppi> X-warrior: well, it's supposed to do that
<X-warrior> how could I check the bootstrap version?
<marcoceppi> it should, to some extent perform version matches for tools
<marcoceppi> X-warrior: it's 1.16.3
<X-warrior> so it is not working
<marcoceppi> if you destroy, then run juju bootstrap --show-log --debug
<X-warrior> since my local version says I'm using 1.12
<marcoceppi> you should see the tool matching logic
<X-warrior> could the output of this new bootstrap help you guys find out what is going on?
<marcoceppi> X-warrior: potentially
<marcoceppi> sinzui: juju should match minor versions of tools to client?
<X-warrior> let me rerun it
<marcoceppi> X-warrior: I actually think this is a bug in 1.12 now that I think about it
<sinzui> marcoceppi, damn right
<marcoceppi> which was fixed in later versions of the juju client
<sinzui> marcoceppi, but I think you also mean it must match patch aswell
<sinzui> marcoceppi, 1.16.0 must not select 1.16.3
<marcoceppi> sinzui: right, has that always been in core? X-warrior is seeing different results
<sinzui> marcoceppi, no. juju-core has always selected the latest. a recent change made it match minor to avoid pyjuju fiascos. juju-qa reported a bug that our testing tools don't like getting 1.16.3 when we are testing 1.16.2
<X-warrior> uhmm
<sinzui> X-warrior, Bug #1247232 is the issue we reported
<_mup_> Bug #1247232: Juju client deploys agent newer than itself <ci> <deploy> <juju-core:Triaged> <https://launchpad.net/bugs/1247232>
<marcoceppi> sinzui: that's an uncomfortable story for people with production deployments using older Juju versions
<sinzui> yep
<marcoceppi> any suggestions? Ultimately my mind jumps to "spin up lxc to manage multiple juju-cores"
<marcoceppi> I remember, back when pyjuju was fun and still installed, you could install versions of juju and use update-alternatives, but I don't think that exists anymore
<sinzui> X-warrior, marcoceppi. an even more uncomfortable/unintuitive workaround is to use "juju upgrade-juju --version=1.16.2" to force a downgrade immediately after bootstrap
<marcoceppi> sinzui: that sounds scary and awesome
<sinzui> marcoceppi, it does exist, and I am working on the devel packaging fix to ensure 1.17.0 doesn't have a regression
<marcoceppi> sinzui: Oh, okay, so you could theoretically install 1.16.3, maybe compile it somewhere, then use update-alternatives to switch back and forth?
 * marcoceppi considers a blog post for this
<sinzui> marcoceppi, update-alternatives is provided by the package, not juju-core
<marcoceppi> sinzui: right, so you've installed juju-core 1.12, but also want 1.16.3, you'd need to compile 1.16.3 since the apt package would just remove 1.12 from the system
<marcoceppi> unless there's some debian way to not have juju-core package remove the previous version
<sinzui> marcoceppi, oh that is right
<sinzui> marcoceppi, but there is another way
 * marcoceppi is excite
 * X-warrior still around here
<sinzui> you can download the new package and install it in another root
 * sinzui searches for the option
<marcoceppi> sinzui: ahh, this sounds way better than compiling
<X-warrior> Is 'juju upgrade-juju' too risky for a production environment?
<marcoceppi> X-warrior: yes! sorry, have not forgotten you, want to make sure we can switch back and forth
<marcoceppi> X-warrior: I've seen a few people do it, 90% positive working, with a 10% failure rate
<X-warrior> because I think it is better to update now from 1.12 to 1.16 then later from 1.12 to 2.xx for example
<X-warrior> or maybe 1.50
<sinzui> marcoceppi, X-warrior dpkg --instdir=/opt for example will change the root dir
<marcoceppi> sinzui: X-warrior awesome, I'll give that a go and create a blog post on how to manage multiple juju installs on a machine
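sinzui's side-by-side trick, sketched under the assumption that the downloaded .deb is in the current directory; note dpkg keeps its status database in the default location unless you also pass --admindir (or use --root), so treat this as a rough outline:

```shell
# Install a second juju-core client into an alternate root so two
# versions can coexist. Version and paths are illustrative.
apt-get download juju-core
sudo dpkg --instdir=/opt/juju-1.16 --install juju-core_*.deb

# Invoke the alternate client explicitly:
/opt/juju-1.16/usr/bin/juju version
```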
<X-warrior> anyway it is a little scary to try to update from 1.12 and maybe screw everything :X
<X-warrior> that is awesome
<sinzui> marcoceppi, X-warrior for the last two months I have used lxc to maintain a container that has all the stable tools. My own computer is on trusty and bleeds. Since I mount my home dir in the container, both stable and devel have the same juju-configs. I test upgrades and compatibility that way
<X-warrior> so in my case if I would like to update my juju version should I upgrade my local version from 1.12 to latest, and then run juju upgrade-juju to update the tools on my aws?
<marcoceppi> X-warrior: the upgrade is a bit hairy, I'll try to find a 1.12 deployment and see if it upgrades OK
<marcoceppi> though, sinzui might be able to speak to it better than me
<sinzui> marcoceppi, I wasn't testing back then. X-warrior your concerns about upgrade are legitimate. We only promise upgrades from stable to stable, 1.12.x to 1.14.x. Going to 1.16.x will find the tools, but no one has tested we can leap that far
<X-warrior> so the best approach for future projects is to keep it updated at every stable version? I mean, if I had upgraded from 1.12 to 1.14, later I could update from 1.14 to 1.16...
<sinzui> That is right
<X-warrior> sinzui: so what about using 'juju upgrade-juju --version=1.14' to upgrade from 1.12 to 1.14 and then 'juju upgrade-juju' to get latest stable release from 1.16?
<sinzui> X-warrior, that will work
<X-warrior> oh, nice.
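The stepped upgrade X-warrior and sinzui settle on, as a sketch (the exact version numbers depend on the current stable releases):

```shell
# Step the environment through stable releases instead of leaping
# 1.12 -> 1.16 directly:
juju upgrade-juju --version=1.14   # 1.12.x -> 1.14.x (the tested path)
juju upgrade-juju                  # then on to the latest stable (1.16.x)
```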
<X-warrior> but when using upgrade-juju it will just update the environment tools not my local ones right? So if I would like to do that, I must update my local juju to 1.14 then run upgrade-juju and then update my local to 1.16 and use the upgrade-juju again?
<X-warrior> I'm not really sure about how local juju installation is related to remote/upgrade-juju
<context> how can i add the 'juju-core' lp to my apt sources. cloud-tools only has 1.16.0
<marcoceppi> context: let me check, cloud-tools should have 1.16.3
<marcoceppi> X-warrior: it's only the remote juju
<marcoceppi> X-warrior: apt-get is for upgrading the client
<X-warrior> marcoceppi: my client is 1.12 version, can I use 'upgrade-juju --version=1.14'? Or do I need to update my client to 1.14 first for example?
<marcoceppi> context: you're right, it only has 1.16.0 sinzui who's responsible for updating the cloud-tools archive? jamespage ?
<marcoceppi> X-warrior: you should be able to do that from 1.12 without issue, since it's like juju deploy, it's just telling the bootstrap node what to do. I'm not sure if there are incompatibilities between 1.12 and 1.14
<marcoceppi> context: if you can, ppa:juju/stable has the latest juju-core; sudo add-apt-repository ppa:juju/stable; however, if you're using the cloud-tools archive you're probably doing so for a reason
<context> marcoceppi: i did that 'status error on juju status' ticket, which might have in fact been an issue regarding dns.  i get an unsupported protocol 'sftp' now.
<context> trying again to get it running :) i replaced /usr/bin/jujud with the 1.16.2 its trying to use but still tries to fetch it. is it trying to install it somewhere else?
<marcoceppi> context: the tools on the server aren't installed from the cloud archive
<marcoceppi> they're pulled from a public bucket
<context> yeah, do you know where juju bootstrap tries to install it ?
<marcoceppi> one of the places is the juju-dist bucket on s3
<marcoceppi> context: what cloud are you using?
<context> func SharedToolsDir(dataDir string, vers version.Binary) string { return path.Join(dataDir, "tools", vers.String()) }
<context> marcoceppi: manual (yeah i know, highly unstable)
<marcoceppi> context: then it's probably coming from the juju-dist bucket
<context> yeah, but where is it trying to install the jujud
<marcoceppi> context: it will try to pull down the latest tools tgz, extract it, and install it to /var/lib/juju/agents
<marcoceppi> context: http://juju-dist.s3.amazonaws.com/
<context> yeah thats where it pulled from
<context> but when it tries to install its trying to do sftp://
<context> so im trying to install it 'manually' so bootstrap can skip over that
<context> or even just somehow 'bootstrap' manually on the server so then i can do the rest of the juju stuff locally
<context> marcoceppi: i want to fix whats already known broken manually, so i can move forward and see what else is broken :)
<marcoceppi> context: whatever you're up to sounds crazy ;)
<context> im new to Go but not programming, just hard to read through the juju code sometimes :x
<context> marcoceppi: where would i look for what exactly it wants in /var/lib/juju/agents i made it /var/lib/juju/agents/jujud but it still tries to install
<context> marcoceppi: sorry if i'm being a bother, you can feel free to ignore me ;)
<marcoceppi> context: nope, it's versioned
<marcoceppi> context: it's okay, I've got 40 mins before the conference opens up
<context> versioned as in /var/lib/juju/agents/1.16.2 or :x
 * context opens the juju-local vm he has
<marcoceppi> context: /me checks
<marcoceppi> there's a few symlinks too
<jamespage> marcoceppi, thats correct - but it has to go through Ubuntu SRU for saucy first
<jamespage> and I've not had time to do that
<marcoceppi> jamespage: gotchya, good to know
<jamespage> its in the stable ppa if you really need it
<marcoceppi> context: this is a slightly older machine, but /var/lib/juju looks like this:
<marcoceppi> context: you just need to update the tools directory, not the agents dir
<marcoceppi> http://paste.ubuntu.com/6411417/
<context> kk
 * context plays around
<context> grr !@# still dont work :-/ i guess ill leave it be for now :(
<context> oh oops
<context> cant even juju bootstrap manual provision with bootstrap-host: localhost
<marcoceppi> context: that's a really bad idea
<marcoceppi> as it will clobber your local host
<context> oh thats fine
<context> but nothing happens cause you still get screwed on unknown protocol sftp
 * context adds ppa
<context> yeah i guess im just stuck
<context> it confuses me more cause i got past this point once. i actually got bootstrap to succeed
<marcoceppi> ¯\_(ツ)_/¯
<context> yeah. thnx for the help, again
<context> looks like fix is committed but not released yet
<marcoceppi> context: ah, 1.17.0 will probably bring a lot of good to this story
<context> yeah
<context> ive been keeping an eye on the milestone
<context> wish i knew Go more to be able to help out
<context> or the juju codebase for that matter. its a pretty big beast
<marcoceppi> context: it's quite a large project
<marcoceppi> one of the largest open source golang projects iirc
<context> i am so confused
<context> jujud-machine-0 start/running, process 17240
<context> grrrr !@#
<context> all i did was rm -rf /var/lib/juju
<context> AND juju status works
<context> now attempting to add-machine
<context> W00t W00t
<X-warrior> marcoceppi: just to let you know, I installed the 1.16 version on a vm, and deployed the bootstrap and postgresql to the same machine, that error didn't happen
<marcoceppi> X-warrior: interesting
<X-warrior> I mean deployed from vm to aws both of then on machine 0.
<X-warrior> and the status of postgresql is running and logs doesn't have any similar message to that one
<context> trying to deploy juju-gui i get this error:
<context> 2013-11-13 17:15:42 INFO juju.worker.uniter context.go:255 HOOK ImportError: No module named yaml
<context> trying to run the install hook
<frankban> context: did you use any non-default charm option?
<context> how do i 'redploy' a charm
<context> frankban: nope. just did a manual apt-get install python-yaml to hopefully fix it
<context> things dont like to die it seems :-/
<frankban> context: can you paste the whole log somewhere? anyway, you can try "juju resolved --retry juju-gui/0"
<context> i did destroy-service and it just sits at life: dying
<frankban> context: try now "juju resolved juju-gui/0"
<frankban> context: but the whole log would really help investigating the issue
<context> yeah trying to get a clean slate :x
<frankban> context: cool thanks
<context> doesn't seem to be any way to forcibly destroy-service
<context> so now the service just sits at 'dying' and no way to get rid of it
<context> oh nm gone now
<context> frankban: http://pastie.org/8478000
<frankban> context: looking
<context> w00p w00p !
<context> frankban: manually install python-yaml fixed it
<frankban> context: what provider are you using?
<context> frankban: manual
<context> :D
<frankban> context: ok thanks for the feedback, we will release a new version of the charm soon, with a fix included
<context> starting to like juju more and more
<X-warrior> marcoceppi, sinzui, I'm leaving now, thanks for all the help, have a good one :D
<sinzui> thank you X-warrior
<context> hmm, i deploy mysql and it says success but mysql is not running
<context> damn, some charms are really touchy
<rick_h_> jcastro: boo for not bundles beta in your juju updates :P
<jcastro> it's not available for people yet
<jcastro> random devel PPA doesn't count!
<rick_h_> jcastro: sure it is, it's in the charm you go get right now if you juju deploy juju-gui
<rick_h_> it's just 'in beta'
<rick_h_> and an update coming later today
<rick_h_> with jujucharms.com getting updated tomorrow hopefully
<jcastro> ok so next week's update
<rick_h_> gotcha
<rick_h_> side note, thanks for the emails. try to keep up
<rick_h_> that is, I use them to try to keep up
<jcastro> ok so update me
<jcastro> when can people deploy the cloudfoundry bundle?
<jcastro> all the stuff should land tomorrow with the jujucharms.com update?
 * rick_h_ goes to dbl check the quickstart ppa
<rick_h_> jcastro: yes, everything should be up to date tomorrow me thinks
<jcastro> any word on better URLs in general?
<jcastro> I am trying to explain to kirkland how to deploy the bundle and he wants to punch himself in the face
<rick_h_> bundle: is supported in the ppa release today or tomorrow
<rick_h_> will need a follow up to update the url in the deploy tab
<jcastro> so bundle:~jorge/cloudfoundry ?
<rick_h_> bundle:~jorge/cloudfoundry/3/cloudfoundry
<jcastro> ugh
<jcastro> dude, you are killing me
<rick_h_> unfortunately you can have multiple bundles in there
<rick_h_> we can't make it shorter than that and have them work like they do man
<rick_h_> go back and talk to the original people that designed the deployer files and such
<jcastro> hazmat, surely there's something we can do here
<rick_h_> jcastro: k, well fyi updated the quickstart and juju quickstart bundle:~jorge/cloudfoundry/3/cloudfoundry  works
<rick_h_> at least it's running the install on cf-release right now
<hazmat> jcastro, yeah.. there's several sane things we can do.. like omit the deployment name within the bundle when its the only one.. omit the version and get the current one..
<hazmat> would minimize nicely to bundle:~jorge/cloudfoundry
<jcastro> yeah it just needs to be something memorable for the user
<jcastro> rick_h_, confirm, the bundle works fine here
<rick_h_> jcastro: woot
<jcastro> this bundle/charm is deceptive though
<jcastro> since it's doing so much if you don't debug-log there's no way to know what it's doing
<rick_h_> jcastro: yea, definitely
<jcastro> I wonder if there's a way to echo "Go have a smoke, it's going to be a while." somewhere
<jcastro> like a louder version of juju-log
<rick_h_> heh, in the gui we do that with all bundles. I guess it'd be something to note in the charm readme
<rick_h_> or summary/etc that's somewhere visible
<jcastro> yeah
<rick_h_> since it's really the charm that's the long-runner
<jcastro> right
<HereticLocke> Hi
<jcsackett> jcastro: just thought you would like to know, askubuntu q's are *finally* in the review queue.
#juju 2013-11-14
<Azendale> Is there a comand I should run to collect bug report data? I'm ssh'ed in with debug hooks right  now
<davecheney> Azendale: the contents of /var/log/juju/unit*.log are a good start
<davecheney> that and the version of juju you are using
<Azendale> davecheney: ok, thanks!
<Azendale> Ok, I've reported the bug, what's the proper way to exit the debug-hooks terminal without signaling that the hook succeeded?
<davecheney> exit 1
<davecheney> in debug hooks you are the hook
<davecheney> so if you exit 1
<davecheney> that is the same as the hook failing
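The mechanism davecheney describes — in debug-hooks your shell *is* the hook, so its exit status is the hook's result — can be seen without a juju environment at all. The subshells below stand in for the debug-hooks session; this is a sketch of the exit-status mechanism, not of juju itself.

```shell
#!/bin/bash
# Sketch: a process's exit status is what its caller sees, which is why
# "exit 1" from a debug-hooks shell reads as the hook failing.
status=0; ( exit 1 ) || status=$?
echo "hook status=$status"    # a failing hook: juju marks the unit in error
status=0; ( exit 0 ) || status=$?
echo "hook status=$status"    # a succeeding hook
```

This prints `hook status=1` then `hook status=0`.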
<X-warrior> juju upgrade-juju --version=1.14 gives me "error: invalid version "1.14"
<X-warrior> Solved it, I need to pass the patch version as well
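The constraint X-warrior hit — `upgrade-juju --version` wants a full major.minor.patch string — can be sketched with a simple check. The regex below is an approximation for illustration, not juju's actual version parser.

```shell
#!/bin/bash
# Sketch: accept only full major.minor.patch version strings, the shape
# upgrade-juju expects ("1.14" is rejected, "1.14.0" passes).
valid_version() { echo "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; }
valid_version 1.14   || echo "1.14 rejected"
valid_version 1.14.0 && echo "1.14.0 accepted"
```

Running it prints `1.14 rejected` then `1.14.0 accepted`, matching the error and fix described above.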
<marcoceppi> X-warrior: did that upgrade work?
<X-warrior> marcoceppi: it seems it does, I used the upgrade-juju to 1.14.0 and later to 1.16.3, after that I used juju status to check agent versions, and all versions were 1.16.3, tried to connect to all machines and was able to... until now, everything seems ok :D
<marcoceppi> I'm so happy to hear that!
<marcoceppi> that's awesome
<X-warrior> yes, it is :D
<context> anyone know where to get a cheap 1u for home use, looking at this: http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=8312921&CatId=1205
<josepht> the link to Charm Tools installation instructions here is broken: https://juju.ubuntu.com/docs/authors-charm-writing.html
<marcoceppi> josepht: for what dustup?
<marcoceppi> distro*
<marcoceppi> josepht: https://juju.ubuntu.com/docs/tools-charm-tools.html
<marcoceppi> thanks for the report I'll have that psyched in a few
<josepht> marcoceppi: thanks
<marcoceppi> patched* silly auto correct
<jcastro> jcsackett, hey ninja
<jcastro> how often does the queue with the AU questions update?
<jcsackett> jcastro: once a day.
<jcastro> marcoceppi, ok I added a "non-reviewing-charmers" team
<jcastro> underneath ~charmers
<jcastro> mattyw, hey, I see you applied for ~charmers
<jcastro> can you take your application to the list?
<mattyw> jcastro, hey there - that was ages ago - I don't think I really should be added
<jcastro> ok
<mattyw> jcastro, I can't really commit to reviewing charms
<mattyw> jcastro, and if I could I don't really have enough experiece for my input to be valid
<jcastro> it's ok
<jcastro> you can apply for ~charmers and not review
<context> offtopic: anyone see any reason why i shouldn't get this for at home: http://www.ebay.com/itm/1U-Supermicro-Server-Twin-Node-Low-Power-Server-4x-Intel-Xeon-5148-8GB-X7DWT-/131046365744?pt=COMP_EN_Servers&hash=item1e82f8da30
<sarnold> context: 3Gbps sata ports will be slower than a single ssd drive. I gotta say that looks like an impressive setup.
<AskUbuntu> Juju on Windows - unable to bootstrap | http://askubuntu.com/q/377075
<bladernr_> Hey, I have an issue with juju-local (https://juju.ubuntu.com/docs/config-local.html)
<bladernr_> that says I need to install the raring lts backport kernel, but when I do so, my NIC is no longer capable of grabbing a dhcp address
<bladernr_> I just filed this https://bugs.launchpad.net/ubuntu/+source/linux-lts-raring/+bug/1251401
<_mup_> Bug #1251401: Raring LTS backport kernel breaks dhcp when bringing ethernet device up <amd64> <apport-bug> <precise> <qa-kernel-lts-testing> <third-party-packages> <linux-lts-raring (Ubuntu):New> <https://launchpad.net/bugs/1251401>
<bladernr_> because I think it's a kernel issue rather than really a juju issue, but has this been encountered before?
<sarnold> bladernr_: I don't know if this is your issue, but take a look at https://bugs.launchpad.net/ubuntu/+source/isc-dhcp/+bug/930962
<_mup_> Bug #930962: dhcp3-server reports many bad udp checksums to syslog using virtio NIC <checksum> <dhcp3-server> <dhcpd> <kvm> <verification-done> <verification-done-lucid> <verification-done-precise> <virtio> <dhcp3 (Ubuntu):Fix Released> <isc-dhcp (Ubuntu):Fix Released> <lxc (Ubuntu):Confirmed> <dhcp3 (Ubuntu Lucid):Fix Released by stgraber> <isc-dhcp (Ubuntu Precise):Fix Released by stgraber> <isc-dhcp (Ubuntu Quantal):Fix Released> <https://launchpad.
<sarnold> bladernr_: there's been a lot of hassle with offloaded udp or ip checksums
<bladernr_> sarnold: nope... in my case, the dhcp request is never even being made
<sarnold> bladernr_: you can see that on the client with e.g. tcpdump or strace or similar?
<bladernr_> just the logs... that bug says I should see this: Feb 11 06:57:18 ... dhcpd: 5 bad udp checksums in 5 packets
<bladernr_> which I am not.  and besides, this is all bare metal at this point
<bladernr_> the client is a laptop running precise w/ the Raring LTS backport kernel, and the server is a Precise server install as well, no weird iptables rules or anything like that
<bladernr_> that bug seems to occur when running dhcpd inside a VM somewhere
<sarnold> bladernr_: yeah, that specific bug is about virtualized instances... but I'd have sworn I've seen reports of checksums not being correctly offloaded to nics sometime recently. anyway, if you're not seeing those requests even being sent, this is probably not that :)
<bladernr_> well, anyway, I was just curious if I was the only one who's seen it... who knows... it's a kernel bug anyway, and I have a work-around (it's just annoying to have to manually configure things)
<bladernr_> heh
<thumpba> is there a good guide or walkthrough to install openstack via juju?
<sarnold> here's something that's quite old: https://wiki.ubuntu.com/ServerTeam/OpenStackHA
#juju 2013-11-15
<Luca__> Hi there
<Luca__> I am getting some errors with juju deply commands
<Luca__> specifically no matter what I try to deploy I always get the error "ERROR cannot get latest charm revision: charm not found: cs:saucy/juju-gui"
<Luca__> juju status is reporting information about bootstrap node, however I am not able to deploy anything with saucy salamander. Any help would be greatly appreciated
<sarnold> Luca__: I don't think there are any saucy charms so far: http://manage.jujucharms.com/charms
<Luca__> This is what I noticed, however I dont see charms from raring either, which surprises me a lot
<Luca__> sarnold:I have seen documents deploying mysql on raring, however I cant see the charm available there. I am lost
<sarnold> Luca__: well, now that's funny. I thought I saw some raring charms there too...
<Luca__> sarnold: yes, and it looks strange to me there are no charms at all for the 13.04, which is 6 months old
<sarnold> Luca__: I do know that raring and quantal charms were rare, since most people deploying juju were using 12.04 LTS instead of the newer releases
<Luca__> sarnold: funny thing about is that 13.10 should apparently make the life easier for an openstack deployment, however I cant even start with a simple mysql installation...
<Luca__> sarnold: therefore I was wondering if there is something that I am not considering or escaping from my understanding
<sarnold> Luca__: I'm no expert, but I do wonder if you'd run openstack on 13.10 and then run the charms on precise instances -inside- openstack...
<sarnold> Luca__: (though 13.10 might not have the support length you'd really want for a cluster...)
<Luca__> sarnold: I agree for the support, though this is only a PoC waiting for the 14.04 in april. I am trying to build openstack using juju charms. I know this has been done on raring, however I cant find anything for saucy
<Luca__> sarnold: beside on the ubuntu server page they specifically talk about charms http://www.ubuntu.com/server
<sarnold> jcastro: ^^^ what am I missing? :)
<Luca__> sarnold: I bootstrapped using precise and it is working. However I was pretty sure would not be a problem using charms on Saucy
<sarnold> Luca__: having the saucy charms around would probably be useful for folks, like you, planning for 14.04 LTS...
<Luca__> What is the point then installing 13.10 for OpenStack if no charms for this release are available? Could anybody clarify this?
<sarnold> you _can_ install openstack without juju, even if it is a pain in the butt.. :)
<Luca__> sarnold: As you said it is a pain without an orchestration tool
<Luca__> I did a year ago and I was crying
<sarnold> ow :/
<Luca__> beside I need an HA configuration, therefore I was hoping on Ubuntu
<sarnold> heh, setting up 28 machines by hand doesn't sound fun..
<Luca__> Rackspace just came out with a new cookbook for quantum support among them
<Luca__> sarnold: right, this is another point that I would like to get clarified. I am sure many services can be deployed on the same machine, reducing the # of needed servers
<sarnold> Luca__: i certainly hope so.
<Luca__> sarnold: http://www.jorgecastro.org/2013/10/30/a-very-spooky-charm-status/
<sarnold> Luca__: interesting, jcastro linked to http://manage.jujucharms.com/recently-changed but there's only precise charms..
<Luca__> sarnold: Indeed. I may raise a question in the Ask Ubuntu and see if I get some answers
<Luca__> They might have been pulled off from the repository and will be later pulled in
<sarnold> Luca__: that's a decent idea, I know the juju team follows up on questions there pretty well; the mail list might also work well
<Luca__> mail list?
<sarnold> Luca__: ah! here we go: https://lists.ubuntu.com/mailman/listinfo/juju
<Luca__> sarnold: Thank you!
<AskUbuntu> What are the steps to deploy OpenStack in a VM using juju? | http://askubuntu.com/q/377353
<jaywink> any ideas what could be wrong on a saucy local bootstrap, status shows forever pending, ssh to machine 0 gives connection refused, deployed a service, machine 1 also in pending, can see in syslog the IP given from DHCP but ssh connection reset, ping works ..  last log line "2013-11-15 09:52:49 INFO juju.provisioner provisioner_task.go:367 started machine 1 as instance jaywink-local-machine-1 with hardware <nil>"
<jaywink> tools 1.16.0.1
<jaywink> (and UFW is off, tried that already :))
<jaywink> pastebin: http://paste.ubuntu.com/6420446/
<jaywink> (new attempt)
<jaywink> ok now it worked, I deleted server image (oct 24) from /var/cache/lxc/cloud-precise/ as instructed https://juju.ubuntu.com/docs/troubleshooting-local.html -> No machines start ...
<AskUbuntu> Is it possible to use Juju on MAAS nodes not connected to Internet? | http://askubuntu.com/q/377397
<gnuoy> mgz, if you do win the internal lottery and end up working on Bug#1241674 let me know if I can be of any help, I have an environment we can test in.
<_mup_> Bug #1241674: juju-core broken with OpenStack Havana for tenants with multiple networks <cts-cloud-review> <openstack> <juju-core:Triaged> <https://launchpad.net/bugs/1241674>
<mgz> gnuoy: thanks!... I guess
<gnuoy> you guess right !
<noodles775> Is this just bad timing (related to uploading new juju tools perhaps?), bootstrap failed with "error: Get https://juju-dist.s3.amazonaws.com/tools/juju-1.16.3-precise-i386.tgz: EOF". More at http://paste.ubuntu.com/6420792/
<jaywink> how do I get rid of a "life: dying" situation - a failed nfs service install does not want to be destroyed :( juju local does not seem very stable :(
<noodles775> jaywink: I haven't updated in the past few weeks, but yes, the local stuff is quite new, and last time I was playing with it, I needed the following to destroy the environment properly: http://paste.ubuntu.com/6421224/
<noodles775> (obviously don't use if you've got other lxc containers you care about there)
<jaywink> noodles775, doing "juju resolved nfs/0" (in this case) solved it :P maybe it wasn't destroying because of the failed install?
<noodles775> jaywink: great (sorry, I thought you were trying to destroy/recreate your local environment).
<marcoceppi> jaywink: running resolved is something you'd have to do in any environment
<jaywink> marcoceppi, ok, I think the docs could be clearer - I saw no mention while skimming through for help :)
<jaywink> (or just a hint from destroy-service that there is an error would be golden - will check if this is not reported yet..)
<marcoceppi> jaywink: it should have been placed in the destroying services page as a caveat
<marcoceppi> but it wasn't
 * marcoceppi updates the docs
<jaywink> thanks marcoceppi, added a note to a bug I had already commented on the same issue to :) (https://bugs.launchpad.net/juju-core/+bug/1219902)
<_mup_> Bug #1219902: Cannot destroy service when install hook failed <canonical-webops> <destroy-service> <juju-core:Triaged> <https://launchpad.net/bugs/1219902>
<benji> rogpeppe: I'm still new to mongo, but I think a query like this will do what you want: {field: {$size: {$mod: [2, 0]}}}
<benji> (where "field" is the name of the element that holds the array)
<rogpeppe> benji: thanks
 * rogpeppe tries it
<rogpeppe> benji: doesn't seem to work unfortunately
<rogpeppe> benji: i think the argument to $size must be a number
<benji> rogpeppe: darn
<bladernr_> hey all, how do I file bugs against juju installed from the cloud-archive tools pocket on precise?
<bladernr_> apport tells me that Juju is not an official Ubuntu package :/
<bladernr_> should I file it on Launchpad against juju, or is there a special place for the cloud-archive packages
<mbruzek> Can a hook call another hook?  Like if I wrote a start and stop hook can the installer call those or is it better to have the code to start and stop in the install hook?
<arosales> mbruzek, hello
<arosales> sorry, missed that message earlier
<hazmat> mbruzek, is the question still hook can call another hook?
<arosales> we have a few folks at a conference this week so little late on the replies
<mbruzek> Hi Antonio
 * hazmat looks for the irc logs
<mbruzek> Yes, hazmat can I call start from install?
<mbruzek> Or should the hooks not know about each other?
<arosales> hazmat, thank for chiming in :-)
<mbruzek> I read the hook documentation and I did not see anything about calling each other.
<hazmat> mbruzek, so they can, but they don't necessarily have the same access to information
<mbruzek> I see that very often people write one hook and make symbolic links to the one hook.
<arosales> mbruzek, when you say installer, it looks like that is the application installer not the hoook install, correct?
<hazmat> mbruzek, yes.. they use the exec name to dispatch
<hazmat> to internal functions doing the hook; mostly it's a code organization technique
<hazmat> mbruzek, so the order of hooks on a charm starting up is.. install, config-changed, start
<hazmat> mbruzek, you can call out to other hooks .. they're just other executables.
<mbruzek> OK thanks.  I understand the all in one hook, but I am trying to write them separately
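The "one hook file, many symlinks" layout discussed above can be sketched as follows. This is a hypothetical, self-contained illustration: `dispatch` stands in for the shared script that `install`, `start`, and `stop` would be symlinked to; in a real charm the argument would be `"$0"`, the name the hook was invoked under.

```shell
#!/bin/bash
# Sketch of the symlink-dispatch pattern: install, start, stop are all
# symlinks to one script, which branches on the name it was invoked as.
dispatch() {
    case "$(basename "$1")" in
        install) echo "install: fetch and configure packages" ;;
        start)   echo "start: bring the service up" ;;
        stop)    echo "stop: take the service down" ;;
        *)       echo "no handler for $(basename "$1")" >&2; return 1 ;;
    esac
}
# In a real charm this would be: dispatch "$0"
dispatch hooks/start
```

Running the block prints `start: bring the service up`, since it simulates being invoked via the `start` symlink.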
<hazmat> mbruzek, relation hooks in particular have additional context
<mbruzek> That answers my question, thanks.
<hazmat> like REMOTE_UNIT_ID, etc. for those you have to setup some additional env variables if they want to use the relation cli api (relation-get/relation-set/relation-list) etc.
<hazmat> ie. config-changed needs to modify a relation value, it will use relation-ids to fetch the applicable relation and then set that before calling out to the rel hooks
<hazmat> k
<mbruzek> OK let me see if I understand this correctly.
<arosales> mbruzek, so to your question, "Can a hook call another hook"
 * arosales lets mbruzek type
<mbruzek> I have a database-relation-changed hook that needs to stop and restart a service.
<mbruzek> I have that code in the stop and start hook.  So I was wondering if I can simply call the hook or if I need to duplicate the start and stop logic in the database-relation-changed hook.
<mbruzek> My understanding is that I DO NOT need to duplicate.
<mbruzek> And that one hook can call another.,
<mbruzek> Is there anything wrong in my assertion hazmat /
<mbruzek> ?
<hazmat> mbruzek, nothing wrong with that
<hazmat> mbruzek, ie. call out to start/stop in your rel-changed hooks
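Hazmat's suggestion — reuse the sibling start/stop hooks from a relation hook rather than duplicating their logic — can be sketched like this. This is an illustrative stub, not a real charm hook: `relation_get` stands in for juju's `relation-get` hook tool, and the `stop`/`start` functions stand in for the charm's `hooks/stop` and `hooks/start` executables, so the sketch runs outside a juju unit.

```shell
#!/bin/bash
# Hypothetical database-relation-changed hook that calls out to its
# sibling start/stop hooks instead of duplicating their logic.
relation_get() { echo "stub-$1"; }      # stand-in for relation-get
stop()  { echo "service stopped"; }     # stand-in for hooks/stop
start() { echo "service started"; }     # stand-in for hooks/start

db_user=$(relation_get user)
stop
echo "rewrote config for db user $db_user"
start
```

In a real hook the stubbed calls would be `relation-get user`, `hooks/stop`, and `hooks/start`, exactly as discussed above.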
<hazmat> mbruzek, which db you working with?
<mbruzek> Thanks, that was the basic question thank you.
<mbruzek> mysql
<Azendale> ok, something that is confusing me: there is LXC (which I've used) and I've heard of the local provider. Are they the same thing? Or is local just configuring what it is run on, without containers?
<sinzui> hi stokachu. DO you have a few minutes to talk about Bug #1222671 ?
<_mup_> Bug #1222671: Using the same maas user in different juju environments causes them to clash <cts-cloud-review> <maas-provider> <Go MAAS API Library:Fix Committed> <juju-core:Fix Committed by thumper> <https://launchpad.net/bugs/1222671>
<hazmat> Azendale, local provider uses lxc
<hazmat> Azendale, local provider just automates the creation and setup of lxc containers as though they were machines in an environment.. effectively translating.. add-unit -n 3 wordpress into creating lxc containers and setting up workloads on them
<hazmat> Azendale, there's also separately the notion of adding lxc containers to machines in other environments .. ie add-unit --to=lxc:2 wordpress  ... which will create an lxc container on machine 2 in the env and deploy wordpress to it
<Azendale> hazmat: thanks! I was wondering if you could get better density when using a MaaS environment by using LXC :)
<hazmat> Azendale, indeed.. that's why that capability is there. it works for most charms.. the only exceptions to date have been some of the openstack ones which want either direct device access or kernel modules (ceph and neutron)
<mbruzek> What would be the correct command to copy down the mysql charm to my machine?
<mbruzek> bzr branch lp: <?>/mysql/trunk
<mbruzek> Something like that right?
<mbruzek> Sorry I found the answer on my own, for future reference it was:
<mbruzek> bzr branch lp:~charmers/charms/precise/mysql/trunk mysql
<mbruzek> Do we have any mysql experts in the room?
<mbruzek> marcoceppi, it looks like you wrote the charm.  Are you there?
<mbruzek> I am writing a charm that uses the database relation and I don't understand why.  The error message is:
<mbruzek> "Error executing sql: create database if not exists `?` default character set utf8 - Could not create connection to database server. Attempted reconnect 3 times. Giving up."
<mbruzek> Unable to create the database. The password might be incorrect or the database is not started.
<mbruzek> I retrieved the root mysql password as it says in the README, by going to /var/lib/mysql/mysql.passwd
<mbruzek> And I entered that into openmrs DB wizard as the mysql root password.  But when it tries to create the database I get the error.
 * arosales reads backscroll
<arosales> marcoceppi is traveling atm . .  . /me looking into error
<arosales> mbruzek, our tech writer just took a first whack at documenting the mysql charm interface @ https://docs.google.com/a/canonical.com/document/d/1WTws_k5K__NAsfZwxf-oKHYqZnZYvrg7K58MvOHFpAY/edit#
<arosales> mbruzek, do you check if the db has been set up before accessing
<arosales> mbruzek, take a look at the doc under "# Check to see if 'database' has been set, and loop until it is"
<mbruzek> Yes it is up and I can even log in using the root password that I gave openmrs
<arosales> mbruzek are you setting the mysql port to 3306 in openmrs?
<mbruzek> Yes that too.  Here is the Database Connection string before OpenMRS attempts to create the database
<mbruzek> jdbc:mysql://localhost:3306/@DBNAME@?autoReconnect=true&sessionVariables=storage_engine=InnoDB&useUnicode=true&characterEncoding=UTF-8
<mbruzek> well maybe it should not be localhost... since this is running on tomcat
<mbruzek> let me dig a little bit more
<hazmat> mbruzek, also bzr branch lp:charms/precise/mysql
<mbruzek> hazmat, Do I need the ~ before charms?
<hazmat> or a bit more ambiguous but also valid atm .. bzr branch lp:charms/mysql
<hazmat> mbarnett, nope
<hazmat> whoops
<mbruzek> OK thanks.
<arosales> mbruzek, ping here if changing out local host helps establish the connection
<hazmat> mbruzek, the fully qualified name includes the ~charmers, but when a charm is officially approved (ie, gone through qa by charmer reviewers) and promulgated those aliases are valid.
<mbruzek> I don't know where the localhost is coming from, the property file that my charm builds properly puts the host ip address in there, let me grab that
<hazmat> mbruzek, i wrote the original version of it (mysql).. what's the issue?
<mbruzek> OpenMRS tries to create a database and I get the following error:
<mbruzek> "Error executing sql: create database if not exists `?` default character set utf8 - Could not create connection to database server. Attempted reconnect 3 times. Giving up."
<mbruzek> Unable to create the database. The password might be incorrect or the database is not started.
<hazmat> the jdbc conn string looks invalid. you need to pull relation-get private-address from the db-relation-changed hook when forming the jdbc conn string
<hazmat> mbruzek, your connection string looks invalid
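The advice above — build the jdbc URL from `relation-get private-address` inside the hook rather than hardcoding localhost — can be sketched as below. `relation_get` is a stub standing in for juju's real `relation-get` hook tool so the sketch runs outside a unit; the address and database name match the values that come up later in the conversation but are illustrative.

```shell
#!/bin/bash
# Sketch: form the jdbc connection URL from relation data inside
# db-relation-changed. relation_get stubs juju's relation-get tool.
relation_get() { echo "10.0.3.3"; }   # stub returning private-address
db_host=$(relation_get private-address)
echo "connection.url=jdbc:mysql://${db_host}:3306/openmrs?autoReconnect=true"
```

This prints `connection.url=jdbc:mysql://10.0.3.3:3306/openmrs?autoReconnect=true`, the shape of the line the charm writes into its properties file.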
<mbruzek> yes I am pulling that info in the charm., but it does not look like open
<mbruzek> openmrs is not getting my property file
<mbruzek> This is the URL my charm is generating
<mbruzek> connection.url=jdbc:mysql://10.0.3.3:3306/openmrs?autoReconnect=true
<hazmat> mbruzek, you can use debug-hooks to try and debug the issue.. add an exit 1 into your db-relation-changed hook, so it fails, and then use debug-hooks to get into it, and do juju resolved --retry to drop into the debug shell
<hazmat> mbruzek, you can juju ssh openmrs/0 to get into the machine and use mysql client to verify the connection string, where's the username password?
<mbruzek> my database-relation-changed hook outputs a properties file which OpenMRS is supposed to read and use.
<mbruzek> That value is what the charm wrote in the file.
 * hazmat nods
<mbruzek> # OpenMRS Runtime Properties file.
<mbruzek> connection.username=iochahmezahhaip
<mbruzek> connection.password=quiethoovuyiphi
<mbruzek> connection.url=jdbc:mysql://10.0.3.3:3306/openmrs?autoReconnect=true
<mbruzek> So it looks like OpenMRS is making up their own connection URL, I will have to look if I can disable that.
<hazmat> k, that looks good
<mbruzek> Or change it
<mbruzek> Thanks hazmat
<mbruzek> OK I have found a way to enter the database connecton url, and changed it to what should be right jdbc:mysql://10.0.3.3:3306/openmrs?autoReconnect=true
<mbruzek> I still get the could not create the connection to the database error
<mbruzek> Do I need to open port 3306 on the mysql server?
<mbruzek> So my charm is running as a subordinate to tomcat, and the mysql server is a different machine 10.0.3.3
<arosales> mbruzek, https://answers.openmrs.org/questions/795/openmrs-configuration-error suggests so
<arosales> on the mysql side
<mbruzek> Yeah but I think the mysql charm should have that one open
<mbruzek> That is how normal connections are made
<mbruzek> Reading Antonio's document helped me, I guess the database is created when the relation is made.  Now that I fixed the URL it tries to create the database that is already there.
<mbruzek> I just tried the option to not create the database and that looks like it worked.
<mbruzek> woot!
<arosales> mbruzek, so you got past "Could not create connection" error?
<mbruzek> Thanks hazmat and arosales for your assistance
<mbruzek> Yes
<arosales> \o/
<mbruzek> hazmat, Are you still here?
<arosales> mbruzek, hello
<arosales> mbruzek, could you state your question again?
<mbruzek> In the database-relation-changed hook I am able to call `relation-get user` for the database user name.
<mbruzek> I was wondering if there was a way to get that database user name using the juju command line
<mbruzek> The hook gets these three values.
<mbruzek> db_user=`relation-get user`
<mbruzek> db_db=`relation-get database`
<mbruzek> db_pass=`relation-get password`
<mbruzek> In this case my charm is named openmrs, and when I create a relation the openmrs database is created
<arosales> mbruzek, can openmrs be forced to reread the properties file
<mbruzek> No, from what I read here it only writes the properties file.
 * arosales is also new to openmrs
<arosales> mbruzek, do the mysql creds have to be typed in via the config wizard?
<arosales> if not perhaps post config wizard you can invoke a config-changed to pass in the mysql creds
 * arosales not sure what methods openmrs allows for passing in creds post setup wizard though
<mbruzek> The only way I got it to work was by doing the advanced set up, where I could specify the DB url, and then tell the wizard NOT to create the database and then yes I had to give it the db user and db pass
<mbruzek> Reading the mysql README, there was a complicated way to get the mysql root password.  You have to ssh to the mysql instance and cat a file.
<arosales> mbruzek, and in that case you needed to query the db user and pass, correct?
<mbruzek> Is there an easier way to get the root password from mysql that I missed?  I may be able to work with that
<mbruzek> Yes the user who is setting up the charm would need a way to get the database user and password.
<arosales> mbruzek, and you are querying that from the tomcat charm, correct?
<mbruzek> Well the OpenMRS is a subordinate to the Tomcat charm
<mbruzek> but yeah the hook is getting that info properly, I just want to see if the user can get that information easily?
<arosales> bcsaller, hazmat ^ ?
<marcoceppi> mbruzek: you don't want to use the root password.
<mbruzek> Once the hook has that information, can I put it somewhere the user can get it out easily?
<mbruzek> Hi marcoceppi, I would only need the root password if I had to create the table.
<mbruzek> I found out that the table openmrs is created when I create the relation
<mbruzek> So I need the user and pass that are authorized to that table and they are not natural like openmrs/openmrs
<mbruzek> connection.username=iochahmezahhaip
<mbruzek> connection.password=quiethoovuyiphi
<mbruzek> connection.url=jdbc:mysql://10.0.3.3:3306/openmrs?autoReconnect=true
<mbruzek> The user who deploys this charm will not know that iochahmezahhaip is the username
<mbruzek> and they need to enter that in a wizard, I am just trying to figure out how I can provide this to the user in an easy way
<marcoceppi> mbruzek: one second, getting my laptop
<arosales> config opt on the openmrs charm?
<mbruzek> arosales, As far as I know the user/pass for the DB is not something I can specify.  I am only getting it once the relation to the DB is created
<mbruzek> I could set the config option once I have them couldn't I ?
<arosales> you could via juju set, but that may not be the most elegant solution; just thinking out loud now
<arosales> marcoceppi, hello btw :-)
<sarnold> what happened to all the != precise charms? a chap in here last night couldn't find any charms from any other versions and wondered if juju was more or less dead or dying..
<arosales> sarnold, was he looking in trusty?
<arosales> precise is where we maintain the charms atm,
<arosales> I would point him there, and he can force a precise charm onto a later ubuntu version if need be.
<marcoceppi> mbruzek: you can't set configuration options from the charm
<sarnold> arosales: he was mostly interested in saucy, e.g. http://www.ubuntu.com/server  promised interesting juju charms for 13.10 and better openstack...
<mbruzek> OK
<marcoceppi> mbruzek: what you need to do is find out the fields that the wizard expects, then POST that data to it at http://127.0.0.1/path/to/wizard
<mbruzek> marcoceppi, not even in a hook?
<marcoceppi> mbruzek: there is no config-set from within a hook
<marcoceppi> configuration is explicitly done by the user
<mbruzek> The wizard is a multipage wizard
<mbruzek> I am not sure POST will work... but that is a great idea
<mbruzek> The wizard has like 3 or 4 pages
<arosales> marcoceppi, well from one charm to another you can't, yet . . .
<marcoceppi> mbruzek: you can probably still mock out each step in the wizard, it's a bit of work and a little tricky, but it's how I would approach the problem
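A sketch of marcoceppi's suggestion: replay the wizard's form submissions with curl. The URL path and field names here are guesses for illustration (inspect the wizard pages' HTML for the real ones), and the cookie jar keeps the session across the wizard's multiple pages:

```shell
# Hypothetical: submit one page of the OpenMRS setup wizard by POSTing
# the same form fields the browser would. Field names and URL path are
# illustrative guesses, not taken from OpenMRS documentation.
JAR=$(mktemp)
curl -s -c "$JAR" -b "$JAR" \
     -d "database_user_name=$db_user" \
     -d "database_password=$db_pass" \
     -d "current_openmrs_database=true" \
     http://127.0.0.1:8080/openmrs/initialsetup
```

Each page of the wizard would need its own POST, in order, reusing the same cookie jar.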
<marcoceppi> arosales:  o/ re:invent is over
<marcoceppi> time to de-compress ;)
<arosales> mbruzek, well there is the advanced option though and just forget the wizard
<marcoceppi> arosales: charms can't set their own config options. There's charm feedback from a charm on the roadmap but it hasn't been specified
<arosales> sarnold, ah I see. The charms can still run on other ubuntu versions and the client is of course tested on 13.10, but the workload itself is tested on the precise branch
<arosales> marcoceppi, yup
<mbruzek> marcoceppi, why do I not want to get the root pass on mysql?  You mentioned that I should not do that right away
<arosales> marcoceppi, mysql-root relation appropriate here?
<marcoceppi> mbruzek: that's the root mysql password; arosales, the mysql-root relation creates a new superuser account, not the root
<marcoceppi> mbruzek: same reason we use sudo accounts over giving out our root passwords
<sarnold> arosales: makes sense, you've only got so much time so you've got to pick your priorities. It just was jarring for a user to see 13.10 promoted so heavily and then be told that nearly all the work was being done on a system three releases back :) he wanted to get ready for 14.04..
<mbruzek> OK.  Well the wizard wants the root mysql password to create the table, but will accept the user/pass that has access to the table too
<arosales> sarnold, understood and we need to start looking at testing charms against 14.04
<arosales> hopefully when the testing starts we'll be able to better field those questions
<sarnold> arosales: thanks for catching me up :)
<arosales> the charm branch conundrum is not an easy one to follow :-/
<arosales> sarnold, thanks for pinging here with the feedback
<sarnold> no, it isn't. I thought I had even authored one or two charms for newer-than-precise in the charmstore, so was surprised to not find them. :) hehe
<sarnold> ("I _did_ do that, right? right?" :)
<marcoceppi> sarnold: they're still there, just not promoted
<arosales> sarnold, if you had a charm in 13.04 it doesn't automatically get promoted to 13.10 and 14.04
<arosales> Thus, as of today there are no charms in 14.04
<sarnold> arosales: sure, I wouldn't expect _that_, that's asking for magic :) but I thought I had a 12.10 charm or two written..
<arosales> as they need to be tested and ack'ed into 14.04
<arosales> sarnold, hmm the charmstore may not look at other series branches
<arosales> sinzui could confirm
<arosales> I don't see him online atm
<arosales> sarnold, you said you had charms in quantal?
<sarnold> arosales: let me go look ..
<sarnold> arosales: hrm. all I can find now are my own precise versions. Color me confused. :)
<arosales> sarnold, you looking in lp or in jujucharms?
<sarnold> arosales: local hard drive, I figured I'd still have the trees laying around..
<sarnold> arosales: .. and launchpad agrees, again just precise. sorry for the wild goose chase.
<arosales> sarnold, no worries. I think there are a few, but currently we concentrate on LTS'es for the charms
<arosales> and we will be working on the 14.04 charm story
<marcoceppi> veryveryveryvery soon
<arosales> which I think was the root of the person's queries for no charms in 13.10 (getting prepped for 14.04)
<sarnold> he -mostly- wanted a way to deploy the newest version of openstack with less pain :)
<marcoceppi> sarnold: he can do that already
<sarnold> marcoceppi: I spent half an hour trying to find charms to help :(
<marcoceppi> sarnold: all versions of openstack are available in the openstack charm, for the record
<sarnold> marcoceppi: wow. got a pointer? :)
<marcoceppi> sarnold: as part of our promise for LTS, we make all versions of openstack available via the cloud-archive
<marcoceppi> arosales: the charm readmes
<arosales> config opt in the charm to point to the cloud archive
<marcoceppi> https://wiki.ubuntu.com/ServerTeam/OpenStackHA
<marcoceppi> https://wiki.ubuntu.com/ServerTeam/OpenStackHA?action=AttachFile&do=view&target=local.yaml
<arosales>   source: 'cloud:precise-updates/grizzly'
<arosales> s/grizzly/newest-openstack
<marcoceppi> you can change cloud:precise-updates/grizzly to havana or folsom, etc
<marcoceppi> we prefer LTS, given the support lifespan, etc
<marcoceppi> makes sense for a server, not necessarily on a desktop though
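The release-pinning being described, as commands one might run after deploying the charms. The service names are illustrative; the `openstack-origin`/`source` option names and the `cloud:precise-updates/...` pocket format are from the local.yaml example linked above:

```shell
# Sketch: select the OpenStack release via the Ubuntu Cloud Archive by
# setting each charm's origin option; swap havana for folsom, grizzly,
# etc. Service names are illustrative.
juju set nova-cloud-controller openstack-origin='cloud:precise-updates/havana'
juju set mysql source='cloud:precise-updates/havana'
```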
<sarnold> marcoceppi: I saw the OpenStackHA page last night but figured it was Too Old when I saw references to juju 0.7 ..
<marcoceppi> sarnold: almost all charms are compatible between juju versions
<sarnold> now that _is_ magic. :)
<marcoceppi> sarnold: it's a promise the core team has made
<marcoceppi> think of the pain of updating over 300 charms if juju changes :)
<sarnold> wow, 300? nice work :)
<marcoceppi> well, we've got 140 "charm store" charms
<sarnold> also impressive
<marcoceppi> but a ton more outside the store
<flickerfly> Has anyone gotten juju connected to Rackspace yet? I see some have looked at doing it, but nothing successful.
<marcoceppi> flickerfly: i heard yesterday that it should work now
#juju 2013-11-16
<arosales> have  a good weekend
<sarnold> bye arosales :)
<hazmat> marcoceppi, welcome back
<flickerfly> marcoceppi, thanks
<flickerfly> So do I need some ppa version other than juju-stable-saucy to get Rackspace connected?
<flickerfly> I told rackspace I want them to work on juju access.
<flickerfly> Could use your vote: http://feedback.rackspace.com/forums/71021-product-feedback/suggestions/4997037-document-connecting-juju-to-rackspace-cloud-
<marcoceppi> hazmat: won't be back until tomorrow. looking forward to being home for sure
<Luca___> marcoceppi: Hi there, I can't find any charm for saucy, do you know whether they are available?
<Luca__> Hello
<Luca__> Hi there, does anybody know something about bundles in 13.10?
<Luca__> According to the ubuntu page this release should support them, however I can't even find charms for saucy
<Luca__> anybody there?
<Luca__> Hi there
<Luca__> jcastro: Hi there, do you have any information about bundles on 13.10?
<jcastro> Luca__, what do you need to know?
<jcastro> I blogged some examples
<jcastro> http://www.jorgecastro.org/2013/11/14/from-0-to-hero-in-a-few-minutes/
<Luca__> jcastro: I am trying to do a PoC of openstack with 13.10, following a few links I have found, however I can only find charms on precise
<Luca__> and I am kind of lost
<jcastro> charms usually deploy precise instances
<Luca__> however if I issue juju deploy ...... on saucy I mostly get ERROR, charm not found
<jcastro> you can specify a series
<jcastro> juju deploy cs:precise/mysql for example
<Luca__> right, I did so for saucy
<Luca__> but not sure it worked
<jcastro> probably not, we only have charms for precise in the charm store
<Luca__> so as far as I understand I can't use juju to deploy something on 13.10? I am sure I am wrong somewhere, but can't figure out where
<jcastro> you can, it's just we only test for precise
<jcastro> though it's probably better to pull down a precise charm and deploy it locally
<jcastro> than from the store
<jcastro> juju deploy --repository=. local:haproxy
<jcastro> after you download the charm
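Putting jcastro's steps together as one workflow. The haproxy charm and `--repository` flag are from the lines above; the branch location follows the usual lp:charms layout and is stated here as an assumption:

```shell
# Sketch: pull a precise charm down and deploy it from a local
# repository (the workaround discussed above for non-precise clients).
mkdir -p ~/charms/precise
cd ~/charms
bzr branch lp:charms/precise/haproxy precise/haproxy
juju deploy --repository=. local:precise/haproxy
```

Qualifying the series in `local:precise/haproxy` matters on a saucy client, which would otherwise default to its own series.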
<Luca__> right
<Luca__> will try and get charms for precise and deploy locally
<jcastro> no one's ever really asked for saucy charms before, heh
<Luca__> yeah, I guess so. Last thing: I know you are doing a great job. I am mainly using https://wiki.ubuntu.com/ServerTeam/OpenStackHA for Openstack but the # of servers is a limiting factor in a virtual environment. Do you know if there is any documentation with a smaller number of servers, maybe using containers or virtualization?
<jcastro> we're working on a bundle that will do just that
<jcastro> but it's in progress and not ready
<jcastro> you want to ping jamespage about that when he gets back next week if you're interested in checking it out
<Luca__> sure, thanks a lot, I appreciate!
<lazyPower> marcoceppi: o/
<marcoceppi> \o lazyPower
#juju 2013-11-17
<InformatiQ> hey guys I'm new to juju and a bit confused about the difference between deploy and add-unit
<InformatiQ> could some one kindly explain
<rick_h_> InformatiQ: howdy, sure thing. Deploy sets up a service that you want in your environment.
<rick_h_> InformatiQ: add-unit is a way to add another instance of an existing service
<rick_h_> InformatiQ: so you'd deploy apache, and then add-unit to bring up 2 more configured the same as the first
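rick_h_'s example, written out as commands (the apache2 charm name is illustrative):

```shell
# deploy creates a new service; add-unit scales an existing one with
# identically-configured instances.
juju deploy apache2          # one unit of a new apache2 service
juju add-unit -n 2 apache2   # two more units of the same service
```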
#juju 2014-11-10
<jose> bradm: ping, mind a quick PM?
<bradm> jose: sure, go for it.
<jose> cool :)
<jam> voidspace: want to join the 1:1 room?
<bloodearnest> rick_h_: loving the new info pages for charms, thanks! :)
<rhalff> hi, is there any specification for the websocket API?
<yaell> Hi guys I need some help. I am trying to add a new container to an existing machine on a maas environment using the command : juju add-machine lxc:1 but I always get the same error message :container failed to start. Does anyone have an idea why this is happening?
<rick_h_> bloodearnest: <3
<mgz> bug 1386143
<mup> Bug #1386143: 1.21 alpha 2 broke watch api, no longer reports all services <api> <regression> <juju-core:Triaged> <juju-gui:Invalid> <juju-quickstart:Invalid> <https://launchpad.net/bugs/1386143>
<mhall119> arosales: marcoceppi: gaughen: jose: we only have today and tomorrow before UOS starts, and so far there is only one Cloud & DevOps session in Summit. Have you been recruiting sessions and just haven't added them to Summit yet, or are there no sessions planned yet?
<arosales> mhall119: thanks for the reminder. I need to sync up with gaughen as she's back from ODS Paris now
<gaughen> mhall119, I will only be scheduling 2-3 sessions
<mhall119> thanks gaughen
<johnmce> Hi all, is there anyone around that can give me some advice on using the hacluster charm with MAAS?
<johnmce> I'm specifically concerned with how the openstack charms now support multiple networks, and how that is supposed to work with MAAS+LXC
<jamespage> gnuoy, dosaboy: https://code.launchpad.net/~james-page/charms/trusty/quantum-gateway/vlan-flat-support/+merge/241308
<johnmce> Can anyone point me in the direction of any documentation that explains how to get Juju to create LXC containers (in MAAS) that have multiple virtual NICs?
<johnmce> All of the updated Openstack charms support declarations for multiple networks, and multiple vips, but without the ability to create LXC containers with multiple NICs this is useless.
<johnmce> Please, can anyone help?
<jose> johnmce: hey, I think you have better chance of getting someone to explain if you email juju@lists.ubuntu.com, OpenStack Charmers don't idle too much in this channel
<jose> mhall119: have got only the CI one, apparently people are too busy to respond
<jose> stub: did you get bundletester running? if not, I can give you a hand with it
<lazyPower> dpb1: i haev no idea what I was thinking when i sent that MP in - fixed up and re-set for re-review. Thanks for the catch!
<jose> tvansteenburgh: hey, any updates on bundletester+constraints?
<tvansteenburgh> jose: it's implemented, but i'm waiting for a patch to juju-deployer to land first
<jose> got it!
<jose> thanks :)
<tvansteenburgh> np!
<jose> hey rick_h_, do you think you could do an UOS session demonstrating what's new on the gui and probably what's coming?
<rick_h_> jose: maybe
<rick_h_> jose: when ya thinking?
<jose> rick_h_: wed, thu or fri, you choose the time
<rick_h_> jose: otp but sounds like something I could do :)
<jose> rick_h_: lemme know when you've got a bit of time and we can discuss, thanks! :)
<jose> tvansteenburgh: quick question, I just ran bundletester and I'm not seeing any sentries, is this expected? I'm on 1.21-beta
<tvansteenburgh> sentries were removed from amulet in v1.8.1
<jose> oh, ok
<tvansteenburgh> i mean, the sentry api still exists in its same form
<jose> thanks for the clarification :)
<dpb1> lazyPower: looks good now
<tvansteenburgh> but the sentries are no longer implemented as subordinate charms
<johnmce> jose: I've emailed the list. This is it: https://lists.ubuntu.com/archives/juju/2014-November/004516.html
<jose> johnmce: cool, that's better chances of getting an accurate response :)
<rick_h_> jose: hey sorry. So I've got gaps wed or friday
<ezobn> Hi all: I have run all the openstack charms and made the needed relations; all is working well except the nova security groups, where nova.network.security_group.neutron_driver gets an error fetching them. I use openstack-origin:trusty-juno. Has anybody faced that?
#juju 2014-11-11
<jose> rick_h_: event runs from 14 to 20 UTC, you can choose your slot!
<ezobn> Hi All, After a fresh install of juno openstack on trusty (openstack-origin:trusty-juno) the neutron security group service is not started. How can I run it? Why is there no neutron-server installed on the controller node (installed by charm)?
<vogonpoetry> Trying to make the cinder-vmware charm work with another storage driver. Looking at it, I think I mainly need to edit cinder_context to take in the new config keys defined in config.yaml… Is there more to it than that? If so, where else should I look?
<vogonpoetry> and many thanks for any help
<cargill_> hi, when I deployed a bundle and a service is not started, can I start it from the gui?
<cargill_> the debug-log does not mention any errors, however it does not mention starting that either
<cargill_> hmm, yet stat says "agent-state: started"
<jose> tvansteenburgh: hey, have a min?
<tvansteenburgh> jose: yep
<jose> tvansteenburgh: quick question, will bundletester deploy the local charm or the one in the store as a default?
<jose> cory_fu: those tests for chamilo are now done, should I do an MP against your branch or against the store?
<tvansteenburgh> jose: bundletester just finds tests and executes them. what's deployed would depend on what's in the amulet test or the bundle file
<jose> tvansteenburgh: got it, thanks
<rick_h_> jose: where do I sign up for a slot?
<jose> rick_h_: with me
<rick_h_> jose: ah ok.
<rick_h_> jose: put me down for 15:00 friday?
<jose> rick_h_: sure, what would be a short description of the presentation?
<rick_h_> jose: I'm going to ping alexisb and see if she wants to dual up again or not.
<jose> rick_h_: got it, lemme know! :)
<rick_h_> jose: so for now "What's new and upcoming in the work of Juju UI Engineering" can work
<cory_fu> jose: If you're happy with the services framework implementation of the charm, I'd say MP it against trunk
<rick_h_> jose: and for a description just something about the latest in the progress of the Juju GUI, jujucharms.com, and juju-quickstart.
<rick_h_> jose: and if alexisb can join then I'll ask you to amend it to be more general juju
<jose> rick_h_: awesome, sec to give you the link...
<jose> rick_h_: http://summit.ubuntu.com/uos-1411/meeting/22387/whats-new-and-upcoming-in-the-work-of-juju-ui-engineering/
<jose> cory_fu: ok, MP against trunk is open
<rick_h_> jose: ty
<cory_fu> tvansteenburgh: Why is bundletester tagged to bzr==2.6.0 instead of bzr>=2.6.0?  My system apparently has 2.7.0dev1 and it's complaining
<tvansteenburgh> cory_fu: that's how i inherited it, and i haven't tested it with anything else
<tvansteenburgh> you didn't install it in a virtualenv?
<cory_fu> tvansteenburgh: Yes, but I had to use --system-site-packages to get python-apt to work, which meant that my system-level installed bzr made it so I couldn't install the right version of bzr into the virtualenv
<cory_fu> I know I had this working at one point, so I'm not sure how I managed it before
<tvansteenburgh> pip install -I
<tvansteenburgh> (capital i)
<cory_fu> Ah!  That works.  I thought there was such an option, but it's not listed on pip --help
<tvansteenburgh> pip help install :)
<cory_fu> Gah
<cory_fu> :)
<cory_fu> Thanks
<tvansteenburgh> np
<cargill_> hi, is it possible to run juju local environment when the host is an lxc container already?
<lazyPower> cargill_: lxc in lxc gets a bit hairy, and isn't really recommended.
<LinStatSDR> sorry for the delay, lazyPower is correct.
<cargill_> lazyPower: I've been using vagrant until now, but the fact that everything is routed through 10.0.3.1 which breaks some relations, notably postgresql, is quite annoying, not to mention slow and memory-hungry
<lazyPower> cargill_: i understand your frustration. aisrael is working with our team that maintains the images to address most of those issues. there's also work being done to provide a docker based image for the workflow (albeit a much lower priority than fixing vagrant papercuts presently)
<cargill_> and I'm running Debian, which juju is not really happy with and I haven't had the time to find out how to fix that yet
<aisrael> cargill_: the issue with postgresql and routing should be fixed in the latest vagrant box images
<cargill_> I've downloaded one yesterday, is that new enough?
<LinStatSDR> cargill_ what version of Debian?
<cargill_> jessie
<LinStatSDR> okay
<lazyPower> cargill_: where did you fetch the box from?  aisrael - have we updated the docs with the latest box url(s)?
<aisrael> cargill_: yes. Are you seeing the routing issue with that box?
<cargill_> the amd64 one linked in https://juju.ubuntu.com/docs/config-vagrant.html
<cargill_> aisrael: it's this issue, isn't it? 'FATAL: no pg_hba.conf entry for host "10.0.3.1"'
<aisrael> cargill_: Yep, that's the issue
<LinStatSDR> ;(
<aisrael> lazyPower: looks like the doc is pointing to an image from August
<aisrael> cargill_: Could you try an image from here? http://cloud-images.ubuntu.com/vagrant/trusty/current/
<aisrael> I'll get the docs updated
<lazyPower> aisrael: i suspected as much, we should try to automate that part of the docs or point them at the /current/ images so it's always up to date
<LinStatSDR> juju does need the updating
<aisrael> lazyPower: Yeah, it should just point to current. I'll see if we can get that stale image removed, too.
<lazyPower> aisrael: offtopic to whats going on here - will you be available to help run a session at UDS over the dev workflow with vagrant? ~ 20 minutes give or take
<cargill_> I'd still prefer to try the LXCception approach, with 4GB of memory, VirtualBox is not a nice neighbour
<aisrael> lazyPower: depends on the day. My schedule this week is going to be somewhat challenging
<cargill_> and I'm mostly after testing out stuff locally, not production use
<cargill_> what are the common issues with that approach?
<lazyPower> cargill_: I've run into issues with cgroups failing the upstart task - and didnt pursue it any further
<lazyPower> cargill_: and to note, i haven't actually tried to run juju within that lxc container
<lazyPower> cargill_: however if you come up with a working solution - i'm all for talking to you about your approach and documenting it for science.
<LinStatSDR> For science!
<cargill_> at the moment, I'm getting 'juju.container.lxc clonetemplate.go:167 container failed to start: container failed to start' in the container, nothing more descriptive in the juju debug-log -l TRACE
<cargill_> but I haven't tried to create a container yet, this is where I was before I asked here
<lazyPower> cargill_: if i were to guess at the culprit - i'd say its networking
<LinStatSDR> i dislike generic failure debug statements
<cargill_> yeah, I'm trying to get all the information there is on LXC in LXC and see if I can get an lxc container start manually, to make sure that works
<ktosiek>  /join #nagare
<ktosiek> dang it
<lazyPower> cargill_: https://www.stgraber.org/2012/05/04/lxc-in-ubuntu-12-04-lts/ - container nesting.
<LinStatSDR> hmm 12.04
<lazyPower> the info is a bit old :(
<LinStatSDR> Yes, needs a bit of updating lol
<lazyPower> https://help.ubuntu.com/lts/serverguide/lxc.html#lxc-basic-usage - that may be more up to date.
<lazyPower> under nesting - it talks about a profile to be used for nesting
<lazyPower> lxc.aa_profile = lxc-container-default-with-nesting
<LinStatSDR> LazyPower, that is much newer. For 14.04
<cargill_> ok, so the host being debian, I don't have to do this part, right? since systemd manages its cgroups, would it interfere?
<LinStatSDR> Does it "have" to be debian :D
<cargill_> LinStatSDR: kinda has, yes, I'm not reinstalling my machine just to have juju running on it :)
<cargill_> (although I can get rid of systemd, it is a bit painful sometimes)
<LinStatSDR> Hehe, I'm just busting your chops cargill_. That and I seem to see fewer "problems", if you will, running Ubuntu for juju
<lazyPower> cargill_: since we're not officially moving to systemd for another few cycles, i dont think anyone here has really worked with juju under a systemd supervisor.
<lazyPower> so we wont have much in terms of info in that regard
<cargill_> yeah, I thought as much, thanks anyway
<LinStatSDR> =)
<cargill_> does the lxc.mount.auto = cgroup line go into /etc/lxc.conf or the container's own config file in /var/lib/lxc/<container>/config?
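A sketch of one answer, per the serverguide nesting section linked above: both settings go in the container's own config file. The container name is illustrative, and `lxc.aa_profile` is AppArmor-specific, so it may be a no-op on a Debian host without AppArmor:

```shell
# Append nesting settings to a specific container's config, then
# restart it. Container name "juju-host" is illustrative.
cat >> /var/lib/lxc/juju-host/config <<'EOF'
lxc.aa_profile = lxc-container-default-with-nesting
lxc.mount.auto = cgroup
EOF
lxc-stop -n juju-host && lxc-start -n juju-host -d
```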
<lazyPower> mhall119: ping - we've got all 5 of our sessions in pending on summit.
<lazyPower> oo, i missed one. 6 sessions
<mhall119> lazyPower: in pending? then you need a track lead to approve them
<lazyPower> mhall119: do I ping antonio for that? sorry for my daftness - this is my first time doing the scheduling.
<mhall119> jose: gaughen: marcoceppi: ^^
<mhall119> lazyPower: antonio or one of the ones I just pinged
 * lazyPower facepalms
<lazyPower> you sent that in the email - sorry
<mhall119> it's okay :)
<cargill_> hmm, juju-gui is started, but keeps on responding with a 301 to https:// + whatever I put in the Host header, then does not want to speak TLS, is there something I'm missing?
<lazyPower> rick_h_: have you guys done any experimentation with the gui being nested in lxc? I think i know the answer to this...
<lazyPower> rick_h_: for clarity - Debian host => Parent JUJU Environment container => juju gui is container in a container.
<cargill_> ah, it was trying to redirect me to port 443, just that it got confused
<rick_h_> lazyPower: no, the big thing is how do you get to it via the network? It needs to have the ability to hit the juju api websocket
<rick_h_> cargill_: cool, yea one of the guys on the team is currently working on making the charm take a port as a config param to run on
<rick_h_> which should help it colocate better with other services soon. Should be released next week
<lazyPower> rick_h_: good question - i'm not sure - cargill_ is pioneering this
<rick_h_> lazyPower: yea, never tried it I guess. Nothing shouldn't work that I know of, but it all depends on the networking setup from what I can think of
<lazyPower> I haven't either tbh - but this sounds like a compelling alternative to using vagrant if the networking re-config is trivial.
<cargill_> after I got the cgroups remounted in the container as well, juju seems to be mostly happy (apart from me setting the apt proxy wrong and the install of juju-gui failing because of that :))
<rick_h_> lazyPower: huh? How does it help with vagrant? Does vagrant just run the lxc?
<lazyPower> rick_h_: vagrant is a heavier weight alternative when i can just share resources with my HOST
<rick_h_> lazyPower: ok, but I'm missing the point. If you're on linux and have lxc why do lxc in lxc?
<lazyPower> rick_h_: all that i *really* want this for is doing the testing setup for charm reviews - as my workstation is polluted beyond recognition from all the junk in 00-setup from the charms.
<rick_h_> you can do multiple envs in lxc?
<lazyPower> isolate all that business so i can just wipe out the lxc container when i'm done and call it.
<rick_h_> lazyPower: right but do an env and just destroy-environment --force?
<lazyPower> and i dont need nested containers for that.
<lazyPower> now that i think about it, i've been undeniably lazy - i could just snapshot a testing container, and put my environments.yaml in there - use it for testing with cloud hosts
<rick_h_> lazyPower: ok, I'm all for helping you solve a problem with the gui. Just not understanding the problem atm so forgive me.
<rick_h_> lazyPower: give me a setup you want to work and I'll see what we can do
<lazyPower> rick_h_: well my use case is crazy different than what cargill_ is doing, and cargill_ would be a better source of truth on that matter than I.
<lazyPower> i pinged you to see if you guys had ever had a crazy notion to try that - but most of us that i'm aware of, have not gone the route of nested lxc.
<rick_h_> lazyPower: right, we've not really. lxc is usually 'contained' enough for our needs.
<lazyPower> yo dawg, i heard you like to contain things, so we spun up containers in your containers so you can container while you container.
<LinStatSDR> lol
<LinStatSDR> lazyPower +1
<cory_fu> Just submitted my dhx (debug-hooks-ext) plugin pull request: https://github.com/juju/plugins/pull/32  I am interested in feedback, but I think it makes for a much improved charm debugging experience
<cargill_> is there a way to get the public-address of a service either in the GUI or through the juju command?
<cargill_> apart from juju stat, which is a bit hard to parse in the shell
<cargill_> actually, not through the gui, because I want to get the gui service address to set up networking after I bootstrap a new environment...
<lazyPower> cargill_: juju run --unit service/# "unit-get public-address"
<cory_fu> cargill_: juju status service | grep public-address | awk '{print $NF}'
<lazyPower> if you're on one of the beta builds, we have newer options for juju status as well
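cory_fu's grep/awk pipeline, exercised against a canned status snippet so the parsing step is visible on its own. The sample output below is fabricated for illustration; in practice you would pipe `juju status <service>` in instead:

```shell
# Fabricated sample of what `juju status mysql` might print.
status_sample='services:
  mysql:
    units:
      mysql/0:
        public-address: 10.0.3.42'

# Same filter cory_fu suggested: last field of the public-address line.
addr=$(printf '%s\n' "$status_sample" | grep public-address | awk '{print $NF}')
echo "$addr"
```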
<cargill_> lazyPower: thanks
<cargill_> yay, juju feels much snappier, does not eat my memory for breakfast and no problems with cross-machine connections going through 10.0.3.1
<cargill_> thanks for your help everyone
<lazyPower> No problem cargill_ - if you have any notes i'd love to see them.
<lazyPower> its an interesting approach for sure
<jose> mhall119, lazyPower: scheduling? shoot a PM
<lazyPower> jose: i sent an email follow up - let me fwd it to ya
<jose> cool
<lazyPower> jose: you've got mail - and there are 6 sessions proposed on summit
<jose> got it
<jose> lazyPower: have you already proposed the sessions on summit or created a blueprint?
<jose> don't see them
<lazyPower> jose: no blueprints - the only session we have for planning is the open feedback - and what project would you target that against as it encompasses charms and juju?
<lazyPower> jose: however mbruzek created 6 sessions that are in pending
<mhall119> jose: you don't see pending meetings in http://summit.ubuntu.com/uos-1411/review/ ?
<jose> lazyPower, mhall119: was looking at scheduling, sorry
<jose> lazyPower: 17 UTC wed, charm testing: slot is plenary. 15 UTC fri, open feedback, slot is taken by rick_h_
<lazyPower> jose: can you just shift them down into open slots and I'll update our tentative slots on the calendar?
<lazyPower> jose: what i'm more concerned with is keeping the order that we have them scheduled, as it's a progressive build on the prior
<jose> ok
<lazyPower> the times are adjustable however
<jose> lazyPower: charm testing is on the 18UTC slot on wed, open feedback on the 14 UTC slot on Fri
<jose> lazyPower: actually, you can choose between 14UTC fri or 18 UTC fri
<lazyPower> jose: 18UTC sounds like a better timeslot as its after standup.
<jose> ok
<jose> lazyPower, mbruzek: all meetings approved and scheduled
<lazyPower> jose: thanks for the follow up o/
<jose> 5 slots are still open if anyone wants to take them
<jose> np :)
<bidwell> Is this the place where one might get help with getting juju+maas to bootstrap?
<lazyPower> bidwell: we can certainly try - whats the situation?
<bidwell> I have a maas server with 6 machines behind it that have been provisioned with ubuntu 14.04.1 and returned to the ready state (but still up).  When I run "juju bootstrap" from the maas server it runs for 30 minutes and then says "ERROR bootstrap failed: waited for 30m0s without being able to connect: /var/lib/juju/nonce.txt does not exist"
<bidwell> I can ssh to them as ubuntu@host.domain from the maas server and sudo once there.  I am not sure what I am missing.
<lazyPower> oh, interesting
<lazyPower> missing nonce.txt huh?
<lazyPower> bidwell: i'm not positive on why this is the case but i'm going through questions on AU with similar symptoms
<lazyPower> bidwell: can you get me the output from juju -v --debug bootstrap -e maas (or your maas environment name)
<lazyPower> preferably in a pastebin
<themonk> lazyPower, hi
<jose> themonk: hey, need any help?
<themonk> jose, : hi yes :)
<jose> what's up?
<themonk> jose, i have submitted my charm for review but i am just waiting, typically how long does it take to review? and can you see my page and tell me if there is anything i did wrong?
<jose> themonk: let me check if it's on the queue
<jose> themonk: which charm is it?
<themonk> gluu-server
<lazyPower> hah
<jose> that's lazyPower
<lazyPower> jose: i have that one locked but have since gotten pre-occupied
<jose> cool then!
<lazyPower> if you want it i can unlock it and you can get the first round
<jose> definitely
<jose> I've got some time now
<lazyPower> allrighty - incoming c-c-c-c-c-c-combo-breaker
<lazyPower> themonk: time to review is subjective - it depends on what's in the queue and so forth - as you're an ISV we'll give ya some express privileges ;)
<lazyPower> jose: unlocked - have at it
<jose> cool, checking now
<themonk> lazyPower, :)
<marcoc> whit, how do you deploy cloud foundry?
<whit> marcoc, usually I follow the readme, but execute the actual deployment command by hand
<jose> themonk: mind if I PM?
<whit> marcoc, what are you seeing?
<whit> marcoc, or was your question even more general?
<marcoc> whit, I have bootstrapped; what's next is my question
<themonk> jose, ?
<jose> themonk: wanna ask a couple of questions about the charm, and was wondering if it was fine for you if I sent a private message
<whit> marcoc, alright!  I suggest checking out the source from launchpad and following the instructions in the README
<themonk> jose, ok np
<whit> marcoc, let me grab you a link
<marcoc> whit, and the source is?
<whit> marcoc, https://code.launchpad.net/~cf-charmers/charms/trusty/cloudfoundry/trunk
<marcoc> whit, ta
<whit> marcoc, holler if you have any issues.  Where are you planning on deploying?
<jose> marcoceppi, lazyPower, mbruzek: auth request to push https://code.launchpad.net/~ibm-demo/charms/trusty/mediawiki/trunk/+merge/240072
<marcoc> AWS west-2
<lazyPower> jose: go for it
<jose> lazyPower: thanks
<whit> marcoc, ok cool.  be sure to set a constraint for instance-type=m3.medium
<marcoc> instance-type works as a --constraint?
<whit> marcoc, iirc, the deploy script included will create a "dense" placement
<whit> marcoc, only on aws, but yeah
<whit> marcoc, you can also pick a size of cpu or memory and get the same effect.  main thing is to avoid getting hung up on limited m1.smalls
<whit> marcoc, it should spin up 8 machines iirc
<marcoc> oh crap, where do I set the constraint? before running cfdeploy?
<whit> marcoc, before
<whit> marcoc, generally we bootstrap with it
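whit's advice can be sketched as CLI invocations. This is an illustration only: the environment name `amazon` is a placeholder, the flags follow the juju 1.x CLI, and the commands are printed rather than run since no cloud is attached here.

```shell
#!/bin/bash
# Compose the constraint whit recommends; echoed instead of executed so the
# sketch is self-contained.
constraint="instance-type=m3.medium"
echo "juju bootstrap -e amazon --constraints \"$constraint\""
echo "juju set-constraints \"$constraint\""   # or, set it after bootstrap
```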
<marcoc> oops
<marcoc> juju deployer -T
<whit> marcoc, anyway, it will fail fairly fast. what version of  juju are you using?
<whit> marcoc, exactly
 * whit should alias that to juju undeploy
<marcoc> 1.21-beta1
<marcoc> whit, I wrote a juju-reset plugin
<whit> marcoc, ah nice
<whit> marcoc, 1.21-b may not have the m1.small issue
 * whit crosses fingers
<whit> marcoc, how's reinvent?
<marcoc> hasn't started yet, but it's pretty hectic
<bidwell> my 'juju -v --debug bootstrap -e maas' pastebin should be at http://pastebin.com/fxMRSJM
<marcoc> whit, how long should this take?
<marcoc> I got an internet window
<marcoc> to a thing with tabs
<marcoc> but it's been spinning for a while
#juju 2014-11-12
<skay> hi all, I'm trying to deploy jenkins and the state seems to be stuck in pending for a while now
<skay> http://paste.ubuntu.com/8952564/ that's the history of commands. have I done anything glaringly wrong?
<skay> I'm mostly following what the charm docs say
<skay> woo! finally. state changed from pending to error. at least that is some info for me
<jose> skay: still around? I'd like to debug that
<skay> jose: hi, I can't help debug it at the moment, I'm working on something else right now
<skay> jose: not sure if I will be able to help this week, unfortunately, I'm working from a different timezone than normal, and I have been pretty sleepy outside of work hours
<jose> skay: oh, no problem :) if something else pops up, please let me know
<jose> skay: totally understandable :)
<skay> jose: okay, no worries! I will try to remember later to help.
<cargill_> is there a way to see if a hook is in progress? for instance if it failed, then I run juju resolved -r, juju stat still shows failed and not pending even if the hook is still running
<cargill_> also, I seem to have got juju-gui into a weird state where it did not want to interact with me anymore, it was juju-gui, because destroying and re-adding it has helped (a restart has not)
<mbruzek1> Hi Juju people.  We are about to kick off a UOS Juju charm school
<mbruzek1> if you want to ask questions please follow us in #ubuntu-uds-devops-1
<mbruzek1> If you want to join us on the hangout please join here:  https://plus.google.com/hangouts/_/hoaevent/AP36tYc0Jup3WIfx2y7xjN_n5r_CqEIPHfQALDK4M8cYEvra-yB17Q?authuser=0&hl=en
<hatch> should new bugs for juju-core be filed on github or launchpad?
<rick_h_> hatch: launchpad
<rick_h_> hatch: but check for a dupe, I'm going to bet there's something along those lines before
<hatch> yeah will do
<hatch> thx
<drbidwell> Where can I find a description of what "juju bootstrap -e maas" does on a server when it ssh's into it?  I need a better "Theory of operation" understanding.
<lazyPower> drbidwell: great question - is the debug output enough to get you started or are you looking for a white paper style document?
<drbidwell> The debug output that I have so far just shows that it did "ssh ubuntu@host /bin/bash"  and I have no idea what it is trying to do, if it is succeeding or failing.  No idea what to fix to get it to work better.
<drbidwell> lazyPower: is there a way to get more debug output to tell me what it is doing and success or failure?
<lazyPower> drbidwell: ah, just -v --debug - if you need more info i'll need to draw in a core developer
<drbidwell> lazyPower: I have machines under MAAS which I can ssh to as ubuntu@host and sudo, but get little feedback about what is really happening or not happening.  I have changed the timeout from 30 minutes to 1 hour and to 2 hours with no difference in the results.  Do I need to take this to the juju-dev channel?
<lazyPower> drbidwell: I apologize for the delay in response - I'm getting ready for my UOS track - I would suggest you investigate that avenue, and i'll circle back after the session(s)
<drbidwell> lazyPower: thanks
<mbruzek1> The next UOS session starts soon
<mbruzek1> Please join us here https://plus.google.com/hangouts/_/hoaevent/AP36tYekk8djMgop1pzFddjyTDy7plhXOwcKYregkciGjHoVC0aAXQ?authuser=0&hl=en
<mbruzek1> If you are interested in Juju charm testing
<mbruzek1> Join #ubuntu-uds-devops-1 if you have questions or want to participate
<mbruzek1> Anyone in Juju can join our session for charm helpers
<mbruzek1> https://plus.google.com/hangouts/_/hoaevent/AP36tYeP-0vXA_qvCTv1x9cIvBwcqZIs4SMygfqh1B2py0tqo8maag?authuser=0&hl=en
<mbruzek1> Please join us if you want to participate
<mbruzek1> #ubuntu-uds-devops-1
<mbruzek1> on IRC
<amriunix> any free tutorials about juju ???
<whit> on 1.21beta1 where can I find out how to use the collect-metrics hook?
<rick_h_> whit: hit up cmars and company
<whit> rick_h_, aight
<JoshStrobl> marcoceppi, mind taking a look at this: https://github.com/juju/docs/pull/209
<lazyPower> JoshStrobl: he's at AWS re:Invent this week - there may be some delay. I'll be happy to take a look when i'm out of this meeting
<lazyPower> drbidwell: circling back - have you found a satisfactory answer?
<JoshStrobl> lazyPower, thanks
<lazyPower> JoshStrobl: verified and merged
<JoshStrobl> \o/
<drbidwell> lazyPower: I am getting closer.  I have a lead to follow at least.
<lazyPower> drbidwell: ok. with juju not properly provisioning on the maas node - it can be a slew of things. So i'm glad you're on a lead as to what it might be.
<lazyPower> drbidwell: typically in my scenarios i've run into issues with networking, and that caused the issue - where the vlan in maas was improperly setup by me, so all apt-calls were failing, and juju wasn't able to bootstrap or do much of anything - and it hung during the bootstrapping phase. But that's anecdotal - and may or may not be related to what you're seeing.
<hatch__> Is there a way to reset a config option back to its default value via the cli? unset resets all options
<hatch__> wait nm :)
#juju 2014-11-13
<skay> any idea when the pure-python branch of lp:charms/python-django will land?
<skay> I'd like to consider a change in how python-django pip installs things. I'd like it to be able to pip install from local wheels.
<johnmce> Hi guys. Is anyone free to answer a quick question about the nova-cloud-controller charm and the new neutron-api charm?
<mbruzek1> We have 30 minutes until the Juju Big Data UOS session.  Please attend if you are interested in BIG DATA
<mbruzek1> http://summit.ubuntu.com/uos-1411/meeting/22392/big-data-and-juju/
<gnuoy> johnmce, hi, what's the question?
<johnmce> gnuoy: Hi. I'm upgrading my test cluster from Icehouse to Juno. Had a few breakages along the way. Now deploying updated nova-cc charm and neutron-api. nova-cc charm fails to establish mysql relationship due to failed neutron db upgrade.
<johnmce> Was wondering if nova-cc should still be fiddling with the neutron db in the presence of neutron-api
<gnuoy> johnmce, unfortunately, yes. the nova-cc charm is in charge of running db migrations for neutron for os >= Juno
<gnuoy> johnmce, from a juju pov there is a relation between nova-cc and mysql but the neutron db migration was not run, is that right ?
<johnmce> gnuoy: OK, I don't suppose you would know how to work around this failure. Command-line being run on nova-cc node is "/usr/bin/neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head"
<johnmce> gnuoy: Error is "sqlalchemy.exc.OperationalError: (OperationalError) (1050, "Table 'agents' already exists") '\nCREATE TABLE agents ......"
<johnmce> gnuoy: So, I seem to have at least a partial schema update
<gnuoy> johnmce, is it definitely partial ?
<gnuoy> I mean, are you sure the migration hasn't run through?
<johnmce> gnuoy: Well, I seem to have a table that it thinks I shouldn't have.
<johnmce> gnuoy: Presumably that table didn't exist pre-Juno, yet I seem to have it.
<johnmce> gnuoy: If I don't need a schema update (migration), then the logic that determines when an upgrade should be performed must be broken.
<gnuoy> johnmce, that is possible. Do you see any other evidence that the neutron db is not in the state it should be?
<gnuoy> errors from neutron-server etc
<johnmce> gnuoy: I'm not familiar with the db changes required for a Juno migration, so I wouldn't know what to look for. I've not checked the neutron server logs.
<gnuoy> johnmce, I'm just wondering if everything is rosy but, as you say, the logic around when to run the migration is broken
<johnmce> gnuoy: Things are pretty broken generally right now, so I'm just taking it one step at a time. Been fixing up other charms as I go.
<johnmce> gnuoy: Keystone, glance, horizon are all good, but nova_cc can't get past this
<gnuoy> johnmce, when you say it can't get past it, do you mean a juju hook keeps erroring?
<johnmce> gnuoy: I've destroyed the nova-cc service and re-deployed numerous times, so maybe the migration happened before. The juju hook for shared-db always fails. "'hook failed: "shared-db-relation-changed" for percona-cluster:shared-db'"
<gnuoy> johnmce, right, I see. Sounds like a charm bug that migrations fail if the schema is already present.
<gnuoy> johnmce, I can work on trying to reproduce and getting a fix tomorrow.
<gnuoy> johnmce, could you raise a bug report please ?
<johnmce> gnuoy: I don't mind fixing it myself, if I can get a handle on what it's doing. Do you have any idea offhand what clues the charm looks for the decide a migration is needed, or does it just unconditionally call the neutron-db-manage script?
<lazyPower> Greetings #juju - UOS 1114 - Big Data track is going to start in about 6 minutes.  If you'd like to participate - https://plus.google.com/hangouts/_/hoaevent/AP36tYfh0sDqCTgtXmsp4LRdu4lnwysNlJ0jMTS7tlh8HNWfgen-Tw?authuser=0&hl=en
<gnuoy> johnmce, I think it blindly calls it on the establishment of a relation with percona
<johnmce> gnuoy: Maybe the logic should be in neutron-db-manage?
<gnuoy> johnmce, yes, I think that is true. I wonder if the nova db manage utility can be rerun without explosions
<johnmce> gnuoy: I'll try commenting that 'create table' bit out for now and see if I can get it to progress to completion. Looks like it's got some broken logic somewhere though.
<gnuoy> johnmce, I'm not sure I'd want to comment that 'create table' out tbh, you might get some unexpected results from neutron-db-manage
<gnuoy> I'd be tempted to mark the failed hook as resolved and see if things continue from there
<johnmce> gnuoy: I've already tried repeatedly to say it's resolved, but it always re-runs that script. You'd think they'd use an "if not exists" type option on table creation.
<gnuoy> yeah, that'd be good
<johnmce> gnuoy: This goes well beyond a single table. It's trying to create every table from scratch, when they already exist.
<jamespage> gnuoy, dosaboy: did you have any thoughts on tuning down the log level across the openstack charms?
<jamespage> dosaboy, just thinking about it in the context of the ceph-broker work but we probably need to discuss more generally on what's the right level
<jamespage> dosaboy, I personally think our default level should be DEBUG with only end-user-useful info going at INFO level
<dosaboy> jamespage: agreed
<dosaboy> same as openstack
<jamespage> dosaboy, oo - do they have that documented somewhere?
<dosaboy> hmm lemme dig
<jamespage> dosaboy, so an end-user message would be "Configuring ceph storage pools with name XX, replicas XX"
<jamespage> and "Storage pools configured, starting cinder volume service"
<jamespage> maybe
<jamespage> not sure
<gnuoy> johnmce, I'm wondering if the stamp didn't get run
<dosaboy> jamespage: *some* info here - http://docs.openstack.org/openstack-ops/content/logging_monitoring.html
<dosaboy> lemme find better
<dosaboy> https://wiki.openstack.org/wiki/LoggingStandards
<gnuoy> neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse then upgrade head
<dosaboy> jamespage: ^^
<gnuoy> johnmce, ok, I think I see the bug
<gnuoy> can you try running: neutron-db-manage --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini stamp icehouse and see if the migration then runs cleanly ?
<johnmce> gnuoy: Not read this properly yet, but it is related: http://www.gossamer-threads.com/lists/openstack/dev/42070
<johnmce> gnuoy: Response to command: INFO  [alembic.migration] Context impl MySQLImpl. INFO  [alembic.migration] Will assume non-transactional DDL.
<gnuoy> johnmce, yes, that thread looks to be the same issue
<johnmce> gnuoy: Seems to be working now!
<johnmce> gnuoy: thanks for your help
<gnuoy> johnmce, np, I will fix that in the charms tomorrow
<gnuoy> johnmce, sorry that you hit a bug. fwiw the bug is with line 522 of hooks/nova_cc_utils.py. that conditional shouldn't be there and was left over from when nova-cc and neutron-api were deciding between them on who should run the migration
<johnmce> gnuoy: OK, I see the problem now. Thanks for the info. I'll modify my copy.
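For the record, the fix that worked in this exchange condenses to a two-step sequence. `neutron-db-manage` is stubbed here so the sketch runs standalone; on a real nova-cloud-controller unit, drop the stub and the actual tool on $PATH is used.

```shell
#!/bin/bash
# Stub for illustration only -- remove on a real unit.
neutron-db-manage() { echo "neutron-db-manage $*"; }

CONF="--config-file=/etc/neutron/neutron.conf \
--config-file=/etc/neutron/plugins/ml2/ml2_conf.ini"

# 1. Stamp the existing icehouse schema so alembic knows its starting point.
neutron-db-manage $CONF stamp icehouse
# 2. The migration then applies only the juno deltas instead of trying to
#    recreate tables that already exist.
neutron-db-manage $CONF upgrade head
```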
<whit> working on aws, does anyone ever get a machine that reports itself as one  public ip, but when you ssh in, ssh thinks it's a different public ip?
<lazyPower> whit: i've seen this behavior before - what's weird was querying the metadata url caused the IPs to correct themselves.
<whit> lazyPower, tell me more about this metadata url?
<lazyPower> whit: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html
<whit> lazyPower, danke
<lazyPower> whit: no idea what caused the voodoo tbh - it was an isolated incident from another user that joined #juju ~ a week ago.
<whit> lazyPower, wonder if that forced the route to get set?
<lazyPower> well, a curl to the metadata url shouldn't have any effect - that's all set during cloud-init
<whit> I've seen it 3 times in the last week
<whit> dunno
<jose> whit: status will give the public IP, ssh will use the internal IP
<jose> whit: or at least that's the behaviour I'm seeing on AWS
<jose> probably using the bootstrap node as a proxy
<lazyPower> jose: juju ssh does
<lazyPower> all connectivity between the workstation and nodes proxies through the state server
<jose> there it is
<lazyPower> Juju Open feedback session is about to get started
<lazyPower> https://plus.google.com/hangouts/_/hoaevent/AP36tYdwa3d9A4ohXuoRdM-SQCQSJUzu4xgEflvg996V8rMyAw427g?authuser=0&hl=en
<lazyPower> if you'd like to join and add your view/comments/feedback we'd love to hear from you
<whit> cmars, the one thing that comes to mind is that it might make sense to not run the collect-metrics hook if a charm does not define any metrics
#juju 2014-11-14
<gnuoy> jamespage, dosaboy, if either of you have a moment I'd like to get https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-1392645/+merge/241775 landed pretty sharpish
<jamespage> gnuoy, does that need a test case update?
<gnuoy> jamespage, if it doesn't then we're missing a unit test. let me fix it either way
<jamespage> gnuoy, +1
<gnuoy> jamespage, branch updated, just waiting for UOSCI to do its thing
<gnuoy> jamespage, could you take another look at  https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-1392645/+merge/241775 if/when you have a moment?
<jamespage> gnuoy, +1
<gnuoy> thanks
<gnuoy> jamespage, would you mind extending your +1 to the stable equivalent https://code.launchpad.net/~gnuoy/charms/trusty/nova-cloud-controller/next-1392645/+merge/241799 ?
<jamespage> gnuoy, +1
<gnuoy> ta
<rogpeppe> does anyone know if config-changed is guaranteed to be called when a charm starts up?
<rogpeppe> fwereade: ^
<fwereade> rogpeppe, essentially, yes
<rogpeppe> fwereade: ta
<fwereade> rogpeppe, if we come up in an error state we won't call it until the error's resolved
<rogpeppe> fwereade: right, that's what i would expect
<fwereade> rogpeppe, but it's pretty much the first thing we do in ModeAbide: if c-c hasn't run yet for this instance of Uniter, run it
<rogpeppe> fwereade: but it'll be called after "start", right?
<fwereade> rogpeppe, before start
<rogpeppe> fwereade: ah, interesting
<fwereade> rogpeppe, which is a problem
<fwereade> rogpeppe, because "has the start hook run yet" is state we don't actually expose to the charm
<rogpeppe> fwereade: yeah, i was planning on restarting the service when the config changes
<fwereade> rogpeppe, so most charms, AFAIK, just assume they're always started, and always run their software in c-c
<rogpeppe> fwereade: that doesn't surprise me
<rogpeppe> fwereade: so the start hook is pretty much redundant
<fwereade> rogpeppe, you'd be technically wrong, but in good company, and constrained by not having a good way to do it right
<rogpeppe> fwereade: well, it would be easy to remember whether the start hook has been called or not
<fwereade> rogpeppe, well, IMO the true problem is that we don't expose have-you-started-yet as an env var
<rogpeppe> fwereade: but is that a bad thing to do?
<fwereade> rogpeppe, yeah, but it'd be duplicated in every damn charm under the sun
<fwereade> rogpeppe, it's a good thing to do, but it's working around juju being bad
<rogpeppe> fwereade: perhaps the start hook really is somewhat redundant in fact, and could be deprecated
<fwereade> rogpeppe, mm, I think that stop/start are actually worthwhile
<rogpeppe> fwereade: is there some juju doc online that sets out the hook ordering guarantees succinctly?
<rogpeppe> fwereade: are you thinking about possible charm migration there?
<fwereade> rogpeppe, looking ahead to the possibility of migrating units and their storage from one machine to another, yeah
<fwereade> rogpeppe, https://juju.ubuntu.com/docs/authors-charm-hooks.html
<rogpeppe> fwereade: ah, that doc's perfect, thanks
<rogpeppe> fwereade: i guess if we always guarantee to call config-changed before start, we'd still be ok not implementing the start hook
<fwereade> rogpeppe, that is true
<fwereade> rogpeppe, I really ought to just expose started state in the env though
<rogpeppe> fwereade: that's true. although does that mean you should observe its value in every hook, and stop if it's not true... ?
<fwereade> rogpeppe, or maybe actually $JUJU_STOPPED, so a test for an empty STOPPED value comes closest to matching behaviour in old jujus
<fwereade> rogpeppe, I don't *think* so
<fwereade> rogpeppe, config-changed is the only hook that'll run when not started
<fwereade> rogpeppe, at least for now
<rogpeppe> fwereade: install :)
<fwereade> rogpeppe, sorry
<fwereade> rogpeppe, config-changed is the only hook that doesn't run in a context where it can know whether it's started
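A minimal sketch of the workaround rogpeppe mentions above ("it would be easy to remember whether the start hook has been called"): start records a flag, and config-changed only restarts the service once that flag exists. The flag path, service name, and stubbed restart command are all hypothetical, not juju-provided state.

```shell
#!/bin/bash
# Hypothetical stand-in for real service management.
restart_service() { echo "restarting myapp"; }

STATE_DIR="${STATE_DIR:-/tmp/charm-state}"   # a real charm might use $CHARM_DIR
mkdir -p "$STATE_DIR"
FLAG="$STATE_DIR/started"

# --- the start hook would end with: ---
touch "$FLAG"

# --- config-changed: re-render config, but only restart once started ---
echo "rendering config"
if [ -f "$FLAG" ]; then
    restart_service
fi
```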
<rogpeppe> fwereade: another unrelated charm question: where's a good place to put charm-specific persistent state?
<fwereade> rogpeppe, if it's only going to be read by the charm, I think the charm dir is actually fine probably
<fwereade> rogpeppe, the thing that freaks me out is writing stuff to the charm dir that will be read from outside a hook context
<rogpeppe> fwereade: what if it's, for example, a downloaded binary that is going to be used as an upstart service?
<rogpeppe> fwereade: or a config file for same
<rogpeppe> fwereade: that's the context i'm considering it in
<fwereade> rogpeppe, I would keep those outside the charm dir
<rogpeppe> fwereade: um, yes
<fwereade> rogpeppe, in my mind the dividing line is whether it's necessary for the software itself, or necessary for the management layer
<fwereade> rogpeppe, so templates used in hooks? fine in the charm dir
<fwereade> rogpeppe, stuff actually read by other processes? keep it outside, so that it's most likely to keep working even if juju manages to completely trash its own state
<fwereade> rogpeppe, fail as safe as possible
<rogpeppe> fwereade: i know one charm that's using /etc/init/$appname:$servicename
<fwereade> rogpeppe, that sounds sane to me
<fwereade> marcoceppi, does that sound like a reasonable best practice to you?
<rogpeppe> fwereade: i'm not sure that's great though, as it's vulnerable to hulksmashing
<fwereade> rogpeppe, I'm pretty sure we don't allow hulk-smashed units of the same service on the same machine
<rogpeppe> fwereade: i'd prefer to have the unit name in there too, and perhaps the env uuid too
<fwereade> rogpeppe, modulo subordinates
<rogpeppe> fwereade: interesting. i wonder why not - it seems potentially useful for testing.
<fwereade> rogpeppe, I don't think I have a well-reasoned answer to that beyond "it seemed like it opened more scary doors than useful ones"
<fwereade> rogpeppe, would indeed be useful for testing
<rogpeppe> fwereade: actually, that does work
<fwereade> rogpeppe, I think jam had some way around it?
<rogpeppe> fwereade: well, i'm waiting for the unit to come up...
<fwereade> rogpeppe, hmm, you can just do it? interesting
<rogpeppe> fwereade: yeah, seems to work
<fwereade> rogpeppe, and I see no code that would prevent it from so doing
<lazyPower> jose: ping
<fwereade> rogpeppe, maybe it just got deleted when it was determined to be bloody awkward and a hindrance for testing ;p
<rogpeppe> fwereade: well, i see a status with two started units from the same service on the same machine, which seems fairly clear
<fwereade> rogpeppe, yeah, and I remember it being awkward code to write too
<rogpeppe> fwereade: yeah, i think it's reasonable - it falls out logically from the basic juju operations
<fwereade> rogpeppe, so, not really sorry to see it gone
<fwereade> rogpeppe, just puts a slightly heavier burden on the charm author
<fwereade> rogpeppe, and strongly pushes towards generating based on unit name
<rogpeppe> fwereade: if there was an easy way to get a name guaranteed to be unique to the charm, that would help
<rogpeppe> fwereade: because presumably with manual placement, it's possible to have services from several envs on the same machine... or maybe that's a step too far :)
<fwereade> rogpeppe, my suspicion is
<fwereade> rogpeppe, that that will fail in surprising and unhelpful ways, because of multiple machine agents trying to run at the same time
<fwereade> rogpeppe, and I have half an inkling that the manual provider code checks for pre-existing machine agents
<rogpeppe> fwereade: yeah, the upstart service names aren't named using the env UUID
<fwereade> rogpeppe, to prevent exactly that
<fwereade> rogpeppe, although
<rogpeppe> fwereade: otherwise it would probably be fine
<fwereade> rogpeppe, for general cleanliness' sake, we should absolutely be writing our code such that it should work
<rogpeppe> fwereade: oh, and /var/log/juju etc
<rogpeppe> fwereade: i wonder about $CHARM_UUID
<fwereade> rogpeppe, mm, I have a suspicion the machine agents would try to recall each others' deployed units
<fwereade> rogpeppe, UNIT_UUID?
<rogpeppe> fwereade: yeah
<rogpeppe> fwereade: although we've already got CHARM_DIR
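Generating the job name from the unit, as the discussion converges on, might look like this. It's a hedged sketch: 'myapp/0' stands in for a real $JUJU_UNIT_NAME, and a temp dir replaces /etc/init so the snippet is self-contained.

```shell
#!/bin/bash
# JUJU_UNIT_NAME has the form service/number; '/' is not valid in a job name.
JUJU_UNIT_NAME="${JUJU_UNIT_NAME:-myapp/0}"
job="juju-$(echo "$JUJU_UNIT_NAME" | tr / -)"

# Keep the job file outside the charm dir (a real hook would write to
# /etc/init/$job.conf; INIT_DIR here is illustrative).
INIT_DIR="${INIT_DIR:-/tmp/init}"
mkdir -p "$INIT_DIR"
cat > "$INIT_DIR/$job.conf" <<EOF
description "juju unit $JUJU_UNIT_NAME"
start on runlevel [2345]
exec /usr/local/bin/myapp
EOF
echo "$job"
```

Adding the environment UUID to the name, as rogpeppe suggests, would follow the same pattern.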
<lazyPower> jrwren: ping
<jrwren> pong
<jrwren> lazyPower: you have great timing. I had a question for you.
<lazyPower> jrwren: awesome - whats up?
<jrwren> lazyPower: you first. You pinged first :)
<jrwren> lazyPower: ok, I'll ask first.  https://juju.ubuntu.com/docs/authors-hook-errors.html says always return error code zero from hooks. OK. How do I return from a hook in a "bad configuration" state?  e.g. charm was deployed with --config mycharmconfig.yaml   and the state of that config mixes known bad options.
<lazyPower> jrwren: was just following up on the fix you had for mongodb - did that make it as a backport to the precise charm as well?
<jrwren> I want the end user to somehow know their charm is misconfigured.
<jrwren> lazyPower: I know nothing about the precise charm. :(
<lazyPower> jrwren: you can read the config in the install/config-changed hook and return 1 if there are bad options mixed.
<lazyPower> but afaik you cannot do that on the CLI at present. it won't halt during pre-deployment
<jrwren> lazyPower: is there a way to return an error string so that status shows "bad config" or something ?
<lazyPower> jrwren: there's no state outside of "charm is in error" - the best case we have today is making sure you log the error in the unit log.
<jrwren> lazyPower: Thanks. That is the information I needed to know.
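The approach lazyPower describes -- log the problem, then exit non-zero so the unit lands in an error state -- might look like this in config-changed. The option names are invented, and the config values are hard-coded stand-ins for what a real hook would read via config-get.

```shell
#!/bin/bash
# In a real hook these would come from:  use_ssl="$(config-get use-ssl)"
# Hard-coded here (assumption) so the sketch runs standalone.
use_ssl=true
plain_only=true

juju_log() { echo "LOG: $*"; }   # stand-in for the real juju-log tool

# Return non-zero for a known-bad combination of options.
validate_config() {
    if [ "$use_ssl" = true ] && [ "$plain_only" = true ]; then
        return 1
    fi
    return 0
}

if ! validate_config; then
    juju_log "bad config: use-ssl and plain-only are mutually exclusive"
    # a real config-changed would 'exit 1' here so the unit shows
    # "hook failed" in juju status, and the message lands in the unit log
fi
```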
<lazyPower> jrwren: can you take a look at getting your patches to the mongodb charm backported and MP'd against precise?
<lazyPower> would be wunderbar! if you could.
<jrwren> lazyPower: I'll add it to my chalkboard, at the bottom.
<lazyPower> thanks jrwren
<jrwren> lazyPower: My ETD on that is 2.5 years from now.
<lazyPower> rick_h_ natefinch: Great session! Lots of good info in there! If you missed it: http://summit.ubuntu.com/uos-1411/meeting/22387/whats-new-and-upcoming-in-the-work-of-juju-ui-engineering/
<jrwren> lazyPower: https://bugs.launchpad.net/juju-core/+bug/1392786
<mup> Bug #1392786: charm has no way to report error state <juju-core:New> <https://launchpad.net/bugs/1392786>
<lazyPower> bazinga
<mhall119> marcoceppi: gaughen jose: who is going to present the Cloud & DevOps summary in an hour?
<marcoceppi> I'm still at conference
<mhall119> I think jose is at class, gaughen are you okay to do it?
<gaughen> don't think I have a choice
<gaughen> mhall119, I will be quick as I have another mtg
<mhall119> not unless you have an urgent need not to ;)
<mhall119> gaughen: you can go first, or last, if that would help you
<gaughen> mhall119, first would be perfect
<gaughen> now I need to go get my last session started
<rogpeppe> can anyone tell me if open-port is idempotent?
<rogpeppe> i.e. if i call it twice with the same port, will it succeed the second time?
<jrwren> rogpeppe: afaik, yes. I'm 90% sure. :)
<rogpeppe> jrwren: :)
<rogpeppe> jrwren: i'm just being lazy and not reading the source code tbh
<jrwren> rogpeppe: nothing wrong with that. charm authors shouldn't have to read the source code.
 * rogpeppe is in that unusual position this afternoon :)
<rogpeppe> s/unusual/unaccustomed/
<marcoceppi> rogpeppe: yes, iirc all juju commands are idempotent
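Given that idempotency, a hook can simply open its ports unconditionally on every run rather than tracking what it already opened. A sketch, with juju's open-port tool replaced by a stub so it runs outside a hook context:

```shell
#!/bin/bash
open_port() { echo "open-port $*"; }   # stand-in for juju's open-port tool

# Safe to call on every hook run: a repeat call succeeds as a no-op
# rather than erroring (per the discussion above).
open_port 80/tcp
open_port 80/tcp
```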
<mwenning> lazyPower, ping
<jose> mwenning: hey, you need help with anything?
<mwenning> jose, I'm getting an error when I try to bring up juju - I'm behind a proxy and it keeps timing out.
<mwenning> can you help there?
<jose> mwenning: is that when bootstrapping?
<mwenning> https://pastebin.canonical.com/120521
<mwenning> yes.
<jose> mwenning: that's a Canonical-internal pastebin; would you mind pasting it to paste.ubuntu.com?
<mwenning> jose, pastebin.ubuntu.com/9014361/
<jose> thanks
<jose> mwenning: if you're using a proxy and the MAAS cluster is on your local network, then the proxy is probably the issue
<jose> can you access the MAAS admin site from your web browser?
<mwenning> jose, yes, that part's working fine.
<jose> hmm, I'm not sure then
<jose> this is more a MAAS thing than a Juju thing
<mwenning> I've been using maas to do Certs
<jose> you're getting a 502 (Bad Gateway)
<mwenning> jose, that's the maas server right?
<jose> mwenning: yeah. nothing related with Juju, unfortunately
<jose> sorry to not be of much help
<mwenning> looks like it's trying to access boot-images?
<mwenning> I can bring up a browser and touch that page just fine
<jose> mwenning: mind a quick PM?
<mwenning> PM?
<jose> private message
<mwenning> sure
#juju 2014-11-15
<arbrandes> Greetings, charmers!
<arbrandes> I THINK I'm hitting https://bugs.launchpad.net/charms/+source/keystone/+bug/1391784, "HA failure when no IP address is bound to the VIP interface".
<mup> Bug #1391784: HA failure when no IP address is bound to the VIP interface <openstack> <cinder (Juju Charms Collection):In Progress> <glance (Juju Charms Collection):In Progress> <keystone (Juju Charms Collection):In Progress> <neutron-api (Juju Charms Collection):In Progress> <nova-cloud-controller
<mup> (Juju Charms Collection):In Progress> <openstack-dashboard (Juju Charms Collection):In Progress> <percona-cluster (Juju Charms Collection):Invalid> <https://launchpad.net/bugs/1391784>
<arbrandes> However, the VIP is on the same exact network as the currently bound interface.
<arbrandes> And as a matter of fact, the VIP *does* get brought up properly.
<arbrandes> BUT... I still get "'hook failed: "ha-relation-changed" for keystone-hacluster:ha'", and debug-log says "INFO ha-relation-changed ValueError: Unable to resolve a suitable IP address based on charm state and configuration"
<arbrandes> Is this a known issue?
<arbrandes> BTW, this is trusty + juno + Juju Stable PPA.
<mah9> Where does juju download charms to with the default settings?
#juju 2014-11-16
<dosaboy> tvansteenburgh1: can we please allow the charm-helpers Makefile to use an existing venv rather than build a new one every time i clone the branch and run tests
<dosaboy> it is quite prohibitive to enforce a new build every time
<dosaboy> e.g. i have a prebuilt venv that i activate whenever i want to run tests for any charm/charm-helpers
<dosaboy> which lives outside of tree
<rick_h_> dosaboy: I'd suggest a patch that allows setting a venv place on disk, and then your scripts can set that path and the makefile should detect it already exists and roll with it?
<rick_h_> dosaboy: I've not looked at the charm-helpers makefile but normally we point to a bin/python path and if that doesn't exist run the venv command, so setting your own bin/python would be a good way to work with that
<dosaboy> rick_h_: if you are running within an activated venv you should not have to specify paths at all
<dosaboy> python works it out and uses the packages you have installed in your venv
<rick_h_> dosaboy: ok, but the point I think is the same, to add the detection for an activated env
<dosaboy> rick_h_: yeah that would be good
<rick_h_> dosaboy: but it's pretty standard practice against all of our stuff to have the makefile build a fresh venv, so that deps, download caches, etc are set up to catch a bug/mistake early and often vs trusting an existing setup
<dosaboy> i'll have to scratch my head and think of the best way to do that
<rick_h_> we've all been bit too many times by reusing 'it works here' where it's never been cleaned and rebuilt from commit to commit.
<dosaboy> rick_h_: both have their merits imo, but I agree that the final gate should definitely be using a fresh venv
<dosaboy> i'm basically applying the same rules as with upstream openstack
<dosaboy> if only tox support had landed in charm-helpers...
<rick_h_> dosaboy: understand, and the way it is is kind of how we've done things around here for a long time.
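The detection being discussed — reuse an activated venv, otherwise build a fresh one — can be sketched in a few lines of Python. The VIRTUAL_ENV variable is exported by `source bin/activate`; the `.venv` directory name here is illustrative, not the actual charm-helpers layout:

```python
import os
import subprocess
import sys

def resolve_venv(default_dir=".venv"):
    """Prefer an already-activated venv (VIRTUAL_ENV is exported by
    `source bin/activate`); otherwise fall back to an in-tree venv,
    building it fresh if it does not exist yet."""
    active = os.environ.get("VIRTUAL_ENV")
    if active:
        # reuse the developer's prebuilt, out-of-tree venv
        return active
    if not os.path.exists(os.path.join(default_dir, "bin", "python")):
        # the "clean gate" path: build a fresh venv from scratch
        subprocess.check_call([sys.executable, "-m", "venv", default_dir])
    return default_dir
```

A Makefile would run something like this once and prefix test commands with the resulting venv's `bin/python`.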
#juju 2015-11-09
<blahdeblah> Any charmers able to look at https://code.launchpad.net/~paulgear/charms/trusty/ntp/sync-charm-helpers/+merge/276826?
 * marcoceppi_ shrugs
<marcoceppi_> sure
<blahdeblah> thanks marcoceppi_ - should be an easy one; I just wanted to get it out of the way before proposing my second one.
<blahdeblah> marcoceppi_: https://code.launchpad.net/~paulgear/charms/trusty/ntp/sync-canonical-is-charms/+merge/276951 is the second one
<blahdeblah> thanks for the merge!
<marcoceppi_> blahdeblah: it's almost midnight and still Sunday, I'll let this get handled in the queue
 * marcoceppi_ sleeps
<blahdeblah> np
<blahdeblah> Hi all - I've got a question about the juju-info implicit relation.  When we "nova floating-ip-associate $MACHINE $IP" in an OpenStack environment, juju status eventually works out that $IP is the public address, but that doesn't show up in the relation data returned from relation-get.  Why?  How can I get access to this information from within the charm?
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/odl-controller/amulet/+merge/276959 I'm investigating amulet fails for > trusty atm
<stub> gnuoy: _initialize_tests has a whole heap of unit/0's hardcoded, and there may no longer be a unit/0
<gnuoy> stub, there may no longer be a unit/0 ?
<gnuoy> I must be missing an exciting new development
<stub> Yes, with Juju 1.25 unit numbers stopped being recycled. So your first unit will not be unit 0 if the environment previously had a service using the same name.
<gnuoy> ack, thanks for the tip
<stub> https://code.launchpad.net/~stub/charms/trusty/cassandra/spike/+merge/276372
<gnuoy> stub, even better , thanks for the fix!
<stub> Share it around, it's going to bite every non-trivial set of Amulet tests ;)
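The hardcoded-unit/0 failure mode stub describes can be avoided by resolving the first unit name from status data instead of assuming it. A minimal sketch against a status-shaped dict (Amulet's real API differs; this only illustrates the lookup):

```python
def first_unit(status, service):
    """Return the lowest-numbered unit of a service. With Juju 1.25
    unit numbers are no longer recycled, so the first unit is not
    guaranteed to be <service>/0."""
    units = status["services"][service]["units"]
    return min(units, key=lambda name: int(name.split("/")[1]))

# e.g. an environment that previously had a cassandra service
status = {"services": {"cassandra": {"units": {"cassandra/3": {},
                                               "cassandra/5": {}}}}}
print(first_unit(status, "cassandra"))  # cassandra/3
```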
<jamespage> dimitern, morning - nice picture ;-)
<jamespage> http://blog.naydenov.net/2015/11/deploying-openstack-on-maas-1-9-with-juju-network-setup/
<jamespage> thanks for the credit btw :-)
<dimitern> jamespage, hey :) yeah - I'd like to give credit where it's due
<dimitern> jamespage, thanks
<jamespage> dimitern, great replay on our conversations btw
<dimitern> jamespage, there are still some unresolved points, but it's shaping up I think
<jamespage> dimitern, yup
<jamespage> dimitern, so I think that thedac will be on point from my team to work on this - gnuoy is going on holiday for december afaict
<gnuoy> haha, I'm around but lots of holiday dotted around the place
<gnuoy> jamespage, any idea when python-networking-odl will be in the cloud archive for kilo and liberty?
<jamespage> gnuoy, its already there
<gnuoy> jamespage, argh, sorry. I'm looking at a vivid deploy so no cloud archive
<gnuoy> jamespage, is it on its way to vivid?
<jamespage> gnuoy, actually vivid did not get that enablement
<jamespage> gnuoy, only the CA
<jamespage> gnuoy, I don't think we'll do vivid
<jamespage> its high cost, with no users as far as I'm concerned
<jamespage> we can use the UCA for hardware enablement stuff like this
<gnuoy> jamespage, ok, so the amulet tests for odl-controller can just be trusty?
<jamespage> gnuoy, for now yes that's fine
<sto> I'm deploying openstack liberty with juju using the same configuration I used for kilo and it looks that the neutron-gateway charm does not install the neutron-dhcp-agent ... is that intentional?
<sto> With the current deployment my openstack nodes have no dhcp service
<sto> BTW, if this is not the right irc channel a pointer to the right one will be appreciated
<jamespage> sto: that's not intentional, but not something I've seen either in my testing
<jamespage> (and this channel is fine for openstack charms + juju stuff)
<jamespage> sto: can you tell a bit more about your deployment?  output of juju status might be good for a start
<sto> jamespage: I'm re-deploying it now to have a clean installation, I'll send you a pastebin once deployed
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/odl-controller/amulet/+merge/276959 is ready for review. I can remove the systemd stuff if you want.
<gnuoy> jamespage, I missed a ch change to support the previously mentioned branch https://code.launchpad.net/~gnuoy/charm-helpers/exlude-no-openstack-origin/+merge/277005
<ddellav> jamespage, here's the link to the neutron port security MP again from friday in case you missed it: https://code.launchpad.net/~ddellav/charms/trusty/neutron-api/ml2-port-security/+merge/276764
<ddellav> for some reason tests don't seem to have run, i must have disabled them somehow, I'll fix it.
<jamespage> ddellav, looking now
<jamespage> ddellav, couple of comments
<ddellav> jamespage, got it, fixing.
<jcastro> marcoceppi_: hey, wrt. charm-tools and all our associated tooling, did you see barry's plan for python3 only in main for 16.04?
<jamespage> jcastro, erm python3 only in the iso
<jamespage> not quite the same thing as python3 only in main
<jcastro> ack
<jcastro> actually I said that entirely backwards, what I should have said is "hey are we ready for distros to drop python2"
<jcastro> http://askubuntu.com/questions/693945/openstack-dashboard-instance-console-unavailable
<jcastro> I've got some openstack questions that need to be scrubbed if people want to check them out
<Odd_Bloke> So I'm trying to configure Juju (1.22.8) to use an HTTP proxy and (a) I'm seeing that I can't configure an HTTP proxy but _not_ an apt HTTP proxy, and (b) that the OpenStack provider attempts to use the configured proxy for OpenStack API calls.
<Odd_Bloke> Are either of these things fixed/changed in more recent versions, or shall I report bugs?
<beisner> Odd_Bloke, that is a thing for sure.  in my experience, the only way to get endpoint http/https traffic to not proxy api calls, is to set no_proxy to include every possible endpoint IP.  note, no_proxy doesn't understand cidr.
<beisner> hi gnuoy, i've added odl/amulet MP comments.  holler with any ?s.  thanks!
<marcoceppi_> jcastro: yes
<sunitha> Hi
<sunitha> I want to discuss the IBM XL Fortran charm review comments, which we received recently
<sunitha> Hi Matt
<mbruzek> Hello
<sunitha> The charm must cryptographically verify all downloaded software. Line 125 of hooks/config-changed will skip the cryptographic check if the sha_fortran configuration variable is an empty string (which is default).
<mbruzek> yes
<sunitha> This is about Ibm XL fortran charm
<mbruzek> Yes, I am watching this channel now that you said my name.
<mbruzek> I recall the review.
<sunitha> actually, till now we were doing it for all the charms like this: if the user provides a value for the cryptographic check, it will check; if the user has not provided anything, it will skip it
<mbruzek> sunitha: A charm can not skip the cryptographic check.  The way I read the code, if the sha_fortran configuration value was NOT set then the software would not be verified.
<sunitha> yes
<mbruzek> sunitha: I see people just leaving that value empty when they deploy the charm (because empty is default)
<mbruzek> So by default the charm will never check the cryptographic signature.
<mbruzek> unless the user sets the sha_fortran value.
<mbruzek> sunitha: You don't have to write code to verify payloads that are already inside the charm.  This is why I highly suggest putting the xl fortran payload inside the charm.
<sunitha> now you want us to specify default  package value there?
<mbruzek> sunitha: no. But you can not skip the validation if the value is empty.
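The policy mbruzek is asking for — treat an empty checksum as a failure, not a skip — is a small change. The actual hook is bash; this is a hedged Python sketch of the same logic, with illustrative names:

```python
import hashlib

def verify_payload(path, expected_sha256):
    """Refuse to proceed unless the downloaded payload matches a
    configured checksum; an empty value is an error, not a skip."""
    if not expected_sha256:
        raise ValueError("sha_fortran is unset: refusing to install "
                         "an unverified payload")
    with open(path, "rb") as f:
        actual = hashlib.sha256(f.read()).hexdigest()
    if actual != expected_sha256:
        raise ValueError("checksum mismatch for %s" % path)
```

With this shape the charm fails loudly on the default (empty) configuration instead of silently installing unverified software.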
<sunitha> but if i keep build inside charm user can deploy charm for only specific version
<sunitha> he cannot use for another build version
<mbruzek> sunitha: How often is this xl fortran package updated?
<mbruzek> sunitha: many charms make use of this pattern (putting the binary inside the charm).  Only the IBM charms require downloading the binary and putting it inside the charm.
<mbruzek> sunitha: I suspect there is not much activity on the Fortran binary.
<mbruzek> but if there is, you can either update the charm, or write in the README how the user can update.
<mbruzek> sunitha: I see the BASH code looks for *tar.gz anyway so in theory the user could replace with the new version.
<mbruzek> Before they deploy .
<sunitha> Matt: In last 6 months they updated with 2 fix packs
<mbruzek> I still highly suggest putting the binary inside the charm, it makes it way easier to deploy the charm.
<mbruzek> much easier user experience
<sunitha> ok I will keep this inside charm only
<sunitha> and one more thing about license
<sunitha> without setting the license value, product uninstall will not happen
<sunitha> if user want to uninstall the software..
<sunitha> if i remove license parameter, how user will uninstall the software?
<mbruzek> yes I understand
<mbruzek> the license needs to be accepted
<sunitha> matt: I cannot add any relations to this, since it is standalone and just a compiler; using this, a user can develop applications on Power Linux systems.
<mbruzek> sunitha: that is unfortunate, it really limits the usefulness of the charm.
<mbruzek> sunitha: please consider the merge proposal I made to improve the README.md file.  If you do end up putting the file inside the charm the README.md file will need to be edited to remove the apache server, and perhaps tell the user how to put a new binary in the charm.
<mbruzek> sunitha: regarding the uninstall:  The user does not need to uninstall the software.  If fortran is the only thing on the instance then when the user is done they would just shutdown/destroy the instance.
<mbruzek> sunitha: uninstall:  If you mean how would the user uninstall the software when the license was no longer accepted that code would not change.  You can still remove it from /opt/ibm/.... so they can not run the fortran compiler any longer.
<sunitha> Ok .Thanks Matt.
<jcastro> cory_fu: bcsaller: So adam did a ghost charm with reactive
<jcastro> and I started thinking, and it may be dumb but I'd like to think aloud for a minute
<bdx> hey whats going on everyone? When specifying "JUJU_DEV_FEATURE_FLAGS=address-allocation" it seems my deploy is blocked after the first 3 devices are allocated ips see -> http://cl.ly/image/2o292Z3g1F3Q
<bdx> I have verified this multiple times now......I'm sure you guys are already all over this though....
<lazypower> bdx o/
<lazypower> bdx - make sure you follow up with a bug on that please. It helps when tracking issues like this - if its a duplicate we can link the issues and track in a singular location
<bdx> lazypower: I did
<lazypower> bdx: ta :) appreciate the diligence in feedback
<bdx> lazypower: of course!
#juju 2015-11-10
<apuimedo> beisner: gnuoy: I'm trying to use my private openstack cloud
<apuimedo> but i'm getting some problems bootstrapping
<apuimedo> http://paste.ubuntu.com/13216879/
<apuimedo> do you think it could be due to not being an admin (I have perms for creating instances and nets though)
<beisner> hi apuimedo, could you paste a (sanitized)  `juju stat --format tabular`  for the private cloud?
<beisner> it'll help me to understand the services and topology
<apuimedo> ok
<apuimedo> beisner: I get the same error
<apuimedo> trying to do the juju stat
<beisner> apuimedo, do you have openstack already deployed?
<apuimedo> yes, I have several instances running on this cloud
<apuimedo> now I want juju to create its machines there too
<beisner> apuimedo, ok, so i'm referring to that cloud that is deployed.  can you juju stat that cloud?
<beisner> apuimedo, ie. 1 layer down.
<apuimedo> beisner: that cloud has been deployed with ansible, not juju
<apuimedo> beisner: does juju need admin level credentials for the bootstrapping?
<beisner> apuimedo, ok.  in that case, i'd need to see your environments.yaml file, `keystone catalog` and `keystone endpoint-list` output.
<beisner> apuimedo, no, shouldn't need to be admin @ the undercloud.
<apuimedo> cool, thanks
<beisner> apuimedo, sanitized for sensitive data on those pastes, of course.
<beisner> also can you show `apt-cache policy juju`  ?
<apuimedo> sure
<apuimedo> beisner: http://paste.ubuntu.com/13216940/
<beisner> apuimedo, what vers of openstack is the cloud?
<apuimedo> Kilo
<apuimedo> I was talking with my operator
<beisner> apuimedo, it may not be directly related to the issue at hand, but you can use ppa:juju/stable to get the current stable release of Juju (1.25).  https://launchpad.net/~juju/+archive/ubuntu/stable
<apuimedo> he fixed some of my credentials and now I got past that error
<apuimedo> oh, that sounds like a good idea anyway
<apuimedo> http://paste.ubuntu.com/13216957/
<beisner> apuimedo, it looks like a previous juju bootstrap didn't get completely destroyed.  can you remove (back up if you want to) the /home/ubuntu/.juju/environments/openstack.jenv file, then see if juju stat exits ok (showing no enviro bootstrapped)?
<apuimedo> beisner: ^^ that will be due to the streams, right?
<beisner> apuimedo, i'd rm that .jenv file to make sure the basic juju stat cmd is good.  then do a `juju bootstrap --debug` ... which will show more detail if there's a failure somewhere.
<apuimedo> cool. I'll try that now
<apuimedo> beisner: http://paste.ubuntu.com/13216996/
<apuimedo> yeah... It seems I have to define the images somehow in the environment
<apuimedo> I have to say the guide I found online was not the clearest
<beisner> apuimedo, i think this is what you're after :  https://jujucharms.com/docs/1.25/howto-privatecloud
<apuimedo> yes. That's the one I'm looking at beisner
<jcastro> lazypower: mbruzek: check this out: https://sysdig.com/digging-into-kubernetes-with-sysdig/
<lazypower> interesting
<mbruzek> jcastro: cool
<beisner> apuimedo, images and metadata are the things to sort out for the private cloud juju usage.  i've got a test cloud building, and will be able to step through that on my end later today.
<beisner> in the past, i've used that guide.
<apuimedo> I'm doing that now
<apuimedo> let's see if I succeed :-)
<beisner> apuimedo, be aware too - it looks like you're hitting a destroy bug (for which a fix is underway).  the bug is that the destroy errors out, leaving the .jenv file behind, and you have to manually rm it after destroying.  but that won't affect bootstrapping or deploying.
<beisner> https://bugs.launchpad.net/bugs/1512399
<mup> Bug #1512399: ERROR environment destruction failed: destroying storage: listing volumes: Get https://x.x.x.x:8776/v2/<UUID>/volumes/detail: local error: record overflow <amulet> <bug-squad> <openstack> <sts> <uosci> <Go OpenStack Exchange:In Progress by gz> <juju-core:Triaged> <juju-core
<mup> 1.25:Triaged> <https://launchpad.net/bugs/1512399>
<apuimedo> cool
<apuimedo> thanks beisner
<beisner> yw apuimedo - let us know how you turn out
<apuimedo> I will, thanks
<apuimedo> beisner: I set up agent-metadata-url and image-metadata-url
<apuimedo> but somehow it seems it still goes to
<apuimedo> 2015-11-10 17:17:53 DEBUG juju.environs.simplestreams simplestreams.go:429 read metadata index at "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson"
<apuimedo> :(
<apuimedo> ok, my mistake
<apuimedo> I had a typo :P
<apuimedo> sorry about that
<apuimedo> got it bootstrapping ;-)
<apuimedo> thanks for the help beisner!
<beisner> apuimedo, woot!  you're welcome.  happy deploying.
<apuimedo> beisner: is it possible to specify the ram amount when adding machine
<beisner> hi apuimedo - juju can use constraints to choose the nearest specs match presented by your cloud.  i believe "equal or greater" logic is used in that.    `nova flavor-list`  will show the machine sizes you have to choose from.
<beisner> https://jujucharms.com/docs/1.25/charms-constraints
<apuimedo> juju machine add --constraints mem=8G
<apuimedo> like so?
<apuimedo> it worked
<apuimedo> thanks
<beisner> apuimedo, ok good :)
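The "equal or greater" flavor matching beisner describes can be sketched as choosing the smallest flavor that satisfies the constraint (illustrative flavor data, not the provider's actual code):

```python
def pick_flavor(flavors, mem_mb):
    """Choose the smallest flavor whose RAM meets or exceeds the
    requested constraint, mirroring `--constraints mem=8G`."""
    candidates = [f for f in flavors if f["ram"] >= mem_mb]
    if not candidates:
        raise LookupError("no flavor satisfies mem=%dM" % mem_mb)
    return min(candidates, key=lambda f: f["ram"])

# sample `nova flavor-list`-style data
flavors = [{"name": "m1.medium", "ram": 4096},
           {"name": "m1.large", "ram": 8192},
           {"name": "m1.xlarge", "ram": 16384}]
print(pick_flavor(flavors, 8192)["name"])  # m1.large
```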
<blr> is the new lxd provider going to obviate the local provider?
<jose> blr: well, local uses lxc, which lxd is based on (I believe)
<blr> jose: right, just curious if the long term intention is to replace the local/lxc provider
<rick_h__> blr: yes, in time the lxd provider, with better isolation and the ability to do things like HA, will deprecate the local provider.
<blr> rick_h__: nice
<blr> lazypower: if you're about, have a simple MP for charmhelpers that could use a look (https://code.launchpad.net/~blr/charm-helpers/pip-constraints/+merge/276062)
<apuimedo> beisner: is there any way to make juju deploy lxc to the machines on openstack providers with the juju-br0 model that it uses when the machine is provided by maas?
<rick_h__> frobware: ^ this was done for the machine registration work and isn't part of OS provider work atm? Or is it planned/part of it?
<beisner> apuimedo, i believe the juju openstack provider is designed to use 1 nova instance per juju unit.  i don't think there is currently support for juju deploying services to lxc containers within nova instances via the openstack provider.
<apuimedo> beisner: well, it does deploy the lxc containers
<beisner> apuimedo, rick_h__  ... and i was about to say, let me check though.  ;-)   thanks rick_h__  ...  fwiw - that would be immensely useful for us in openstack engineering too (juju/goose support for lxc containers on nova instances).
<apuimedo> but obviously they are unreachable
<rick_h__> beisner: well you can use provision to an lxc on a nova vm with --to=lxc and such.
<beisner> apuimedo, right, i think it will, but then they are marooned in my experience.
<rick_h__> beisner: at least I wanted to think that was possible with normal --to support
<apuimedo> beisner: I just had my nova instances remove the anti spoofing filter
<apuimedo> so if they would just have the bridge on the eth0 device
<apuimedo> it would all work
<apuimedo> or putting different /24 on each nova vm
<rick_h__> apuimedo: beisner frobware leads the team working on networking and can best speak to that atm. however he's EU and EOD
<beisner> rick_h__, something legacy is ringing a vague bell with goose -- where it didn't / couldn't know about all of the various neutron bits to wire up in advance.
<apuimedo> EU here too :P
<rick_h__> beisner: yea, I wouldn't be surprised that maas has it because it's more knowledgeable
<apuimedo> well, what I do is every time I put up a nova VM
<apuimedo> you get a new port
<apuimedo> so you can immediately do whatever to remove the filter
<rick_h__> beisner: apuimedo jrwren had an interesting post around the bridge http://jrwren.wrenfam.com/blog/2015/11/10/converting-eth0-to-br0-and-getting-all-your-lxc-or-lxd-onto-your-lan/
<rick_h__> not sure if it's helpful at all, but fyi
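For reference, the conversion that post describes boils down to an /etc/network/interfaces stanza along these lines (a sketch assuming bridge-utils is installed; see the post for full details):

```
auto br0
iface br0 inet dhcp
    bridge_ports eth0
```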
<apuimedo> rick_h__: I was considering doing it like that
<apuimedo> the problem for OSt deployments is that the dhcp should not give you an address
<apuimedo> so what I was thinking of was to make juju call neutron to set routes between the VMs
<apuimedo> and, of course, use a /24 net per VM
<apuimedo> rick_h__: do you happen to know who's in charge of networking in the context of the openstack provider?
<rick_h__> apuimedo: frobware's team
<apuimedo> ok
#juju 2015-11-11
<Prabakaran> Hi Team,
<Prabakaran> My new charm is not showing up in the review queue. It has been more than a week and it is still not listed there. Below are the details. Branch : https://code.launchpad.net/~ibmcharmers/charms/trusty/ibm-platform-rtm/trunk    Bug Link is : https://bugs.launchpad.net/ibmcharms/+bug/1510216
<mup> Bug #1510216: New Charm: IBM Platform RTM <IBM Charms:New> <https://launchpad.net/bugs/1510216>
<Prabakaran> Hi Mup Pet, Please advise on this.
<ionutbalutoiu> Hey! Does somebody know by any chance why the icon is not shown in charm store for this charm: https://jujucharms.com/u/cloudbaseit/ironic/trusty/10 ?
<sto> jamespage: I've found my problem (non deployment of dhcp server on neutron-gateway); I was deploying neutron-gateway and nova-compute on the same host, and one charm was overwriting the other...
<sto> jamespage: anyway, now I have my deployment and found this bug (https://bugzilla.redhat.com/show_bug.cgi?id=1272572) on my cinder deployment, and the fix is the same (add a [keymgr] section to /etc/cinder/cinder.conf) ... I guess it has to be added to the cinder charm?
<sto> Should I report a bug?
<jose> arosales // arosales_ : ping
<arosales_> jose, hello
<jose> arosales_: hey, so Ubuntu is applying as an organization for Google Code-In, and we can publish Juju tasks for students to help us reduce some workload
<jose> arosales_: my question is, would it be feasible to give a temporary key to those students to the AWS account we have in order for them to review stuff and develop new charms?
<jose> I think that would encourage a lot of participation on their side, since cloud expenses can be expensive
<marcoceppi> jose: all of them should just sign up for developer.juju.solutions
<marcoceppi> we can give them all creds to work on juju
<jose> marcoceppi: that's amazing, thanks!
<arosales_> marcoceppi, jose: +1 to anyone applying for charm dev time @ developer.juju.solutions
<arosales_> open to anyone doing charm dev work
<jose> great!
<jose> I'll email the list soon about what it is and how we can publish tasks
<arosales_> jose, also encourage folks you know working on those tasks to pop in here, on the list, and during office hours
<jose> will do - contest will run starting on Dec and finishing by the end of Jan, so expect to see a bit of a spike in activity there
<arosales_> jose, do you have a link?
<jose> arosales_: codein.withgoogle.com
<arosales_> jose, also do you need help compiling a list of tasks?
<marcoceppi> lazypower: my live stream video got removed in some countries because of your mix :(
<jose> arosales_: not right now - we're on the process of finishing our application (reg closes in 1h!) and if we get approved we can then gather some tasks
<lazypower> marcoceppi : I'm not surprised.
<lazypower> copyright law is draconian
<arosales_> jose, ok - thanks for applying
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charm-helpers/exlude-no-openstack-origin/+merge/277005 has been up for a few days if you get a moment
<marcoceppi> stub: hey, does your postgresql rewrite also work on precise?
#juju 2015-11-12
<stub> marcoceppi: I don't know and I don't care. Should I care?
<marcoceppi> stub: we accidentally merged it with precise
<stub> Hmmm...
<stub> It should work - it just relies on charmhelpers and the PG stuff is common, and the default PG version for precise is still coded in there.
<marcoceppi> stub: we can revert, but it seems to be working
<stub> Ok. I was kind of hoping to just drop support for precise (since we don't need it any more), but we can drop support for this version rather than the previous version
<stub> For a basic deployment it will work the same or better than the previous version. Replication or some extensions like wal_e might be wonky, but the older version would be wonkier.
<stub> So lets leave it.
<stub> I'll try and have the reactive rewrite up soon :)
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/odl-controller/new-tests/+merge/277258 is ready for another review if you have a moment
<jamespage> gnuoy, ok looking now
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/neutron-openvswitch/lp1515008/+merge/277325
<jamespage> gnuoy, I'll just wait for amulet to pass that before landing
<gnuoy> kk
<jamespage> gnuoy, can we get osci pointed at the odl-controller branches?
<gnuoy> yep
<gnuoy> jamespage, when the branches exist I think https://code.launchpad.net/~gnuoy/ubuntu-openstack-ci/odl/+merge/277327 should do the trick
<gnuoy> beisner, ^
<jamespage> gnuoy, can we create /next branches now from the current source branches and propose against those? running things by hand is nasty
<gennadiy> hi all. i need to create specific instance type of aws ec2 instance for my service.
<gennadiy> i know about juju set-constraints --service <name> instance-type=m3.xlarge
<gnuoy> jamespage, yes, but I don't know what needs to happen to get osci to pick up the change to lp:ubuntu-openstack-ci
<gnuoy> we can wait for beisner if you like
<jamespage> gnuoy, sure
<jamespage> if your branch tests out ok I'll land it
<gennadiy> but when i add a unit from juju-gui it creates the default instance type
<jamespage> gnuoy, we need to plumb in AMULET_ODL_LOCATION=http://10.245.161.162/swift/v1/opendaylight/distribution-karaf-0.2.3-Helium-SR3.tar.gz as well
<gnuoy> yep
<gnuoy> jamespage, added setting AMULET_ODL_LOCATION to the mp
<dpm_> hi all, anyone around who is familiar with https://jujucharms.com/python-django and can help with a couple of questions?
<apuimedo> frobware: Hi. I was told it is you who works on juju networking and using OpenStack for providing machines to Juju
<frobware> apuimedo, yes
<frobware> apuimedo, (and the team!)
<apuimedo> frobware: that was fast :P
<apuimedo> frobware: who's the team?
<apuimedo> *in
<frobware> apuimedo, dimitern, voidspace, dooferlad
<apuimedo> nice to meet you guys ;-)
<dooferlad> hi
<frobware> apuimedo, want to briefly HO - I saw you had some questions earlier in the week
<apuimedo> HO?
<frobware> apuimedo, google hangout
<apuimedo> sounds good
<frobware> apuimedo, https://plus.google.com/hangouts/_/canonical.com/juju-sapphire
<dooferlad> frobware: there in a couple of minutes
<apuimedo> frobware: I'm getting a google hangout error trying to join
<apuimedo> when requesting permission to join
<jamespage> frobware, you'll have to allow external participants as thats under a canonical.com hangout
<frobware> apuimedo, OK, let's just try here in IRC
<apuimedo> frobware: let me create a meeting on ho
<apuimedo> frobware: https://plus.google.com/hangouts/_/midokura.com/juju_openstack
<dimitern> hey apuimedo
<apuimedo> hey ;-)
<jamespage> gnuoy, juju-test INFO    : Results: 4 passed, 0 failed, 0 errored
<jamespage> awesome
<gnuoy> \o/
<jamespage> gnuoy, ok landed all of that
<gnuoy> ta
<jamespage> gnuoy, also snuck in the tox bits needed for if we want to upstream this charm to /openstack
<gnuoy> k
<jamespage> gnuoy, I think we could also run func tests under tox as well
<apuimedo> dimitern: is there any trick that would make the openstack provider set up the bridges as if it were maas?
<apuimedo> (I can disable the arp anti spoofing filter in my openstack provider)
<gnuoy> jamespage, I'm guessing I need to
<gnuoy> create the /next branches and delete the originals
<gnuoy> or is there a smarted way
<jamespage> yeah
<gnuoy> smarted? smarter
<gnuoy> kk
<jamespage> just branch them and mark the old ones as deprecated
<gnuoy> jamespage, I see Abandoned is that the one?
<jamespage> yah
<gnuoy> ta
<jamespage> gnuoy, I'd create the trunk and next branches for all of them now under openstack-charmers
<gnuoy> will do
<gnuoy> beisner, I've probably missed something but if you get a sec https://code.launchpad.net/~gnuoy/ubuntu-openstack-ci/odl/+merge/277327
<thomnico> Hello
<thomnico> is someone familiar with add_source(source, key=None) in the juju charm-helpers python??
<thomnico>  I try to pass a block with the gpg key starting with -----BEGIN PGP PUBLIC KEY BLOCK----- and it fails on safe_loader ..
<marcoceppi> thomnico: I think you need a key ID not a key file, not sure though
<lazypower> correct, it polls the configured keyserver for the key
<thomnico> checking the code, it shows I should be able to add a keyfile ..
<lazypower> or at least thats how i've used it
<thomnico> and I won't have access to hard coded keyserver.ubuntu.com
<lazypower> when live gives you lemons, write a bash script and use subprocess *ducks from impending object trajectory*
<lazypower> s/live/life/
<thomnico> hehehe you guys are the python fans
<lazypower> I'm a pragmatist, I'm a fan of what works reliably
<thomnico> so do I lazypower (but you know already)
<lazypower> <3
<thomnico> where should I raise bug on helpers please ??
<lazypower> launchpad.net/charm-helpers
<thomnico> It might pretty well be me not putting the expected syntax though ..
<lazypower> thats possible, but if there's a bug for it we can get it on the docket to take a closer look
<lazypower> there very well may be a bug in there
<thomnico> ok ...
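For context on the thread above: add_source's key argument can be either a key ID (which charm-helpers fetches from a keyserver, hence thomnico's concern about not having keyserver access) or an inline ASCII-armoured key block. A sketch of that distinction — illustrative code, not charm-helpers' actual implementation:

```python
def key_kind(key):
    """Classify the `key` argument: an inline ASCII-armoured block can
    be fed straight to `apt-key add -` with no network access, while
    anything else is treated as a key ID to fetch from a keyserver."""
    if key is None:
        return "none"
    if "-----BEGIN PGP PUBLIC KEY BLOCK-----" in key:
        return "armoured-block"
    return "key-id"

print(key_kind("ECDCAD72428D7C01"))  # key-id
```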
<bdx> jamespage: nice presentation! How can I start testing nova-compute-lxd? I'm checking out the lxd charm now....is there a method or procedure you have defined for how you are doing this?
<jamespage> bdx, https://jujucharms.com/u/openstack-charmers-next/openstack-lxd
<bdx> ooooohhh nicceeeee!! thx!
<thedac> bdx: fyi, jamespage fixed a bug with DVR. This should help you. https://bugs.launchpad.net/charms/+source/neutron-openvswitch/+bug/1515008
<mup> Bug #1515008: L3 agent missing on compute node in DVR setup <backport-potential> <openstack> <neutron-openvswitch (Juju Charms Collection):In Progress by james-page> <https://launchpad.net/bugs/1515008>
<dpm_> Anyone knowledgeable on the python-django charm? I'm trying to use fabric as described on https://jujucharms.com/python-django , but whenever I try to execute a task with 'fab', I'm being asked for "Login password for 'ubuntu': "
<dpm_> lazypower perhaps? ^
<bdx> thedac: YES!
<bdx> jamespage: ^^
<thedac> bdx: Fix already landed in next and the backport to stable is on the way
<bdx> thedac, jamespage: nice fix! ... it totally makes sense that was the issue...I can't stop smiling
<bdx> thedac: thanks for your help over the last few days investigating that
<thedac> bdx: no problem. Sorry I did not catch it sooner
<bdx> thedac: you're good, man... ditto
<beisner> gnuoy, jamespage - fyi uosci is now feeding on the odl trio;  please see proposals on those charms for a few cosmetic adjustments.
<lazypower> dpm_ sorry i'm not sure whats going on there. thumper is the current maintainer of the django charm. I think a mail to the list as he's in NZ timezones, would be a good path forward to getting support with that particular error :(
<lazypower> sorry i'm not of more help
<jcastro> lazypower: remind me, did you do the charm testing and debugging at the last summit?
<lazypower> negatory
<jcastro> ah, that was you mbruzek iirc
<mbruzek> yeah that was me
<jcastro> ok you're doing one for cfgmgmntcamp. :)
<dpm_> lazypower, no worries, I ended up posting on http://askubuntu.com/questions/697318/how-to-use-fabric-with-juju and pinging cory_fu - will ask thumper on e-mail if all else faile
<dpm_> *fails
<jcastro> that reads like it's a key issue doesn't it?
<jcastro> like, you should be able to just ssh in there without any prompts
<cory_fu> jcastro: Yes, though per the instructions in the README it ought to just work.  I'm not sure how the charm is supposed to tell fab to use the Juju SSH key, nor even how it tells it how to resolve a unit ID (foo/0) into a host name
<jcastro> I'm going to stab in the dark and I bet thumper exports a bunch of environment variables for fab that juju consumes or the other way around
 * thumper has burning ears
<cory_fu> Oh, no, it's the fabfile.py in the charm.  You have to have that locally.  I haven't used Fabric before.  :)
<dpm_> cory_fu, yeah, you bzr branch the charm, and then you can point fab to the fabfile
<dpm_> if you're running fab from the charm's code directory, it finds the fabfile.py automatically too
<dpm_> I've also added
<thumper> what you do need?
<cory_fu> thumper: https://askubuntu.com/questions/697318/how-to-use-fabric-with-juju
<jcastro> http://askubuntu.com/questions/697318/how-to-use-fabric-with-juju
<dpm_> thanks guys :)
<cory_fu> It's prompting for the ubuntu user's password
<thumper> lazypower: that reminds me, I need to propose another change to the django charm as I've deployed celery in prod now
<cory_fu> Not using Juju's ssh key file
<thumper> lazypower: and I need to support upgrading django so I can get to 1.8
<thumper> dpm_: I'm unclear as to what you are attempting
<cory_fu> dpm_: Ok, so if your SSH public key is on Launchpad, you can get this to work by importing your key into the Juju deployment: juju authorized-keys import <launchpad-username>
<dpm_> thumper, essentially to be able to run "fab -R python-django/0 manage:collectstatic" from my desktop PC
<cory_fu> thumper: I think either that needs to be added to the Fabric section of the README, or it should somehow use the juju_id_rsa key
<cory_fu> ("that" being the auth-keys import instructions)
<thumper> hmm... not at all familiar with fabric
<thumper> why does this not just translate through a juju run thing?
<dpm_> cory_fu, that worked nicely, thanks! Now the actual command failed, but at least key authentication worked
<cory_fu> thumper: It apparently uses normal ssh.  There might be a way to have it use `juju ssh` as its ssh command, but I don't know how that would work
<dpm_> I'm not familiar with fabric, either, I just used it as the charm's documentation mentions it as the way to do what I was trying to do. If there is an equivalent way to do it with juju, I'd be more than happy to try that instead
<cory_fu> These functions should really be redone as Juju actions, but that would require work on the charm
<dpm_> cory_fu, in any case, your suggestion worked, so if you want to add it to the Ask Ubuntu question, I'll check it as the answer
<cory_fu> Done
<dpm_> \o/ thanks!
<cory_fu> dpm_: What error did you get from the command failure?  Anything useful?
<dpm_> cory_fu, it seems not all of the fabricfile.py commands work. Luckily, the one I'm interested in (manage:collectstatic) does. But here is an example of one that doesn't: http://pastebin.ubuntu.com/13241329/
<lazypower> thumper so many wants, so many todos, so little time
<dpm_> it seems to use ubucon_site instead of the expected gunicorn as the service name
<lazypower> thumper i know those feels :-|
<jcastro> man, authorized-keys import. I didn't even know that existed
<cory_fu> dpm_: What does this give you: juju ssh python-django/0 -- ls /etc/init/ubucon_site.conf
<cory_fu> dpm_: Scratch that.  This instead: juju ssh ubucon_site/7 -- ls /etc/init/ubucon_site.conf
<dpm_> cory_fu, there is not such a file. ubucon-site is not a service, it's created using a ubucon-site.yaml config for the python-django charm
<cory_fu> dpm_: The way the charm looks like it works is that it creates an Upstart job conf file in /etc/init based on the name of the deployed service, from the unit name (in your case, ubucon-site/7).  So if the site were up and running, and thus could be reloaded, there should be an /etc/init/ubucon-site.conf file on the unit
<cory_fu> Is there a config option that you haven't set for the charm to start the service?
<cory_fu> (thumper might be of more use there)
<cory_fu> I don't really know much about that charm
<dpm_> cory_fu, the site is up and running, but the charm seems to set up only the gunicorn upstart job as means of reloading the site: http://paste.ubuntu.com/13241388/
<dpm_> and http://paste.ubuntu.com/13241402
<cory_fu> dpm_: Ah, I think the issue is that the Fabric code pre-dates the wsgi / gunicorn change
<dpm_> aha
<cory_fu> dpm_: You could edit the reload function in fabfile.py and hard-code the service name to "gunicorn"...
<cory_fu> -    sudo('service %s reload' % env.sanitized_service_name)
<cory_fu> +    sudo('service gunicorn reload')
<dpm_> yeah, I was toying with the idea :)
<cory_fu> That should be changed in the charm to handle whatever wsgi subordinate was used, but I'm not sure how that would work
<dpm_> That is way beyond my charm-fu, but for now I'm happy that it's more or less working :)
<cory_fu> Glad we could help
<dpm_> indeed, thanks :)
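A minimal sketch of the fabfile.py change cory_fu suggests above. Fabric's `sudo()` and `env` are stubbed out here so the example is self-contained; the real fabfile imports them from `fabric.api`, and `sudo()` actually runs the command on the remote unit rather than returning it.

```python
def sudo(cmd):
    # Stand-in for fabric.api.sudo: the real one runs cmd with sudo
    # on the remote host; here we just return it for illustration.
    return cmd

class _Env:
    # Stand-in for fabric.api.env as populated by the charm's fabfile.
    sanitized_service_name = 'ubucon_site'

env = _Env()

def reload_old():
    # Pre-gunicorn behaviour: derives the upstart job name from the
    # deployed service name, but /etc/init/ubucon_site.conf doesn't exist.
    return sudo('service %s reload' % env.sanitized_service_name)

def reload_fixed():
    # Hard-code the upstart job the gunicorn subordinate actually installs.
    return sudo('service gunicorn reload')
```

As noted in the channel, the proper fix would be for the charm to expose whichever wsgi subordinate is in use rather than hard-coding the name.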
<blahdeblah> Hi all - anyone able to tell me what happened here?  http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1392/console
<blahdeblah> Looks like a problem with the test infrastructure, not the MP.
#juju 2015-11-13
<blahdeblah> Any charmers around to look at http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1392/console ?
<apuimedo> gnuoy: Hi
<apuimedo> do you know who to contact to remove a charm from the store? I thought removing the branch would suffice
<apuimedo> but it's been a week and the charm remains in the store
<gnuoy> hi apuimedo, I'm not sure how that happens. jcastro might be able to point us in the right direction when he comes online
<apuimedo> thanks
<MrBy> hi, i successfully deployed landscape and now i have a minimal openstack installation. how can i extend it with juju to deploy ceilometer, heat, ...?
<coreycb> cory_fu, I'm looking through the vanilla demo for layers: https://jujucharms.com/docs/1.24/authors-charm-building
<coreycb> cory_fu, who calls provide_database()?  I'd think it would be called in mysql/postgresql charm but I'm not seeing anything there.
<jrwren> anyone familiar with mongodb charm ansible tests able to help me with its test failures?
<jrwren> https://code.launchpad.net/~evarlast/charms/trusty/mongodb/fix-dump-actions/+merge/277191  i tried to fix the failing tests due to the recent juju change in unit numbering, but it is still failing, I do not see why.
<cory_fu> coreycb: In a meeting, sorry.  Give me a bit
<coreycb> cory_fu, me too, no rush
<lazypower> jrwren - 1 sec let me take a look
<lazypower> jrwren looking @ the console output of the failing test, it looks unrelated to your change; it's due to the unit numbering behavior change
<lazypower> jrwren and there's still some older syntax in the tests - DEBUG:runner:    mongo = self.deploy.sentry.unit['mongodb/0'].info['public-address']
<lazypower> thats emitting from the relate-ceilometer test
<jrwren> lazypower: oh, I thought I updated all those old reference forms. I'll push a change updating those. Thanks.
<tvansteenburgh> jrwren: http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3607
<jrwren> *doh* now I see how I missed it.
<jrwren> oh wow, fails proof too. should I address those?
<tvansteenburgh> jrwren: yes plz
<jrwren> poor neglected mongodb charm.
<jrwren> in the charm proof output for that mongodb charm, it says benchmark has no hooks. There is nothing there for benchmark. Can I delete the interface from metadata.yaml?
<lazypower> if its I: you can safely ignore it
<lazypower> W:/E: are the blockers
<jrwren> ah, ok. i won't touch that.
<cory_fu> coreycb: If you're talking about, e.g., https://github.com/johnsca/juju-relation-mysql/blob/master/provides.py#L50, then yes, it would be called by the mysql charm, e.g. https://github.com/marcoceppi/mysql-charm-layer/blob/master/reactive/mysql.py#L246
<cory_fu> coreycb: Of course, if the relation is being provided by the existing, non-reactive mysql charm, then it won't use the interface layer at all.  But it should still work as long as I got the interface protocol right (fair chance I mucked it up somehow)
<coreycb> cory_fu, ok I think that's the issue, I was looking at lp:charms/trusty/mysql
<cory_fu> Yeah, that doesn't use the interface layer at all, yet.
<coreycb> cory_fu, but the net is you need reactive charms on each side of the interface, right?
<lazypower> coreycb - not at all :) recall the etcd example i showed yesterday on our h/o?
<coreycb> lazypower, sort of, can you point me to that again?
<lazypower> https://github.com/chuckbutler/interface-etcd
<lazypower> i apologize in advance for the state of this repository and its lack of a proper readme
<coreycb> lazypower, not a problem, thanks!
<coreycb> lazypower, would it also be legitimate to call relation_get/set in requires.py?
<natefinch> marcoceppi, dpb1:  There's an error in CI trying to deploy the landscape bundle on master. Looks like an error in config-changed.  This line is throwing a KeyError for 'services': https://github.com/charms/haproxy/blob/master/hooks/hooks.py#L354
<natefinch> Seems like maybe the latest changes on master are firing config-changed before all the data the charm expects to be there actually exists... but I think the charm shouldn't depend on it being there, right?
<dpb1> natefinch: can you show me the bundles you are using?
<natefinch> dpb1: it's something CI is deploying. mgz_ do you know where those bundles live?
<dpb1> (it's probably an older bundle deploying more recent code)
<natefinch> dpb1: here the CI log, FWIW: http://reports.vapour.ws/releases/3307/job/aws-quickstart-bundle/attempt/1317#highlight
<natefinch> dpb1: looks like it's called landscape_scalable.yaml.... but I don't know where it comes from
<natefinch> (asking sinzui on juju-dev)
<coreycb> lazypower, ok so I just looked through the RelationBase code a little and it looks like get/set_remote call relation_get/set.
<coreycb> so I think that answers my question
<cory_fu> coreycb: You should use get/set_remote instead of relation_get/set in interface layers because of how conversations are managed.
<sharan> Hi
<cory_fu> coreycb: Conversations group remote units together based on whether it makes sense for them to proceed through states as a logical group and to share data.  So set_remote can send data to multiple remote units if the scope is SERVICE or GLOBAL
<cory_fu> This really only applies to developing interface layers, though
<cory_fu> When using interface layers, you shouldn't need to worry about conversations or about using either set_remote or relation_set, etc., because the interface layer should provide an API for interacting with the other side
<coreycb> cory_fu, awesome thanks, it's making sense now.
<lazypower> I just saw something interesting during a review: a hook decorator that doesn't explicitly define the hook context in which it should be run, decorating 2 methods
<lazypower> as in @hooks.hook()  <cr> def install():
<lazypower> same declaration for a config_changed()... does this run on every hook context without the proper identifier?
<sharan> Hi, I am trying to implement a peer relation in my charm. I have deployed my charm and am using "juju action do" to download a package; after the download completes I set a license flag to true and the charm works perfectly. But when I do "juju add-unit" to get the peer relation, the license flag is already true from the first deployment, so the charm goes into installation of the product before the package is present
<lazypower> sharan - this seems like something the charm should do natively instead of relying on an action, or call the action from the hook code.
<sharan> is it possible to call juju action from the hook code?
<lazypower> you can. The CWD during hook execution is $CHARM_DIR
<lazypower> so you can call action/thing-you-need-to-do
<lazypower> but you wont have any action parameters available, so you will need to compensate for that
<lazypower> by either allowing it to be parameterized on the CLI, import the method from the action (if written in python)
<marcoceppi> lazypower: sharan the long and the short of it is no. Actions are only invoked by operators
<marcoceppi> they are one time tasks and not a way to persist data in a service. If you need to persist data it should be modeled as configuration
<marcoceppi> that way scale out will get the same copy of data instead of worrying about quorum and managing peer relation data
<lazypower> and thats a fair point ^
<sharan> how do we configure the data? is it in config.yaml we need to configure?
<lazypower> sharan - correct. if there's something you need to provide as configuration, it belongs in config.yaml and you can then take action when it's present; if no value is present you can set status to blocked, telling the user they need to configure the charm before it can proceed.
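A hypothetical config.yaml fragment along the lines marcoceppi and lazypower describe; the option name and wording are invented for illustration:

```yaml
options:
  package-source:
    type: string
    default: ""
    description: |
      URL of the licensed package to install. Until this is set, the
      charm defers installation and sets its workload status to
      "blocked" so the operator knows configuration is required.
```

Modeling the data as configuration means every unit added later (including peers from `juju add-unit`) sees the same value, instead of depending on a one-shot action having been run on some unit.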
<natefinch> rick_h_: you around?
<natefinch> or marcoceppi or anyone else... trying to deploy a charm I just pushed to my personal namespace, and juju is saying charm not found
<rick_h_> natefinch: howdy
<rick_h_> natefinch: which charm? what command?
<natefinch> $ bzr push lp:~natefinch/charms/vivid/ducksay/trunk
<natefinch> Created new branch.
<natefinch> $ juju deploy cs:~natefinch/vivid/ducksay
<natefinch> ERROR cannot resolve URL "cs:~natefinch/vivid/ducksay": charm not found
<rick_h_> natefinch: not seeing it yet when did you push? https://jujucharms.com/u/natefinch/
<natefinch> rick_h_: like 3 minutes ago.... too fast?
<rick_h_> natefinch: yea, by about 2hrs
<natefinch> rick_h_: ug
<natefinch> rick_h_: why is there lag?
<rick_h_> natefinch: yea, the charm upload/etc stuff will be more like you're expecting
<rick_h_> natefinch: bzr uploads are processed every 15min, and the legacy and new servers have to stay in sync, which takes a while
<natefinch> rick_h_: I guess I figured the charm could be retrieved on the fly when requested.  But *shrug*.
<natefinch> rick_h_: are we getting deploy from github any time soon?
<rick_h_> natefinch: not deploy from github but upload to the store by EOY
<natefinch> rick_h_: as long as it means I don't have to type bzr, I'm happy.
<rick_h_> natefinch: +1
<rick_h_> natefinch: talk to uros and see if you can get in early
<rick_h_> natefinch: but they're working on getting it asap
<rick_h_> natefinch: and have PoC working and should be beta'able this monthish and such
<natefinch> rick_h_: it's ok, just makes it hard to test features that require specific charms in the store
<rick_h_> natefinch: well to test the charm do you need it in the store?
<natefinch> rick_h_: I need to test that my code does the right thing when the charm is coming from the store vs. local
<rick_h_> natefinch: ah gotcha
<rick_h_> natefinch: you can ask uros for access to the current charm upload command
<rick_h_> it's not the official stuff, but it works today to upload straight to the store and bypass bzr
<rick_h_> natefinch: if it's just testing/etc it'd save you hours
<rick_h_> natefinch: showed you a doc
<rick_h_> natefinch: shared that is
<natefinch> rick_h_: cool, thanks
<natefinch> rick_h_: I'll talk to uros
<blahdeblah> Hi. Any charmers still around to tell me why http://juju-ci.vapour.ws:8080/job/charm-bundle-test-lxc/1392/console is failing?
#juju 2015-11-14
<arosales> any ~charmers around?
<arosales> I think most folks have started their weekend
<marcoceppi> o/
<marcoceppi> blahdeblah: it's failing lint
<marcoceppi> DEBUG:runner:call ['/usr/bin/make', '-s', 'lint'] (cwd: /tmp/bundletester-FGMSmT/ntp)
<marcoceppi> DEBUG:runner:hooks/ntp_hooks.py:77:80: E501 line too long (97 > 79 characters)
<marcoceppi> DEBUG:runner:hooks/ntp_hooks.py:118:80: E501 line too long (90 > 79 characters)
<marcoceppi> DEBUG:runner:make: *** [lint] Error 1
<marcoceppi> DEBUG:runner:Exit Code: 2
<blahdeblah> marcoceppi: thanks - those run *after* the amulet tests?
<marcoceppi> blahdeblah: first
<marcoceppi> http://reports.vapour.ws/charm-test-details/charm-bundle-test-parent-3531
<marcoceppi> blahdeblah: that's a better breakdown of that output
<blahdeblah> Right - that is much better; I'll get an update to that MP done over the weekend.
<arosales> marcoceppi, wow still around :-)
<arosales> marcoceppi: seems I can't find the MP for http://review.juju.solutions/review/2342
<marcoceppi> arosales: it was deleted
<marcoceppi> arosales: I'll remove from queue
<arosales> blahdeblah: but looks like the ntp tests pased DEBUG:runner:The ntp deploy test completed successfully.
<arosales> marcoceppi: thanks
 * arosales will move onto the next one
<marcoceppi> arosales: removed ;)
<blahdeblah> arosales: Yeah - those tests aren't terribly sophisticated
<arosales> well at least there are tests
<arosales> :-)
<marcoceppi> good news is, the tests pass, bad news is pep8 hates you ;)
<blahdeblah> There's a way to tell those tests to override on a given line, isn't there?
 * blahdeblah asks Google
<arosales> marcoceppi does charm proof check for pep8?
<marcoceppi> arosales: it checks the charm if there's a "lint" target
<marcoceppi> the charm author has a make lint target so we run it as part of bundle tester
<marcoceppi> so it's basically, bundletester will do the following:
<marcoceppi> - charm proof
<marcoceppi> - make lint (if available)
<marcoceppi> - make test (if available - unit tests)
<marcoceppi> - run the charm integration tests
<arosales> marcoceppi: ok, thanks
<cory_fu> marcoceppi: Have you given any thought to making charm proof wrt. layers?
<cory_fu> Charm layers tend to fare ok, but not so much base or interface layers
<marcoceppi> cory_fu: I really want to make charm create for layers and charm add
<marcoceppi> cory_fu: like charm create layer, charm add layer:nginx. I keep messing up the damn includes syntax like a dope
<cory_fu> Agreed
<marcoceppi> cory_fu: it's not a bad idea, it's not on the road map for this iteration but could make it on there before EOY
 * marcoceppi packs up computer for the weekend
<cory_fu> T'was just an errant thought
<arosales> marcoceppi: For monday, note charm CI is marking charm CI as green even though LXC fails, (aws pass) [ref = http://review.juju.solutions/review/2350]
<marcoceppi> arosales: the logic for that might not necessarily be bad
<marcoceppi> do we want to weight failures higher than passes?
<marcoceppi> esp. given the flakiness of some of the substrates
<marcoceppi> lxc failed because of a provider problem (I restarted the tests)
<arosales> one school of thought was that it had to pass on local and public cloud
<marcoceppi> arosales: yes, but a failure doesn't always mean it's a charm problem
<arosales> in this case the failure is due to timeout, most likely due to infrastructure
<arosales> agreed
<arosales> but charm CI doesn't tell us why it failed
<arosales> just that it failed
<marcoceppi> it does tell us
<arosales> well doesn't surface up infrastructure or charm fail
<marcoceppi> DEBUG:runner:Deployment timed out (900s)
<arosales> sorry, I didn't state that correctly
<marcoceppi> arosales: the output we link people to is kind of crap
<marcoceppi> it's hard to find that
<marcoceppi> arosales: I agree we should work to distinguish infrastructure failure vs testing failure
<marcoceppi> but we don't have that atm
<arosales> but to your point, is it a charm failure or a infrastructure failure
<arosales> but regardless
<marcoceppi> agent-state-info: lxc container cloning failed
<marcoceppi> it was infrastructure
<arosales> the question is when do we mark a Charm CI test as a green box, ie passing
<marcoceppi> LXC was broken for about 20 test runs because of some weird lingering issue
 * arosales saw that in a couple of test runs
<marcoceppi> arosales: right, and the icon says "some tests have passed"; it's never definitive. I think we favor passing over failing given how often we have substrate issues
<arosales> re my questions when to mark a charm CI as passing I thought it had to pass on local and a cloud
<marcoceppi> arosales: we can reverse that logic, without problem, but it needs some discussion
<arosales> but it seems currently it marks it as passing if it passes on just 1 cloud
<marcoceppi> arosales: at the moment yes, I can see how the logic is confusing there
<arosales> I think passing on 1 cloud is fair for green
<arosales> but just wanted to confirm my understanding
<marcoceppi> as soon as it gets one test result back we say the status, where passing > failing
<arosales> oh
<marcoceppi> so, it'll say "some tests are passing" for any result that comes back that's testing
<arosales> so if it failed on 2 cloud, but passed on 1 it would be red?
<marcoceppi> not sure
<marcoceppi> I'm doing a terrible job of explaining this
<arosales> sorry, I was taking you literally on passing > failing
<arosales> I think I follow you though
<marcoceppi> I'm saying pass is weighted greater than failure if there's a mix result
<marcoceppi> because of infra flakiness
<marcoceppi> but we can easily reverse that logic where fail is if any one test has failed
<marcoceppi> I've got to catch a plane so I need to EOD and pack, but we can chat more on Monday
<marcoceppi> the new review queue will be a bit better at explaining this
<marcoceppi> by just showing the numerical result
<marcoceppi> X pass / Y fail
<marcoceppi> explicit :)
<arosales> I like the weight on passing
<arosales> later marcoceppi, travel safely
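The weighting marcoceppi describes could be sketched like this; the function and argument names are invented, and this is not the review-queue code itself:

```python
def overall_status(results, fail_wins=False):
    """Collapse per-substrate results ('pass'/'fail') into one status.

    Current behaviour per the discussion: any pass makes the overall
    status green (pass outweighs fail, to tolerate flaky substrates).
    Setting fail_wins=True models the reversed logic arosales asks
    about, where any single failure marks the run red.
    """
    if not results:
        return 'pending'
    if fail_wins:
        return 'fail' if 'fail' in results else 'pass'
    return 'pass' if 'pass' in results else 'fail'
```

The "X pass / Y fail" display marcoceppi mentions for the new review queue sidesteps the question entirely by showing both counts.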
<blahdeblah> marcoceppi: Pushed fix to that MP; does it retry testing automatically?
<aisrael> Anyone had problems with juju under wily not starting?
<aatchison> i
#juju 2015-11-15
<marcoceppi> blahdeblah: restarted the tests
<marcoceppi> they are not automatic at this point, but will be soon
<blahdeblah> marcoceppi: thanks!
#juju 2016-11-14
<Danni-33> hello, is anyone here familiar with the kubernetes-core bundle? it keeps failing to install easyrsa for me and I am unsure how to debug...
<Danni-33> seems to stop with this: 2016-11-14 01:07:33 INFO client-relation-changed error: could not download resource: HTTP request failed: resource#easyrsa/easyrsa not found
<kjackal> good morning juju world
<Mmike> Hello, guys. Do you know when 1.25.7 is going to hit trusty archives? (or at least the stable juju ppa?) I see that 1.25.7 got released almost two weeks ago: https://launchpad.net/juju-core/+milestone/1.25.7
<magicaltrout> kjackal: its a shame you're not here, i need someone to laugh at!
<magicaltrout> .. i mean with
<kjackal> magicaltrout: :)
<kjackal> magicaltrout: so what are the highlights from apache big data?
<magicaltrout> kjackal: admcleod 's shiny head, SaMnCo 's campness and my sobriety so far.....
<magicaltrout> marcoceppi: when you wake up, i spun up a bunch of stuff the other day, then did apt-get update which amazingly removed everything from my /home directory
<magicaltrout> so I can't shut the nodes down
<magicaltrout> so when you clean stuff up, i've got a bunch of crap running doing nothing cause I can't access them
<marcoceppi> magicaltrout: I'm in EU time zones
<magicaltrout> or that
<magicaltrout> fair enough
<marcoceppi> magicaltrout: so I'll do that now
<magicaltrout> i have some nodes running :P
<marcoceppi> magicaltrout: in eu-west-1?
<magicaltrout> sounds correct
<marcoceppi> magicaltrout: cleaned up, you may need to kill-controller to reset
<magicaltrout> i have no home directory, it's as reset as it comes :P
<magicaltrout> thanks
<admcleod> i wonder if the code of conduct specifies 'cant slap tom' explicitly
<magicaltrout> doubt it, but as you're a woman i'm not allowed to retort either which is deeply unfair admcleod
<admcleod> excellent
<magicaltrout> admcleod: one of you chaps should sit down with Rich Bowen and do a quick interview about why juju works well with Apache projects
<magicaltrout> i'm sure you wont, but just a thought
<magicaltrout> send petevg along to talk nonsense
<admcleod> *cut* *paste*
<petevg> I can talk nonsense. Sounds fun.
<magicaltrout> well we're wanting more projects other than Bigtop to upstream charms
<magicaltrout> so chatting with Rich about it in an audio blog thing wouldn't be a bad way to get the message out a bit
<petevg> magicaltrout: has Rich established a base of operations somewhere, or is it a matter of bumping into him?
<magicaltrout> he'll just be milling around petevg
<magicaltrout> I asked him earlier he said the best bet is getting him whilst sessions are on as it'll be a quiet period
<petevg> Cool. Thx, magicaltrout
<rock> Hi all.  We need to know the answer to one question. This is very important for us. In order to interact with the charm store we need an Ubuntu SSO account and we need to log in to Launchpad at least once, right? We created an Ubuntu SSO account with mail ID "Harish.Kannamthodath.CTR@kaminario.com" and username "kaminario". We had an account authentication issue, so without deactivating the Launchpad account we unknowingly deleted the Ubuntu SSO account perm
<rock> Then we tried to create an Ubuntu SSO account using the same mail ID and username, but we got an issue like "kaminario" username already in use. Probably the "kaminario" username is stored somewhere in a server cache. So please tell us how to get the "kaminario" username back, because we want the "kaminario" account to push to the charm store. How can we delete the "kaminario" username from the Ubuntu server cache?
<rock> We raised the bug. But didn't get reply. https://github.com/CanonicalLtd/jujucharms.com/issues/372
<axino> rock: hi, please email rt@ubuntu.com with your request and we'll sort it out
<rock> axino: OK. Thanks.
<rock> axino: We will put mail very soon. Please provide me the resolution as soon as possible. Thanks.
<pascalmazon> Hello, I'm developing a charm that replaces the standard network driver with our own userland one.
<pascalmazon> To do so, it performs an `ifdown -a` then `ifup -a` to re-record the interfaces configuration.
<pascalmazon> Unfortunately, the lxd-bridge is not registered in /etc/network, and lxdbr0 ports to my containers are lost.
<pascalmazon> I have to manually re-add veth ifaces into lxdbr0.
<pascalmazon> Do you people have a suggestion on how to deal with that properly?
<pascalmazon> I tried `systemctl restart lxd-bridge` just in case. didn't change the situation
<jproulx> is there a complete reference for the YAML I can use in specifying a private OpenStack cloud?  specifically now I'm trying to define what network to use but seems like a full reference would be quite handy
<rick_h> jproulx: I believe you just need --config network=$uuid
<bloodearnest> marcoceppi, many thanks for charm-tools 2.1.6 \o/, works for me on trusty
<marcoceppi> bloodearnest: thanks for the confirmation. Can't wait for snaps on trusty then i can boot out these debs and dep management
<bloodearnest> marcoceppi, looking forward to it! :)
<cory_fu> marcoceppi, mbruzek, aisrael: Please take a look at https://github.com/juju/charm-tools/issues/276
<mbruzek> cory_fu:  on it
<mbruzek> cory_fu: Do you mean: https://github.com/juju/charm-tools/pull/282
<aisrael> cory_fu: ack, I ran into that problem as well
<cory_fu> mbruzek: Yes, the PR not the issue.  Thanks
<cory_fu> aisrael: I believe the PR should resolve both issues, as I believe they were both due to the venv not being used and it instead trying to upgrade pip on the system.  I'm not 100% certain about it fixing #274, though
<cory_fu> aisrael: Hrm.  On second thought, I think it probably won't, since that's failing in load_entry_point
<cory_fu> mbruzek: Comments addressed or replied to
<mbruzek> cory_fu:  OK. I had another one just now. I think venv parameter is unused.
<cory_fu> mbruzek: Hrm.  It's not exactly unused, but I agree that it's confusing
<mbruzek> cory_fu:  oh? let me take another look then.
<cory_fu> I changed it to a state var because we had multiple levels of passing it through now.
<cory_fu> mbruzek: Take a look now.  param removed entirely
<mbruzek> cory_fu: How/where does __call__ get invoked?
<cory_fu> mbruzek: That's the main entry point for the tactic.  It's invoked by the build system
<mbruzek> Ok thanks cory_fu
<cory_fu> mbruzek: https://github.com/juju/charm-tools/blob/master/charmtools/build/builder.py#L448
<cory_fu> That's the initial call, and the call within WheelhouseTactic.__call__ is recursion
<cory_fu> (Sort of)
<cory_fu> mbruzek: I added a clarification to my explanation about the difference between the 1.x and 9.x pips and why each is needed.  I think the version pin in the PR is needed, but I am a little worried that it is a bit "hidden" in the code.
<cory_fu> But having it should prevent breakages due to newer versions being released in the future.
<jproulx> rick_h: thanks --config network=$uuid was the next puzzle piece
<mbruzek> cory_fu: Yes I am trying to verify the .join logic, then you get my +1
<rick_h> jproulx: so you should be able to stick that in the cloud file via nested yaml
<rick_h> config: <newline and indent>network=$uuid
<rick_h> jproulx: you were the one on the mailing list thread?
<jproulx> thanks it was the 'config:' section I was missing
<jproulx> rick_h: yup that's me
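The nested form rick_h describes would look roughly like this in a clouds.yaml; the cloud name and endpoint are placeholders, and the network value stays whatever UUID your OpenStack reports:

```yaml
clouds:
  my-openstack:
    type: openstack
    auth-types: [userpass]
    endpoint: https://keystone.example.com:5000/v3
    config:
      network: <network-uuid>
```

Anything under `config:` becomes a model-config default for that cloud, equivalent to passing `--config network=<network-uuid>` at bootstrap.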
<mbruzek> cory_fu: When I make a local path _venv I tried this command: 'bash', '-c', ' '.join(('.', _venv / 'bin' / 'activate', ';', 'pip3') + args)
<mbruzek> I get: TypeError: can only concatenate tuple (not "list") to tuple
<rick_h> jproulx: k, I'll reply to the thread as well to make sure it's documented. thanks for fighting through it
<cory_fu> mbruzek: def foo(*args): comes in as a tuple.  If you define it as a list, it won't work
<cory_fu> mbruzek: i.e., use args = ('foo',) instead of args = ['foo']
<mbruzek> oh
<cory_fu> I never really understood why Python rejects tuple + list or list + tuple.  They seem like reasonably compatible types
<cory_fu> I mean, list.extend(tuple) works fine
<mbruzek> TypeError: can only concatenate tuple (not "dict") to tuple
<cory_fu> As does a_tuple + tuple(a_list)
<cory_fu> mbruzek: lolwut
<mbruzek> I made args['foo']='bar'
<cory_fu> mbruzek: Yeah, a dict is not a tuple
<mbruzek> my mistake
<mbruzek> I see the error of my ways now
<mbruzek> thanks
<cory_fu> :)
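The constraint being tripped over here, in a runnable nutshell; the `_venv` path is simplified to a plain string (the real charm-tools code uses a path object):

```python
def build_pip_cmd(*args):
    # *args always arrives as a tuple, so concatenating it onto another
    # tuple works; a list or dict on the right raises TypeError.
    base = ('.', '/tmp/venv/bin/activate', ';', 'pip3')
    return ' '.join(base + args)

cmd = build_pip_cmd('install', 'charm-tools')

# Mixing sequence types needs an explicit conversion:
merged = ('a', 'b') + tuple(['c'])   # fine
# ('a', 'b') + ['c']                 # TypeError: can only concatenate tuple
```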
<cory_fu> marcoceppi: Can you take a look at https://github.com/juju/charm-tools/pull/282 and include that in the release?
<sk_> Hi, I have a query related to juju relations. When we add a relation say juju add-relation Service A Service B, the relation is added at service level.So if Service B has multiple units deployed, each and every unit of Service B will connect to Service A.
<sk_> But I have this requirement where I want relations to be added at unit level, can we achieve this ?
<magicaltrout> what does the unit stuff achieve sk_ ?
<magicaltrout> for example you could detect if it was the leader unit and act on that I guess, or pick out other unit specifics
<sk_> the scenario is that unit1 of Service A gets connected only to unit1 of Service B and similarly unit2 of Service A gets connected to unit2 of Service B and so on
<magicaltrout> sk_: but how would you ensure the same number of units existed in each?
<magicaltrout> that said, when the relation is joined you could ask for a list of units from the other side
<magicaltrout> and use some logic to pick the one you want, no?
<rick_h> sk_: I think you'd have to basically allow juju to manage the juju relation data, but work with the leader to coordinate exact to exact requirements
<rick_h> sk_: the issue would be that what happens when unit 3 goes down and you add unit 9 and you want A9 to work with B3
<magicaltrout> you could setup some config based hash to maintain the connection list, not sure what happens if machines go missing though
<sk_> ok, thanks I would try out these options and see it works for my requirement
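One way to sketch the deterministic pairing magicaltrout and rick_h hint at; the function is invented for illustration. Because every unit sorts the same relation unit lists, each side computes the same mapping without any extra coordination:

```python
def pair_units(local_units, remote_units):
    """Map each local unit to a remote unit by sorted index.

    Wraps around when the counts differ, which addresses rick_h's
    "what happens when unit 3 goes down" concern at the cost of some
    remote units serving more than one local unit.
    """
    remote_sorted = sorted(remote_units)
    pairs = {}
    for i, local in enumerate(sorted(local_units)):
        if remote_sorted:
            pairs[local] = remote_sorted[i % len(remote_sorted)]
    return pairs
```

A hook would feed this the unit lists it reads from the relation and then talk only to its assigned peer, as suggested above.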
<magicaltrout> demoed some juju stuff to some folk from JPL today rick_h they have plans to upstream a Solr Cloud charm and some other bits hopefully
<rick_h> magicaltrout: <3 awesome, solr the fulltext tool or something different?
<magicaltrout> yeah they make use of solr a lot instead of elasticsearch. I believe they have some spark <-connector->solr platform
<magicaltrout> so were looking to use the spark charms with some solr stuff
<magicaltrout> they were trying to do it in docker so we ran through the juju stuff and quickly got them converted
<rick_h> nice
<marcoceppi> cory_fu: that didn't fix it
<cory_fu> marcoceppi: I'm getting a different error now: Command 'lsb_release -a' returned non-zero exit status 1
<marcoceppi> cory_fu: yeah
<marcoceppi> because you're using bash
<cory_fu> It is using the venv pip, though
<cory_fu> marcoceppi: Why is using bash a problem?
<marcoceppi> well, maybe not
<cory_fu> marcoceppi: File "/tmp/tmpJDK3hF/bin/pip3", line 11, in <module>
<marcoceppi> pip 9.0 is using lsb_release
<marcoceppi> ugh, I hate software
<cory_fu> It is using the venv pip correctly.  I don't understand why lsb_release would fail
<marcoceppi> because it's a sap
<marcoceppi> snap*
<cory_fu> Snaps can't see what system they're running on?
<marcoceppi> no
<marcoceppi> it's confined. /etc/lsb-release just doesn't exist (really, practically) for snaps
<marcoceppi> why does pip need to query lsb_release
<cory_fu> I have no idea
 * marcoceppi angrily digs into code
<marcoceppi> cory_fu: so, it's just doing distro detection, which, whatever
<marcoceppi> I'll update the snap, angrily
<cory_fu> marcoceppi: How can you work around that in the snap?
<marcoceppi> cory_fu: ANGRILY
<marcoceppi> not sure, but I'm going to try
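(For anyone hitting the same thing: distro detection doesn't have to shell out to `lsb_release` at all. A sketch, and not pip's actual implementation, that parses `/etc/os-release`-style content instead, which tends to be more snap-friendly:)

```python
def parse_os_release(text):
    """Parse key=value pairs from /etc/os-release-style content.

    Sketch only (not pip's real code): avoids spawning
    `lsb_release -a`, which fails inside a confined snap.
    """
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep:  # skip blank/malformed lines
            info[key.strip()] = value.strip().strip('"')
    return info

sample = 'NAME="Ubuntu"\nID=ubuntu\nVERSION_ID="16.04"\n'
info = parse_os_release(sample)
# info["ID"] == "ubuntu", info["VERSION_ID"] == "16.04"
```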
<MotherDuckingNew> Salutations.
<MotherDuckingNew> I was following THIS video tutorial: https://www.youtube.com/watch?v=bxvCkPnC53U, but it seems he already has some clouds set up on his first video, so I can't really follow along, since I have zero clouds set up. What's a really good beginner tutorial on juju?
<MotherDuckingNew> p-please reply guys. :3
<tvansteenburgh> MotherDuckingNew: https://jujucharms.com/docs/stable/getting-started
<andrey-mp> Hi! I have a question about bundles. Is it right place to ask it?
<skay_> I am trying to debug hooks. I don't have the commands listed here, https://jujucharms.com/docs/stable/developer-debugging#debugging-reactive
<skay_> did I not install everything?
<cory_fu> skay_: Those commands must be run on the deployed unit, in a hook context, and will only be available if your charm is reactive.  Also, if your charm specifies use_venv in layer.yaml, then you will have to activate the venv from /var/lib/juju/agents/unit-<app>-X/.venv/bin/activate
<cory_fu> skay_: So, the main two questions are: Are you in a debug-hooks session, and does your charm use_venv?
<skay_> cory_fu: hi, I started off by running juju dhx, which starts up tmux then eventually drops me to a window for the hook
<skay_> cory_fu: my charm doesn't use venv, I wonder if that would be helpful?
<cory_fu> Ok, good, so you're already in a debug-hooks session
<skay_> cory_fu: do I have to do anything special when I've got a charm with layers? I'm not clear from reading the docs
<cory_fu> skay_: It's recommended for subordinate charms, and it is probably better for all charms, but it does add a step when debugging the charms
<cory_fu> skay_: Nothing other than building the charm with `charm build`.  Can you point me to, or pastebin your layer.yaml file?
<cory_fu> The only other thing I can think of is that maybe you forgot to add 'layer:basic' to your includes: list
<andrey-mp> question about bundles: In juju 1.25 I can use prepared machines for a bundle. But Juju 2.0 ignores created machines even if they have the same "names". Is it a bug?
<rick_h> andrey-mp: no, it's a difference in the built in bundle deploy vs the deployer tool
<rick_h> andrey-mp: the goal of the built in bundle deploy is to use bundles that are portable for all users and to leave it to the deployer to do special case things that only work locally
<skay_> cory_fu: done
<andrey-mp> rick_h: thank you for the answer. I made this solution because bundles didn't support disks for machines. Is it possible to specify disks for each machine in 2.0?
<cory_fu> skay_: That seems fine.  You should be able to run charms.reactive from the command line in your dhx session.  What do you get when you type that?
<skay_> cory_fu: I get 'command not found' https://paste.ubuntu.com/23477174/
<skay_> cory_fu: perhaps I missed something when building my charm?
<cory_fu> skay_: That's very strange.  Check the wheelhouse/ directory in the built charm and see if it has the charms.reactive wheel in there
<skay_> cory_fu: I do not see a wheelhouse directory (I didn't build this one with a venv enabled)
<skay_> cory_fu: where would a wheelhouse directory go, or is it only around when someone uses a venv
<cory_fu> skay_: When you do `charm build`, one of the first lines output by that command says what directory it's building into.  The wheelhouse directory will be there.  Alternatively, it will be on the deployed unit in /var/lib/juju/agents/unit-<app>-X/charm/wheelhouse
<skay_> cory_fu: it's not in /var/lib/juju/agents/unit-<app>-X/charm/wheelhouse
<skay_> cory_fu: I'm going to rebuild
<skay_> (I can't remember what to put in metadata.yaml to use a venv and I don't remember where the docs are for valid metadata entries)
<lazyPower> skay_ - its layer.yaml, virtualenv: true  as a layer option
<lazyPower> https://github.com/juju-solutions/layer-basic#layer-configuration - to anyone following along at home
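(For those following along, the option cory_fu mentioned earlier lives under the basic layer's options in layer.yaml. A sketch; check the layer-basic README linked above for the exact option names your charm-tools version expects:)

```yaml
includes: ['layer:basic']
options:
  basic:
    use_venv: true
```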
<skay_> lazyPower: thanks, that was driving me to distraction
<lazyPower> np skay_  :)
<skay_> thanks lazyPower and cory_fu. I've rebuilt my charm, and I have access to the commands I didn't have before (and I'm also using a venv now)
<lazyPower> \o/ nice
<brandor5> Hello everyone: quick question, how can i map a juju machine to the maas system it lives on?
<brandor5> anyone?
<marcoceppi> brandor5: in Juju, it lists the instance-id, which is the ID for the MAAS node
<brandor5> marcoceppi: oh, i wasn't aware. thanks !
<marcoceppi> np!
<MotherDuckingNew> I'm feeling kind of nervous and jittery. Can someone tell me everything will be alright?
<Walex> MotherDuckingNew: difficult to do that in the #Juju channel :-)
 * rick_h passes juju juice around for calming effect
<Walex> but most things will be alright
<skay_> I'm unable to install snap with the snap-layer. it fails on grabbing ubuntu-core
<skay_> systemctl pastebin https://paste.ubuntu.com/23477593/
<skay_> journalctl pastebin https://paste.ubuntu.com/23477594/
<skay_> I'm trying to install a snap via  resource file
<skay_> it failed, then I started up a debug session and tried running a snap install command directly to see the output
<skay_> it's actually called layer-snap https://github.com/stub42/layer-snap
<skay_> stub: hi, have you seen a problem where someone is unable to install ubuntu-core when trying to install a snap?
#juju 2016-11-15
<magicaltrout> hey marcoceppi how angry were you when looking at code?
<magicaltrout> I can't help but think it was something like this
<magicaltrout> https://dl.dropboxusercontent.com/u/8503756/8252253547_c234b97423_b.jpg
<lazyPower> skay_ xenial right?
<skay_> lazyPower: yes
<lazyPower> I just ran it on a Xenial host, seems to have worked here with the hello-world snap...
<stub> skay_: Are you by any chance trying to do this under LXD? Snaps don't work in LXD containers for reasons I haven't looked into. I was hoping this would be fixed in the short term so I wouldn't need to document the limitation or put checks in the code.
<kjackal> Good morning Juju world
<anrah> Good morning!
<admcleod> hello - what, specifically, do i need to upgrade to resolve this: ERROR unrecognized command: charm publish
<evilnickveitch> admcleod, i believe it has been replaced with 'charm release'
<andrey-mp> Hi! How to define bindings for the charm? Some charm uses 'network-get public --primary-address' and gets the same as 'unit-get private-address' but in juju 1.25 it was public-address
<andrey-mp> and it's a problem
<skay_> stub: hey, yes indeed I'm using lxd. maybe I can ask in the snappy channel
<skay_> lazyPower: were you using lxd?
<rick_h> admcleod: s/publish/release
<MotherDuckingNew> I did everything it told me to do, but this is still different:
<MotherDuckingNew> Unit         Workload  Agent       Machine  Public address  Ports  Message
<MotherDuckingNew> mediawiki/0  waiting   allocating  0                               waiting for machine
<MotherDuckingNew> mysql/0      waiting   allocating  1                               waiting for machine
<MotherDuckingNew> Machine  State    DNS  Inst id  Series  AZ
<MotherDuckingNew> 0        pending       pending  trusty
<MotherDuckingNew> 1        pending       pending  trusty
<marcoceppi> MotherDuckingNew: what's the cloud you're using?
<MotherDuckingNew> I don't know. I just followed this. https://jujucharms.com/docs/stable/getting-started
<rick_h> MotherDuckingNew: can you please pastebin the output of juju status --format=yaml
<MotherDuckingNew> http://pastebin.com/tR1KJwAP
<rick_h> MotherDuckingNew: so the feedback there is important: "copying image for..."
<lazyPower> skay_ - i was not.
<skay_> lazyPower: I guess I could switch to using juju1 unless there is a way to use juju2 with lxc
<skay_> lazyPower: juju1 doesn't have layers, right?
<lazyPower> skay_ - you can use juju2 with lxd
<lazyPower> you can also use layered charms with juju 1
<skay_> lazyPower: old lxc
<skay_> lazyPower: if the layer-snap has trouble with lxd vs. old lxc then I can't use lxd
<skay_> lazyPower: ok, to recap. I can still use layers, regardless
<lazyPower> yep :)
<lazyPower> i dont think juju 1.25 has resource support though
<lazyPower> and you were attempting to deliver a snap as a charm resource right?
<MotherDuckingNew> (I just got disconnected) What is the lesson I should learn from that pastebin?
<rick_h> MotherDuckingNew: sorry, the status there states that lxd was downloading the ubuntu-trusty images and it was at 97% done
<rick_h> MotherDuckingNew: so the lesson was that there's more detailed information on "waiting for machine" in the yaml output that pointed to what was going on
<petevg> Hiya, cory_fu. How do you use that interface you added to zeppelin to register notebooks? Is it in the current zeppelin charm in the store?
<cory_fu> petevg: It is, and it's documented in the interface's README: https://github.com/juju-solutions/interface-zeppelin#charms-registering-a-notebook
<petevg> cory_fu: cool. I am jet lagged, but gave it a read. We don't have a way of just dumping a notebook in without it being part of another charm now, right?
<petevg> ... but I guess I can expose the register notebook routine as an action.
<cory_fu> petevg: Indeed. I had planned to make an action for it, but haven't done so yet
<petevg> cory_fu: cool. If I get ambitious, I will add it.
<marcoceppi> cory_fu: does this seem too crazy? https://github.com/juju/charm-tools/pull/284
<cory_fu> marcoceppi: Seems fine to me
<marcoceppi> cory_fu: I have no idea what entity is
<marcoceppi> or if it has makedirs_p as a method
<cory_fu> marcoceppi: It's a Path object, so yes, it should have that method
<cory_fu> marcoceppi: It gets populated by https://github.com/juju/charm-tools/blob/master/charmtools/build/builder.py#L326 via https://github.com/juju/charm-tools/blob/master/charmtools/build/builder.py#L295
<ryebot> Do child layers have access to relations from parent layers?
<ryebot> Like, if I have layer-foo that depends on layer-bar, and layer-bar includes interface-baz, does layer-foo have access to interface-baz?
<marcoceppi> ryebot: yes
<ryebot> marcoceppi: excellent, thanks!
<ryebot> Also, I assume it's bad form for interface layers to do things beyond communication, like install software. Is that correct?
<marcoceppi> ryebot: generally, yes. typically, if you have to do things like installation, it's best to make a layer-<interface> which does the common components in a reactive directory, handling installation and other complex measures, then raising its own state
<marcoceppi> ryebot: that's a guideline, and there are always exceptions
<marcoceppi> cory_fu: so, it works, but doesn't
<marcoceppi> cory_fu: jk, it works
<ryebot> marcoceppi: perfect, thanks
<skay_> lazyPower: yes, I wanted to use resources
<skay_> lazyPower: I like the idea of snapping my application, but I don't want to publish it in the store
<marcoceppi> cory_fu: can you TAL again re: https://github.com/juju/charm-tools/pull/284
<skay_> lazyPower: I thought maybe if I used a resource I wouldn't need to
<cory_fu> marcoceppi: Oh, yeah, that makes sense
<marcoceppi> cory_fu: I actually tested it, lol
<cory_fu> Madness
<deanman> Evening, is there a separate channel for juju docs and contributing to that?
<evilnickveitch> deanman, there isn't a specific docs channel no, but I can help you.
<mgz> deanman: I don't think there's specific one for docs, but also happy to have docs talk in #juju-dev
<evilnickveitch> oh yeah, or there :)
<deanman> evilnickveitch: i'll catchup with you in juju-devs then
<marcoceppi> cory_fu: do you have a few mins to chat about the snap problems?
<cory_fu> marcoceppi: In a meeting.  Give me a few min
<marcoceppi> kk
<cory_fu> tvansteenburgh: Can you join us in https://hangouts.google.com/hangouts/_/canonical.com/daily-bigdata?authuser=1 real quick?
<cory_fu> Oh, he's at the dentist
<cory_fu> marcoceppi: Ok, I'm free.  What HO should I join?
<marcoceppi> cory_fu: how about
<marcoceppi> cory_fu: https://hangouts.google.com/hangouts/_/canonical.com/snappy-pip
<tvansteenburgh> cory_fu: back from dentist, you still need me?
<cory_fu> tvansteenburgh: kwmonroe, kjackal, and I wanted your help with getting the RQ updated in production, since we tested it in staging last week
<cory_fu> tvansteenburgh: I have another meeting right now, but if you wanted to jump on a HO w/ kwmonroe and kjackal, I can join when I'm done
<tvansteenburgh> cory_fu: sure, whatever you guys prefer
<kwmonroe> cory_fu: kjackal, let's do it after the CI chat since that might surface more work for tvansteenburgh.  so, like in an hour-ish.
<tvansteenburgh> kwmonroe: i have a meeting in an hour
<kwmonroe> tvansteenburgh: if you're free now, https://hangouts.google.com/hangouts/_/canonical.com/juju-ci (kjackal cory_fu too if you want)
<jrwren> the ceph charm suggests it should work with directories instead of devices: https://jujucharms.com/ceph/  osd-devices says "charm assumes anything not starting with /dev is a directory instead". Does that actually work? If it doesn't, where should I file a bug?
<jrwren> lazyPower: how/where are we meeting re: elasticstack
<lazyPower> jrwren invite deployed
<lazyPower> https://hangouts.google.com/hangouts/_/canonical.com/elastic-stack?authuser=0
<lazyPower> to anyone else following along at home, you can join here
<jrwren> lazyPower: thanks!
<bdx> I'm hitting this on charm build across the board http://paste.ubuntu.com/23482201/
<bdx> ok nm only in certain charms
<lazyPower> bdx - is there an exclude: directive in the layer.yaml?
<bdx> lazyPower: no
<bdx> I've removed some layers from my layer.yaml ... getting charm build to succeed now ... should find the cause momentarily
<lazyPower> bdx - i know that right now etcd only builds against charm-tools from tip, which is why i asked. and its due to the exclude: directive in the layer.yaml
<bdx> grrrrr - https://github.com/jamesbeedy/juju-layer-composer/blob/master/layer.yaml#L6
<bdx> srry - my b
<lazyPower> tadaaaa
<lazyPower> invalid yaml, dun dun dunnnnn
<lazyPower> bdx np, i do it all the time too
#juju 2016-11-16
<kjackal> Good morning Juju world!
<marcoceppi> kjackal: +1 to this change, i want to land it asap, but please drop maintainers field https://github.com/battlemidget/juju-layer-nginx/pull/14
<kjackal> marcoceppi: should I drop the tags field as well?
<marcoceppi> kjackal: probably, that one doesn't affect ownership of the charm as much, but if you don't have a strong opinion, then +1 to removing it
<kjackal> marcoceppi: ok, done
<marcoceppi> kjackal: merged
<kjackal> thanks
<kjackal> Hey cory_fu, got a question on juju resources and Amulet tests. These two do not mix nicely at the moment. What will be our strategy for testing charms with resources? Should we expect the store to have the resources beforehand?
<cory_fu> kjackal: https://github.com/juju/amulet/issues/142
<kjackal> cory_fu: yeap, thats why I am saying they do not play well. Should I disable the tests of apache kafka?
<cory_fu> kjackal: The standard assumption for charms is that you should be able to deploy them out of the box without error.  That means there will at a minimum need to be a placeholder resource already in the store.  However, that doesn't help for testing a charm from local, since locally deployed charms will never fetch resources from the store
<cory_fu> kjackal: The kafka charm should already have a resource uploaded to the store.  That was one of the few big data charms I was able to get a functioning resource into the store for.  However, it should also fall back if the resource is not provided, IIRC
<kjackal> cory_fu: yes, understood. For testing local charms you should first attach the resources to the controller. What I am not sure about is what will happen if you call bundletester on a charm in the store. I think in that case the resources will be fetched correctly.
<kjackal> cory_fu: I think the fallback works only if getting the resource is not implemented
<kjackal> cory_fu: that is in apache kafka, let me look up the line
<kjackal> cory_fu: https://github.com/juju-solutions/layer-apache-kafka/blob/fcd0b28242a1330530d219fa6d92e266403d24ea/lib/charms/layer/apache_kafka.py#L38
<cory_fu> kjackal: Right, I guess if deploying locally with 2.0 you *must* provide the resource.  If deploying from the store, like with BT, then yes, it will automatically fetch the resource from the store
<kjackal> cory_fu: ok, sounds good
<justicefries> hey all. I'm working on a pretty decent Juju setup, and I'm hitting the point where I need to get some Windows-based CI machines into the mix. i've been having a bit of a general automation nightmare with actually cutting images for them. anyway, what I'm wondering is if I need to actually do Windows config management (install build requirements and CI agent), is that something worth wrapping up as a charm?
<justicefries> or should I just keep plugging away with whiskey and powershell?
<justicefries> also, can I add dependencies across models? let's say I want to do Kubernetes federation and had the chart to support it, does juju support the whole notion of cross-model dependencies?
<lazyPower> justicefries  - so your first question about windows components
<lazyPower> we support windows charms written in powershell, and i can put you in touch with some cloudbase peeps who are our primary point of contact for windows charming, they kind of wrote the book on that
<justicefries> fantastic, I'd love that. its such a small part of my infra that causes so much pain.
<lazyPower> justicefries - and regarding federated clusters - our current k8s charms don't support the federated cluster feature set, but cross model relations are coming (they don't exist today)
<lazyPower> i would suspect we'll see something around the 2.1 timeframe, but please don't hold me to that as it's speculation. I just know we have it on the roadmap, not when it'll land
<justicefries> no, that's fine. I was figuring if it was an issue with the chart, but supported, i'd open a PR.
<justicefries> but since its not, I'll wait until they're in. :)
<lazyPower> ok :) justicefries - if you're interested in tracking that, the canonical-kubernetes-bundle would be a great place to start that conversation
<lazyPower> we've been talking about adding federated cluster support via config initially until xmodel relations land
<lazyPower> likely to tackle that in the late december/early jan timeframe assuming we stay on top of our k8s roadmap
<lazyPower> https://github.com/juju-solutions/bundle-canonical-kubernetes
<justicefries> sure sure. that makes sense. part of me would rather wait, but I suppose that's what joining the convo is for. :)
<justicefries> ah yes that was my next question, I was in the juju org. :)
<lazyPower> yeah, the earlier you join the conversation and let us know what your production concerns/needs are, the sooner we can get it in our planning sessions and make it happen
<lazyPower> for example, the etcd snapshot/restore for cloning clusters came up 2 weeks ago and i just pushed that out this week since it was a bitesized task
<justicefries> sure sure. out of curiosity - when that charm got upgraded for people, did that trigger everyone having to re-make clusters on that charm, or was it a graceful upgrade?
<lazyPower> so graceful upgrade -- i'm glad you asked. The feature itself was a drop in, but to actually restore a cluster it requires a snapshot/redeploy
<lazyPower> its extremely difficult to coordinate the cluster down/up steps during a restore, even the etcd admin guide recommends a nuke/re-pave from a snapshot
<justicefries> nice, and sure, that makes sense.
<justicefries> yeah.
<lazyPower> thats one of our more finicky components, as i'm sure you've noticed if you've had the pleasure of administering an etcd cluster :)
<justicefries> edges are all still a little sharp and etcd is that pet you keep chained to the strongest post you got lest it eat the mailman and entire post office.
<justicefries> bad analogy but yeah
<justicefries> exactly. :)
<lazyPower> hahaha
<lazyPower> oh man, i wish i could pin that message. thats brilliant
<justicefries> now is there any sense of a general system charm so I can throw out having a separate CM setup?
<justicefries> without calling out any names, I'm emerging from the relative darkness of pure cloud-config based provisioning and back into the fun world of CM.
<lazyPower> when you say general system charm
<lazyPower> what do you mean?
<justicefries> so I want to lay down my base config on units. nothing application specific.
<lazyPower> that sounds really close to what our internal services department uses a layer called "base-node" for
<justicefries> ah look at that, in the layer docs there's base layers.
<lazyPower> most charms are built from layers, and they just mix in the base-node layer to gain that functionality. it does however mean the onus is on them to keep their deployed charms updated, as they are adding a custom layer
<justicefries> nice.
<lazyPower> but, its not exactly the best ux. I think there's room for improvement there, if you're looking to just apply a set of policies a subordinate might be a good route forward, you eat a juju agent in turn for getting a single source to apply to the base configuration.
<justicefries> that's true. and i imagine a base layer wouldn't be "converging" on a config over time.
<justicefries> hmm, ok.
<lazyPower> the benefit to that approach, is it scales with your deployment, has isolated concerns
<lazyPower> the detriment is potential race with the principal charm, and eating that second agent.
<justicefries> right.
<lazyPower> so it really depends on what your policy is for your infra, and how you plan to execute on that
<justicefries> maybe I'll start with something traditional and then migrate towards ripping it out. we end up with a lot of dynamic infrastructure that's directly managed now, and I wouldn't mind at some point turning the actual workloads that cause the infra to spin up into a charm creation.
<justicefries> i mean, on the other hand, outside of basic things like security.
<justicefries> maybe some of the sysctl and other tweaks I do as "base" right now really relate to an application.
<justicefries> so there's probably equal room for re-thinking some of how that gets laid down now.
<lazyPower> right, the blanket base configuration has a place/time.
<lazyPower> which is why i'm hesitant to guide you away from it, more of a choose your own adventure thing
<lazyPower> if there's security measures you would take, we're likely to ask that you either bug it or submit a PR so everyone consuming those charms can be as secure as you're setting up your infra to be
<justicefries> that makes sense.
<justicefries> yeah, I'll first distill everything down into what truly is my base, then re-frame my question if there's anything left with maybe a better idea of how I'd like it to work in juju.
<justicefries> probably worth opening a proposal/issue at that point.
<lazyPower> yep, i agree with that 100%
<justicefries> hmm. one more for now.
<justicefries> juju HA - any way to split that out in a way that depends on cloud support? eg, AWS/GCE AZs.
<justicefries> side note - before I open an issue on juju/juju, any reason its not compiled with 1.7? I'm of course using MacOS Sierra at the moment.
<justicefries> actually, it looks like the PR was closed.
<justicefries> but the release binary shipping for OSX is still being built with go 1.6
<rick_h> justicefries: so the issue is building across trusty/xenial/etc and what Go we can use for that.
<rick_h> justicefries: so we try to keep up a bit, but also don't want to create unnecessary work
<justicefries> that makes sense.
<lazyPower> justicefries - when you say split that out, do you mean specify the AZ?
<lazyPower> justicefries - aiui, when you juju ensure-ha, its an auto split across the AZ its deployed into. us-east-1-a and us-east-1-c  for example
<justicefries> fantastic.
<lazyPower> i admittedly have very little experience with an HA controller.
<lazyPower> but would be happy to step through it if you need information
<justicefries> i may just start with backups and restores.
<justicefries> how is jenkins building juju for OSX? want to build on my own, but I must be missing some release configs/flags/something, because its looking for sierra artifacts which I obviously don't want.
<rick_h> justicefries: hmm, does the brew recipe not handle that?
<rick_h> justicefries: /me isn't sure tbh hasn't looked at build osx magic
<justicefries> brew hasn't been updated to 2.0.0 yet, seems there's an open PR.
<lazyPower> right, we were blocked until 2 or 3 weeks ago when 2.0.1 landed with the sierra fix
<lazyPower> i'm not sure what the status of that would be today, i'm pretty sure our release team handles updating brew.
<justicefries> ahh sure.
 * lazyPower makes a note to go poke about brew
<justicefries> yeah, the 2.0.1 OSX download on jujucharms.com is still built with 1.6
<lazyPower> i've been telling people to fetch from the release page and install juju to /usr/local/bin/juju
<lazyPower> justicefries - pardon my ignorance, what's the big to-do about getting it bumped to go 1.7?
<rick_h> lazyPower: yea, the latest news there was updating 1.25 with the same sierra fix (1.25.8 I think) to help with the transition I think
<justicefries> i'm doing it in an ubuntu container just because. so go 1.7 added support for sierra.
<justicefries> go 1.6 will just randomly panic :O
<justicefries> and its not consistent.
<lazyPower> ah yeah that i do know about
<lazyPower> i get the random panics from both kubernetes kubectl and juju
<lazyPower> i thought it was hardware related though
<lazyPower> as it affected both, but that makes total sense
<justicefries> nope purely OS
<lazyPower> welp, nothing to do here
 * lazyPower jetpacks away
<justicefries> it'll make you feel crazy that's for sure.
<lazyPower> indeed, thanks for validating my sanity and hardware
<lazyPower> its real fun when running a watch
<lazyPower> as it just randomly hangs with a stack for a second, then pops back over to the proper status output
<justicefries> yup
<bdx> can I associate a user's charm store sso login with a juju model user, so juju users have access to their charm store team namespaces via the juju gui?
<lazyPower> rick_h - ^ i'm pretty sure this already exists today?
<rick_h> bdx: lazyPower so not with the model user
<bdx> rick_h: in what way can it be accomplished?
<rick_h> bdx: lazyPower what are we trying to do? I mean the user can do a charm login separate from the model login, so they should be able to access things?
<bdx> rick_h: I have my 'creativedrive' namespace with private charms, I'm giving our devs a tour of the gui, and we are wondering how we deploy our 'creativedrive' charms via gui
<bdx> rick_h: when I try the usso login button, I get https://s12.postimg.org/7yf8276h9/Screen_Shot_2016_11_16_at_2_20_26_PM.png
<rick_h> bdx: oh hmm...
<bdx> when I login with my juju admin user I still can't see my 'creativedrive' namespace charms
<rick_h> bdx: right, the GUI doesn't know the split I guess
<rick_h> bdx: at one point in time there was a double login issue there
<rick_h> bdx: if you go to the gui and the user profile is there a link to login to the charmstore?
<rick_h> hatch: ^ where's the login to the charmstore link these days?
<bdx> rick_h: when I search for my 'creativedrive' namespace charm https://s11.postimg.org/vtao1yhib/Screen_Shot_2016_11_16_at_2_25_19_PM.png
<rick_h> bdx: right, so there was a link to login to the charmstore as a separate login from the juju controller
<rick_h> bdx: that would solve what you're looking for
<bdx> oooh I found it
<bdx> https://s11.postimg.org/sctse3d2b/Screen_Shot_2016_11_16_at_2_26_53_PM.png
<rick_h> there you go right, from the profile page
<bdx> rick_h, lazyPower: thanks guys ... sorry .. my bad
<rick_h> been a while since I monkeyed with it
<rick_h> bdx: all good, give that a go and try that out and if there's a suggestion on how to make it more obvious let us know
<rick_h> bdx: at one point in time I thought a failed search result had a hint, but that might have come/gone
<justicefries> hmm. so I've created a user as a superuser, logged in as him. i created a model under admin to use.
<justicefries> but when I try to grant it to myself: juju grant justicefries admin admin/aws-test (or aws-test) it can't find it.
<rick_h> justicefries: hmm, not sure what "under admin" would be
<rick_h> justicefries: so what happens if you just "juju models"
<justicefries> so I did --owner=admin when creating the model
<justicefries> under my superuser user? nothing.
<justicefries> under admin, the model.
<rick_h> justicefries: ok, so you created a model without your own user having permission
<NewServerGuy> Using Lubuntu, can't install "sudo apt install  zfsutils-linux" from the basic tutorial.
<rick_h> justicefries: so you'll need to switch to admin (logout/login)
<justicefries> exactly. as admin, I made justicefries a superuser.
<rick_h> justicefries: ok, who are you currently logged in as? "juju whoami"
<lazyPower> NewServerGuy
<justicefries> ok. i'm mostly trying to figure out a good flow for this. want multiple users, but I don't want to "lose" a model if someone leaves, so for production models I want them consolidated somehow.
<lazyPower> which version of lubuntu?
<justicefries> i'm under my user currently.
<justicefries> so I think I know the answer, this is more of a flow question now.
<rick_h> justicefries: so as long as you create them, have the admin with admin access, and have users have write access they can't destroy the model/deal with if they leave
<NewServerGuy> lazyPower: Yes?
<justicefries> got it.
<rick_h> justicefries: so I'd suggest just using the admin user, create all the models, add write access for other users
<justicefries> so for production models, its probably worth having under a "production" user or "admin".
<justicefries> that makes sense.
<lazyPower> NewServerGuy - which version of lubuntu are you using? the getting started guide assumes xenial+
<rick_h> justicefries: yea, the admin default user is meant to encourage that flow
<rick_h> justicefries: so you create all the other users, and admin is kind of like "root"
<NewServerGuy> not sure.
<justicefries> but there's no way to use --owner with grant, like with add-model
<lazyPower> NewServerGuy - can you pastebin the output of lsb_release -a?
<justicefries> even if you're a superuser.
<rick_h> justicefries: you don't really want to. You want the owner to stay admin so that users can't kill the model off
<rick_h> justicefries: so you just grant access
<justicefries> +1 make sense.
<justicefries> er, makes.
<NewServerGuy> uname -r
<NewServerGuy> 4.2.0-27-generic
<NewServerGuy> lazyPower http://paste.ubuntu.com/23487630/
<lazyPower> NewServerGuy - i dont think the zfs packages have been backported to trusty, its a xenial forward feature iirc.
<lazyPower> NewServerGuy - so, you can still use lxd without zfs...
<NewServerGuy> lazyPower what do I change in this script? http://paste.ubuntu.com/23487634/
<bdx> lazyPower, NewServerGuy: http://serverascode.com/2014/07/01/zfs-ubuntu-trusty.html
<lazyPower> bdx  ah man, thats a ppa though
<bdx> you can run zfs on trusty ... I've had mixed results though .
<lazyPower> not the canonical supported zfs stuff
<bdx> aaah
<lazyPower> yeah we experimented with this before too, and it was not as good as we had hoped
<lazyPower> NewServerGuy - lets start with where you're stuck
<lazyPower> because this script looks fine, but its obviously not going stellar just yet, so what phase does it get to?
<NewServerGuy_fro> refuses to do anything with lxd.
<justicefries> one weird thing with multiple users and the GUI, I had to explicitly specify the model, else I got this: ERROR cannot retrieve model details: model name "justicefries/" not valid
<lazyPower> ok, so its basically erroring out during the bootstrap phase?
<lazyPower> NewServerGuy - if you could capture the output of juju bootstrap lxd lxd  --debug  and pastebin it, that would be a good first step.
<NewServerGuy> paste.ubuntu.com/23487670/
<lazyPower> NewServerGuy - whats the output of juju --version?
<NewServerGuy> 1.25.6-trusty-i386
<lazyPower> ahhh and now it gets clearer to me
<lazyPower> ok, sorry about the long winded trail to get here. juju 1.25 does not support the lxd provider. You'll need to install the juju 2 package. You can continue using juju 1.25 but you'll want to look at the 1.25 documentation
<lazyPower> NewServerGuy - https://jujucharms.com/docs/1.25/config-LXC
<lazyPower> this document should un-muddy the waters for you on 1.25
<NewServerGuy> lazyPower We're trying to get a cloud going for a classroom.
<lazyPower> NewServerGuy - i would encourage you to use xenial, and juju 2.0.1 in that case
<NewServerGuy> is xenial an OS?
<lazyPower> Xenial is the 16.04 release of Ubuntu
<lazyPower> the Lubuntu install you're currently using is our last LTS release, 14.04
<justicefries> hm, ok. canonical-kubernetes works nicely.
<lazyPower> justicefries too much metal for one hand \ooo, ,ooo/
<justicefries> hahaha
<justicefries> i set a constraint while everything was still pending to up the worker machine size.
<justicefries> and it doesn't seem to be taking. do I have to add the units myself?
<lazyPower> ah that wont work for you though, you'll need to set constraints before you deploy
<lazyPower> or you'll need to use something like conjure-up where you can edit the constraints on the fly
<justicefries> i can probably add units at this point and it'll take yeah?
<lazyPower> only if you set-model-constraints
<lazyPower> add-unit doesn't take constraints in 2.0+
<justicefries> ah ha.
<justicefries> even if the application name has a constraint?
<lazyPower> err
<lazyPower> i'm not sure i follow
<justicefries> so I have a constraint on kubernetes-worker
<justicefries> and I do: juju add-unit -n 3 kubernetes-worker
<lazyPower> ah, it might.
<lazyPower> i think it will
<lazyPower> give it a whirl and tell me if i need to go home for the day ;)
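What they're testing, as a sketch (assumes a deployed kubernetes-worker application; the constraint value is an example):

```shell
# Constraints set on an application apply to units added afterwards
juju set-constraints kubernetes-worker instance-type=m3.medium
juju add-unit -n 3 kubernetes-worker

# Alternatively, set a model-wide default for all new machines
juju set-model-constraints instance-type=m3.medium
```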
<justicefries> hahaha. ;) sounds good.
<justicefries> it would be nice to be able to at some point pin out "versions" of my application, so if I have a whole bunch of old workers I need to scale down, I can do it in one big swath. not sure mechanically how that'd work yet.
<justicefries> again, need to play and come up with what I want haha.
<lazyPower> justicefries - welp, have i got good news for you
<lazyPower> juju reports versions in status output if the charm author uses the application_set_version helper
<lazyPower> so for example in canonical kubernetes, you'll notice all the apps report their current versions, so you have a window into whats out there
<justicefries> look at that.
<lazyPower> its not exactly pinning, but its introspection, and if the charm supports resources you can even lock that to provide whatever bins you wish at whatever version the charm supports
<lazyPower> risky when doing stuff like kubernetes 1.5 with our 1.4 charms
<lazyPower> but most of the time it'll just work
<justicefries> ok, so if I could do it based on extra model and/or application constraints (i want to know all 1.4.5 with an instance_type=m3.medium) I think that'd be gold.
<justicefries> i'm thinking like kubernetes labels and selectors right.
<justicefries> i know now that I want to get rid of kubernetes-worker/0-2, but it'd be nice if I could automagically grab that based on constraints or other metadata.
<lazyPower> well, i dont think you can do that out of the box without some status parsing/munging
<lazyPower> we dont have filters on status other than filtering to the app that i'm aware of
<lazyPower> juju help commands && juju status --help would be good there
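Absent built-in filters, the status parsing/munging lazyPower mentions can be sketched in Python. The structure below is a simplified, hypothetical slice of `juju status --format=json` output, and `units_matching` is a made-up helper, not a juju API:

```python
import json

# Simplified, hypothetical slice of `juju status --format=json` output
STATUS = json.loads("""
{
  "applications": {
    "kubernetes-worker": {
      "version": "1.4.5",
      "units": {
        "kubernetes-worker/0": {"machine": "3"},
        "kubernetes-worker/1": {"machine": "4"}
      }
    }
  },
  "machines": {
    "3": {"constraints": "instance-type=m3.medium"},
    "4": {"constraints": "instance-type=m4.large"}
  }
}
""")

def units_matching(status, version, instance_type):
    """Return units whose application reports `version` and whose
    machine was provisioned with the given instance type."""
    hits = []
    for app, info in status.get("applications", {}).items():
        if info.get("version") != version:
            continue
        for unit, udata in info.get("units", {}).items():
            machine = status.get("machines", {}).get(udata.get("machine"), {})
            if instance_type in machine.get("constraints", ""):
                hits.append(unit)
    return hits

print(units_matching(STATUS, "1.4.5", "m3.medium"))  # -> ['kubernetes-worker/0']
```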
<justicefries> huh. -o yaml and -o json are empty o.o
<NewServerGuy_fro> So, lazyPower, running on a laptop, I already got the first page of the tutorial done on my laptop....
<justicefries> oh
<justicefries> that's output file lol
<NewServerGuy_fro> How do I access my mediawiki page on other computers on the network?
<lazyPower> NewServerGuy_fro - ok, glad i didn't discourage you
<NewServerGuy_fro> No prob. On the laptop I got done two days ago. It's the server that's punking me.
<lazyPower> NewServerGuy_fro - so thats a more advanced use of lxc, and it requires bridging the lxc bridge.
<lazyPower> to your physical ethernet adapter
<lazyPower> and i've helped people turn their lxc deployments into hot dumpster fires by doing that
<NewServerGuy_fro> How is that done?
<NewServerGuy_fro> SHIT?
<justicefries> hmm. so its not quite enough information to automate that the way I'd ultimately like. i think I can work around it just with the policy and multiple models in that case.
<NewServerGuy_fro> So that's dangerous?
<bdx> lazyPower: is there a best practice for that yet?
<lazyPower> well, only if you dont know what you're doing or how your network topology is laid out
<NewServerGuy_fro> fuck.
<lazyPower> bdx - i'm pretty sure the lxd install screens offer this out of the box these days
<lazyPower> did that change?
<lazyPower> i'm referring to lxc, on trusty. As i believe thats what NewServerGuy_fro is using.
<bdx> ooo
<bdx> NATing the host adapter to the container ip?
<lazyPower> what you basically do, is remove the nat, and the lxcbr0 becomes a bridge adapter
<lazyPower> so you're pulling ip's directly from your router/DHCP server
<lazyPower> but again, this is moderate to advanced networking
<bdx> ooooh, when lxdbr0 is a bridge on your adapter
<NewServerGuy_fro> lazyPower I'm using more up to date Ubuntu on my laptop and installing more uptodate Ubuntu on server now.
<bdx> yeah .. I've built fire on top of that method too
<justicefries> this sure as hell beats my other kubernetes setup.
<NewServerGuy_fro> LAP:Ubuntu ; SERV-1:Ubuntu
<NewServerGuy_fro> Where I called you from originally --> SERV-2, is currently installing new Ubuntu.
<lazyPower> yeah here's a post from 2013 where i covered this - and its really risky without knowing what you're doing - http://dasroot.net/posts/2013-12-22-making-juju-visible-on-your-lan/
<lazyPower> you can easily hose a lxc deployment where no containers will work because networking is borked
<lazyPower> but if you're feeling brave, there's that.
<jrwren> NewServerGuy_fro: http://jrwren.wrenfam.com/blog/tag/bridge/ from 2016 instead of 2013 ;}
<lazyPower> hey right on jrwren
<lazyPower> <3
<jrwren> still, its... tricky
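For reference, the trusty-era bridge both posts describe looks roughly like the fragment below. The adapter name and details are assumptions, and as warned above, getting this wrong can take all container networking down:

```
# /etc/network/interfaces (sketch; eth0 is an assumed adapter name)
auto eth0
iface eth0 inet manual

# lxcbr0 stops NATing and bridges straight onto the LAN,
# so containers pull addresses from the real router/DHCP server
auto lxcbr0
iface lxcbr0 inet dhcp
    bridge_ports eth0
```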
<bdx> my context is out of focus with aws ... I miss my maas stacks
<lazyPower> bdx metal, for lyfe
<bdx> 4sho
<lazyPower> NewServerGuy_fro - i'm at my EOD, but i'm happy to pick this up tomorrow
<lazyPower> and there are others in here to lend a hand like jay, who is pretty knowledgeable. if you get extremely stuck, dont despair, hit us up on the mailing list juju@lists.ubuntu.com and we'll be happy to circle back and get you an answer
<NewServerGuy_fro> lazyPower, THanks man!
<lazyPower> anytime. we're here to help :)
<justicefries> be nice if I could point at an existing elasticsearch instance (AWS-managed one) within the kubernetes bundle.
<justicefries> otherwise this is slick.
<bdx> justicefries: you bring up a good point
<bdx> it would be cool to be able to drop in managed services in some places for sure
<justicefries> yup
<justicefries> marcoceppi: wonder if you'd just have a :proxy interface much like you have the :client interface now?
<justicefries> weird, kubeapi-load-balancer died out on me, had to add a new unit.
<justicefries> and machine
<justicefries> marcoceppi: I assume a cloud provider/AWS specific charm would work the same way huh? make an elb-proxy charm, have an interface on it, add relations, the charm handles the specifics of mapping the ELB to related machines.
#juju 2016-11-17
<justicefries> ok one last one. can I refer to a local charm or upload one to the controller from my local system? or is canonical's store the best way to do that with locked down ACLs?
<kjackal> Good morning Juju world!
<gnuoy> Is lxc profile named "juju-default" magically applied to all models or do you have to manually tell juju to apply that profile to a model?
<gnuoy> oh, is 'default' the model name ?
<gnuoy> so if I do "juju add-model foo" I need a corresponding juju-foo profile ?
<gnuoy> ok, for any future travellers: juju seems to look for an lxc profile called juju-<model name>, and if it finds it, it applies it to the containers in that model.
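gnuoy's finding, as a sketch (assumes the lxd client and an existing juju-default profile to copy from):

```shell
# Prepare the profile before creating the model; juju picks up
# a profile named juju-<model name> automatically if it exists
lxc profile copy juju-default juju-foo
juju add-model foo
```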
<jcastro> Reminder to all: Charmer Summit / config management camp CFPs are due tomorrow!
<aisrael> jcastro: is there a CFP template somewhere?
<jcastro> http://summit.juju.solutions/ has a link to the form
<aisrael> ta
<deanman> anyone having a working environment with openstack mitaka ?
<marcoceppi> justicefries: hey, so you'd want to use the existing client interface, that way it just seamlessly integrates
<marcoceppi> justicefries: an elb-proxy-charm is a great idea, we had an early attempt a while ago, but it would just reuse the http interface and take aws specific config as charm config and glue the two together
<justicefries> hmm nice. ok. might roll up my sleeves today and write some charms.
<aisrael> jcastro/marcoceppi: do you have a couple minutes to chat in eco-wx?
<jcastro> aisrael: I'm editing mid video, I need a few minutes
<aisrael> jcastro: no worries, marco answered my q's <3
<jcastro> cool
<jcastro> anyone have anything for the crossteam?
<aisrael> jcastro: yes, 1 sec
<aisrael> jcastro: https://bugs.launchpad.net/bugs/1640242
<mup> Bug #1640242: debug-hooks doesn't accept a named action <juju:Triaged> <https://launchpad.net/bugs/1640242>
<aisrael> That's not a wishlist item, imo, but a usability issue
<jcastro> cool, anyone else have a burning bug they'd like to see core address?
<jcastro> I'm going to ask about spot instances again
<jcastro> lazyPower: just waiting for youtube to finish the edit I did to trim the front of the video and I'll publish it on the YT channel.
<lazyPower> nice, thanks jcastro
<jcastro> marcoceppi: any bugs from you?
<marcoceppi> jcastro: how about that one where you have to have credentials even if you get add-model access to a controller
<jcastro> I don't think I've run into that yet?
<marcoceppi> jcastro: it's been around since the summit
<marcoceppi> since rc1
<jcastro> I've been gone remember? Link me up.
<jcastro> lazyPower: mbruzek: https://github.com/conjure-up/conjure-up/issues/505
<marcoceppi> jcastro: I can't find a bug now
<jcastro> ok, I can dig around
<lazyPower> jcastro - yeah he hopped on a hangout with us yesterday and we saw the progress
<lazyPower> so we've got most of the stuff there, still sorting out system access control issues, but otherwise stokachu made a ton of progress there
<marcoceppi> jcastro: https://bugs.launchpad.net/juju/+bug/1630372
<mup> Bug #1630372: "ERROR no credential specified" during add-model as non-admin user <usability> <v-pil> <juju:Triaged> <https://launchpad.net/bugs/1630372>
<arosales> lazyPower: you guys looking at a release for canonical-kubernetes and kubernetes-core today or going to try to gamble on a Friday
<jcastro> got it
<jcastro> I am confused by the bug work in core lately
<jcastro> like, bugs are being closed with no explanation
<rick_h> jcastro: example?
<jcastro> https://bugs.launchpad.net/juju-core/+bug/945862
<mup> Bug #945862: Support for AWS "spot" instances <adoption> <juju-core:Won't Fix> <https://launchpad.net/bugs/945862>
<lazyPower> arosales - good question, we're more than likely going to push today.
<lazyPower> arosales - is there something specific you're looking for?
<arosales> lazyPower: generally interested, but was also noticing the only failure for core and canonical k8s on cwr was that pesky lint issue
<lazyPower> ah, yeah. I didn't see the refactor merge come in yesterday, so i'll circle back on that and we'll get a release made as soon as its validated
<lazyPower> closer to EOD, but likely today
<arosales> ref = http://data.vapour.ws/cwr-tests/results/bundle_canonical_kubernetes/ec410f94fa8d4c58b482b9b9d04cf530/report.html and  http://data.vapour.ws/cwr-tests/results/bundle_kubernetes_core/b117cfc786174737af81ef32c3372108/report.html
<arosales> lazyPower: thanks
<jcastro> marcoceppi: can you explain the use case in more detail in https://bugs.launchpad.net/juju/+bug/1630372
<mup> Bug #1630372: "ERROR no credential specified" during add-model as non-admin user <usability> <v-pil> <juju:Triaged> <https://launchpad.net/bugs/1630372>
<jcastro> rick is confused as to what you're actually trying to do
<marcoceppi> jcastro: otp
<bildz> good morning
<bildz> how do I find out what container is running what instance of openstack?
<bildz> I have to control all the openstack components out of juju?
<bildz> i cant just edit config files on the servers cause they get overwritten :(
<jcastro> lazyPower: mbruzek: how do we look on azure? there's a guy asking in the sigcluster-lifecycle channel about azure
<mbruzek> jcastro: Last I checked we deploy fine in Azure I had kwmonroe do it a few times
<lazyPower> we have good test results on azure deploys in CWR aside from the lint error
<jcastro> awesome, good to know
<jcastro> I think I'll just respond each time a kops or kargo guy responds to a question
<jcastro> lazyPower: dang, so that lint error makes everything look broken?
<lazyPower> yeah, arosales already reached out about it this morning
<jcastro> ack
<marcoceppi> lazyPower: the nginx one is fixed
<lazyPower> marcoceppi - sorry i lost context, in what regard?
<marcoceppi> lazyPower: the nginx lint errors in kubeapi-load-balancer
<lazyPower> ah ok
<kwmonroe> mbruzek: any objection to me kicking off a new jujubox build on dockerhub?
<kwmonroe> (last one was 16 days ago)
<mbruzek> kwmonroe: yes
<mbruzek> kwmonroe: can you review the 2 pull requests in the repo?
<mbruzek> I just landed them today
<mbruzek> Giving you the option to build with a user other than ubuntu
<mbruzek> but by default it will build with ubuntu
<mbruzek> If those meet your approval, then I would like to merge them so we can build a new one.
<mbruzek> kwmonroe: I anticipate problems with charmbox with my changes yesterday and today.
<mbruzek> But I am committed to fixing those too
<kjackal> stokachu: Hey stokachu I see you have put up for review & promulgation dokuwiki until revision 11 , but I see you have also revision 15 under your namespace. Would you like to update the dokuwiki revision you have up for review?
<bildz> i've tried to install the juju-gui and it's sitting in an unknown state, and when connecting to the web interface it's hanging on "connecting to juju model"   juju-2.0                                   2.0~rc3-0ubuntu4.16.10.1
<tvansteenburgh1> hatch: ^
<hatch> bildz: with Juju 2 you no longer have to deploy the GUI charm
<hatch> the GUI charm is only for Juju 1
<hatch> to access the GUI with juju 2 simply run `juju gui --show-credentials` and it'll open a browser with the GUI and output your credentials to the CLI
<bildz> oh sweet
<hatch> bildz: and - if you've got a long running controller you can run `juju upgrade-gui` to get the latest gui release. :)
<bildz> thanks, hatch
<hatch> np, anytime, if you run into any issues there just ping me
<hatch> thanks tvansteenburgh
<bildz> appreciate the help!
<quixoten> I'm having an issue with Juju trying to connect to MAAS API version 1.0. Version 1.0 is not supported on the version of MAAS I'm using.
<quixoten> ERROR cmd supercommand.go:458 new environ: Get http://10.0.96.2:5240/MAAS/api/1.0/version/
<quixoten> juju version => 2.0.1-xenial-amd64
<quixoten> maas version => 2.1.1+bzr5544-0ubuntu1 (16.04.1)
<quixoten> anyone had this problem before ?
<brandor5> hello everyone: for the last few days when I try to bootstrap a juju controller on maas it fails with the error: "ERROR failed to bootstrap model: bootstrap instance started but did not change to Deployed state: instance "4y3hek" is started but not deployed" Anyone have any ideas? I'm seeing older stuff on google but nothing recently...
<brandor5> this command worked fine the week before last, btw
<quixoten> any errors output on the console of the machine that was started?
<brandor5> quixoten: I hadn't thought of that, gimme a few and I'll see what happens on the console
<verterok> Hi, any chance I can get some help with a wonky bootstrap node? looks like the mongodb config got broken/gone
<verterok> here are the mongodb logs: http://paste.ubuntu.com/23491858/ after restarting juju-db
<stokachu> kjackal: yea i need to re-review that charm and then ill push a new review request
<bildz> hatch: I've made changes to the openstack charms and have commited them, but they dont appear to be refreshing the proper changes.
<hatch> bildz: was this on a fresh deploy?
<bildz> yes
<bildz> i did a conjure up conjure-up
<bildz> this is absolutely amazing though
<bildz> my mouth dropped
<hatch> :D
<lazyPower> hackedbellini o/
<hackedbellini> lazyPower: here! :)
<lazyPower> so, to recap for anyone that comes across this later, we're continuing investigating running a docker based workload in lxd
<hackedbellini> lazyPower: so, how can I rebuild the layer of the charm?
<lazyPower> and you ran into a problem with a really old version of a charm that hasn't been refreshed with the latest layer fixes
<hatch> bildz: so when you click on the application on the canvas, and you go to the configuration settings in the inspector - does it show your changes?
<bildz> I need to restart the nova-cloud-controller and computes
<bildz> checking
<lazyPower> hackedbellini - first you'll need to clone the layer: https://github.com/chuckbutler/redmine-layer
<lazyPower> hackedbellini - you'll also need charm-tools installed, with the juju stable ppa enabled: apt-get install charm-tools. Or you can snap install it: snap install charm
<bildz> hatch: yes they changes are there
<bildz> the*
<hackedbellini> lazyPower: both done!
<lazyPower> hackedbellini - cd into the charm dir, and issue `charm build -r --no-local-layers`
<hatch> bildz: ok then beyond that I'm not familiar with the internals of the openstack charms.
<lazyPower> this will assemble the charm from its declared layers, and output to a build path. its likely to put it in $PWD/builds  unless you've exported $JUJU_REPOSITORY in your shell
<bildz> after making a change to a charm, is there a way to restart the service from the UI
<bildz> seems when i made the change, openstack went plop
<hatch> bildz: when a configuration option is changed, the 'config-changed' hook in the charm is run. It's up to the charm to do what it does at that point. If you wanted to manually restart you'd have to ssh into the machine and do it manually
<hatch> bildz: I'd imagine that the openstack charms would restart what needed, but again, outside of my wheelhouse there
<hackedbellini> lazyPower: ok, it worked! Now I move the build to my charms dir?
<lazyPower> yep, and juju deploy ./redmine
<hackedbellini> lazyPower: should I do a new deploy or change the charm of the one I already deployed?
<lazyPower> I would recommend a fresh deploy
<lazyPower> just to ensure we dont have any niggly issues hiding in there that might muddy the results
<bildz> hatch: thanks I will let you know what i find out
<justicefries> hmm. private charms. what's the way to do them? upload them to canonical with only the people I want having ACLs on them? any way to just do it directly from a private git repo, or is it the controller that's pulling the charm?
<jcastro> hey bdx, did you submit something for the summit?
<justicefries> also got a weird one on the canonical-kubernetes bundle, and I think it has to do with the kubeapi-load-balancer
<lazyPower> justicefries - we're cycling through an update which should catch the stray error with the api-lb
<justicefries> I installed tiller now that helm 2.0 is out, and I think it proxies through kubectl, but I'm getting an upgrade request when forwarding ports.
<justicefries> oh, cool.
<lazyPower> we just published the charms, but the bundle hasn't been revved yet
<justicefries> ^ that error too, or the one with the instance bouncing?
<hackedbellini> lazyPower: I have to go home now. Tomorrow I'll ping you to continue (hopefully what we did will be enough)
<lazyPower> you can set ACL's on your charms in the store, yep
<hackedbellini> thanks for your time! :)
<lazyPower> so you can use private repos, and then restrict the charms to your team using the charm store ACL's
<lazyPower> so its private all the way across
<justicefries> ok. any notion of self-hosted stores at this time? not a requirement for me, just curious
<lazyPower> not that i'm aware of
<justicefries> cool. OH! but I can use --local while I'm devving charms, nice.
<lazyPower> yep
<justicefries> hmm ok. if I'm creating an infrastructure charm (say, aws-elb) that doesn't depend on a certain version of ubuntu, what's the right folder structure? is `charms/precise` from the example simply convention, or is it a GOPATH-like requirement?
<justicefries> hey all, FYI, I had to build my own MacOS Sierra version of juju off the 2.0 branch because what's on the releases page is still on 1.6. anyway, this works fine when you already have a juju controller, but when you're trying to stand something up it can't find the right agent version (thanks to Sierra being in the version). maybe I should build off the tag
<justicefries> instead.
<justicefries> doesn't matter now because I have a controller
<lazyPower> justicefries - you can make multi-series charms
<lazyPower> eg:
<lazyPower> in metadata.yaml just define a series list, e.g. `series: [xenial, trusty]`
<lazyPower> now, you can define series, but you cannot define multiple cross-series, like have centos-6 listed as well as -xenial
<lazyPower> unless thats changed recently
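As a full metadata.yaml fragment (the name and summary are placeholders):

```yaml
name: my-charm            # placeholder
summary: example multi-series charm
series:
  - xenial
  - trusty
```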
<justicefries> hm ok got it.
<lazyPower> also re-bootstrapping with tools
<lazyPower> i would poke in #juju-dev about that, they might have some super secret sauce for you there
<justicefries> working with non-machine resources as charms overall just feels a little weird.
<justicefries> ah nice ok.
<lazyPower> yeah, i totally understand
<lazyPower> we call those proxy charms, and they just poke things with a stick to make it do somethin
<lazyPower> which in itself is kind of odd but it does get the job done.
<justicefries> yeah
<lazyPower> whats nice about them though, is you can colocate them in lxd on some unit you have running in your infra
<lazyPower> so its all nice and isolated and cozy
<lazyPower> if thats even a concern of your infra
<lazyPower> :)
<justicefries> now is that something I'd have to specify in the charm metadata that it can colo with another machine? or do I specify the machine when deploying my unit to make that happen?
<justicefries> the rules of when I get a machine vs. when it packs are a little fuzzy.
<lazyPower> ah, ok.
<lazyPower> so you can deploy most charms to lxd on a principal unit, eg --to lxc/5  which allocates a container on machine #5, whatever that may be
<lazyPower> in the instance of bundles, our CDK core bundle uses colocation to squeeze easy-rsa on machine 0
<lazyPower> https://github.com/juju-solutions/bundle-kubernetes-core/blob/master/bundle.yaml#L27
<lazyPower> also looks like i botched the syntax, its now --to lxd:#
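With the corrected syntax, a placement example (the machine number and charm are arbitrary):

```shell
# Deploy a charm into a fresh LXD container on machine 5
juju deploy haproxy --to lxd:5
```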
<justicefries> i think i'm starting to see through the murk. ok. so what I'm going to want next is to make sure I specify my --cloud-provider on the kube-apiserver. there's no way to add a flag as it stands today with another layer, is there?
<justicefries> basically to get the setup I want with my bundle, I need that, and I need to make sure my machines get an IAM profile, and I'd like it to create the IAM profile as well just so I have completely repeatable clusters.
<lazyPower> Correct, you'd need to extend the kubernetes charms to take that --cloud-provider flag to enable the cloud provider specific integrations. we dont support that as it encourages bad behavior by not going through juju to request resources.
<lazyPower> but if thats your end goal to fully integrate with $CLOUD, its a reasonable expectation to add some extensions to the template logic to enable that, and you'd have to manually provision the IAM role sets.
<justicefries> hmm. that's an interesting way to put it.
<lazyPower> well its that or open a bug and we can openly talk about it. I know that in our previous planning sessions we explicitly decided to punt on the cloud provider specific integrations as its not portable
<lazyPower> you provision a workload in kubernetes using an ELB, and then suddenly it doesn't work when you re-deploy on maas because its a different resource set on the backend
<justicefries> yeah. you want to keep it portable. i need to think about the balance I'd want here. obviously I'm used to PVs provisioning on the provider, and services/ingress doing the same.
<justicefries> where available.
<lazyPower> I dont think its an unreasonable request, just not one we've committed to supporting yet. Ideally we would get some primitives for those in juju and extend kubernetes to talk to juju
<lazyPower> ergo, i need a load balancer
<justicefries> oh, sure.
<lazyPower> it requests a juju deployed haproxy
<justicefries> that would be a really nice way to do it
<lazyPower> my workload wants storage, juju requests up EBS flavors
<justicefries> you'd almost need a charms equivalent of resources. "queue this charm up when kubernetes asks for this resource"
<lazyPower> i've been thinking about how we can extend the worker pool with cloud storage using the existing juju storage feature set, it seems fairly limited, but it may be good enough to work as we can enlist those PV's directly with a simple manifest render after its been attached to the unit.
<lazyPower> but today we only support ceph RBD as a PV in our k8s stack, with some commitment to extend that in the coming cycles with our other vendors like nexenta.
<justicefries> until it gets rescheduled right
<lazyPower> exactly
<lazyPower> as workloads move, the PV would be stuck on a different unit
<lazyPower> so things get wonky in that scenario
<justicefries> yup. suddenly you're pinning stuff with node labels :o
<justicefries> heh. be nice if I could just attach kubernetes to my model's credentials.
<justicefries> and the charm could use that to make decisions.
<lazyPower> interesting idea
<lazyPower> what i would really like is the ability to aggregate resources without directly attaching them to a unit, instead allocating them against the charm's definition, and they become floating resources, which would enable those PV's to travel between the units.
<lazyPower> but thats a pipe dream today as its a big departure from how its currently modeled
<justicefries> yeah sure
<justicefries> you'd almost at that point need kubernetes workloads represented as charms.
<lazyPower> 10k ideas, 100 hours to complete them
<lazyPower> go
<justicefries> haha yup
<justicefries> hmm I can't find the repo for containers/kubernetes-master
<lazyPower> we're nested deep in the kubernetes repository 1 sec and i'll get you a direct link
<lazyPower> https://github.com/juju-solutions/kubernetes/tree/master-node-split/cluster/juju/layers
<lazyPower> ^ this is our latest work we just published today. We're nested deep in the cluster/juju  directory tree of the kubernetes proper repo. We're a bit behind getting our changes upstream to their master branch, but we're actively working towards making that an easy process with submitting our e2e test results on a regular basis
<lazyPower> which i'm actively working on today
<justicefries> ah ha.
<justicefries> well maybe I should stop asking questions then. :p
<lazyPower> nah you're fine :) I'd rather help a user get moving with what they want to do, than satisfy bureaucracy fwiw
<justicefries> heh, looking through these charms, I've been doing Go for years, getting used to python again phew.
<lazyPower> yeah, duck typed refresher course
<rick_h> quack quack
<lazyPower> i felt the same way coming to ruby/python from .net
<justicefries> decorators are sweet though in python 3.
<lazyPower> awe thanks :D we abuse them like candy
<justicefries> yeah I kind of want to check out the new C# and .NET Core 1.1 stuff.
<lazyPower> @when('this.makes.sense')
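The pattern being joked about can be mimicked with a toy, self-contained stand-in. The real charms.reactive framework persists flags between hook invocations; this sketch only illustrates the dispatch idea of running handlers whose flags are all set:

```python
# Toy stand-in for charms.reactive-style dispatch (illustration only):
# handlers run when every flag they were registered with is set.
_handlers = []
_flags = set()

def when(*flags):
    """Register a handler gated on the given flags."""
    def register(fn):
        _handlers.append((set(flags), fn))
        return fn
    return register

def set_flag(flag):
    _flags.add(flag)

def dispatch():
    """Run each handler whose required flags are all currently set."""
    ran = []
    for required, fn in _handlers:
        if required <= _flags:
            fn()
            ran.append(fn.__name__)
    return ran

@when('this.makes.sense')
def configure():
    pass

set_flag('this.makes.sense')
print(dispatch())  # -> ['configure']
```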
<justicefries> ah. is the kubernetes resource coming from a `charm attach`?
<lazyPower> justicefries - correct, our resources are vetted by hand by mbruzek and I. we then attach those resources to the charms in the store during our release management process. If you wish to use your own bins, you can certainly override them with a `juju attach`
<justicefries> maybe at some point, right now just feeling it all out.
<lazyPower> and when i say by hand, i mean we run e2e suites against a deployment and some additional things by hand.
<lazyPower> but its mostly automated
<justicefries> sure sure
<justicefries> i'm going to have to do a bit of similar stuff with the windows CI charm I need to write :|
<lazyPower> i feel like saying anything is manual is a bad thing in this whole process, its kind of baffling that just 2 dudes do all this.  #thanksjuju
<lazyPower> well, i was thinking about that
<lazyPower> is there any reason you couldn't use the .net container as a base for running those?
<justicefries> the way I'm putting it to my team is that juju and kubernetes used well lets us punch well above our weight class.
<lazyPower> that would skinny up the required charm code
<lazyPower> justicefries - thats a fan-freaking-tastic description. Can i quote you on that?
<justicefries> sure. :)
<lazyPower> <3 love itttt, ta
<justicefries> GPU requirements on some of them.
 * lazyPower heads off to twitter
<justicefries> that's the barrier on the containers.
<justicefries> on the linux side, sure, there's a lot of good precedent now for GPU containers.
<lazyPower> https://twitter.com/lazypower/status/799373475819483139
<lazyPower> Hmmm you're right
<lazyPower> cuda integration on containers for windows is funky, i just googled and saw the mess they're untangling.
 * lazyPower retracts his shower thought
<lazyPower> so its coming, but its not here today.
<justicefries> yeah. nvidia-docker wrapper is great so you don't get all screwed up on device mounting and driver versions on linux.
<justicefries> yup
<lazyPower> well fortunately, when you're ready for that, i got your back
<lazyPower> i'll reach out to the cloudbase peeps
<lazyPower> see if i can get you someone to pair and patch-pilot your first windows charm into the store
<justicefries> i wish. :| i'm basically just wrapping it all up into a well isolated thing that I don't need to deal with. that'd be awesome. windows automation just feels so ugly to me.
<lazyPower> having come from a msdeploy based background
<lazyPower> i know the feeling. powershell got a lot better, but its still not where i would want it to be.
<justicefries> yeah. fortunately there's a path there to linux for some of that for me in the next few months. it really affects the resources you're able to throw at the problem when you're constrained to windows for a certain part of the whole thing.
<lazyPower> we had a large scale deployment for a marketing firm at my last job, and the core component of all of that was mssql server, and at the time there was zero support for running that on linux (which appears to have changed). So i completely understand the frustration there. Having a single mssql backend surrounded by ubuntu was maddening when it was the most finicky component of them all.
<lazyPower> but i'm also not an mssql admin, so i probably did something wonky in there.
<lazyPower> all i do know, is that WAL files for mssql are nightmare fuel
<justicefries> ugh
<lazyPower> haha, it seems i'm in good company
<justicefries> i fortunately haven't been within a ten mile radius of mssql
<justicefries> this is for sure nightmare fuel too though. fortunately all of the services and everything else isn't that way
 * lazyPower nods
<justicefries> unfortunately because even though stuff is migrating to linux, a lot of the devs are going to remain on windows, so there's a whole fun gyp infrastructure in place
<justicefries> and a rat's nest of linking that that team is maintaining.
<lazyPower> hackedbellini - hey no problem. sorry it took me forever to see that message. i went scrolling back to touch base with how you were doing and see you went home for the day. Cheers until later today (when you see this) then :)
<justicefries> hmm. does it make sense to create a general "aws" charm with interfaces for each type of resource you might want to relate to? could you have a unit with multiple relations to the same interface? say you want 2 EBS volumes or something.
<justicefries> maybe not. maybe that'd end up being clunky versus just making two aws-ebs units
<justicefries> and adding the relations.
<lazyPower> i think that having succinct representations for those managed services
<lazyPower> so 1 charm for rds, 1 charm for ebs storage
<justicefries> yeah
<lazyPower> you can abstract the common bits of that into a base layer
<lazyPower> like layer-aws-managed-credentials or something
<justicefries> probably just an aws base layer that contains boto and stuff
<lazyPower> so you can plug in your keys and all that, then write shim layers on top using the aws sdk
<justicefries> hmm credentials is an interesting one.
<justicefries> maybe a sane idea to do vaultproject.io and then have relations (once the vault is unsealed) that ultimately provide the related unit's api key
<lazyPower> ahh see now you're getting into where we got mired and basically couldn't agree. we wanted to use vault
<lazyPower> but i dont know enough about it to really use vault effectively
<lazyPower> its on my TODO to replace easyrsa with vault for an SSL CA
<justicefries> i just wish it wasn't open core. :|
<justicefries> oh nice.
<lazyPower> yep, so expect a pilot of that one in the coming months.  we have some vault layers/charms in the wild already as community submissions
<lazyPower> we're likely to pick that up, polish it, and drop it right into the bundle as a flavor
<justicefries> that adds a lot of power to it. does the current charm handle renewing with easyrsa?
<lazyPower> we want to add that, but it doesn't exist today
<lazyPower> the idea is to juju run-action easyrsa re-key, and it regenerates and pushes the keys out to anything attached to the CA
<lazyPower> its a long standing issue in kubernetes-proper, how to re-key a k8s installation. We'd like to contribute that back if we can
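The re-key action lazyPower describes is still just an idea at this point, but a sketch of what its actions.yaml entry might look like (hypothetical — this action does not exist in the easyrsa charm today):

```yaml
# Hypothetical actions.yaml entry for the proposed re-key action.
re-key:
  description: >
    Regenerate the CA key material and push fresh certificates out to
    every application related to the CA.
```

It would then presumably be invoked with something like `juju run-action easyrsa/0 re-key`.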
<justicefries> hm. noticed an interesting one
<justicefries> kubernetes-master needs socat installed to port forward!
<lazyPower> really? that seems new
<lazyPower> it was just using iptables before
<justicefries> E1117 15:48:41.963192   46813 portforward.go:329] an error occurred forwarding 49400 -> 44134: error forwarding port 44134 to pod tiller-deploy-2241983194-k4tdu_kube-system, uid : unable to do port forwarding: socat not found.
<justicefries> yup
<lazyPower> good find
<lazyPower> and easy fix too
<justicefries> is that something you'd put in basic or kubernetes-master?
<lazyPower> kubernetes-master, in the layer.yaml under packages:
<lazyPower> but you can work around it temporarily by just juju run --application kubernetes-master "apt-get install socat"
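A sketch of what that layer.yaml change might look like — the surrounding contents of kubernetes-master's actual layer.yaml are assumed here, but declaring apt dependencies via the basic layer's `packages` option is the standard pattern:

```yaml
# layer.yaml (kubernetes-master) -- sketch, not the charm's real file
includes:
  - layer:basic
options:
  basic:
    packages:
      - socat   # required by kubectl port-forward on the master
```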
<justicefries> ah interesting, didn't know there was an option there for it. i was looking in the actual reactive
<justicefries> ah sure
<lazyPower> yeah we'll get that committed for the next release
<lazyPower> we just bumped the charms today, so its unlikely to get pushed unless mbruzek  tells me i'm being a ninny
<lazyPower> which he just did, great
<lazyPower> why did i say anything
<lazyPower> for context, we're on a hangout. i got it first hand
<justicefries> haha fair
<vmorris> is api.jujucharms.com down?
<justicefries> so that works. the workers need it too.
<lazyPower> ack, i'll re-tag the bug to target both
<justicefries> i had to expose kube api directly, because going through the LB was giving an upgrade error, so something's off in that nginx config.
<justicefries> you can replicate by grabbing helm 2.0, doing a `helm init` to install tiller, and then `helm status`
<lazyPower> vmorris - it doesn't appear to be. i'm able to deploy from the store, which is api driven
<vmorris> yeah i'm not able to deploy from the store for some reason :(
<justicefries> huh, are beta extensions disabled?
<lazyPower> we dont explicitly disable them... what else have you uncovered justicefries?
<justicefries> OH
<justicefries> privileged is disabled.
<lazyPower> yeah, i want to make that a config option
<justicefries> which I need for CI agents. though maybe I could just do a LXD CI agent and call it a day.
<vmorris> lazyPower can you confirm that api.jujucharms.com is at 162.213.33.122? and is that supposed to be pingable?
<lazyPower> that way you can expose a smaller subset of workers that need priv. containers
<lazyPower> 162.213.33.121 is the correct ip vmorris, however i think icmp is disabled
<vmorris> okay ty
<justicefries> ah that'll probably require some worker labeling
<lazyPower> right, i can prototype that out real quick, 1 sec
<justicefries> i'm probably marching to my own internal k8s bundle and then backfeeding things that can be generalized.
<lazyPower> justicefries - http://paste.ubuntu.com/23492869/
<lazyPower> something like this. you can import that into jujucharms.com/demo and visualize it. you get different worker pools for different "roles" per se. and using the tagging/labels you can narrow down how the workloads get scheduled
<justicefries> so options will get passed in as flags today? nice.
<lazyPower> theres a labels config flag for worker
<lazyPower> you'll see in the coming release (probably the next, actually) that ingress=True will only flag and schedule ingress on the units in that service pool. as it is today, its an all or nothing shot
<lazyPower> and not every k8s worker should be an ingress, it was a blanket decision early on for expedience, but we're at a point today where we can fine tune those operations to only affect sibling units.
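The worker-pool idea lazyPower pastes can be sketched as a bundle fragment — the application names and label values here are hypothetical, though the `labels` config option on kubernetes-worker is the one mentioned above:

```yaml
# Sketch: two worker pools with different roles, selected via labels.
applications:
  kubernetes-worker-ingress:
    charm: cs:~containers/kubernetes-worker
    num_units: 2
    options:
      labels: "role=ingress"
  kubernetes-worker-ci:
    charm: cs:~containers/kubernetes-worker
    num_units: 3
    options:
      labels: "role=ci"
```

Workloads can then be pinned to a pool with a matching nodeSelector on the Kubernetes side.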
<lazyPower> s/service/application/
<lazyPower> man good thing mark isn't looking or i'd be flogged for that.
<lazyPower> or rick_h for that matter
 * lazyPower ducks
<justicefries> haha
<justicefries> actually I had to re-parse it though since ingresses technically do point at services
<lazyPower> yeah, we're talking about a very mixed set of logical operands. and the more overlapping words, the more confusing the docs will be without illustration
<lazyPower> and i come with no illustrations
<lazyPower> https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/135 -- justicefries - your issue for socat, if you want to subscribe
<bdx> horrible issues with spaces on aws today ... just mailed the list
<bdx> no matter what I do, additional units will not deploy to a subnet in my space
<bdx> I've created subnets in each AZ in my region
<bdx> and added them to my space
<bdx> still no luck
<bdx> so bummed
<lazyPower> sorry to read that bdx  :/
<rick_h> bdx: spaces and aws aren't fully supported. There was work there that was more of a PoC and so I'm sure it's an uphill battle.
<lazyPower> ooo man, i think i told bdx otherwise :|
<rick_h> bdx: we're working to reset and not make things so provider specific, but it's going to take time to basically rebuild the networking support unfortunately.
<lazyPower> this is probably my fault
<bdx> I didn't know aws spaces was POC
<bdx> this really throws a stick in my spokes
<rick_h> We celebrated spaces support with Maas a cycle ago, we spent the next cycle making it work properly on Maas and aws didn't get the same attention. It's something that we're learning hard lessons from right now
<rick_h> bdx: I'm sorry, we've not set you up for success here.
<lazyPower> rick_h  i apologize for my part in this too :|
<lazyPower> running off all willy nilly with good news for everyone
<bdx> its cool .. thanks
<lazyPower> i mean bdx
 * rick_h owes bdx beverages next summit
<bdx> I don't know how I should move forward now ... lol .... 100+ subnets created .... all mapped out for each app
<rick_h> bdx: jam was looking at the bootstrap subnet work from your email to the list as a first step
<bdx> rick_h: thats great news
<rick_h> bdx: and has been mapping out the bits that need to be rebuilt.
<bdx> rick_h: thats awesome, thanks
<rick_h> bdx: but it's currently a 2.2 target for end of this cycle to have meaningful improvements.
<bdx> darn ...  ok
<rick_h> Right now it's very much in the 'spec and build a better path' mode
<bdx> nice
<bdx> rick_h, lazyPower: so, what should I do then, just have a non-prod vpc for all non-prod apps
<bdx> and a prod vpc for prod apps
<bdx> it doesn't feel right
<bdx> bc different clients have different users accessing non-prod envs, and if they are all clustered across the same address spaces ....
<bdx> same with production envs
<lazyPower> bdx - i'm unsure of how to recommend a better path to you at face value that wouldn't require unwinding temporary/workaround style fixes for this.
<lazyPower> you're looking to gain tenant isolation up and down the stack at every stage right? between units/networking/et-al
<bdx> lazyPower: yea ... because we have different client's users accessing the machines and services across apps/app envs
<lazyPower> right, and without spaces thats not a juju native primitive. You could achieve something like that by using another means of sdn, and configuring apps naively to use that sdn - but its not clean, automated, or easy to rip out once spaces gain the proper support
<bdx> yea
<bdx> thanks for your insight
<lazyPower> talking non-trivial surgery that would likely yield a redeploy
<bdx> yeah, I mean ... luckily the next production deploy I'm doing is on private infrastructure and I'll be using the manual provider
<bdx> I won't have to spin up any prod on aws till january I think
<bdx> ehhh nix that
<bdx> big aws prod deploy next month
<bdx> I think I should just use a separate vpc for each production app deploy anyway
<bdx> hopefully that will simplify things, though I've never tested adding models in vpcs outside of the one I bootstrapped to
<bdx> on a brighter note, I did get my barbican stack up and publicly accessible on aws
<bdx> http://paste.ubuntu.com/23493006/
<bdx> it was a bear, and required hacking of the barbican charm in multiple areas
<bdx> 20 deploys later
<bdx> W000t
<rick_h> bdx: yea I mean is it worth just using a different regions?
<lazyPower> for the uninitiated like myself: https://wiki.openstack.org/wiki/Barbican
<lazyPower> bdx  - interesting, so does this replace your interest in vault or does it augment it?
<bdx> rick_h: aah, like create my models in different regions?
<bdx> rick_h: then they would be forced to use subnets in the region?
<bdx> errr, then *juju* would be forced to deploy the units to subnets within those regions
<bdx> and disjoint from the other apps in other subnets in other regions?
<rick_h> bdx: just thinking out loud of forcing separation from staging and prod
<rick_h> bdx: using regions might be an approach
<bdx> thats a great suggestion
<bdx> rick_h: let me get back to you after I try implementing that
<bdx> lazyPower: in all reality I'm super comfortable interfacing to keystone
<bdx> lazyPower: I was getting stumped around every corner with the intricacies of vault
<lazyPower> I felt the same way during my discovery session with it
<lazyPower> but i also thought that was just me being nooby with it, and once we had really flexed it it would become more obvious
<bdx> the fact that I have no experience interfacing with vault, combined with the lack of good (or any) documentation
<bdx> posed a huge road block
<bdx> I spent two weeks learning how to admin and interface to vault .... with 100+ clients, and each client with many users
<lazyPower> yeah that sounds like nightmare fuel as a learning curve
<bdx> I just don't see myself having the bandwidth to facilitate being the admin for it across a hoard of clients/users/apps/envs
<bdx> keystone/barbican on the other hand
<lazyPower> I'm a big +1 for using the applications you're comfortable with. thats 100% the reason kube-api-loadbalancer is nginx based today. I consciously thought to myself: If i have to go into a customer site and debug this deployment, i know nginx. I barely know haproxy. I can either spend the time learning  that or use what i know and go from there.
<bdx> exactly
<lazyPower> and offtopic, but you're likely to be interested in this too bdx  - https://twitter.com/lazypower/status/799401051300372480
<bdx> no way!
<bdx> thats awesome!
<lazyPower> yeah man stokachu just wrapped that POC up today
<lazyPower> i think its too early to say its "supported" as he does in the blog post, but hey, with that herculean task complete, he can sell it as whatever he wants :D
<bdx> wow ... theres some people around my company who have been waiting for that so they can start playing with local deploys
<bdx> haha
#juju 2016-11-18
<bdx> nice work
<lazyPower> stokachu mad props boi, you got mad squabbles
<bdx> lazyPower: its still pretty rough, but this is the barbican client idea I'm working with -> https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py
<lazyPower> i love this random "i dont have this so just set it without doing anything" https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L13
<lazyPower> i do stuff like that in prototypes all the time and it drives matt crazy
<bdx> my motive behind that, is that I want to be able to react to barbican being installed in other layers, not just when the secrets are set
<lazyPower> bdx - looks like a good start. I'm not very familiar with barbican so it makes it difficult to review/make suggestions, but at a glance, it looks straight forward enough.
<lazyPower> are the 'containers' a primitive of barbican?
<bdx> yea, containers store refs to secrets
<lazyPower> like, you put secrets in 'containers' and an app requests the secrets to put in its local container so its encrypted at rest or something?
<bdx> I wish, nothing going on here with encrypting at rest ... the values sit in the app config on the filesystem after they have been retrieved from barbican
<lazyPower> ah
<lazyPower> i gave it a lot of credit then
 * lazyPower nods
<bdx> sad I know
<lazyPower> i dont think its sad, i think its indicative of our industries stance on security
<bdx> with the exception of ^
<lazyPower> i cant say my solutions are much better than that :)
<lazyPower> but i'm interested in making them better
<bdx> using barbican in conjunction with keystone, allows me to create projects in domains, and users in projects, and separate their access to different secrets in barbican
<bdx> using keystone to authenticate and authorize users in front of barbican
<bdx> so simple, so sweet
<lazyPower> seems like its a tiered setup like you're expecting :) so thats good
<lazyPower> nice, granular, and interrelated
<bdx> yes, YES!
<bdx> and I already have it working across all of my charms :-)
<bdx> take that vault
<lazyPower> lol
<lazyPower> you tell em buddy
<bdx> lazyPower: https://s17.postimg.org/jtqy0d9mn/Screen_Shot_2016_11_13_at_9_40_03_AM.png
<lazyPower> interesting
<bdx> so I get the secret refs from the container here https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L47
<bdx> then iterate over them, unpacking the payload of each https://github.com/jamesbeedy/juju-layer-barbican-client/blob/master/reactive/barbican_client.py#L51
<bdx> and setting to the leader
<bdx> no doubt there are probably better ways to do this
<bdx> its just a start ... I had to start somewhere
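The pattern bdx walks through above — fetch the secret refs from a Barbican container, unpack each payload, stage the results for leader storage — can be sketched roughly like this. The client objects here are hypothetical stand-ins, not the real python-barbicanclient API:

```python
# Sketch (assumed client interface, NOT the real barbicanclient API):
# walk a container's secret refs and unpack each payload.

def collect_secrets(client, container_ref):
    """Return {name: payload-as-str} for every secret in a container."""
    container = client.containers.get(container_ref)
    secrets = {}
    for name, secret_ref in container.secret_refs.items():
        payload = client.secrets.get(secret_ref).payload
        # leader-set only stores strings, so coerce up front
        secrets[name] = str(payload)
    return secrets
```

The result dict could then be handed to leader-set in one shot rather than one call per secret.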
<lazyPower> nah this seems reasonable
<lazyPower> leader coordination seems like the right thing to do here
<bdx> the only problem with setting to the leader is the type can only be a string
<bdx> lol
<bdx> I think this is the same for unit data
<bdx> I mean you can set other value types, but they are converted and saved as strings
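Since leader data (and unit/relation data) only round-trips strings, a common workaround is an explicit JSON serialize/deserialize step around the set/get calls — a minimal sketch:

```python
import json

# leader-set stores everything as strings, so structured values need
# an explicit serialize/deserialize step on either side.

def to_leader_value(value):
    """Serialize any JSON-able value for leader-set storage."""
    return json.dumps(value)

def from_leader_value(raw):
    """Recover the original value from leader-get output."""
    return json.loads(raw)

creds = {"username": "svc", "ports": [8080, 8443]}
stored = to_leader_value(creds)      # always a str
restored = from_leader_value(stored)
```

The same trick works for unit/relation data, with the caveat that both sides have to agree to JSON-decode.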
<jcastro> stokachu: the latest conjure-up I've gotten from the next ppa doesn't have localhost as an option for me. My LXD is all configured and works already
<jcastro> stokachu: ahh, looks like the latest hasn't been built by lp yet in the next PPA
<lazyPower> bdx - yeah, thats fair. There's no notion of types in leader-data
<lazyPower> jcastro - are you jazzed up on lxd kubernetes now?
<jrwren> i'm afraid.
<stokachu> jcastro: what version of conjure-up are you running
<stokachu> this was just a spell modification no core code changed
<stokachu> larrymi: ah yea the whole supported thing lol
<stokachu> oops wrong person
<kjackal> Good morning Juju world!
<magicaltrout> I'm back, what did I miss?! ;)
<bloodearnest> hey folks. Is there a way for me to customise the base image used by the lxd provider locally?
<bloodearnest> I used to do this with the juju-template images the local provider, it allowed me to preinstall a bunch of stuff, and pre-download a load of others, which meant much faster provisioning
<bloodearnest> afaics, juju will always use the 'ubuntu-trusty' or 'ubuntu-xenial' aliases. If I publish a custom image to my local lxd image server with the alias 'ubuntu-trusty', that should work, right?
<magicaltrout> petevg: did you raise a bug that quality arrow problem on jujucharms.com?
<petevg> magicaltrout: I did not. I will add it to my list of things to do this morning (not sure which repo it lives in ...)
<magicaltrout> no problem petevg i'll do it
<magicaltrout> done
<petevg> Cool. Thx, magicaltrout.
<hackedbellini> lazyPower: hey!
<lazyPower> o/ hackedbellini
<hackedbellini> lazyPower: using xenial worked! I'm just having one last problem now.. I'm not very familiar with docker so maybe you will know what is wrong
<hackedbellini> https://www.irccloud.com/pastebin/gOa72syy/
<hackedbellini> lazyPower: the same problem happened with the default config (it was 'postgresql:9.5'). I changed it to 'postgresql:latest' but the same happened
<lazyPower> They restructured their tags it looks like
<lazyPower> https://hub.docker.com/r/library/postgres/tags/
<hackedbellini> lazyPower: hrm, so I should just remove the leading 'postgres:' and 'redmine:' from the config?
<lazyPower> the postgres:9.5 image should work
<lazyPower> change the image line to read that, postgres:9.5
<lazyPower> and give it a `juju resolved` to retry the hook
<hackedbellini> lazyPower: it didn't work before with 'postgres:9.5'. It gave me the same error as above. But I'll try again
<lazyPower> kwmonroe - hey kev, got a sec?
<hackedbellini> lazyPower: there's a typo on the charm config. It was written as 'postres:9.5' instead of 'postgres:9.5'. I changed 9.5 to latest and didn't even notice the typo
<hackedbellini> now with 'postgres:latest' I think it worked
<lazyPower> nice
<lazyPower> glad it was simple :)
<hackedbellini> lazyPower: nice! It is taking a while in the "Pulling postgresql (postgres:latest)..." but I think that is normal, right? Lets see what happens now, hopefully it will complete now
<lazyPower> yeah, running docker in lxd, the image pulls were quite slow, however once they were cached things were back to normal speed.
<lazyPower> hackedbellini - did it finally settle and turn up the services?
<marcoceppi> stokachu: nice blog post! http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
<vmorris> is it not possible to debug hooks on a subordinate charm?
<lazyPower> vmorris - it certainly is possible
<hackedbellini> lazyPower: it is still installing. I have a meeting right now, will give you more details when I come back!
<lazyPower> ok, sounds good
<vmorris> lazyPower: just wondering how i might do it, do I need to run debug-hooks on the parent charm? for example, this cinder-ceph subordinate unit doesn't show up in the debug-hooks list
<lazyPower> vmorris  - nope. you target it like any principal unit
<vmorris> lazyPower: alright well this doesn't seem to be working as designed then
<vmorris> lazyPower: I'm getting "can't find session cinder-ceph/2"
<lazyPower> that seems like perhaps there was a prior debug-hooks session that was disconnected?
<lazyPower> or you're already in a debug-hooks session on the principal unit?
<vmorris> ah yes, the 2nd is true
<vmorris> ^^ thanks
<lazyPower> yeah, the output messaging there could be improved, but thats a bit obtuse as it would have to know some things first.
<marcoceppi> mbruzek lazyPower ryebot Cynerva great job on the release last night, seems to have cleared up cloud weather report http://cwr.vapour.ws/bundle_canonical_kubernetes/index.html cc arosales
<lazyPower> jcastro Have you interfaced with Sarah Novotny recently?
<lazyPower> jcastro I need to coordinate with them on a reasonable place to xpost our release notes for CDK. I thinks she is the right person to track/ask as she's the community manager for k8s right?
<jcastro> she is
<jcastro> but the community meetings are closed for the holiday
<jcastro> so I won't talk to her for like 2 weeks.
<lazyPower> Right, but i was thinking email. i'm digging in slack to find her deets
<jcastro> right, sec, I'm in a community room with her
<lazyPower> 	sarahnovotny@google.com - found it
<lazyPower> jcastro - https://docs.google.com/document/d/1nx8v7FgzwKgF9uFu1KK-ecx6bOGxwEh3XorodDJ6s24/edit  - mind proofing this before i hit send?
<lazyPower> i've got you on CC as well
<arosales> marcoceppi: thanks for the fyi. Did k8-core get updated?  Looks like it still has the lint failures re: http://data.vapour.ws/cwr-tests/results/bundle_kubernetes_core/9cff5292b7434ce29891195a47c18131/report.html
<lazyPower> arosales thats my b, we have an update coming to kubes-core later today with those fixes
<jcastro> lazyPower: ack, just removed one marcoism, lgtm.
<lazyPower> ta
<arosales> lazyPower: thanks, and thanks for the release
<arosales> mbruzek: ryebot Cynerva: lazyPower marcoceppi: thanks for the k8 release, looking forward to trying it out
<lazyPower> arosales - wanna try it out on LXD?!
<lazyPower> :D
<arosales> indeed I do, conjure-up
<lazyPower> ah whoops, didnt mean to leak email addresses here, i thought i was somewhere else. whoops
 * lazyPower flogs himself
<magicaltrout> I apt get updated the other day and it removed everything inside my /home directory
<magicaltrout> which i blame entirely on jcastro
<lazyPower> wait what?
<magicaltrout> because i had a bunch of ZFS stuff on a semi stable Xenial because of his blog post ages ago :P
<lazyPower> O_o
<magicaltrout> I dunno, but it did happen :)
<magicaltrout> I think something inside ZFS removed itself during the update and everything else around it
<magicaltrout> which wiped a bunch of charms i'd not pushed anywhere.... sad times
<magicaltrout> that teaches me
<jcastro> did you reinstall?
<jcastro> because you can usually zfs export and reimport the pool
<magicaltrout> my home directory wasn't inside ZFS
<magicaltrout> but I think my pool was chillaxing somewhere around there
<magicaltrout> i dunno, anyway, it all vanished :)
<lazyPower> magicaltrout - on top of VCS i also use syncthing to keep my stuff sync'd with my NAS.  Might be worth investigating.
<lazyPower> i know this is hindsight and all that, but just a thought
<magicaltrout> you're very wise lazyPower
<lazyPower> no, i catch stray good suggestions from jcastro
<magicaltrout> do i just catch the bad ones? ;)
<lazyPower> little of column a, little of column b
<jcastro> also, your filesystem and your backup strategy are not the same thing. :D
<jcastro> it's not my fault you didn't push
<magicaltrout> yeah, next time, i'll grep the debian update scripts for "rm -rf /home/*" ;)
<lazyPower> jcastro - isn't this like a replica of the mysql charm faux pas that happened 2 years ago?
<jcastro> this is basically the same thing
 * lazyPower grins
<lazyPower> good times
<jcastro> magicaltrout: oh hey are you submitting a session to the charmer summit? Last call is today
<magicaltrout> i'm debating whether my liver will have packed in by then or not
<magicaltrout> I don't like those guys since they rejected my last 2 attempts
<magicaltrout> and told me to stop selling a product :P
<magicaltrout> okay i'll pitch something, but i'm not trying too hard :P
 * magicaltrout offsets the lack of effort by pitching with his NASA email address
<hackedbellini> lazyPower: still some problems:
<hackedbellini> https://www.irccloud.com/pastebin/GbEMv3zD/
<magicaltrout> done jcastro we'll see how that goes
<lazyPower> hackedbellini - thats without having postgres related correct?
<lazyPower> hackedbellini - as in, using the redmine charm as the all-in-one-using-docker path
<lazyPower> it seems like compose didn't spin up postgres first, or after spinning up it immediately exited. You'll have to juju ssh into that unit and investigate the status of the workloads on the docker bridge
<lazyPower> s/docker bridge/docker runtime/
<hackedbellini> lazyPower: yeah, I didn't see any "db" relation on the charm, so I didn't relate it to my postgresql unit. Was that an option?
<hackedbellini> ok, so the problem now is totally docker? Nothing related to juju/lxc anymore?
<lazyPower> yep
<lazyPower> hackedbellini  in a meeting, i'll circle back after
<jcastro> stokachu: I think I'm missing something obvious, localhost is missing for me when I do `conjure-up kubernetes`, using your next ppa, lxd and everything is already configured
<stokachu> jcastro: silly me, try it again please
<lazyPower> thanks stokachu
<lazyPower> was this a packaging issue or something?
<geekgonecrazy> is the charm snap in a functioning state?  Tried to do a charm create and getting an error.  Ideas?
<lazyPower> geekgonecrazy - can you pastebin your error for me?
<geekgonecrazy> https://paste.ubuntu.com/23496377/ the output I get after doing charm create -t bash my-charm
<geekgonecrazy> I didn't see anything in the documentation about folder location, but with snaps being limited on folder I was kinda wondering if that was the issue.  But the output didn't really give me any useful clues
<geekgonecrazy> Tried specifying just incase that was it.  Same error
<geekgonecrazy> lazyPower: should I give the version in the juju ppa a try instead?
 * lazyPower looks
<lazyPower> ah yeah, this does look like a packaging issue with the snap
<lazyPower> pkg_resources.VersionConflict: (SecretStorage 2.3.1 (/snap/charm/9/lib/python2.7/site-packages), Requirement.parse('secretstorage<2.3.0')) -- and thats not something you can easily remedy
<lazyPower> geekgonecrazy - would you mind terribly opening a bug against charm-tools for this? https://github.com/juju/charm-tools/issues
<geekgonecrazy> lazyPower: done! https://github.com/juju/charm-tools/issues/287
<lazyPower> geekgonecrazy thanks for that, we'll try to circle back to that
<lazyPower> but yeah, to answer your follow up question, i would purge the snap and install from the ppa then
<jcastro> stokachu: awesome, working now, though it expected a controller up and running already, it didn't fire off a controller ootb for me, no idea if that's on purpose or not.
<geekgonecrazy> lazyPower: perfect, i'll give the PPA a go.  Thanks for taking the time to take a look
<lazyPower> np, sorry you found that rough edge. we'll get that sanded down for ya
<geekgonecrazy> All good.  It happens to all of us :D
<jcastro> stokachu: dude, this is awesome.
<stokachu> jcastro: yay \o/
<beisner> beware: https://bugs.launchpad.net/juju-deployer/+bug/1643027
<mup> Bug #1643027: juju-deployer deploys the wrong unit series from charm store <uosci> <juju-deployer:New> <https://launchpad.net/bugs/1643027>
<beisner> that bad boy is causing us serious funk in the osci amulet test gate
<beisner> marcoceppi, arosales, thedac, gnuoy, tinwood ^
<thedac> thanks for filing that
<tinwood> boy, tricky one.
<geekgonecrazy> Is there any best practice for the start hook?  Should I drop in a systemd unit during installation and trigger it to start / stop in the hooks?
<arosales> juju deployer!
<arosales> beisner: I bet if you try that with 2.0 you don't see it correct?
<beisner> juju-deployer via amulet via bundle-tester!  :-)
<arosales> as we only need deployer with 1.x juju
<arosales> ah via bundletester
<beisner> juju-deployer is still called with juju 2.0 when using bundletester
<arosales> yes, it is bundletester that is using deployer
<beisner> ack
<lazyPower> geekgonecrazy - are you charming with reactive/layers?
<geekgonecrazy> lazyPower: i'm a huge noob to charms.  Familiar with snaps but not charms.  In the charm i've got it installing from a tar ball.  I will need to have it react to relationships (mongodb primarily) and configuration changes.  If that's what you mean?
<lazyPower> hackedbellini - hey there, circling back to your docker issues, have a few minutes to rif? i can help you dissect the issues now.
<lazyPower> geekgonecrazy - not exactly. Reactive/layers is a new paradigm for charming that will really help you get moving with delivering that tarball and doing the right things at the right time, and allow you to re-use code we've already written for things like how to talk to mongodb
<lazyPower> geekgonecrazy https://jujucharms.com/docs/devel/developer-layer-example - take a look at this doc. it may be a bit windy and cover too many new concepts if you're brand new to charming, but its a walkthrough by example of how to charm by layers.
<lazyPower> err rather, i intended to link here: https://jujucharms.com/docs/devel/developer-getting-started
<geekgonecrazy> lazyPower: hm... yikes... another set to wade through.  Any good examples you know of?  Would make it a lot easier to dive in and adjust my thinking. :D
<lazyPower> you bet, we've got a lot of layers already.  So what you're trying to do is deliver a tarball based resource, and then wire that up to talk to mongo?
<lazyPower> let me see if i can find something similar. i'm pretty sure cmars has something thats very close to this already.
<hackedbellini> lazyPower: yeah it would be great! I tried redeploying the charm with this docker-compose config: https://github.com/sameersbn/docker-redmine/blob/master/docker-compose.yml
<lazyPower> geekgonecrazy - while its not mongodb, this is a mattermost (open source slack alternative) layer which exercises a good amount of layers/functions-in-juju   https://github.com/cmars/juju-charm-mattermost
<geekgonecrazy> lazyPower: yeah pretty much a nodejs app bundled in a tar ball.  Extract / npm install then wire up to mongo
<geekgonecrazy> I guess that's fitting, i'm working on the Rocket.Chat charm haha
<lazyPower> geekgonecrazy - you'll really want to grok the resources: usage of mattermost, as you're not using an official dependency management solution there, you can declare the tarball as a resource of the charm and instantly win there.
<lazyPower> oh thats fantastic :D
<lazyPower> i didnt even know #winning-by-accident
<lazyPower> hackedbellini - ok, did you update it to be a jinja template?
<hackedbellini> lazyPower: I'm having a problem checking that charm log. The postgresql charm logs _a lot_ of information all the time, and I miss the relevant parts from redmine there. I tried running "juju debug-log" with "--include". I tried putting the unit name, the application name, the machine name/oid, but none of that works
<hackedbellini> lazyPower: yes I did!
<lazyPower> hackedbellini ah yeah, the -i flag is really confusing to new users since the 2.0 change
<lazyPower> hackedbellini `juju debug-log -i unit-postgresql-0`  is the format to include only postgres unit output
<lazyPower> well, postgres unit 0 output.
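The naming rule lazyPower describes can be sketched as a tiny helper (the function name is ours, not a juju API): since 2.0, `juju debug-log -i` wants the unit *tag*, so a unit name like `postgresql/0` becomes `unit-postgresql-0`.

```python
# Hypothetical helper illustrating the tag format `juju debug-log -i`
# expects: "postgresql/0" -> "unit-postgresql-0".
def unit_tag(unit_name):
    app, number = unit_name.split("/")
    return "unit-{}-{}".format(app, number)

print(unit_tag("postgresql/0"))  # unit-postgresql-0
print(unit_tag("redmine/9"))     # unit-redmine-9
```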
<geekgonecrazy> lazyPower: I'll give it a look though, see if I can figure out a way.  Go binary for sure a lot easier to work with than node.js
<lazyPower> geekgonecrazy - well, we have a layer-nodejs too
<lazyPower> geekgonecrazy how about this, i'll help you get started, and answer direct questions you may have. if i'm not available, do you agree to post to the mailing list so others can lend me a hand helping you? (juju@lists.ubuntu.com)
<hackedbellini> lazyPower: hrm, so for "redmine/9" (yeah, already on the 9th try) it would be "-i unit-redmine-9"? I'm trying it as I write this to you and it isn't displaying anything
<lazyPower> i have a scaling problem where i'm a single point of failure.
<lazyPower> yep, that should be the magic incantation
<lazyPower> hackedbellini ^
<geekgonecrazy> lazyPower: Sounds awesome.  already subbed :) i'll follow up after doing a bit more research
<lazyPower> sounds good. don't hesitate to poke if you get mired down in the docs. I helped write most of our dev documentation, so it's likely scattered. bugs / feedback welcome always
<lazyPower> stokachu - welp, i have results. none of my worker nodes have registered as ready :|   but, it did complete and looks like its still converging in the background
<lazyPower> it crushed my i5 deploying this whole stack in one go :D took ~ 30 minutes from start-finish.
<lazyPower> hackedbellini - one other thing is either the charm isn't initialized... or maybe you should pass --replay so it forces debug-log to replay the messages from the beginning of the unit's creation
<hackedbellini> lazyPower: omg, it's Friday and I'm tired... The service was 6 but I read it as 9 lol
<lazyPower> haha <3 i do these things all the time. don't feel bad
<lazyPower> hackedbellini - the best is when you're trying to multi-task, run a deployment, then switch terminals and run bundletester and wipe out the deployment you just did. :|
<lazyPower> #mistakes-i-realize-i-will-repeat-again-and-again
<hackedbellini> lazyPower: hahaha yeah. So, those are the last lines of the log:
<hackedbellini> https://www.irccloud.com/pastebin/KjgnG4S6/
<hackedbellini> lazyPower: hhaha did that some times too =P
<hackedbellini> it is "stuck" on that for more than an hour now
<lazyPower> ok, if i remember correctly this is the end of the sidewalk where that prior docker work was done. As of last night stokachu has some profile edits that got it further along in the context of kubernetes. I think there's some modules we need to unblock on the container yet to make this 100% functional
<lazyPower> hackedbellini - so what's the status of the workloads in docker? did it pull the images and attempt container spinup?
<hackedbellini> lazyPower: strange that if I run 'debug-log' with '--replay' the latest lines are pretty different:
<hackedbellini> https://www.irccloud.com/pastebin/E05LE9wT/
<lazyPower> oo fantastic actually
<lazyPower> juju ssh into that unit and run `docker images`
<lazyPower> do you see the nornagon postgres image in there?
<hackedbellini> lazyPower: no, no images
<lazyPower> or rather: postgres:latest
<lazyPower> ok, so it seems like pulling the image is either blocking, or is hosed and not signalling back that the hook has failed
<lazyPower> and that's not good :| i haven't encountered that before so there's likely no logic in the charm to handle the scenario
<hackedbellini> lazyPower: yeah that is strange. My previous deploy did finish pulling the container
<hackedbellini> something happened in this one specifically
<hackedbellini> is there a way to make juju rerun the hook, considering that it "failed"?
<lazyPower> hackedbellini - try running the docker pull postgres:latest on that unit
<lazyPower> yep
<lazyPower> juju resolved redmine/6
<lazyPower> that by default will attempt a retry of the hook, if you want to skip  its juju resolved --no-retry redmine/6
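The two `resolved` variants just mentioned can be sketched as data, assuming nothing beyond what lazyPower describes (the helper name is ours, for illustration only):

```python
# Sketch: `juju resolved <unit>` retries the failed hook by default;
# adding --no-retry marks it resolved and skips the retry.
def resolved_cmd(unit, retry=True):
    cmd = ["juju", "resolved"]
    if not retry:
        cmd.append("--no-retry")
    cmd.append(unit)
    return " ".join(cmd)

print(resolved_cmd("redmine/6"))               # juju resolved redmine/6
print(resolved_cmd("redmine/6", retry=False))  # juju resolved --no-retry redmine/6
```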
<lazyPower> s/try running/try manually running/
<lazyPower> that way we can capture the behavior and determine what the root cause is
<hackedbellini> lazyPower: ok, waiting for docker pull to finish. Let's see if that works :)
<hackedbellini> I should run "docker pull" with the ubuntu user itself? Or with root?
 * lazyPower crosses fingers "no whammies no whammies no whammies, c'mon big money"
<lazyPower> Should work with either/or
<hackedbellini> lazyPower: strange that it seems to be "stuck". This is my output:
<hackedbellini> https://www.irccloud.com/pastebin/looQTz6T/
<hackedbellini> but even if I ctrl+c it and do it again, I get the exact same output
<lazyPower> hackedbellini - any output from journalctl -u docker?   hoping there's something in the docker runtime logs that will help indicate what the problem might be
<lazyPower> stokachu - found the culprit, looks like the workers can't open /proc/sys/vm/overcommit_memory, it's causing the workers to panic and abort the registration process
<hackedbellini> lazyPower: this is the tail of the log:
<hackedbellini> https://www.irccloud.com/pastebin/Z7jVEdwC/
<hackedbellini> so my "docker pull" refers to those latest 2 lines
<hackedbellini> forget it, it seems that the pull finished
<hackedbellini> docker images now has the postgres
<hackedbellini> I'll try to "juju resolved" now
<lazyPower> ok, with the image cached it should skip pulling during docker-compose up
<hackedbellini> hrm, didn't even need to. The debug-log just advanced to "INFO unit.redmine/6.install Pulling redmine (redmine:latest)..." by itself
<lazyPower> omg now i know why it was fighting
<lazyPower> two threads of the engine trying to pull the same image
<lazyPower> that explains the slowness
<lazyPower> whoops
<lazyPower> i thought the hook was already in failure mode
<hackedbellini> lazyPower: it seems to have worked. Let me check if it is really running now
<hackedbellini> lazyPower: so, the charm now says that "Redmine is running on port 8000", but there's nothing on that port
<hackedbellini> even netstat doesn't show that port
<lazyPower> docker ps -a
<lazyPower> do you see the container running, and that its bound to port 8000?
<hackedbellini> lazyPower: hrm, both postgres/redmine have status "Exited (1) 2 minutes ago" (I don't understand anything about docker, sorry for not being able to debug it better =P)
<lazyPower> welp, that's why nothing is listed as being bound to a port :)
<lazyPower> you can fish up the container logs to see why
<lazyPower> docker logs <id of container from `docker ps`>
<lazyPower> hackedbellini by the end of this exercise you're going to be an expert docker charmer; you're literally touching every corner of the stack
<hackedbellini> lazyPower: the log has some lines with "error: exec: "initdb": executable file not found in $PATH"
<hackedbellini> lazyPower: hahaha nice! Maybe I can help contribute back to you in the near future
<lazyPower> :| not cool postgres image, not cool
<lazyPower> i wonder if something changed there
<hackedbellini> lazyPower: strange that I'm using postgresql:latest. Also, the redmine:latest gave me that: "error: exec: "rake": executable file not found in $PATH"
<hackedbellini> so, the problem is with the images?
<lazyPower> seems like something has changed in the images, yeah
<lazyPower> so, question
<lazyPower> have you tried this using just compose without the charm on your host to verify the docker components are good to go?
<lazyPower> i'm in another meeting, but i can give it a shot when i'm out
<hackedbellini> I didn't really, but I can try. Just don't think I'm gonna make it today :(
<lazyPower> hackedbellini - well you've made a lot of good progress and you're literally at the last 5%
<lazyPower> whats your TZ? are you EU based?
<hackedbellini> lazyPower: I'm in Brazil hahaha, it is 17:19 right now
<hackedbellini> note: Maybe it is related to this (https://github.com/docker/compose/issues/1639), but maybe not
<lazyPower> ah yeah, late in the work day/week
<jcastro> justicefries: we're in the hangout
<justicefries> omw
<justicefries> ugh I can't find it
<justicefries> did you send the invite?
<justicefries> i can't find the hangout
<lazyPower> https://hangouts.google.com/hangouts/_/canonical.com/kubernetes-with
<lazyPower> justicefries ^
<justicefries> hmm it has me in requesting to join
<stokachu> lazyPower: hmm i thought that was fixed with my profile edit
<stokachu> lazyPower: ill look again
<lazyPower> stokachu - i'm going to file some bugs and do the proper process
<lazyPower> where would you like them to be filed? against the bundle and tag them with conjure?
<lazyPower> or would you prefer i find the spell and file them there or?
<stokachu> lazyPower: yea lets file them on the spell https://github.com/conjure-up/spells/issues
<lazyPower> ack, will do
<stokachu> i thought i fixed that but maybe not
<justicefries> github.com/kz8s/tack
<marcoceppi> justicefries: http://github.com/juju/python-libjuju
<lazyPower> justicefries http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/ sending here for persistence
<jcastro> justicefries: http://blog.astokes.org/conjure-up-canonical-kubernetes-under-lxd-today/
<lazyPower> jcastro ninja'd
<bildz> hey guys...  I have my horizon up, but in looking into the neutron settings for the bridged interface, I'm missing networks that should be there.  Is there any means of looking at the YAML files used by conjure-up to get a better idea on how this process is handled?
<bildz> keep in mind that i've performed conjure-up openstack about 10 times
<justicefries> hmm.
<justicefries> 1 nice to have thing. i have a superuser account, i'd like to be able to grant models to myself without having to do the logout/login as the model's user/logout/login as myself dance.
<justicefries> i'm able to create the model under that username.
<justicefries> or even have machine users/groups that I can put a model into. that'd sort of force that as being necessary.
<justicefries> but it seems weird I can create a model under someone else's user, grant access to anybody I want on it
<justicefries> but not grant it to myself
<marcoceppi> justicefries: not sure I follow
<justicefries> so if you do:
<justicefries> `juju add-model foo-bar --credential bam --owner admin` while logged in as justicefries
<justicefries> where justicefries is a superuser
<justicefries> doing a `juju grant justicefries admin admin/foo-bar` fails, you have to log in as the model owner.
<marcoceppi> yup
<marcoceppi> wat
<justicefries> let me try again to verify
 * marcoceppi tries
<justicefries> yup, can't do it. and just confirmed i'm a superuser
<marcoceppi> interesting. Not sure if this is a bug or by design, rick_h ^^ ?
<marcoceppi> justicefries: does the inverse work okay for your usecase, you're the admin, then you make "admin" user an admin of that model?
<justicefries> let me try. so make the model under my user, then grant it to admin from admin
<marcoceppi> I guess for auditing, you'd just be the owner of all the models though
<marcoceppi> justicefries: yeah, create the model, then as the owner of the model, give the other use admin access to it
<justicefries> yeah I'd like the primary ones to all be under an admin/production user.
<justicefries> oh, as the owner
<justicefries> yeah, I can do it as the owner.
<marcoceppi> sure
<justicefries> i think the weird thing is I can create a model for someone else, but then not get access to it.
<marcoceppi> interesting, I'll file a bug see what shakes out from it
<marcoceppi> yeah, superuser seems to not inherit admin of models it controls
<justicefries> should be consistent between the two either way
<justicefries> if its intended, I'd expect add-model --owner shouldn't work for someone else
<marcoceppi> well, if that user does not have "addmodel" or "superuser" it won't
<justicefries> right
<marcoceppi> I'll file a bug and link you up in a min
<justicefries> oh, I see, because superuser doesn't inherit admin over models, I can't see it, and so grant 404s.
<marcoceppi> justicefries: is this with 2.0.2 or 2.1-beta1?
<justicefries> 2.0.2
<justicefries> i can try on 2.1-beta1
<marcoceppi> yeah, I wonder if this is just an omission of the permission level of superuser
<marcoceppi> where if it doesn't own a model, it really can't see it, despite being the creator (and superuser)
<marcoceppi> justicefries: https://bugs.launchpad.net/juju/+bug/1643076
<mup> Bug #1643076: superuser does not have admin over models it created but does not own <juju:New> <https://launchpad.net/bugs/1643076>
<justicefries> awesome
<geekgonecrazy> lazyPower: so I built the charm.. I guess you guys would probably call it the old style, because it's mainly using bash.  Having issues locally getting the application to die so I can deploy it again.  juju remove-application appname doesn't work, and neither does juju remove-unit appname/0
<geekgonecrazy> last time I had to destroy the controller and manually purge the lxd containers to get it to finally remove
<geekgonecrazy> which sucks because have to wait on it to download the mongodb charm again :D
<geekgonecrazy> man... it just won't remove any.  I'm sure i've got to be missing something
#juju 2016-11-19
<bdx> rick_h, lazyPower: per our conversation yesterday afternoon, I think I'm going to have to use the manual provider, and add my aws instances
<bdx> this way I'll still be able to get address space isolation
<bdx> rick_h: the regions workaround you suggested was a good idea .... but I don't think it will be easily maintainable
<bdx> rick_h: I'm thinking that manually provisioning those machines will be the best route here
<bdx> lazyPower, rick_h: you guys are probably eow, but yea  ... let me know what you think
<brandor5> hey guys, i'm starting to mess around with network spaces... if I'm going to deploy a bundle using spaces do i need to have the nodes set up with the network config on those fabrics first or is juju smart enough to figure that part out?
<kulinacs> I'm attempting to bootstrap juju for a MAAS cluster without power management. The bootstrap step keeps hanging at "Fetching Juju agent version 2.1-beta1 for amd64". Any ideas what is going on?
#juju 2017-11-13
<blahdeblah> thumper: you around?
<[Kid]> kwmonroe, that means anything i deploy with juju has to be in a cloud. I have two sites that i manage and would do cloud, but it is only a tertiary option.
<skay> for my charm, I have a db relation, and call set_database and set_roles when the relation is joined. why does the relation not reflect the dbname when I get the relation info?
<bdx> skay: hey
<skay> o/
<bdx> I've beaten that path
<bdx> so
<bdx> I know @stub is/was aware of this, as I brought it to his attention when I was having trouble it
<bdx> the solution
<skay> (is there a bug I can subscribe to to keep up to date?)
<bdx> looking
<skay> thank you
<skay> meanwhile, halp
<skay> :)
<skay> s/halp/thanks
<bdx> np
<bdx> so, I cant find the bug
<bdx> https://bugs.launchpad.net/postgresql-charm
<bdx> there are only ~12 bugs there
<kwmonroe> skay: what's the database that you're connecting?
<kwmonroe> mysql/maria/db2/pgres?
<bdx> and I don't see any of the ones I've filed in the past
<bdx> postgresql
<bdx> I just assumed you were using postgresql too because of the functions you are calling
<skay> kwmonroe: postgresql
<skay> bdx: how did you work around it?
<bdx> https://bugs.launchpad.net/postgresql-charm/+bug/1618248
<bdx> finally
<bdx> ha
<bdx> I had to search by bug number
<bdx> it looks to be private or something
<bdx> having a tough time finding the mailing list thread
<bdx> anyway
<skay> oh. I wonder if that is a mistake
<bdx> yeah, possibly it has been fixed by using the api differently
<bdx> if the api did change (which it sounds like it did), then I haven't started using it yet in my charms and probably need to refactor my postgresql relations to take advantage of this
<bdx> my work-around was to do this https://github.com/jamesbeedy/layer-django-web/blob/master/reactive/django_web.py#L70,L74
<bdx> but it was really just a poor cover up
 * bdx ashamed to have moved forward with ^ in so many places previously
<kwmonroe> yeah skay, it looks like the requested db name gets stored as part of the connection string, so parsing it back out like bdx did (pgsql.master.dbname) seems like a fine way to get it back.
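The parse-it-back-out approach kwmonroe mentions can be sketched like this, assuming a libpq-style key=value connection string (the sample string and helper name are invented for illustration; the real interface exposes it as `pgsql.master.dbname`):

```python
# Hedged sketch: recover dbname from a libpq-style connection string,
# akin to bdx's pgsql.master.dbname workaround. Sample data is made up.
def conninfo_dict(conninfo):
    return dict(kv.split("=", 1) for kv in conninfo.split())

master = "host=10.0.3.1 port=5432 dbname=skay_app user=juju_abc password=secret"
print(conninfo_dict(master)["dbname"])  # skay_app
```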
<skay> kwmonroe: while debugging my hook, I did a relation-get to see the connection string, and it does not have the appropriate name
<skay> it's named after the unit (which is the default,right?)
<skay> I mean the app
<bdx> yea
<skay> I could get it from my charm's config if I need to
<skay> as a work around
<skay> if someone could subscribe me to the bug, I'd appreciate it so that I can keep track for when I don't need a workaround
<skay> ~codersquid
<bdx> skay: I think this has been fixed though
<kwmonroe> fwiw, i can't see the bug that bdx opened
<bdx> oh
<bdx> just subscribed both of you
<kwmonroe> cool, i'm in bdx.  skay's issue feels a bit different though.  sounds like if you don't explicitly call pgsql.set_database, pg will default to the unit name, yet that dbname isn't coming back with pgsql.master data.
<skay> what is happening to me is that my charm joins a db relation with the postgresql charm
<skay> and when I get the relation, it is not reflecting the dbname I set when the relation was joined
<skay> I could maybe make a tiny charm to demonstrate
<bdx> oh totally, I linked the wrong bug
<bdx> darn
<skay> I explicitly call pgsql.set_database
<kwmonroe> ah, ok skay, i wasn't sure if you were calling set_db or not.  either way, sounds like a new bug to me if the connection string from an available pg master doesn't contain the dbname.
<skay> I'll go ahead and write a small example and file a new bug
<bdx> here's another bug I have open in that area https://bugs.launchpad.net/interface-pgsql/+bug/1666337
<mup> Bug #1666337: 'NoneType' object has no attribute 'uri' <pgsql Interface for charms.reactive:Fix Released> <https://launchpad.net/bugs/1666337>
<bdx> here's the mailing list thread https://lists.ubuntu.com/archives/juju/2017-February/008638.html
<bdx> well, @skay possibly I didn't file a bug on your exact issue
<bdx> or at least I cant find it .... I had a lot going on with that charm/interface for a while as you can see
<bdx> I know I've hit what you are describing though
 * skay nods
<bdx> and I would have filed a bug had I hit this
<bdx> but meh
<bdx> sorry
<bdx> lol
<bdx> so essentially, I think the first bug I linked is this bug
<bdx> because you are getting back the wrong db
<bdx> it means that the rest of the information in the relation is also incorrect
<bdx> because there is a user/database/password affinity
<bdx> so if you are getting the wrong db back
<bdx> then everything else is wrong too
<bdx> not just the db
<kwmonroe> rick_h: is there a fast way to clear the hook queue for a failed unit?  [Kid] had a unit go into error state and he wanted to remove it.  however, leadership-hook-foo and like 9 other hooks queued up, so he had to 'resolved --no-retry' 10 times before the remove-unit could go to work.
<kwmonroe> maybe like a --force on remove-unit could auto resolve a unit?
<rick_h> kwmonroe: nothing I'm aware of, I wonder if you restart the agent what happens
<kwmonroe> dunno rick_h, none of my units ever go into error state ;)  but i'll try to wedge one and see what agent horrors i can perform.
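The manual drain kwmonroe describes amounts to repeating the same command once per queued hook; a sketch with an invented helper (not a juju API), building the command lines as data:

```python
# Sketch of the workaround above: emit `juju resolved --no-retry <unit>`
# once per queued hook so remove-unit can finally proceed.
def drain_cmds(unit, queued_hooks):
    return ["juju resolved --no-retry {}".format(unit)] * queued_hooks

for cmd in drain_cmds("foo/0", 3):
    print(cmd)
```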
<[Kid]> kwmonroe, everything is right again
<[Kid]> i am about to re-deploy
<kwmonroe> woohoo!
#juju 2017-11-14
<externalreality> test
<jam> tick
<rick_h> Jam tock
<jam> rick_h: sorry I'm late, just finishing up the last meeting, brt
<jac_cplane> We have a charm for xenial that relies on /etc/network/interfaces, there was a recent update to curtin that moves /etc/network/interfaces to /etc/network/interfaces.d.   I don't think this is correct, but I'm not sure why this change was made.  Can someone help?
<jac_cplane> bug opened https://bugs.launchpad.net/maas/+bug/1732202 on curtin 532
<mup> Bug #1732202: Xenial Deploy fails when using /etc/network/interface <MAAS:New> <https://launchpad.net/bugs/1732202>
<[Kid]> can you not create models from logging into a controller?
<[Kid]> also, is it possible to login to a controller as admin on any other server than the one that juju was bootstrapped from?
<[Kid]> ahh i see what the problem is. the cloud providers are only stored on the juju server that the controllers were bootstrapped from
<[Kid]> i.e., you can't login to a controller from a random juju install and see the cloud providers for that controller
<[Kid]> so far, it seems like if i need to make most changes outside of removing or adding a worker node to a kubernetes cluster i have to remove the model and re-add it
<[Kid]> so basically a full re-deploy
<[Kid]> does that sound right?
<rick_h> [Kid]: so you can add new users with admin and superuser permissions to access from other locations
<rick_h> [Kid]: check out https://jujucharms.com/docs/stable/tut-users
<rick_h> [Kid]: as far as changes to the cluster needing a redeploy, I'd hope not. I think the team would be curious what changes and see what's not supported in the charms and such that wrap the operations
<ryebot> [Kid]: gimme a second to catch up
<ryebot> [Kid]: What changes do you need to make?
<[Kid]> sorry, i am here
<[Kid]> i currently tried a deploy and MAAS didn't finish deploying 2 of the nodes and i have machines in a down state
<[Kid]> i released the machines in MAAS, but how do i get juju to try and redeploy to those machines?
<[Kid]> it already thinks it allocated them
<[Kid]> the changes that i was trying to make were like removing flannel and adding calico. it worked but then it rebuilt the master nodes and my  ssh keys changed and i couldn't get juju scp to work.
<kwmonroe> [Kid]: watcha mean by 'rebuilt the master nodes'?  you mean like the flannel cni relation was removed and calico joined?
<kwmonroe> also [Kid], i'm at a loss when you say you lost 'juju scp' capabilities.  i can't think of a reason why the juju keys used to ssh to a unit would change, regardless of how that unit was changed post deployment (unless, of course, you changed ~/.ssh/authorized_keys on the juju unit out-of-band)
<kwmonroe> rick_h: are there circumstances that cause ~/.local/share/juju/ssh/juju_id_rsa* to change?
<rick_h> kwmonroe: ...not that I can think of
<kwmonroe> [Kid]: the ssh key that juju uses for things like juju scp are stored in ~/.local/share/juju/ssh/juju_id_rsa, and the .pub key in there is what is normally on all deployed units.  rick_h assures me this is bulletproof. ;)
<cory_fu> What does this error mean when trying to bootstrap lxd / localhost?  ERROR Get https://10.130.48.1:8443/1.0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "root@murdoch")
<cory_fu> stokachu: Do you recognize that, by chance?  ^
<stokachu> cory_fu: yea i think it's the certificate in .local/share/juju/bootstrap-config
<stokachu> or one of those files
<stokachu> maybe credentials.yaml
<cory_fu> stokachu: Ah.  Just torch it?
<stokachu> yea
<cory_fu> stokachu: It was credentials.yaml.  Thanks!
<stokachu> cool, np
<[Kid]> kw and rick, thanks. i will continue to look at it
<[Kid]> kw and rick, is there a way to have juju retry the deployment of a machine?
<[Kid]> i have two machines in a down state and message is failed deployment.
<[Kid]> i manually fixed MAAS, so i wanted it to try on those same machines agin
<[Kid]> again
<rick_h> [Kid]: so there's juju retry-provisioning, if the machine didn't come up
<rick_h> [Kid]: or if you want to just retry as something failed and you cleaned it up just juju add-unit xxx
<rick_h> and have juju pull another up for use
<[Kid]> ahh yes, i think juju retry-provisioning is what i need
<rick_h> cool, hopefully that helps some
<[Kid]> rick
 * rick_h ducks
<[Kid]> it accepts that command, but doesn't do anything
<[Kid]> haha
<[Kid]> just stuck in a down status
<[Kid]> guess i might have to remove and re-deploy
<rick_h> [Kid]: try with --debug and see if anything is fishy? Maybe trace debug-log and see if anything comes up.
<[Kid]> i did
<rick_h> [Kid]: just remove and add-unit ?
<[Kid]> juju debug-log didn't have anything
<[Kid]> i will try the --debug on the command
<[Kid]> well, destroying the model again
<[Kid]> .....
<[Kid]> redeploying
<[Kid]> i have to wonder if i am just a special case.....
<rick_h> [Kid]: no I mean something's up. Do you know it failed to come up?
<zeestrat> Hey [Kid], if you fixed the maas nodes so that's not the issue anymore and the `retry-provisioning` didn't work then you should be able to do `juju add-unit <name-of-service-that-failed-to-deploy>` so you don't have to destroy the whole model as rick_h mentioned above.
<[Kid]> rick, yes, it didn't come up
<[Kid]> i waited like 30 minutes
<[Kid]> just in a down state and had "failed deployment" in the message field
<[Kid]> looks like the re-deploy worked
<[Kid]> i just hate that i have to keep re-deploying for simple changes
<rick_h> [Kid]: well this isn't a redeploy as the first one didn't succeed right?
<rick_h> I mean it's not a change issue, but that maas didn't get a machine to juju?
<rick_h> I guess that's what I'm wondering is what went wrong between maas/juju, did curtin get things started, cloud-init go ok, juju agents get installed and setup?
<zeestrat> Maas has logs for its deployments on the node page and should say why it ended up as a failed deployment.
#juju 2017-11-15
<bdx> cory_fu: ran into this yet http://paste.ubuntu.com/25969059/ ?
<cory_fu> bdx: I have not.
<cory_fu> bdx: Do you have the latest copy of that PR?  The line numbers from your stack trace don't match up.
<bdx> oooh
<bdx> I may not in that case
<cory_fu> bdx: Though, I'm guessing it's still an issue.  I think we need to check for a None result from relation_factory
<cory_fu> bdx: Is there anything in the log file that says "Unable to determine role and interface for relation" or "No RelationFactory found in ..."?
<bdx> cory_fu: totally is  http://paste.ubuntu.com/25969142/
<bdx> or something to that effect 2017-11-15 18:22:24 ERROR juju-log Unable to find implementation for relation: requires of juju-info
<bdx> oh I think I see it
<bdx> the juju-info relation is defined in the beats-base layer
<cory_fu> bdx: Ok, easy fix.  That just means that you're not pulling in the juju-info interface layer, but I could see not really needing it, so we should handle that case gracefully
<cory_fu> Does the beats-base layer not pull the interface layer in for it, then?  Maybe it just handles it manually.  juju-info is a trivial relation anyway
<bdx> cory_fu: https://github.com/jamesbeedy/layer-beats-base/blob/master/metadata.yaml#L6
<bdx> I'm wondering if I pull juju-info into the top layer if it would make a difference
<bdx> right
<cory_fu> bdx: Yeah, it's because it's missing the interface:juju-info in https://github.com/jamesbeedy/layer-beats-base/blob/master/layer.yaml
<cory_fu> But I can't really claim that that's wrong, given how trivial juju-info is
<bdx> right
<bdx> cory_fu: adding juju-info fixed
<bdx> thanks
<bdx> I should have caught that
<cory_fu> bdx: NP, I appreciate you bringing it up, because I think it's an issue in the PR that needs to be fixed
<bdx> np, I filed a bug for you
<R_P_S> hi, I created a juju controller on a temporary machine.  I'd like to be able to login as admin/superuser on another machine.  How do I find the registration string for the superuser to register the controller on the 2nd machine?
<rick_h> R_P_S: there isn't one. You have to add a new user and grant them admin rights and login as that new user.
 * rick_h wonders if we got working the juju login to an ip address
<R_P_S> so the superuser can be removed then once this alternate user is added?
<rick_h> R_P_S: hmmm, honestly I've never tried. You might need to give the new user superuser access so you've got one superuser in the system...but not sure
<R_P_S> yeah, that's precisely what I'm worried about losing... superuser access
<R_P_S> trying to remove single point of failure, but I can't assume the instance I used to create the controller will survive forever
<rick_h> R_P_S: so I wonder if you can do this, the admin's password is written out in one of the cache files, you can do `juju login -u admin $ipofcontroller` I think
<rick_h> R_P_S: well that's what the HA is for and yea, add additional users, give them superuser, etc.
<rick_h> R_P_S: to be honest, back up .local/share/juju and you're ok
<rick_h> R_P_S: so .local/share/juju/credentials.yaml has the auto generated admin credentials for the controller
<R_P_S> ok...
<R_P_S> how does HA of the controller (not gotten there yet) save the credentials on a completely arbitrary machine?
<rick_h> R_P_S: sorry, I just meant as far as being resilient. ha controllers is good stuff, but yea doesn't affect the client end
<rick_h> so summary: 1) create new users, register them so you can login with them as username/password's you know
<R_P_S> ok, got it... message just lost in translation :)
<rick_h> 2) you might be able to login as admin on another machine using the generated password
<rick_h> 3) ignore me, HA doesn't affect clients at all and I got confused mid-stream :)
<R_P_S> looks like I can create another superuser
<rick_h> definitely https://jujucharms.com/docs/stable/users
<R_P_S> hum, so the admin user never gets logged out?  just tried a manual logout and got an error "preventing account loss" due to no password
<R_P_S> so each registration string is just password setting, along with configuration of the controller?
<rick_h> R_P_S: hmm, so the password should be the generated one. It's auto used generally
<rick_h> R_P_S: yea, it sets up a new account so you can give them a unique password and get the controller info cached
<R_P_S> weird, so I tried creating a 2nd user account, and going through the registration flow I needed to enter in a different controller name
<rick_h> R_P_S: right, because it's on the same machine. There's a single controller cache
<R_P_S> but then it said secret key for user "abc" not found
<rick_h> hmm, hmm, not sure on that bit
<R_P_S> so to create a 2nd user, say for testing permissions and using logout/login between the users, how would I complete the registration for the 2nd user?  wipe .cache?
<rick_h> R_P_S: well I'd just create a lxd container or something to setup a second home basically
<rick_h> R_P_S: you could probably do it by changing the $JUJU_DATA path or something to keep them separate
<rick_h> or actually, just use another user on the local system so there's two $HOMEs or something
<R_P_S> yeah, I was thinking just a different user on the linux box itself... but that seems... clumsy
<R_P_S> hum, so I remove-user the 2nd dummy account, and now it cannot be recreated.  does the controller seriously permanently blacklist that username?
<rick_h> yes, due to the auditing effects usernames are use once
<R_P_S> that seems strange then to have both remove-user and disable-user when the functionality is the same, but the former also "blacklists" the user.
<R_P_S> bit of a tangent, but it seems that any company would want to delete users after turnover to keep the list from growing unwieldy, but then a rehire would be forced to get a new username
<rick_h> well but if they were auditing who did what when, especially around the turn over time it's best to keep them split. It's reasonable the new person is unlikely to have the same name. If the same person is rehired, well then I grant you
<R_P_S> so, it turns out there's an extra permissions level - owner.  tried to disable the admin account, but the error just states "cannot disable controller model owner"
<rick_h> ah, right. I'm working on defining some work to get rid of that
<rick_h> it's something that's outlived its usefulness as things scale long term, it's on the todo list to clean up but only defining the work at this point
<R_P_S> is it safe to "lose" the admin account once other superusers have been created?
<rick_h> yes, there's nothing the admin can do that a user with superuser permissions cannot
<R_P_S> or should the .local/share/juju be permanently backed up for the admin user
<R_P_S> ah, ok
<rick_h> it's why I started with "add new accounts" — it's usually the best path long term to give users proper accounts with permissions and manage them, vs continuing to use the special "admin" account
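(The "proper accounts" flow rick_h describes, sketched as commands; the usernames and model name are placeholders:)

```shell
# Create a real account; this prints a one-time registration string
# the user redeems with `juju register` on their own machine.
juju add-user alice
juju grant alice superuser       # controller-level access, equivalent to admin
juju grant bob write mymodel     # or scope access to a single model
juju disable-user bob            # reversible, unlike remove-user
juju enable-user bob
```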
<R_P_S> I'm still working with a conjure-up demo... I'm about to wipe the last of it and try to build it properly
<rick_h> cool stuff
<R_P_S> I was surprised by the lax security of conjure-up though, didn't request VPC info.... it'd been so long since I'd created an instance in ec2-classic I'd forgotten it was there, and I was trying to figure out why the VPC wasn't specified on the instance
<R_P_S> actually, while I have you here... after creating instances via juju, can I modify the security groups?  seeing 0.0.0.0/0 ingress ssh makes me want to cry
<rick_h> so you can specify the vpc details during bootstrap with --vpc-id
<rick_h> as far as that goes, it's a feature folks have requested, but to do things like juju run and juju ssh/scp, clients outside need ssh access, so it's not currently able to be blocked off
<R_P_S> yeah, I'd found vpc specification in the docs since conjure-up
<rick_h> I think you can leverage a vpc to help lock that down somewhat, bdx might have more details around that
<R_P_S> juju ssh uses external IPs?
<rick_h> yes, because a controller can host many models, and the client is not on the same network, it tries to use external addresses to help map to the machines
<R_P_S> oh dear
<rick_h> you can have model access but no controller access so you can't ssh through/onto the controllers themselves as a user on a model
<R_P_S> that completely breaks the idea of bastion hosts, VPCs and even AWS security group <-> security group references
<rick_h> it's not going and sticking elastic ips on all the instances or anything
<R_P_S> I'd like to both assume and enforce the client being on the same network (VPC+peering) as any instance created by juju, including the controller itself.
<R_P_S> now, this scope is for kubernetes...
<R_P_S> it sounds like I should be putting absolutely everything except the load balancer in the public subnet...
<R_P_S> master, workers, etcd, easyrsa all in private subnet?  then they wouldn't have a public IP and juju ssh would work off the internal IP?
<R_P_S> doh... logic inverted 2 lines up.  everything except load balance in private subnet
<R_P_S> I think I may just need to try this out now... rebuild controller in VPC and see what happens
<R_P_S> but I believe the question technically still stands.  If the client (bastion host) has ACLd to the SG containing all the juju instances for private subnet instances, that can be changed and juju won't complain?
#juju 2017-11-16
<R_P_S> hey rick_h, just wanted to thank you for the help so far.
<bdx> is there support for artful series on aws?
<bdx> Im seeing juju has grabbed a machine http://paste.ubuntu.com/25970976/
<bdx> but just doesn't want to start
<bdx> been sitting there for a while, making me think artful hasn't made the cut just yet
<bdx> by a "while" I mean like 20 mins
<ChaoticMind> is the jaas controller being super slow right now or is it just me?
<mhilton> ChaoticMind, what cloud/region are you seeing problems in?
<ChaoticMind> aws/eu-central-1
<mhilton> ChaoticMind, thanks I'll try it out and see if I see the same.
<ChaoticMind> thanks
<mhilton> ChaoticMind, is there any particular command that's taking its time for you?
<ChaoticMind> mhilton: just deploying the bundle took forever (like 3 minutes for a smallish bundle). Setting relationships took like 15 seconds each
<ChaoticMind> Usually it's about 0.5 seconds
<ChaoticMind> I made a new model and tried it again now, it seems ok now!
<mhilton> ChaoticMind: I'll look into it, one of the controllers may be overloaded. Thanks for mentioning it.
<ChaoticMind> no worries
<bdx> yoyoyo - whats the deal with artful deploys? Will someone `juju add-machine --series artful` on aws and let me know if I'm crazy
<bdx> oooh shoot, looks like adding a machine of artful worked actually
<bdx> nm
<bdx> jeeze
<bdx> ahh, `juju deploy ubuntu --series artful --force` is what is failing
<kwmonroe> bdx: how about juju add-machine --series artful; juju deploy ubuntu --to X --force?
<kwmonroe> just a shot in the dark
<bdx> kwmonroe: no, great shot, I actually just did that and it worked
<kwmonroe> sweet
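(The workaround kwmonroe suggested, in one place — add the machine at the series you want, then target it explicitly; the machine number is illustrative:)

```shell
juju add-machine --series artful     # prints e.g. "created machine 0"
juju deploy ubuntu --to 0 --force    # --force: the charm doesn't declare artful
```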
<bdx> and it looks like what was failing me last evening is now working too, with the `juju deploy ubuntu --series artful --force`
<bdx> gd
<bdx> I was experiencing some extreme jitter yesterday on JAAS I think
<bdx> I was trying to get an artful deploy going for quite a while and it was just failing at machine "pending"
<bdx> really strange
<kwmonroe> ha!  yeah, "juju deploy ubuntu --series artful --force" just worked for me too on aws
<R_P_S> hey, so as part of my ongoing evaluation of juju, I've just created an HA controller.  But how do I specify what subnets to create the ha-instances into?
<bdx> jam:^^
<bdx> R_P_S: that is possible with the `--to` directive, its just not documented yet
<bdx> I think its something like `--to subnet=subnet-<id>`
<R_P_S> so $ juju enable-ha --to subnet=subnet-priv1b --to subnet=subnet-priv1c ?
<bdx> R_P_S: let me see if I can get it, omp
<bdx> R_P_S: `juju bootstrap aws/us-west-2 --to subnet=subnet-<id> --credential mycred`
<bdx> ^^ worked
<bdx> I'll see about the HA omp
<R_P_S> yeah, that worked for the first instance...
<bdx> instances are launching faster than I've ever experienced
<R_P_S> but after creating the initial controller, enabling HA put them in random subnets as far as I can tell
<bdx> I already have a bootstrapped controller
<bdx> crazy
<R_P_S> including mixing public and private subnets for controllers 1 and 2
<bdx> R_P_S: `juju enable-ha  --to subnet=subnet-<id>,subnet=subnet-<id>`
<bdx> worked seamlessly
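(The subnet-placement syntax as bdx verified it, for both bootstrap and enable-ha; the subnet IDs, region, and credential name are placeholders:)

```shell
# Pin the controller's machine to a specific VPC subnet at bootstrap
juju bootstrap aws/us-west-2 --to subnet=subnet-aaaa1111 --credential mycred
# Place the additional HA controllers in specific subnets too
juju enable-ha --to subnet=subnet-bbbb2222,subnet=subnet-cccc3333
```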
<bdx> R_P_S: would you mind putting some heat on this please https://github.com/juju/docs/issues/2122
<R_P_S> I don't have a github account and I'm at work... I'll need to do that later... as an aside, that ticket doesn't appear to be about enable-ha
<R_P_S> so now I'm not sure how to remove the extra HA controllers
<R_P_S> https://jujucharms.com/docs/2.2/controllers-ha
<R_P_S> juju status doesn't have any mention of "has-vote" for the controller model... and "remove-machine" just fails with a message that "machine 2 is required by the model"
<rick_h> R_P_S: so juju show-controller should mention HA status bits and has-vote I believe
<jamesbenson> hi all, I'm trying to do a simple LXD conjure-up k8s with help, all-in-one.  But it fails from the get-go.
<rick_h> R_P_S: you can always remove-machine --force but yea, best to know what's up there.
<stokachu> jamesbenson: whats the issue?
<rick_h> jamesbenson: bummer, what's the issue? I'm sure folks can get you good to go here.
<jamesbenson> thanks stokachu and rick_h
<jamesbenson> I hope so
<jamesbenson> So there seems to be a few issues.  Sidenote: I'm doing this from a ubuntu server VM in openstack, xlarge.  Pretty sure that doesn't matter, but just in case
<stokachu> jamesbenson: whats the hw specs?
<stokachu> ram, cpus
<jamesbenson> 8 vCPU, 16GB RAM, 160 GB HD
<stokachu> ok should be fine
<jamesbenson> ubuntu 16 LTS
<jamesbenson> These are the commands I do from deploy: http://paste.openstack.org/show/626536/
<jamesbenson> https://snag.gy/nT0LPv.jpg
<jamesbenson> that's the latest state...
<jamesbenson> actually I've tried twice... here's the other: https://snag.gy/Z9cs2x.jpg
<jamesbenson> thoughts stokachu?
<R_P_S> so I'm trying to rollover controllers by simply terminating "bad" ones in aws directly and re-running enable-ha
<R_P_S> but I'm still unable to remove machines that don't show up with ha-status enabled in "show-controller"
<R_P_S> I upped enable-ha to 7 to test the subnets... it looks like I need to specify the subnets with each enable-ha command :\
<R_P_S> but show-controller lists machines 0,3,4,5,6,7,8 (1,2 were "demoted" according to enable-ha output)... but I still get "machine 1 is required by the model"
<R_P_S> and one thing I've found is that using --force for remove-machine leaves an orphaned security group
<kwmonroe> jamesbenson: just a guess, but can those units get to the outside world?  i know etcd and k8s charms snap install stuff, so i wonder if they're having trouble getting out.  can you pastebin a "juju debug-log --replay -i etcd/0"?
<R_P_S> juju remove-machine 1 --force
<R_P_S> fails
<R_P_S> this bug was opened almost a year ago https://bugs.launchpad.net/juju/+bug/1658033
<mup> Bug #1658033: Juju HA - Unable to remove controller machines in 'down' state <4010> <cpe-onsite> <juju:Triaged> <https://launchpad.net/bugs/1658033>
<bdx> R_P_S: downsizing the controller cluster isn't supported
<bdx> you have to dump the db and restore to a smaller cluster (I think)
<R_P_S> correct, once an -n N has been specified, it can't be shrunk
<R_P_S> but I'm trying to simulate failure
<R_P_S> so I terminated one instances and reran enable-ha to rebuild new ones
<R_P_S> but the terminated ones are still in the list, unable to be removed
<R_P_S> I'm up to 13 "machines" in the config, with 5/7 ha (currently rebuilding)
<R_P_S> <controllername>*  controller  admin  superuser  aws/us-east-1       2        13  5/7  2.2.6
<bdx> kwmonroe: I'm hitting it again, I just deployed these and they stood up just fine, tore it down and redeployed and its the artful instances that have been in pending for > 20mins now -> http://paste.ubuntu.com/25975589/
<bdx> kwmonroe: create a new model on the same controller, then deployed the same charm http://paste.ubuntu.com/25975644/
<bdx> see what I'm saying about the inconsistencies ?
<kwmonroe> bdx: i'm in us-east-1, and just verified "juju deploy ubuntu --series artful --force" worked again.  hard to say what's up with it being intermittent.  do a "juju ssh -m controller 0" and sudo grep around /var/log/juju for 'machine-X' to see if there's a provisioner issue.
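(kwmonroe's log-digging step, spelled out; the machine number is an example, and log file names under /var/log/juju vary by juju version:)

```shell
juju ssh -m controller 0
# then, on the controller machine, look for provisioner errors
# mentioning the stuck machine:
sudo grep -ri 'machine-7' /var/log/juju/
```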
<bdx> right
<kwmonroe> yeah bdx, frustrating for sure.. i'm hoping there's something in the controller log that will be more insightful about an artful provisioning issue.
<bdx> kwmonroe: http://paste.ubuntu.com/25975668/ - oh man
<kwmonroe> bdx: i haven't seen "failed to start instance (failed to start instance in provided availability zone)" before, and no sign of it in my controllers. however, i'm on 2.3-beta1 so that could be new logging in beta3.
<kwmonroe> bdx: what kind of constraints do you have for redis-cache?  any wicked machine reqs there?
<bdx> root-disk, spaces
<bdx> testing w/o any constraints
<kwmonroe> bdx: i was hoping you had "instance-type=p3.xxlarge" and i could just say "us-west is simply out of those instance types", but that doesn't seem like the case.
<bdx> kwmonroe: it was the constraints
<bdx> I removed them, and voila
<kwmonroe> must have been spaces, right?  surely not root-disk
<bdx> I wonder if I'm hitting disk cap on aws
<bdx> testing that right now
<kwmonroe> yeah, make sure you're asking for GB and not PB ;)
<kwmonroe> otherwise RIP your wallet
<bdx> ha, yeah, "G"
<bdx> so I just logged into aws console and created 10 x 100G ebs volumes
<bdx> no issues
<bdx> kwmonroe: https://bugs.launchpad.net/juju/+bug/1706462
<mup> Bug #1706462: juju tries to acquire machines in specific zones even when no zone placement directive is specified <cdo-qa> <foundations-engine> <juju:Triaged by ecjones> <MAAS:Invalid> <https://launchpad.net/bugs/1706462>
<bdx> see my comment at the bottom
<bdx> kwmonroe: I'm about to suggest something crazy
<bdx> http://paste.ubuntu.com/25975792/
<bdx> taking ^ into consideration
<bdx> redis-space and ubuntu-space are both deployed with only a "spaces" constraint
<bdx> the ubuntu-space didn't have a --series constraint
<bdx> or bah
<bdx> --series argument
<bdx> the only failures I'm seeing here are when '--series' is specified alongside a spaces constraint
<bdx> because we see from ^ that redis-disk worked, it had '--series artful' and '--constraints "root-disk=100G"'
<bdx> and ubuntu-space worked
<bdx> which had no '--series' arg, but had the spaces constraint
<bdx> but the only thing failing consistently
<bdx> are things deployed to a space that has the '--series'  arg
<bdx> I'll prove it by specifying '--series' with another series other than artful
<bdx> how about zesty
<bdx> since we see from ^ that zesty worked w/o a spaces constraint
<kwmonroe> bdx: i don't know enough about juju's zone handling, but happened here with graylog / #38? http://paste.ubuntu.com/25968550/
<kwmonroe> did graylog have constraints?
<bdx> no
<bdx> well yes
<bdx> but that isn't happening because of that
<bdx> that happens with every single instance deployed with 2.3beta3
<bdx> it eventually gets past the "failed to start instance (failed to start instance in provided availability zone) " and finds one and eventually starts
<bdx> I was just posting that to show that its not only maas thats having that issue
<bdx> ok, well I think this verifies my theory http://paste.ubuntu.com/25975824/
<bdx> I just deployed the ubuntu-zesty-space
<bdx> it required the spaces constraint and --series
<bdx> and it failed similar to the artful
<bdx> just stuck pending
<bdx> #@(*$U(#@*$@#*&
<bdx> idk
<bdx> I may as well go back to sleep
<bdx> somehow I knew today would be a trying day
<kwmonroe> :)
<kwmonroe> bdx: i would note in bug 1706462 that spaces + series repro this easily on aws
<mup> Bug #1706462: juju tries to acquire machines in specific zones even when no zone placement directive is specified <cdo-qa> <foundations-engine> <juju:Triaged by ecjones> <MAAS:Invalid> <https://launchpad.net/bugs/1706462>
<bdx> kwmonroe: series + spaces only with artful
<bdx> AND @kwmonroe
<bdx> ^ bug is entirely different from what I'm seeing I think
<jamesbenson> kwmonroe: sorry for the delay, turkey-luncheon thingy.... that command is giving me a TLS handshake timeout..
<jamesbenson> kwmonroe: The instance can ping google...
<bdx> kwmonroe: this verifies that it is only happening with artful http://paste.ubuntu.com/25975887/
<bdx> kwmonroe: what I'm seeing is the instances stay in pending for only series + space + artful
<bdx> 1706462 - failed to start instance (failed to start instance in provided availability zone) within attempt 0, ret
<bdx> rying in 10s with new availability zone
<kwmonroe> wait bdx, your previous paste shows machine 8 waiting for machine with series zesty: http://paste.ubuntu.com/25975824/
<bdx> but then on *a* next attempt, juju will find an instance, and start it, and go on its way
<bdx> kwmonroe: ah, my bad, yea, that machine started
<bdx> which made me realize, in all cases, its only artful that is the commonality here
<bdx> when used with spaces + series
<bdx> try it
<bdx> oooh, it may be only beta3, let me try this on jaas
<kwmonroe> jamesbenson: how about just "juju debug-log"?  does that give you a tls timeout too?
<bdx> works great on jaas http://paste.ubuntu.com/25975933/
<bdx> kwmonroe: the juju agent never starts, so I don't get any log from those instances
<jamesbenson> yes
<jamesbenson> kwmonroe: ^
<bdx> oooh jamesbenson
<bdx> my b
<bdx> lol
<kwmonroe> :)
<jamesbenson> http://paste.openstack.org/show/626542/
<kwmonroe> jamesbenson: ooooohhh, i thought you meant the debug-log command wasn't showing any output.
<jamesbenson> kwmonroe: No, seems to have issues with net/http: TLS handshake timeout...
<jamesbenson> I'm not sure why that is, though, since easyrsa is able to get active ...
<jamesbenson> so it must be able to reach out, correct?
<kwmonroe> jamesbenson: easyrsa doesn't snap install anything
<kwmonroe> etcd and k8s charms do
<jamesbenson> oh man..
<jamesbenson> so something with the bridge then
<kwmonroe> so jamesbenson, i'll bet you all the money in my pockets that if you do a "juju run --unit easyrsa/0 'sudo snap install etcd'", it'll fail
<jamesbenson> kwmonroe: seems to be just sitting there...
<jamesbenson> yep, same error
<jamesbenson> juju run --unit easyrsa/0 'sudo snap install etcd'
<jamesbenson> error: cannot perform the following tasks:
<jamesbenson> - Download snap "core" (3440) from channel "stable" (Get https://068ed04f23.site.internapcdn.net/download-snap/99T7MUlRhtI3U0QFgl5mXXESAiSwt776_3440.snap?t=2017-11-16T20:00:00Z&h=30ced1b835617d49d8ff4221a62d789f7ca638aa: net/http: TLS handshake timeout)
<jamesbenson> sorry about the paste there...
<kwmonroe> jamesbenson: to test the tls/http connectivity more generically, do this.. juju ssh etcd/0, then wget https://google.com from the etcd unit.
<kwmonroe> (make sure it's https)
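(kwmonroe's triage from inside the suspect unit, as a sequence — a generic TLS egress check, then the call that actually fails; etcd/0 is the unit from this conversation:)

```shell
juju ssh etcd/0
# on the unit:
wget -O /dev/null https://google.com   # generic https/TLS egress check
sudo snap install etcd                 # the failing path (snap store download)
```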
<bdx> ok, here it is https://bugs.launchpad.net/juju/+bug/1732764
<mup> Bug #1732764: series + spaces + artful + juju2.3beta3 = fail <juju:New> <https://launchpad.net/bugs/1732764>
<jamesbenson> kwmonroe: works.
<kwmonroe> interesting
<jamesbenson> http://paste.openstack.org/show/626545/
<kwmonroe> jamesbenson: how about a "sudo snap install etcd" from that same etcd unit?
<jamesbenson> kwmonroe : nope...http://paste.openstack.org/show/626546/
<jamesbenson> interesting...
<ryebot> jamesbenson: if this is an egress-restricted environment and you're unable to hit the snap store, I can provide you with some steps for installing them manually.
<jamesbenson> all ports are open....
<jamesbenson> I'll double check though..
<kwmonroe> bdx: nice detail in 1732764.  interesting that it's such a specific combo.  also, you may want to s/"spaces=myspace"/"spaces=facebook" in case a more recent social media platform helps.
<jamesbenson> ryebot kwmonroe: iptables are empty, and security group is open all ports in and out.
<jamesbenson> http://paste.openstack.org/show/626547/
<jamesbenson> This is the only rule in my iptables -t nat: MASQUERADE  all  --  10.55.234.0/24      !10.55.234.0/24       /* managed by lxd-bridge */
<kwmonroe> jamesbenson: stick some quotes around that url... wget 'https://068ed04f23.site.internapcdn.net/download-snap/99T7MUlRhtI3U0QFgl5mXXESAiSwt776_3440.snap?t=2017-11-16T20:00:00Z&h=30ced1b835617d49d8ff4221a62d789f7ca638aa'
<jamesbenson> hmm... still shows connected.  But once connected it sits.
<kwmonroe> jamesbenson: how about running "env | grep -i proxy" on that unit.  anything in there?
<jamesbenson> NO_PROXY=10.55.234.245,127.0.0.1,::1,localhost
<jamesbenson> no_proxy=10.55.234.245,127.0.0.1,::1,localhost
<kwmonroe> hmph jamesbenson, that seems legit
<jamesbenson> ....so confused....  not a good sign that everything seems legit from you too...
<jamesbenson> kwmonroe: do you deploy on baremetal or in VM's?  Do you have any script or anything?
<kwmonroe> jamesbenson: by "legit", i meant the no_proxy stuff looks legit :)  if you can't do a "sudo snap install foo" from the unit, juju won't be able to either.
<kwmonroe> jamesbenson: there's a gremlin in there to be sure.  just need to figure out why those units can't snap install.
<kwmonroe> jamesbenson: i typically deploy to clouds or localhost (lxd).  not much experience with maas.
<jamesbenson> well this is in an openstack VM, so not with maas...
<kwmonroe> ah, right
<jamesbenson> I know I can deploy using openstack magnum, but want to do it manually...
<kwmonroe> well jamesbenson, from what i can tell, apt install works and wget works, so it's not like your units are totally locked down.  i'm not sure what's causing snap install to fail.
<jamesbenson> error: cannot install "foo": snap "core" has changes in progress
<kwmonroe> silly rabbit, don't actually stick 'foo' in there
<jamesbenson> :-p
<jamesbenson> hey, didn't know if it was a test option ;-)
<jamesbenson> ansible ping/pong test ...
<kwmonroe> :)
<kwmonroe> jamesbenson: what does snap changes show?
<kwmonroe> "snap changes"
<kwmonroe> i'm guessing it's stuck somewhere trying to download the core snap
<jamesbenson> http://paste.openstack.org/show/626550/
<jamesbenson> you'll like that..
<kwmonroe> heh, classy
<kwmonroe> jamesbenson: how about a "snap download etcd"?
<kwmonroe> we should see the tls error.. just making sure.
<magicaltrout> "Hello Kubernetes support desk, Kevin speaking, how may I help you today???"
<kwmonroe> phew!  backup arrives.  magicaltrout, meet jamesbenson.  he's having trouble snap installing k8s.
<jamesbenson> yep, tls error
<magicaltrout> i have many k8s installations
<magicaltrout> too many
<kwmonroe> magicaltrout: any on openstack?
<magicaltrout> sorta
<magicaltrout> its manual though not openstack cloud provider
<jamesbenson> magicaltrout: I've got a ubuntu 16 LTS, VM sitting in openstack.  Security group is completely open.  No iptables rules...
<jamesbenson> 8 vCPU, 16GB RAM, 160 GB HD;  deployed using these commands:  http://paste.openstack.org/show/626536/
<jamesbenson> can't seem to install though, giving me tls errors.
<magicaltrout> okay, jamesbenson your cluster lives inside lxd on nodes on openstack?
<jamesbenson> yes
<jamesbenson> VM in openstack, lxd on that VM.
<magicaltrout> hmm i've not tried that before
<magicaltrout> if you snap install at vm level does it work?
<jamesbenson> yeah, did that to install conjure-up
<jamesbenson> and lxd
<magicaltrout> hmm
<kwmonroe> jamesbenson: it feels like something about your lxd-bridge is interfering with fetching data from the snap store, but i can't fathom a reason why it would affect snap and not apt or wget.
<R_P_S> I am having difficulties adding subnets to spaces to ensure instances are deployed in the correct VPC/AZ
<R_P_S> I get an error "cannot add subnet: no subnets defined" while running
<R_P_S> juju add-subnet 1.2.3.4/5 public subnet-12345678
<jamesbenson> kwmonroe, magicaltrout: do you have general guidelines/rules/instructions on how you set up lxd, zfs, and the network?
<jamesbenson> ipv6 is disabled...
<jamesbenson> but I wasn't sure about the bridge
<magicaltrout> i've only installed k8s with conjure up on lxd once
<magicaltrout> i just did whatever it told me
<jamesbenson> how do you typically install it?
<magicaltrout> i have 1 standard aws install and 3 openstack manual provider installs
<jamesbenson> I'm doing lxd to do some dev with multiple "nodes" in an all in one...
<jamesbenson> openstack with magnum?
<magicaltrout> nope
<jamesbenson> manual?
<magicaltrout> yeah just spin up some nodes
<magicaltrout> and deploy some stuff to them
<jamesbenson> using which method?
<magicaltrout> https://jujucharms.com/docs/2.2/clouds-manual
<magicaltrout> just like a small 3 node cluster for k8s dev
<stokachu> jamesbenson: how's your lxd bridge configured
<jamesbenson> stokachu: any command to detail it?
<stokachu> jamesbenson: lxc network show lxdbr0
<stokachu> jamesbenson: easiest to do `lxc network show lxdbr0|pastebinit`
<jamesbenson> http://paste.ubuntu.com/25976235/
<stokachu> do you have another bridge defined?
<jamesbenson> no idea about the pastebinit... awesome..
<stokachu> jamesbenson: whats `lxc network list|pastebinit` show
<jamesbenson> http://paste.ubuntu.com/25976248/
<jamesbenson> http://paste.ubuntu.com/25976252/
<stokachu> jamesbenson: yea youve got no bridge defined for lxd to use
<jamesbenson> okay...
<stokachu> jamesbenson: how'd you create lxdbr0 before?
<jamesbenson> sudo lxd init --auto
<kwmonroe> jamesbenson: fwiw, we have a generic lxd guide:  https://jujucharms.com/docs/stable/tut-lxd.  might be worth following that and bootstrap on a new node, then "juju deploy ubuntu", then "juju ssh ubuntu/0" and see if a "sudo snap install core" works.
<stokachu> kwmonroe: it's his bridge
<stokachu> it isn't configured
<jamesbenson> BRB
<jamesbenson> could you send me a few commands?
<stokachu> jamesbenson: `lxc profile show default|pastebinit`
<R_P_S> bdx / rick_h: any ideas why add-subnet is not working and complaining about subnets not being defined?
<kwmonroe> stokachu: if the bridge was borked, how did he get this far with kubeapi-load-balancer going active: https://snag.gy/nT0LPv.jpg
<kwmonroe> (and easyrsa)
<stokachu> well for one, his lxd bridge is inet addr:10.55.234.1
<stokachu> and those ip's are different
<jamesbenson> http://paste.ubuntu.com/25976291/
<stokachu> kwmonroe: also thats not output from conjure-up
<stokachu> so i dont know what he did there
<stokachu> jamesbenson: basically your lxd network bridge is acting up
<stokachu> jamesbenson: i recommend tearing down that setup
<stokachu> jamesbenson: juju kill-controller localhost-localhost
<jamesbenson> okay
<jamesbenson> and how do I bring it back up?
<stokachu> then delete that lxdbr0 bridge
<stokachu> one sec
<jamesbenson> ok
<jamesbenson> thanks
<stokachu> jamesbenson: then do `sudo brctl delbr lxdbr0`
<stokachu> jamesbenson: let me know when you've done that, and give output of `ip addr|pastebinit`
<R_P_S> so I just discovered that by creating a new model, the subnets aren't populated...
<R_P_S> the subnet info is available in the controller and default models, but I want to build a model for each environment
<R_P_S> how do I populate... or copy the subnet info from one model to the other?
<R_P_S> juju switch default && juju list-subnets -> full subnet output
<R_P_S> juju switch dev-k8s && juju list-subnets -> No subnets to display
<kwmonroe> R_P_S: does 'juju reload-spaces' while in the dev-k8s model do anything?
<kwmonroe> R_P_S: on aws, all new models look populated with subnets for me.
<R_P_S> reload-spaces appears to not do anything
<R_P_S> hold on
<R_P_S> would reload-spaces be dependent on a vpc-id being specified in the model-config?
<jamesbenson> stokachu: I think it's easier to reset the VM and start from scratch, no?
<stokachu> jamesbenson: probably
<jamesbenson> stokachu: So it's rebuilt.
<stokachu> ok so do this, `sudo apt-add-repository ppa:ubuntu-lxc/stable`
<stokachu> `sudo apt update && sudo apt install lxd lxd-client`
<stokachu> then `lxd init --auto` (no sudo)
<stokachu> then `lxc network create lxdbr0 ipv4.address=auto ipv4.nat=true ipv6.address=none ipv6.nat=false`
<stokachu> then `snap install conjure-up --classic`
<stokachu> and run conjure-up
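(stokachu's rebuild recipe from the lines above, collected into one runnable sequence for a fresh VM — note `lxd init` runs without sudo while the snap install needs it, as jamesbenson hits next:)

```shell
sudo apt-add-repository ppa:ubuntu-lxc/stable
sudo apt update && sudo apt install lxd lxd-client
lxd init --auto                    # no sudo
# create the bridge explicitly, ipv4 NAT on, ipv6 off
lxc network create lxdbr0 ipv4.address=auto ipv4.nat=true \
    ipv6.address=none ipv6.nat=false
sudo snap install conjure-up --classic
conjure-up
```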
<jamesbenson> snap needs sudi
<jamesbenson> sudo
<jamesbenson> running conjure-up
<R_P_S> ok, turns out you can't just add a VPC to a model after the fact (got errors), as the VPC parameters need to be specified during model creation with --config
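(R_P_S's finding as a command — the VPC has to be named when the model is created; the model name and vpc-id are placeholders:)

```shell
# VPC config cannot be added to an existing model; set it at creation time.
juju add-model dev-k8s --config vpc-id=vpc-0123abcd
# (the same key can be passed via --config at bootstrap for the controller model)
```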
<jamesbenson> stokachu:  oooo something different is happening... getting a good feeling ^_^
<stokachu> jamesbenson: \o/
<jamesbenson> what's the watch command again?
<jamesbenson> got it
<R_P_S> ok, now I'm straight up running into this bug :(  https://bugs.launchpad.net/juju/+bug/1704876
<mup> Bug #1704876: can't deploy to specific AWS subnets due to `juju add-subnet` fails <add-subnet> <aws> <conjure> <spaces> <subnet> <vpc> <juju:Triaged> <https://launchpad.net/bugs/1704876>
<R_P_S> how do you delete a space in a model?
<hml> spaces canât be deleted currently.  :-(
<R_P_S> ...
<R_P_S> are spaces completely broken? :(  can't delete, can't add subnets to a space... can't do anything with them?  yet they're core to defining where things will be deployed?
<hml> i never use spaces personally - there are other ways to define how things are deployed
<hml> depends on the cloud you've bootstrapped
<R_P_S> I'm following: https://insights.ubuntu.com/2017/02/08/automate-the-deployment-of-kubernetes-in-existing-aws-infrastructure/
<R_P_S> how would I rewrite this command then to not use spaces?
<R_P_S> juju deploy --constraints "instance-type=m3.medium spaces=private" cs:~containers/etcd-23
<hml> ahâ¦
<hml> you can just make a space with a different name to use - suboptimal i know -
<R_P_S> the only things I've done different so far are that I'm not using cloudformation (infrastructure preexisting) and creating a model
<R_P_S> But how do I use empty spaces?
<R_P_S> since I can't add-subnet to a space?
<jamesbenson> stokachu : etcd/0 Missing relation to certificate authority.
<jamesbenson> https://snag.gy/5ED6sa.jpg
<jamesbenson> ah, my nginx just became active....
<R_P_S> so apparently you need to define your subnets when calling add-space...
<R_P_S> do it once, don't screw it up... and if you ever accidentally re-assign a subnet to a different space, you're screwed?
<jamesbenson> kwmonroe stokachu : thoughts?  seems like having a similiar issue like before.  the bridge is managed now though
<stokachu> If there is no error in juju status then give it time
<jamesbenson> error hook failed: "install" ?
<jamesbenson> for all etcd
<jamesbenson> but keeps on restarting...
<stokachu> Is this a full VM you're running?
<jamesbenson> full?  yes, ubuntu server cloud image...
<kwmonroe> jamesbenson: juju debug-log --replay -i etcd/0
<kwmonroe> let's see if it's still a problem snap installing
<kwmonroe> jamesbenson: alternatively, juju ssh etcd/0 and try a "sudo snap install etcd"
<jamesbenson> http://paste.ubuntu.com/25976912/
<kwmonroe> instant regret on the --replay ;)
<kwmonroe> please hold while your pastebin loads into ram
<jamesbenson> :-(
<stokachu> Looks like a timeout downloading the snaps
<jamesbenson> yeah
<jamesbenson> should I try the `sudo snap download etcd`
<stokachu> sudo snap install etcd
<jamesbenson> tried inside of etcd and got the TLS handshake timeout error
<kwmonroe> R_P_S: while i'm waiting on jamesbenson to crash my browser, what's your end game here?  i'm really not well versed with spaces/subnets, but i'm curious what people are up to when they have strict space reqs. i know bdx does these space constraints all the time -- i never knew why.
<stokachu> Yea, not a conjure-up or juju issue
<stokachu> But an issue nonetheless
<jamesbenson> kwmonroe: sorry!
<stokachu> Are you behind proxies?
<jamesbenson> stokachu: no
<kwmonroe> no worries jamesbenson -- it's ammo for getting a new rig for the holidays ;)
<stokachu> jamesbenson: may want to post on discuss.snapcraft.io
<jamesbenson> ooo, nice :-) I'm running MBP with touchbar...
<R_P_S> kwmonroe: simple management of VPCs and subnets.  Without this, some of the anti-patterns that juju enables is mind boggling... things like 0.0.0.0/0 SSH ACLs on every instance
<kwmonroe> you happy with the touchbar jamesbenson?  i hear mixed reviews (kinda like battlefront 2), where by "mixed" i hear "i hate it" ;)
<kwmonroe> jamesbenson: i see we're back to the tls handshake timeout :/
<jamesbenson> lol... it's okay... I do need to reset it though, really random hangs and freezes as of late... circle of death for like 10-20 seconds then free's up.
<jamesbenson> does snapcraft have an IRC?
<kwmonroe> oof on the death spiral
<kwmonroe> jamesbenson: you'll get much better response from snapcraft.io, but there is a #snappy freenode channel
<R_P_S> kwmonroe: at that point, the only way to secure them is to control your subnets with things like public and private... these are very basic security concepts when building AWS infrastructure.
<jamesbenson> but I hack the hell out of it, so I probably fugged something somewhere...
<jamesbenson> back to point though
<kwmonroe> jamesbenson: i shouldn't say "much better", but i know those forums are monitored like crazy.  irc, i'm not sure.
<jamesbenson> okay
<kwmonroe> jamesbenson: https://forum.snapcraft.io/ is the place
<kwmonroe> dont go there ^^ from your etcd/0 unit because you'll probably get a tls handshake error.
<R_P_S> kwmonroe: the fact that conjure-up for a kubernetes cluster uses ec2-classic by default is, to be brutally honest, downright scary :(  ec2-classic was deprecated years ago, and should never be used again
<R_P_S> damn, I gotta run to meetings... I'll likely be in meetings until EOD...
<R_P_S> once again, thanks for all the help, I am making progress, but it is much slower than I'd hoped
<kwmonroe> R_P_S: thx for the insights on your use of subnets/spaces!
<kwmonroe> i'll catch up with you later to dive in more
<stokachu> Lol ec2-classic?
<kwmonroe> yeah - i dunno what ec2-classic is either, that was the diving part i alluded to ;)
<stokachu> R_P_S: feel free to elaborate on ec2-classic
<jamesbenson> kwmonroe: ha, thanks.  Not sure exactly what to post to them..  I suppose just snap is failing in openstack VM with that massive pastebin from earlier..
<kwmonroe> jamesbenson: like you said, back to the point, if you can't "sudo snap install etcd" from the deployed unit, juju won't be able to either.  so step 1 is to figure out why that's failing.  you're probably going to hear 1000 people ask "what are your proxy settings", don't be mad.  whatever's going on is probably a mix of openstack / lxd / snap.
<kwmonroe> jamesbenson: i would just create a new topic that says "snap install fails in a lxd container on an openstack VM"
<kwmonroe> jamesbenson: and the pastebin is good -- but since it has so much juju noise in it, i'd paste the failure that you see from "sudo snap install core" on the etcd unit.
<tvansteenburgh> stokachu: kwmonroe: ec2-classic == ec2 in the days before vpcs were introduced. if you have a sufficiently old aws account, the machines juju provisions will not be in a vpc, which can break things in unexpected ways. the way to get around this is to tell juju which vpc to use. you can do that using bootstrap or model config.
<jamesbenson> kwmonroe : Posted... lets see what happens.
<kwmonroe> i predict nothing but good things jamesbenson ;)
<R_P_S> ec2-classic is amazon ec2 before VPCs existed... IIRC, it's not even accessible on accounts that were created in the past few years
<R_P_S> ec2-classic was the equivalent of one giant public VPC that contained every amazon customer all in one giant internal subnet (split per region)
<R_P_S> ec2-classic didn't have as many features as "ec2-vpc" either.  examples include: the SG couldn't be changed after launch, and only one SG could be attached to an instance.
<R_P_S> ec2-classic SGs don't have egress ACLs.  They simply don't exist (non-configurable ALL ALL 0.0.0.0/0 egress)
<R_P_S> ec2-classic without VPCs, you didn't make subnets either... I can't remember everything though, it's been years since I've done any significant amount of work in ec2-classic.
<R_P_S> ec2-classic is inherently insecure compared to ec2-vpc
<tvansteenburgh> R_P_S: you can make conjure-up use a vpc, but you need to bootstrap the juju controller or create the model before using conjure-up, then tell conjure-up to use that controller or model
<tvansteenburgh> for example: juju bootstrap --config "vpc-id=vpc-xxxxxxxx"
<tvansteenburgh> or: juju add-model --config "vpc-id=vpc-xxxxxxxx"
<R_P_S> at this point, I have my ha controllers in the VPC i want... kwmonroe was curious about ec2-classic though
<R_P_S> and the fact that the AWS account I'm using appears to be old enough to still support ec2-classic meant that a basic/barebones conjure-up created a kubernetes cluster in ec2-classic instead of inside a VPC
#juju 2017-11-17
<kwmonroe> +1 R_P_S, i appreciate the lesson!
<siraj> REGISTER sirajsaadi itssiraj@gmail.com
<siraj> sorry
<sirajsadi> are juju charms stable?
<EdS> Morning Juju people :) I have just run into ssh WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! when using juju ssh. I'm unsure what's happened here, how do I fix this issue? If it makes any difference I've deployed into a MAAS self hosted setup.
<EdS> Following up on my issue earlier, I think that this remote host error is benign, but somehow the known hosts from a previous cluster have been retained. Any ideas where Juju keeps ssh known hosts?
<jamesbenson> kwmonroe: thanks for the vote of confidence :-)
<jamesbenson> stokachu : thanks for commenting on the post this morning to help push it along :-)
<bdx> kwmonroe: one of the fights I always end up fighting seems to be "juju automatically exposes ports to the wan that I dont want exposed on the wan"
<bdx> whether it be company policy, or red tape for a client
<bdx> just a bad practice none the less
<kwmonroe> roger that bdx.  but can't we all just get along on the internet?  don't hack me bruh, i'm just over here trying to make a living.
<bdx> aha
<bdx> yeah
<bdx> tell that to walmart
<kwmonroe> lol
<cory_fu> bdx: I landed a change to the Endpoint PR yesterday evening that might impact you.  Specifically, I removed the context collection, because it made it inconsistent to use interfaces with different implementations.  Not sure if you were using that or not
<kwmonroe> bdx: what ports get auto exposed?  i get 22.. maybe 17070 for cloud-hosted controllers.. others?  or is it the case that 'juju expose' opens things too much?
<bdx> kwmonroe: http://paste.ubuntu.com/25981632/
<bdx> notice whats on the private (nat) subnet and whats on the igw subnet
<bdx> cory_fu: that definitely will, checking omp
<kwmonroe> yeah, ack bdx.  btw, i see you've got some sense in that deployment.  no artful nonsense in there ;)
<bdx> right, https://github.com/jamesbeedy/redis-snap
<bdx> like the redis 4.0.2 too huh
<cory_fu> bdx: It's a little bit more verbose, but you can use endpoint_from_name('endpoint_name') as a drop-in replacement.  Also, we're formalizing recommending everyone import things directly from the top level (`from charms.reactive import endpoint_from_name`) rather than depending on the internal organization which we might need to change as we refactor things.
<bdx> I see, yeah silly me, I missed that srry :P
<stokachu> jamesbenson: np
<bdx> cory_fu: that was a needed change
<cory_fu> :)
<bdx> cory_fu: really nice work there
<cory_fu> Thanks
<cory_fu> I think it's about ready to land as a beta feature, then we'll need to get a new release cut
<bdx> thats awesome
<bdx> please do
<R_P_S> hey, so I have been continuing to follow these instructions, but the kubernetes-master and kubernetes-worker instances are ending up blocked
<R_P_S> https://insights.ubuntu.com/2017/02/08/automate-the-deployment-of-kubernetes-in-existing-aws-infrastructure/
<R_P_S> kubernetes-master/0*  blocked   executing   6        34.201.56.143             Missing kubernetes resource
<R_P_S> I found this in a controller log: WARNING juju.resource.resourceadapters charmstore.go:123 attempt 2/3 to download resource "kubernetes" from charm store [channel (stable), charm (cs:~containers/kubernetes-master-11), resource revision (-1)] failed with error (will retry): revision must be a non-negative integer
<R_P_S> but I don't understand why the resource revision is negative in the first place
<bdx> "but I don't understand why the resource revision is negative in the first place" - there is a fix in place for this I think
<bdx> R_P_S: thats actually a bug I think
<R_P_S> where exactly is the bug?  Do I need to find a different version for something?
<R_P_S> this affects both the masters and the workers, so I'm guessing this is something underlying?
<bdx> R_P_S: trying to find it omp
<bdx> R_P_S: https://bugs.launchpad.net/juju/+bug/1723970
<mup> Bug #1723970: unable to attach zero-length resource <papercut> <usability> <juju:Fix Committed by thumper> <https://launchpad.net/bugs/1723970>
<bdx> looks like it is in the rc1
<bdx> R_P_S: `sudo snap install juju --edge`
<bdx> or `sudo snap refresh juju --edge` if you've already got it, which I assume you do
<R_P_S> a bug in juju itself?
<bdx> R_P_S: yea, the fix has already landed though, see ^
<R_P_S> I guess I need to wipe the apps and redeploy
<R_P_S> I find it strange that this didn't affect my conjure-up initial test... I wonder if I installed juju via apt for that one
<R_P_S> and based on the version, when snap installs version 2.3.x by default, it'll include this fix
<bdx> R_P_S: not any 2.3, only the one in edge
<bdx> R_P_S: are you using JAAS?
<bdx> or you have your own controller(s) deployed?
<R_P_S> I created my own controller
<bdx> if you are using jaas it won't matter
<bdx> ok cool
<bdx> yeah, from my understanding, you will need to redeploy that controller with your newly installed 2.3-rc1 snap
<R_P_S> oh damn, the entire controller needs to go?
<bdx> well, I think you can upgrade it, but I'm not exactly sure how that works from beta -> rc1
<R_P_S> ok, well, I'll get started on that... in the meantime, I have a couple concept questions
<R_P_S> I'm confused about exposing the workers
<R_P_S> this tutorial doesn't include a load balancer, but I noticed the conjure-up does.  If there's a load balancer, wouldn't it just need to be exposed instead of the workers?
<R_P_S> I would like to be able to create the workers in a private subnet, so they'd have no external IPs
<R_P_S> exposing the workers just seems strange in that there's no way to know which worker a pod is on, so having an external IP seems counterintuitive
<bdx> R_P_S: yeah
<bdx> R_P_S: you totally should do that
<bdx> you have to setup spaces with your subnets and all
<R_P_S> srry, can't tell which way your statement is going... do or do not expose workers?
<R_P_S> yes, I've setup workers, and yesterday worked around the bugs surrounding add-subnets
<bdx> R_P_S: it just depends how you do your ingress
<bdx> but yeah, if you are using a proxy infront of the workers for application level ingress
<R_P_S> now if I were to expose my workers, does that mean I could technically connect to a service running inside kubernetes on the workers via any one of their IPs and it would just magically route to an appropriate worker if that worker doesn't contain a specific pod?
<bdx> then you can totally put subnets behind a nat gateway (by making a new routing table that routes to the nat gateway instead of the igw and using this routing table with your subnets you want to be private)
<bdx> then you can (as I do) hide everything in the nat subnets, and just proxy to them from the things that are deployed to igw subnets where the instance gets a wan ip
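The routing-table setup bdx describes can be sketched with the AWS CLI — all IDs here are hypothetical placeholders, and the NAT gateway is assumed to already exist:

```shell
# Create a routing table for the private subnets (returns an rtb- ID).
aws ec2 create-route-table --vpc-id vpc-0abc1234
# Default-route it through the NAT gateway instead of the igw.
aws ec2 create-route --route-table-id rtb-0def5678 \
    --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0aaa1111
# Associate each subnet you want to be private.
aws ec2 associate-route-table --route-table-id rtb-0def5678 \
    --subnet-id subnet-0bbb2222
```

Instances in the associated subnets get no WAN-reachable IP; only the proxy/ingress units stay in igw subnets, as described above.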
<R_P_S> yeah, I built the private subnets in AWS a couple days ago.  we just had public prior to that
<R_P_S> so does this mean that the masters could also technically be in the private subnet?
<R_P_S> I built the private subnets the instant I saw the 0.0.0.0/0 SSH ACLs.  the less instances with publicly accessible IPs, the better
<bdx> right right
<bdx> well it all depends on what the ingress story looks like there
<bdx> if you are operating behind a vpn, or if you want this  thing to be publicly accessible, etc etc
<bdx> if nothing needs to talk to the masters from the wan then ....
<R_P_S> our security model is fairly basic.  admin access via a bastion host (with port forwarding for gui stuff)
<bdx> R_P_S: what does "admin access" mean?
<bdx> like a host inside the vpn?
<bdx> admins get to login to it and use juju from there etc etc?
<R_P_S> yeah
<bdx> I see
<R_P_S> does conjure-up support spaces?  When I first used conjure-up, it didn't even ask about VPCs and actually built it in ec2-classic
<stokachu> R_P_S: not yet
<R_P_S> before I defined spaces, it was randomly creating instances in both the public and private subnets
<bdx> stokachu: can't he just clone the spells repo, and put spaces constraints in the k8s bundle?
<stokachu> bdx: yea
<bdx> R_P_S: ^^, then use `conjure-up --spells-dir spells/` (I think)
<stokachu> `conjure-up --spells-dir spells/ --nosync`
<R_P_S> hrmm... I think I'd like to try and avoid that path for now.  I don't even know where the spells repo is...
<R_P_S> I'll retry without conjure up using the edge juju
<bdx> R_P_S: https://github.com/conjure-up/spells
<bdx> git clone https://github.com/conjure-up/spells
<R_P_S> I do feel like that's a rabbit hole, and at this point I am under the gun to complete an evaluation here at work
<R_P_S> to quote my manager "everyone uses kubernetes, it can't be that hard, just find the right script to get it up and running" :(
<tvansteenburgh> HAHAHAHA
<bdx> lol
<bdx> right
<tvansteenburgh> clearly he must be idling in different slack channels than i am
<R_P_S> i know :(
<R_P_S> to be honest, O
<R_P_S> I'm considering this an RGE, and trying to learn as much myself about kubernetes while still delivering
<bdx> somehow the whole KOPS thing spread like wildfire
<R_P_S> that was likely my next option if juju completely failed
<bdx> glad to see conjure-up is the first item in the list of tools to install production grade k8s on the site though
<bdx> https://kubernetes.io/docs/getting-started-guides/aws/
<R_P_S> but despite all the issues with juju, I still seem to be able to muddle my way through, and the conjure-up did technically work, but had a few anti-patterns that just made me weep
<bdx> stokachu: am I blind, or are the bundles not in the spells repo anymore?
<stokachu> bdx: only if you want custom bundles, otherwise it pulls from charmstore
<R_P_S> which is why I moved away from conjure-up to that tutorial that manually creates the controllers and apps for a kubernetes cluster
<bdx> stokachu: so a user can manually override the charmstore bundle by using --nosync and supplying their own bundle.yaml in the spell root?
<R_P_S> anyways, it's lunchtime... I'll be back at this after lunch
<stokachu> bdx: yep exactly
<bdx> k, cool, thx
<bdx> stokachu: can I choose an existing model to deploy to?
<bdx> stokachu: I got as far as being able to choose my existing controller, but it went straight to review and configure applications
<bdx> so, the model that I have predefined with the spaces
<bdx> is moot?
<bdx> in terms of what conjure-up can do with it
<bdx> R_P_S: when you get back, https://gist.github.com/jamesbeedy/608a2c819ed852b89de203d7f95cd22e
<bdx> R_P_S: put ^ in a file called "k8s-bundle-with-spaces.yaml"
<bdx> then you can
<bdx> `juju deploy k8s-bundle-with-spaces.yaml`
<bdx> R_P_S: assuming you have 2 spaces defined: nat, igw
<bdx> here's what my spaces look like http://paste.ubuntu.com/25983157/
<bdx> 3 subnets in each space
<bdx> with each subnet having an affinity to an az
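The spaces layout bdx pastes (3 subnets per space, one per AZ) can be registered with juju roughly like this — the space names match the bundle above, but the CIDRs are hypothetical and must match real subnets in your VPC:

```shell
# Illustrative CIDRs; use the subnets juju discovered from your VPC.
juju add-space igw 172.31.0.0/24 172.31.1.0/24 172.31.2.0/24
juju add-space nat 172.31.10.0/24 172.31.11.0/24 172.31.12.0/24
juju spaces   # verify the mapping before deploying the bundle
```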
<bdx> possibly creating spaces could be a pre-deploy step
<bdx> so that you can use the model that conjure-up creates
<bdx> and adding spaces could just be a step
<bdx> R_P_S: it will only take about ~5 minutes after you run `juju deploy k8s-bundle-with-spaces.yaml`
<R_P_S> that looks pretty straight forward once I have the model and spaces setup
<bdx> R_P_S: then you will have a beautiful thing (takes about 10mins for it all to settle actually)
<R_P_S> although for the most part, that just looks like a single file that contains pretty much everything I'd built so far with individual commands
<R_P_S> looks like I'd just need to add my instance type constraints
<R_P_S> curious about the master... don't we want that to be redundant?
<stokachu> bdx: deploy to existing models isn't supported yet either
<stokachu> it's on our roadmap though
<bdx> stokachu: cool
<bdx> R_P_S: this is what it looks like with the spaces constraints when it all settles http://paste.ubuntu.com/25983232/
<bdx> R_P_S: yes, add your instance-type constraints where desired
<bdx> and any other machine constraints
<bdx> R_P_S: just deploy the bundle I shared with you first
<R_P_S> yeah, model helps definitely, since I can also put AWS tagging in the models.  I have a jenkins job that emails devops daily if it finds any instances not tagged or incorrectly tagged
<bdx> totally
<R_P_S> sorry, deploy the bundle shared first?
<bdx> yea, then add your mods 1 at a time
<bdx> you will have much better mileage when just getting introduced to all this if you just get the base stack up (which it sounds like you have), then make these mods 1 at a time
<bdx> instead of seeing the end goal (all the mods and constraints and customizations) and trying to get it all in the first go around
<R_P_S> yeah, I screwed around with the default conjure-up for a while... but I needed to fix some of those base issues with security
<bdx> right
<R_P_S> I hope to have this up and running soon, then I can go back to dev and have them break everything inside kubernetes :P
<bdx> :) good luck
<R_P_S> what are gui-x and gui-y... I've never seen those before
<R_P_S> oh, those are for those map things
<R_P_S> are those required values?
<R_P_S> or can I just strip those out?
<R_P_S> oh, that yaml file is called a bundle... I didn't know that :P
<R_P_S> is there a reason the subnet constraints are specified twice?
<R_P_S> each app had constraints, but then each machine at the bottom also had the constraints... is the redundancy necessary?
<bdx> yeah
<bdx> I think
<bdx> idk test it out :)
<bdx> yeah, you can strip the gui-{x,y} if you don't care about the presentation in the gui
<bdx> the redundancy is necessary, because you are deploying machines to the desired constraints
<bdx> so like put this machine in nat space
<bdx> then at the application level
<bdx> you are saying "I only go to places where these things are true"
<R_P_S> ah, so the machines are for the first instance, but then the applications part is for scaling the app later, type of thing?
<bdx> exactly
<bdx> (I think)
<bdx> hehe
<bdx> thats my disclaimer to everything
<R_P_S> seems a bit odd cause the single juju deploy command creates the instance and the app with a single set of constraints
<bdx> right
<R_P_S> juju deploy --constraints "instance-type=t2.medium root-disk=32G spaces=pre-pub" cs:~containers/kubernetes-master-65
<bdx> totally, but in the bundle, you have the "to:" stanza
<bdx> which to me indicates you want to deploy to the machine specified there
<R_P_S> so that logically separates creation of instances from creation and deployment of apps?
<R_P_S> fortunately the lines are identical... vi yy p p p to the rescue :D
<bdx> R_P_S: you are right
<bdx> it is redundant
<bdx> this gives you the exact same thing https://gist.github.com/jamesbeedy/608a2c819ed852b89de203d7f95cd22e
<R_P_S> oh, the entire machine section is gone!
<bdx> yea, and the "to:" directives
<bdx> stanzas
<bdx> whatever
<R_P_S> right, just checked the diff, I didn't spot that myself
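The slimmed-down bundle shape being discussed looks roughly like this — a hypothetical fragment with constraints at the application level only, no machines section or "to:" directives (charm revisions, space name, and unit counts are illustrative):

```yaml
# Hypothetical bundle fragment; constraints flow down to the machines
# juju provisions for each application, so no machines: section needed.
applications:
  kubernetes-master:
    charm: cs:~containers/kubernetes-master-65
    num_units: 1
    constraints: instance-type=t2.medium root-disk=32G spaces=nat
  kubernetes-worker:
    charm: cs:~containers/kubernetes-worker
    num_units: 2
    constraints: spaces=nat
```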
<R_P_S> I can't remember if I asked this before... but can I manually modify the security groups after deployment?  or will juju go back at some point in the future and overwrite my changes
<bdx> juju will keep persistence on them, people have workarounds for making mods that persist, but the ports that are open on the instances are open for good reason
<bdx> R_P_S: either  way, for things like that, you have to write the mailing list
<R_P_S> at least the private subnet will protect most of the machines from those 0.0.0.0/0 ACLs
<bdx> yeah, but really, how is juju supposed to know where you are ssh'ing from
<R_P_S> hah, so I had a typo in there with one of the spaces... one of the instances is hung at pending
<R_P_S> well, in terms of SSH, it feels like it could fit into the model.  Using internal IPs with SSH ACLs based on the subnet CIDRs
<R_P_S> like a model flag for internal.  Then that would provide basic support for a bastion host VPC model
<R_P_S> hey, is there any way to recover this bundle with the space typo?  or do I just wipe and rebuild
<R_P_S> machine 4 (the LB) is never going to spin up...
<bdx> R_P_S: there are primitives in juju that exist solely to remediate the idea you have of this "bastion host"
<bdx> R_P_S: I urge you to ditch that whole gang bang
<bdx> immediately
<R_P_S> not everything is going to be created by juju though...  this is just for apps that can be containerized...   the bastion host is meant to help secure the rest of our resources including AWS RDS and non-containerized apps
<R_P_S> I'm curious to hear more about why you recommend ditching bastion hosts, and what kind of setup you'd do in place?
<R_P_S> heh, I just checked wikipedia... and while I use the terms interchangeably, it appears that a jump server is a better definition of what I'm trying to build
<R_P_S> now since we don't have a VPN between our office and AWS, it technically still is a bastion host, but I apologize for any confusion between jump vs bastion host
<R_P_S> hey bdx, I'd like to thank you for all your help.  I'm not sure if I'll be quite done with the with the juju layer, but hopefully everything forward is inside kubernetes now
<bdx> R_P_S: any time
<bdx> R_P_S: about the bastion host
<bdx> juju lets you operate it from the juju client
<bdx> and juju treats "users" as first class citizens so as to give you a supported user management system
<bdx> controlling acls and user access via the primitives for user management in juju itself
<bdx> will give you much better mileage in the long run than using the "bastion host" thing
<bdx> when used correctly, anywhere a juju client is run is essentially a "bastion host" or "jump host"
<bdx> because the security mechanisms in place in juju itself give you identity management (which is basically what you are doing with your bastion host)
<bdx> you should take advantage of it, instead of doing the thing you do
<bdx> just an idea
<bdx> do whatever you please
<R_P_S> I'd love to be able to ditch the jump host security layer, but experience has taught me to always reduce attack vectors.  I'd love to have confidence in the authorization mechanisms of every service and tool, but the reality is different.
<bdx> right, so juju takes advantage of things like key based access
<bdx> it doesn't matter if you are key-based access from your laptop or from the bastion host
<bdx> its the same thing
<bdx> the bastion host is just an unneeded hop .... possibly I just don't see the extra layer of security you get from it
<R_P_S> except we have ACLs on our jump host to allow only from our office gateways.  our jump host is not accessible publicly
<bdx> oooo
<bdx> ok then
<bdx> :)
<R_P_S> well, it technically is cause it's got a public IP and we don't have VPN between offices and AWS
<R_P_S> but you'll find 0 ports open if you try hitting that IP from the wild
<bdx> nice
<bdx> https://jujucharms.com/openvpn/
<bdx> is what I use
<bdx> I just deploy ^ to whatever vpc I'm working in
<R_P_S> haha, the challenge is not the AWS side of the VPN... the challenge would be the office side
<bdx> I see
<R_P_S> the offices have... uh... ghetto hardware in places
<R_P_S> and I know there are subnet clashes between some VPCs and our internal office address space
<R_P_S> I inherited some... interesting AWS configs when I joined this company.  Including a production VPC in a /24 with only a single AZ :P
<bdx> lol
<bdx> oh man
<bdx> #beenthere
<R_P_S> I don't think we had a single SG that referenced another SG prior to my arrival.  everything had an EIP and nothing but IP whitelists everywhere
<R_P_S> After a single point of failure got cryptolocked, management's ears tuned in a bit more on the security front.
<bdx> "After a single point of failure got cryptolocked, " - this is exactly why not to use the "bastion host"
<R_P_S> jump hosts don't need to be SPOF... they can easily be scaled horizontally with minimal state :)
<R_P_S> this SPOF was particularly bad.  it was hosted in godaddy, held a firmware-coded IP in customer-owned hardware, and we had practically no backups of the thing
<R_P_S> I really need to write an ansible role for our jump hosts though.  Most of it is just making sure the tools are installed and ready to go.  Then it's just individual configuration of things like .bashrc etc
#juju 2017-11-18
<sirajsaadi> unable to remove application from local cloud whose install hook failed
#juju 2019-11-11
<nammn_de> stickupkid_: https://github.com/juju/juju/pull/10894 a ci job fix. How do I test it on jenkins again?
#juju 2019-11-12
<rick_h> timClicks:  ping, I'm trying to load the omnivector redis charm in the store but it's failing for me. Can you confirm?
<rick_h> https://jaas.ai/u/omnivector/redis just hangs but I can load https://jaas.ai/u/omnivector/redis-singleton/bundle/1 just fine
<timClicks> rick_h: looking
<timClicks> rick_h: not working for me either
<rick_h> :(
<rick_h> timClicks:  might be worth a check if your redis demo works atm. I was going to use it as a demo for a lightning talk but moving forward with something else atm
<timClicks> i wonder if he meant to upgrade and forgot to push to stable
<timClicks> am receiving a 502 Bad Gateway, which implies that there is an error with the charm store code
<rick_h> timClicks:  yea, not sure what's up. Seems odd others work
<rick_h> bdx:  did you break something :P
<manadart> achilleasa: https://github.com/juju/juju/pull/10899
<achilleasa> manadart: should I bump the machiner API for my proposed change? If not I suspect it could break if you upgraded the models but not the controller
<manadart> achilleasa: I think we should, yes.
<nammn_de> manadart stickupkid: in for a second review on this integration test fix for "vsphere" https://github.com/juju/juju/pull/10897/files ?
<manadart> nammn_de: Will look in a sec.
<nammn_de> manadart  i already got some feedback from stickupkid and rick_h  which I added.  I incorporated ricks additional feedback from the comment.  Could you give me a sanity check? Especially because I renamed the featureflag to be branch instead of generations from the feedback
<manadart> nammn_de: Just approved it.
<nammn_de> manadart: thanks! Sry i meant this one as well https://github.com/juju/juju/pull/10867 little bigger, but stickupkid and rick_h already gave a review there and it's almost +1. Might need a sanity check for the feature flag change
<nammn_de> *little*
<nammn_de> *its big nvm me
<achilleasa> manadart: so... that machinewatcher thingie in the instancepoller? this is why it does not react to document changes: https://github.com/juju/juju/blob/develop/state/watcher.go#L501
<achilleasa> manadart: quick HO?
<manadart> achilleasa: Just firing down lunch. Gimme a few.
<nammn_de> manadart: your comment here: https://github.com/juju/juju/pull/10867/files/ff32067866004a0a04d5dba8e281dea7857f33ea#r345173545 with renaming you mean reuse the existing one? Because we already have them
<manadart> nammn_de: You could actually use a single type with fields common to all of those transports. At the client we transform them into specific types for display anyway.
<manadart> achilleasa: I'm in Daily.
<nammn_de> manadart: reuse the results? I can do that, I was "afraid" to add new fields as omitempty
<nammn_de> to existing ones
<nammn_de> manadart: added your changes. I replaced my own "commit" types with the "generation" types and just added fields, if that's what you meant https://github.com/juju/juju/pull/10867
<stickupkid> who knows about description and optional fields?
<stickupkid> achilleasa, whilst i've got your attention :D https://github.com/juju/description/pull/65
<achilleasa> stickupkid: looking
<stickupkid> achilleasa, ty
<babbageclunk> hmm, having vpn issues
<babbageclunk> anastasiamac, hml: vpn working for you?
<hml> babbageclunk: attempting now
<hml> babbageclunk: yes, I'm using the US-based
<babbageclunk> hml: thanks - seems all better now
<anastasiamac> babbageclunk: m usually not starting a day with vpn ... when i don't have enough caffeine in me, i do not have patience for it... but it looks like it's sorted for u ;)
<babbageclunk> anastasiamac: I basically always connect to it - otherwise if I need to later I have to drop on irc and reconnect
<babbageclunk> + to get to vsphere I need to be on the vpn
<anastasiamac> babbageclunk: u've found a silver lining for vsphere - so beautiful :D
<babbageclunk> uh, it's like a grey lining
<anastasiamac> grey with sprinkles or sequins will still make it shiny
#juju 2019-11-13
<nammn_de> stickupkid: mind having a quick look? https://github.com/juju/charm/pull/298 This now should fix the unit tests on most occasions
<nammn_de> stickupkid?
<nammn_de> *: I added some tests as you mentioned
<nammn_de> could you take a look again?
<stickupkid> nammn_de, done :D
<nammn_de> stickupkid: ha, thanks! I was struggling to find a good way to test the timeout
<stickupkid> yeah, i thought you might, but it was worth seeing if it was possible
<nammn_de> In the end I let the context be a parameter
<nammn_de> because i was struggling to either "mock" the context and/or command
<stickupkid> anyone know how to get the controller name?
<stickupkid> from state?
<stickupkid> achilleasa, ?
<achilleasa> stickupkid: you mean the alias used by the client?
<stickupkid> ho quickly? achilleasa
<achilleasa> omw
<rick_h> doh he's gone
<rick_h> achilleasa:  heads up that a controller name tracked as config is on the roadmap for 20.04
#juju 2019-11-14
<nammn_de> achilleasa manadart is there a way to find out the fullname of the controllermodel?
<manadart> nammn_de: What do you mean by full name? It is always "controller".
<nammn_de> manadart: ohh, i meant with the fullname the owner name as well.
<nammn_de> right now `upgrade-model` assumes the current account name for a upgrade. So does upgrade-controller. I want to find the controller owner to inject a correct model name
<manadart> nammn_de: "juju show-model controller" has "name", which is the full name.
<nammn_de> manadart: thanks!
<achilleasa> manadart: is it possible to (temporarily) override the state clock inside a test?
<achilleasa> I am trying to write a test using WaitAdvance but timers are being created by other parts of the test suite
<manadart> achilleasa: Which suite?
<achilleasa> MachineSuite; I am testing the changes to the model machine watcher that I am working on
<achilleasa> (see comments in https://github.com/juju/juju/pull/10902)
<achilleasa> I could make the watcher c-tor accept a clock (or make it take functional options) but that is ugly/overkill (for options)
<manadart> Would it work to change StateSuite.SetUpTest to only instantiate the suite's Clock if nil, then you could derive from MachineSuite, and in that SetUpTest set the clock before calling the base SetUpTest?
<achilleasa> for my use case, the watcher pulls the clock from st.clock(); so if the clock gets injected it will be used by other things (e.g. txnwatcher) which means that I get more waiters on the clock (2 or 3 more)
<achilleasa> it wouldn't be a problem if the test was running in state instead of state_test...
<manadart> achilleasa: Mock the clock?
<manadart> achilleasa: I think it would be much nicer to do this everywhere and move away from the test clock.
<achilleasa> but I am already using the test clock?
<achilleasa> ah... as in gomock?
<manadart> achilleasa: Yep.
<achilleasa> not sure if it's worth the effort here.... maybe just add a method to give me the watcher with a clock.Clock in export_test?
<achilleasa> (yuk!)
<achilleasa> manadart: nah... I will just add a "SetClock" helper and try to patch it by checking the interface...
<stickupkid> manadart, got a sec, quick question
<hml> manadart: ready for review if you have a few: https://github.com/juju/juju/pull/10903
<manadart> hml: Yep, taking a ride home this minute, but I will review once home.
<hml> manadart:  rgr
<stickupkid> anyone up for a CR on a backport? https://github.com/juju/juju/pull/10906
<achilleasa> stickupkid: looking
<hml> anastasiamac: review pls https://github.com/juju/juju/pull/10909
 * anastasiamac looking
#juju 2019-11-15
<hpidcock> forward port from 2.7 https://github.com/juju/juju/pull/10910
<pmatulis> on 2.6.10 when i do 'juju show-cloud maas' i believe this should apply to the cloud info on the controller correct?
<pmatulis> yet the first line of the output is:
<pmatulis> 'defined: local'
<anastasiamac> on 2.6.10, shoe-cloud command only showed local clouds
<anastasiamac> the ability to see controller clouds was added in 2.7
<anastasiamac> show*
<pmatulis> anastasiamac, k, the --local option is confusing me
<pmatulis> in the help i mean
<anastasiamac> pmatulis: that's why on 2.7 we tried to deprecate it
<pmatulis> maybe update the help then. it's quite confusing
<pmatulis> no biggie though
<anastasiamac> m not sure whether we r planning another 2.6 release... it may only b critical issues that will b addressed
<nammn_de> manadart: as we were talking about this yesterday https://github.com/juju/juju/pull/10907 What do you think about adding the information to the modelcache? IMO this should be a safe operation, though I could just use the string "admin/controller" and remove the rest of the code.
<manadart> nammn_de: Do you mean adding "IsController" to the cached Model type?
<nammn_de> manadart: yes, because initially I was not sure whether we can always be sure that "admin/controller" will be the controller model.
<nammn_de> So I thought there is nothing to lose to add that information
<nammn_de> but maybe it will never change and the change itself is not needed. Not even in the future 🤔
<manadart> nammn_de: For your purposes, I think using admin/controller is the shortest path, but I have no problem with syncing the rest of the multi-watcher model fields into the cache type - we'll probably need that eventually.
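(A small Go sketch of the two options being weighed: the "shortest path" name check against "admin/controller", versus syncing an IsController field from the multi-watcher delta into the cache. The types and field names here are hypothetical stand-ins, not juju's actual cache types.)

```go
package main

import "fmt"

// isControllerModel is the shortest-path check: it assumes the
// controller model is always qualified as "admin/controller".
func isControllerModel(qualifiedName string) bool {
	return qualifiedName == "admin/controller"
}

// ModelChange stands in for the multi-watcher delta feeding the cache.
type ModelChange struct {
	Name         string
	IsController bool
}

// CachedModel mirrors the fields the cache would keep in memory.
type CachedModel struct {
	Name         string
	IsController bool
}

// update syncs a multi-watcher change into the cache entry, the way
// an IsController field would be carried across.
func (m *CachedModel) update(ch ModelChange) {
	m.Name = ch.Name
	m.IsController = ch.IsController
}

func main() {
	fmt.Println(isControllerModel("admin/controller")) // true
	fmt.Println(isControllerModel("admin/default"))    // false

	var m CachedModel
	m.update(ModelChange{Name: "controller", IsController: true})
	fmt.Println(m.IsController) // true
}
```

The name check adds no new state but hard-codes an assumption; syncing the field costs a little plumbing but keeps the cache authoritative if the naming convention ever changes.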
<nammn_de> manadart: thanks! And for my pr above? The current state works, which already adds the isController information. But I could revert the change and "just" add the admin/controller check
<nammn_de> Happy for a short pr review. I will fix the tests depending on the review output
<manadart> nammn_de: I say revert the local store changes. If we can get away without adding complexity there...
<nammn_de> manadart: will do :)
<nammn_de> manadart: change is in https://github.com/juju/juju/pull/10907 small one
<nammn_de> manadart thanks! I just amended and added the change you mentioned. "luckily" the modelupgrade command, if no controller name is given, takes the controller name from the cache. That's why I did not realise it during testing.
<nammn_de> manadart: is there an easy way to add a user to the controller in the tests?
<nammn_de> and then add them to the accountdetails?
<manadart> nammn_de: Do you mean unit tests for 10907?
<nammn_de> manadart: yes, I tried to create a user, add it to the model, and change the current user to the one I'm testing with. Everything works except that it says my current user does not have enough permissions. That's what I did:
<nammn_de> https://github.com/juju/juju/pull/10907/files#diff-eb405c3d636e6c7c21e2d460f92ae522R193
<nammn_de> tbh, I just want to run one of those 5 tests to see whether it runs through
<nammn_de> otherwise it fails because of missing permissions. That's not bad in itself, since it is independent of my test, but ideally everything else would be silently mocked out so the test can run through. I don't need the full suite for my change
<manadart> nammn_de: I think you need to change the suite's authoriser, which looks like it only authorises the admin user by default.
<nammn_de> manadart: tried, did not work :/
<nammn_de> manadart I finally got it to run :D.  manadart rick_h one of you want to give a quick review? https://github.com/juju/juju/pull/10907/files
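(A rough Go sketch of manadart's suggestion: swapping the suite's authoriser so a facade permission check passes for a non-admin test user. Real juju API-server tests use a fake authoriser along these lines, but `Authorizer` and `checkAccess` here are hypothetical simplifications for illustration.)

```go
package main

import (
	"errors"
	"fmt"
)

// Authorizer is a minimal stand-in for the facade authoriser a test
// suite wires in; it records which user is considered logged in.
type Authorizer struct {
	Tag string // the authenticated user tag
}

// checkAccess is a hypothetical permission check: the facade only
// allows the user the authoriser says is authenticated.
func checkAccess(auth Authorizer, user string) error {
	if auth.Tag != user {
		return errors.New("permission denied")
	}
	return nil
}

func main() {
	// The default suite authoriser only auths the admin user...
	auth := Authorizer{Tag: "user-admin"}
	fmt.Println(checkAccess(auth, "user-testuser")) // permission denied

	// ...so the test swaps in the user it acts as.
	auth.Tag = "user-testuser"
	fmt.Println(checkAccess(auth, "user-testuser")) // <nil>
}
```

The point is that the permission failure nammn_de saw comes from the authoriser's identity, not the model setup, so the fix lives in the suite's wiring rather than the test body.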
<rick_h> manadart: nammn_de we're making sure not to hard-code "controller" as the expected name anywhere, right?
<jam> axino: https://jenkins.juju.canonical.com/job/BuildJuju-amd64/2700/
