#juju 2011-10-17
<backburner> enmand where ?
<enmand_> Where what?
<enmand_> Where did I read stuff? I don't remember, really; mostly blogs
<shang> hazmat: ping, I tried using the charm update; charm getall, to obtain all the charms, but the wordpress/mysql still not working
<hazmat> shang, as i recall, the problem was the wordpress formula hadn't been updated to use open-port/close-port... so it wasn't accessible via the internet
<hazmat> shang, as of bzr rev 50, charm rev 31 of the formula that should be fixed.. ie per this change https://bazaar.launchpad.net/~charmers/charm/oneiric/wordpress/trunk/revision/50
<hazmat> shang you can verify the charm formula in juju status
<_mup_> Bug #876488 was filed: juju subcommand 'debug-report' to collect logs from relevant locations <juju:New> < https://launchpad.net/bugs/876488 >
<robbiew> hazmat: ping
<hazmat> robbiew, pong
<mjfork> is there a guide that talks about openstack environments.yaml config?
<robbiew> mjfork: adam_g is probably the best person to ask
<mjfork> ok, can wait for him to rebutn
<mjfork> returen
<mjfork> sigh...return
<jamespage> mjfork: I have one working with openstack - it's much the same as ec2 (same provider) but you need to specify ami, ec2-uri and s3-uri manually
<SpamapS> mjfork: using against openstack, or deploying openstack with orchestra?
<SpamapS> if the former, then jamespage's advice is correct. If the latter, then there's a guide in the works, I think RoAkSoAx is working on
<SpamapS> RoAkSoAx: ^^ you got some docs on orchestra+juju+openstack deployment yet?
<m_3> robbiew: ping
<mjfork> i am doing it against openstack
<robbiew> m_3: we got a call, right?
<mjfork> what is the control-bucket?
<mjfork> random md5?
<m_3> mjfork: I usually let juju generate it for me
<m_3> think it's just gotta be unique per account
<mjfork> ok
<fwereade> mjfork, m_3: just a note: afaik s3 bucket names have to be globally unique
<SpamapS> globally unique in S3 itself
<fwereade> SpamapS, yep
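m_3's advice above (let juju generate the control-bucket, it just has to be unique) can be sketched by hand too. This is only an illustration of picking a collision-proof name; the "juju-" prefix is a convention here, not anything juju requires:

```shell
# Generate a candidate control-bucket name. S3 bucket names must be
# globally unique across all of S3, so append 32 random hex characters.
bucket="juju-$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')"
echo "$bucket"
```

The resulting name goes in the environment's control-bucket setting in environments.yaml.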
<fwereade> need to be away for now; back later on
<SpamapS> hrm... local provider is having issues running on an ec2 instance
<m_3> SpamapS: bummer... I was totally wanting to play with that sometime
<SpamapS> I have an m2.xlarge with /var/lib/lxc on tmpfs ..
<SpamapS> but virsh net-start default is failing
<SpamapS> sudo juju bootstrap worked. :(
<m_3> wow... hmmmm
<SpamapS> so something broken in the way sudo is obtained
<m_3> virbr shouldn't need a reboot after installing libvirt-bin
<m_3> might tweak interfaces in /etc/libvirt/network/default.xml (something like that)
<SpamapS> no it works as root
<SpamapS> something else is broken
<SpamapS> have to logout/back in
<SpamapS> to get libvirtd privs
<jimbaker> my personal experience is that i had to reboot to get the virtual networking to work properly, but i didn't investigate further
<m_3> sounds like he got it... a groups thing
<SpamapS> I did not
<SpamapS> its fine now
<SpamapS> just had to logout/back in
<m_3> awesome
<m_3> cool potential for test frameworks
<m_3> I got my laptop up to a load factor of 12+ yesterday
<SpamapS> You know, the more I think about it, the more I think we should just put the first unit on the bootstrap machine.
<m_3> man that stack came up fast!!
<m_3> little warm on the lap though <grin>
<SpamapS> maybe try a bag of frozen peas? ;)
<m_3> ha
<m_3> suspend barfs the stack though
<SpamapS> yeah the single laptop disk tends to make running 5+ containers send this one to 11
<m_3> +1 on overloading bootstrap in localdev
<SpamapS> yeah zookeeper doesn't like the suspend/resume
<SpamapS> well with local dev there's no need
<SpamapS> all units go on machine 0 ;)
<m_3> yeah, good point
<SpamapS> #ubuntu-classroom now for a juju session, btw. :)
<hazmat> SpamapS, its the groups and the shell
<hazmat> the user executing the bootstrap has to be a member of the libvirt group
<hazmat> but on initial installation and creation of the group, the executing shell doesn't have the group
<hazmat> bootstrap doesn't actually create any containers, just the network, zk, machine agent
<hazmat> all it takes to fix the groups is to execute a new shell
<SpamapS> wow
<SpamapS> local provider on tmpfs .. LIGHTNING
<hazmat> SpamapS, :-) sweet, did you just tmpfs /var/lib/lxc ?
<SpamapS> yep
<bcsaller> SpamapS: maybe we should recommend that for devel
<SpamapS> tmpfs                  14G  2.3G   12G  16% /var/lib/lxc
<SpamapS> m2.xlarge.. mmmmm
<hazmat> bcsaller, it was pretty fast on your ssd
<SpamapS> I have about 40 more minutes with it before I get charged $0.50 more
<SpamapS> now I need to figure out why mediawiki in the charm collection doesn't have the config.yaml I added to it. :-P
<bcsaller> hazmat: yeah, but the apt-get install phase still takes too long
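SpamapS's tmpfs setup above could be captured as an /etc/fstab entry along these lines. This is a sketch only; the size matches his df output, and everything under /var/lib/lxc vanishes on reboot, so it is strictly for throwaway development containers:

```
# /etc/fstab -- example entry; size is illustrative, containers are
# lost on every reboot
tmpfs  /var/lib/lxc  tmpfs  size=14G  0  0
```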
<elopio> thanks for the session SpamapS
<elopio> but after juju deploy --repository charms local:mediawiki I got state: null
<elopio> how can I start it?
<SpamapS> elopio: its probably still deploying
<hazmat> elopio, null typically means it's pending
<SpamapS> elopio: I cheated and put it on a giant tmpfs ..
<hazmat> elopio, we're working on making that more obvious in the status output
<SpamapS> debug-log also needs to show the local provider's unit logs
<hazmat> SpamapS, there is no local provider ;-)
<elopio> SpamapS, hazmat ok :) So I'll wait.
<hazmat> SpamapS, i mean there's no provisioning agent running
<SpamapS> $ ls ~/src/juju/trunk/juju/providers/
<SpamapS> common  dummy.py  dummy.pyc  ec2  __init__.py  __init__.pyc  local  orchestra  tests
<hazmat> SpamapS, the machine agent is just deploying the units
<hazmat> in an lxc container, it will work the same on ec2 or orchestra
<hazmat> when we enable lxc there
<SpamapS> So machine agent should be sharing them?
<hazmat> SpamapS, 'sharing' means what?
<hazmat> they are all units assigned to a single machine
<SpamapS> hazmat: sharing the logs
<hazmat> SpamapS, each unit has its own logs in the container .. i changed the location to be a bit more fhs compliant.. /var/log/juju/unit-name.log
<SpamapS> elopio: you should be able to see the logs under ~/.juju/data/$USERNAME-local/units
<hazmat> SpamapS, there's a symbolic link for convenience in the data-dir
<SpamapS> elopio: something like mediawiki-0/unit.log
<SpamapS> elopio: have to be root tho
<SpamapS> hazmat: I really like the idea of debug-log being comprehensive
<hazmat> SpamapS, it is comprehensive for all agents
<SpamapS> Err, but it doesn't show me the unit.log stuff
<hazmat> SpamapS, but in this case there is no provider agent, and no place to log to
<hazmat> SpamapS, hmm
<elopio> SpamapS, I have no ~/.juju/data directory.
<hazmat> SpamapS, it should, if not its a bug
<hazmat> elopio, it's whatever directory you're using as data-dir in the local provider config in environments.yaml
<SpamapS> elopio: the example environments.yaml I pasted had /home/ubuntu/.juju/data .. so you may have it there
<elopio> ahh, ok.
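For reference, a local-provider stanza along the lines SpamapS pasted might look like this sketch; apart from type and data-dir (which the discussion above confirms), the key names are assumptions from memory, so check the docs:

```yaml
environments:
  sample-local:
    type: local
    data-dir: /home/ubuntu/.juju/data   # unit logs land under <data-dir>/units
    admin-secret: some-long-random-string
    default-series: oneiric
```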
<elopio> now my units directory is empty.
<SpamapS> elopio: check machine-agent.log in the directory above then
<hazmat> elopio, it takes a little while the first time ever, as the system needs to download and debootstrap a base distribution
<elopio> ok, I'm getting close to my error: http://paste.ubuntu.com/711155/
<elopio> SpamapS, hazmat ^^
<hazmat> elopio, what's the output of lxc-ls? and you have a <data-dir>/master-customize.log ?
<bcsaller> elopio: and this is on oneiric, not natty, right?
<bcsaller> hazmat: it looks like lxc-create for the master just failed outright
<bcsaller> I wouldn't expect a customize log, it didn't get that far
<elopio> hazmat, there's no output of lxc-ls. And I don't have a master-customize.log
<elopio> bcsaller, yes, oneiric.
<hazmat> time to check-in to my flight, bbiab
<SpamapS> elopio: maybe try 'sudo lxc-create -t ubuntu -n test-lxc -- -r oneiric'
<SpamapS> elopio: that should verify that lxc-create *can* work on your system. ;)
<elopio> SpamapS, yes, it can.
<elopio> I did it all again, and now my log says Creating master container...
<elopio> I think that I started juju without sudo.
<SpamapS> you should not
<elopio> just $ juju bootstrap
<SpamapS> it doesn't need to run as root
<SpamapS> it will use sudo when it needs it
<elopio> now I did $ sudo juju bootstrap, and it seems to be working
<SpamapS> That isn't necessary, I'm sure it was some other problem.
<SpamapS> but glad its working in some capacity
<elopio> let's see what the log says after creating the container.
<elopio> well, but anyway, this juju thing rocks.
<elopio> SpamapS, I'd just add sudo in front of all the commands in your script :p
<elopio> yes, it's working now. I have the units/mediawiki-0 directory, and the log says it's downloading packages.
<elopio> thank you people!
<SpamapS> elopio: btw, I just updated the mediawiki charm with the config.yaml that was missing..lets you change the name, skin, logo, and admin user/pass
<elopio> SpamapS, yes, I asked about the password but it seems my question didn't get through.
<elopio> so I assume that there's a config.yaml for the mysql charm where you can change the password too.
<SpamapS> elopio: if you bzr update in the mediawiki dir, you should get revision 80 .. which allows 'juju set mediawiki admins="user:pass"'
<SpamapS> elopio: for mysql you don't actually need root access ever. ;)
<SpamapS> elopio: the charm has it, but doesn't expose it
<elopio> SpamapS, but what if I want to pimp my mysql?
<elopio> well, I suppose I should make a charm for that too.
<SpamapS> elopio: you can ssh in
<SpamapS> elopio: and yeah, anything you need to tune should be in config.yaml as a tunable
<SpamapS> I've been meaning to go through all the mysql tuning parameters and put them into the mysql charm
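The config.yaml tunables being discussed look roughly like the sketch below; max-connections is a made-up option name for illustration, not one the real mysql charm necessarily exposes:

```yaml
options:
  max-connections:
    type: int
    default: 100
    description: Value written to mysqld's max_connections setting.
```

A user would then tune it with something like `juju set mysql max-connections=200`, mirroring the `juju set mediawiki admins="user:pass"` example above.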
<elopio> mediawiki up and running \o/
<elopio> awesome.
<SpamapS> elopio: bonus points if you get haproxy in front of it, and a mysql slave added. ;)
<elopio> SpamapS, ha, I'll ask for some vacation to keep playing with juju. I guess that my boss will say no :p
<elopio> lunch's over, so I'll get back to the things I should be doing. But I hope to talk to you again.
<mjfork> looking at environments.yaml, does s3 on the API node have to listen on a public ip?
<hazmat> mjfork, re s3 and openstack, the s3 url needs to be accessible to the juju client and the machine nodes
<hazmat> s/machine/virtual
<mjfork> ok, can i use objectstore
<hazmat> mjfork, either the nova/objectstore/s3server.py or swift with the s3 middleware should work
<hazmat> we've primarily done testing with the nova s3server
<mjfork> i just config'ed objectstore to listen on 0.0.0.0
<mjfork> hazmat: i assume you can use a regular user in the environments file?
<hazmat> mjfork, not sure what you mean by regular user?
<mjfork> doesn't have to be an admin user
<hazmat> mjfork, as long as the openstack credentials are authorized to create machines, it should be fine
<hazmat> objectstore/s3server.py doesn't do any actual auth on the s3 side of it
<mjfork> what is admin-secret?
<mjfork> I am getting unauthorized on CreateSecurityGroup
<mjfork> guessing i need to assign some special permission in nova (using keystone for auth)
<hazmat> admin-secret is just any random string unique to the environment
<hazmat> mjfork, ^ ... also out of curiosity what version of openstack are you using?
<mjfork> Diablo
<mjfork> got bootstrap to run, i needed to run nova-manage role add juju cloudadmin
<mjfork> (i also did netadmin, itsec, not sure if all were needed tho)
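Pulling the thread together, an environments.yaml entry for juju against an OpenStack cloud (via the EC2-compatible API, as jamespage described) might look like this sketch; every URI, credential, and the image id are placeholders for your own deployment, and only ec2-uri/s3-uri/control-bucket/admin-secret are confirmed by the discussion above:

```yaml
environments:
  openstack:
    type: ec2
    ec2-uri: http://cloud.example.com:8773/services/Cloud
    s3-uri: http://cloud.example.com:3333
    access-key: YOUR-ACCESS-KEY
    secret-key: YOUR-SECRET-KEY
    control-bucket: juju-something-globally-unique
    admin-secret: any-random-string-unique-to-this-env
    default-image-id: ami-00000001
    default-series: oneiric
```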
<hazmat> SpamapS, re the libvirt group membership missing.. did bootstrap have an error?
<hazmat> it looks like it caused an error on network start
<SpamapS> hazmat: logout/back in solved all problems I had
<_mup_> juju/test-api r237 committed by kapil.thangavelu@canonical.com
<_mup_> merge trunk
<SpamapS> btw, the error message for ambiguous endpoints is AWESOME now
#juju 2011-10-18
<RoAkSoAx> SpamapS: adam_g had documentation publicly available AFAIK
<backburner2> is there a readme or docs for setup with openstack?
<mrsipan_> would it be possible to use redhat based ami rather than a ubuntu one?
<SpamapS> mrsipan_: would need cloud-init
<mrsipan_> SpamapS, would it be the only need package?
<mrsipan_> needed*
<SpamapS> mrsipan_: it's a pretty huge need, and there will be some things that won't work.. like PPAs
<mrsipan_> right
<SpamapS> mrsipan_: juju seeds new nodes with cloud-init to say things like "install these packages" so it can start its agents
<mrsipan_> makes sense, I guess the cloud-init is also required to get the ec2 user (meta) data for bootstrapping
<bcsaller> and all the written formulas currently expect .deb packages
<bcsaller> err charms
<SpamapS> well, they expect apt
<bcsaller> fair enough
<SpamapS> and they make heavy use of pre-seeding when appropriate.
<SpamapS> suffice to say, the charms would have to grow OS independence
<SpamapS> mrsipan_: there is no technical reason it won't work. There is, however, a lot of duplication of effort needed.
<mrsipan_> right
<SpamapS> wow, we definitely need to fix this bit where txzookeeper can't reconnect on session expiry
<SpamapS> 2011-10-18 05:12:24,050: twisted@ERROR: Failure: zookeeper.SessionExpiredException: session expired
<SpamapS> its t3h suck
<kim0> http://cloud.ubuntu.com/2011/10/ubuntu-cloud-deployment-with-orchestra-and-juju/
<m_3> kim0: awesome man!
<TeTeT> kim0: does this work in kvm virtual machines as well, or limited to real iron?
<m_3> kim0: should you maybe add that we're targeting to be "production-ready" by 12.04 (and hence still in development mode for orchestra/juju)?
<kim0> TeTeT: not sure about that, because of needing 6 servers, I couldn't play with the whole thing
<kim0> m_3: yeah makes sense .. I'll add that
<TeTeT> kim0: yeah, I face the same limitation, I don't have access to 6 servers, but might be able to get 6 vms on 3 computers up and running or so
<kim0> TeTeT: generally I think it's easy to configure openstack to use qemu (inside a VM) yes .. it's only a single flag in nova config file .. so "should" work yeah .. just didnt do it
<TeTeT> kim0: ok, will give it a try eventually :)
<kim0> cool :)
<kim0> TeTeT: if you do .. be sure to share your experience on cloud.u.c :)
<TeTeT> kim0: lol, i'll send you an email and you can feel free to publish it at most ;)
<kim0> TeTeT: if you have a blog, I'm happy to add it though :)
<TeTeT> kim0: I don't blog
<kim0> wise man :)
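The single nova flag kim0 mentions for running under plain qemu is presumably libvirt_type; in a diablo-era nova flagfile that would be a line like the following (an assumption, verify against your nova configuration reference):

```
--libvirt_type=qemu
```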
<fwereade> offhand, does anyone know why we have separate 'environment_name' and 'config' attrs on a provider? we pass the config around everywhere, but in general you need a provider to actually find out the env name
<rog> fwereade: isn't the environment_name used for naming things?
<rog> fwereade: hmm, i think i've probably misunderstood the question
<fwereade> rog, sorry, missed you
<rog> fwereade: np
<fwereade> rog: I just had an urge to name something based on the environment_name
<fwereade> rog: as we do for, say, security groups
<rog> fwereade: yeah, that's what i was thinking of
<fwereade> rog: but that takes the environment name from the provider, not from the config dict
<rog> ah. aren't they the same?
<fwereade> rog: nah, the provider object has .environment_name and .config
<mjfork> I grabbed juju yesterday and get an error that store.juju.ubuntu.com is not resolvable when deploying...anyone else see that?
<rog> fwereade: isn't the former initialised from the latter?
<fwereade> mjfork: we've had some trouble getting the online docs to auto-update
<fwereade> mjfork: stick "local:" on the front of the charm name
<fwereade> juju deploy --repository=blah local:mysql
<fwereade> rog: the provider is initialised by an Environment object, which passes in environment_name and config, yes
<mjfork> tells me that the charm was not found in the repository
<mjfork> i see a directory with the name
<fwereade> mjfork: is the repo flat, or does it have an "oneiric" subdirectory with the charms?
<mjfork> i am in the oneiric subdir
<mjfork> do i need to go up a level?
<mjfork> that did it
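The layout fwereade is describing — deploy from the repository root, with charms one level down in an oneiric/ subdirectory — can be sketched like this; mysql and the scratch directory are just examples:

```shell
# Build the expected local-repository layout in a scratch directory:
#   <repo>/oneiric/<charm-name>/metadata.yaml (plus hooks/, etc.)
repo=$(mktemp -d)
mkdir -p "$repo/oneiric/mysql/hooks"
touch "$repo/oneiric/mysql/metadata.yaml"
ls "$repo/oneiric"
# juju is then pointed at the repo root, not the oneiric subdirectory:
#   juju deploy --repository="$repo" local:mysql
```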
<rog> fwereade: maybe the reason is that the environment name isn't actually part of the config for the given environment
<rog> fwereade: it's outside of it in the yaml
<rog> i don't see why it shouldn't be put in there though
<fwereade> rog: I think that's the reason indeed
<fwereade> rog: anyway, not something I actually need, just an idle curiosity
<fwereade> mjfork: glad that helped
<rog> fwereade: yeah. it's an interesting oddity.
<mrsipan> how do i tell juju which key pair to use?
<fwereade> mrsipan, you can set authorized-key-path for the environment in environments.yaml
<mrsipan> fwereade: thanks
<fwereade> mrsipan, sorry, authorized-keys-path
<fwereade> mrsipan, pointing to a .pub file
<mrsipan> cool
<mrsipan> what user does juju use to ssh into instances?
<fwereade> mrsipan, "ubuntu", as I recall
<fwereade> mrsipan, are you having problems?
<mrsipan> fwereade, I keep getting ERROR SSH authorized/public key not found
<mrsipan> when trying to bootstrap
<fwereade> mrsipan, odd
<fwereade> mrsipan, if you don't specify anything for authorized-keys-path it should just use whatever it finds in your ~/.ssh
<fwereade> mrsipan, any of
<fwereade> If your version of Ubuntu is not listed above, it is no longer supported and does not receive security or critical fixes.
<fwereade> oops, sorry
<fwereade>     key_names = ["id_dsa.pub", "id_rsa.pub", "identity.pub"]
<fwereade> do you have any of those set up?
<mrsipan> I have id_rsa.pub
<fwereade> mrsipan, would you run bootstrap with "-v", and pastebin me the output please
<mrsipan> sure
<wckd> does 11.10 have xen built in?
<wckd> or do I have to use kvm
<mrsipan> fwereade: http://paste.ubuntu.com/712039/
 * fwereade makes thinking noises
<wckd> should have done some research before asking; both are supported
<fwereade> mrsipan, and you saw that error just the same before and after adding an authorized-keys-path?
<mrsipan> fwereade: yes, I did, I can try again with the authorized-key-path option
<fwereade> mrsipan: the alternative is to set "authorized-keys" in the environment, with the contents of your .pub file
<fwereade> (and delete authorized-keys-path)
<mrsipan> k, will do that
<fwereade> mrsipan: I should note that both those options should have "key", singular, not "keys" in their names
<fwereade> mrsipan: but they don't, and it's confusing
<mrsipan> fwereade, k, thanks for the clarification
<fwereade> mrsipan: I'm sorry, but I need to get to a shop before it closes -- I'll be back later, but you might want to follow up with someone else if you keep having problems
<fwereade> sorry to abandon you :(
<mrsipan> fwereade, np, thanks a lot for your help
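fwereade's description of the default key lookup — take the first matching .pub file from the key_names list he quoted — can be mimicked as a sketch, using a scratch directory in place of ~/.ssh:

```shell
# Search a stand-in ~/.ssh for the first key name juju tries,
# per the key_names list quoted above.
ssh_dir=$(mktemp -d)
touch "$ssh_dir/id_rsa.pub"            # pretend the user has an RSA key
found=""
for name in id_dsa.pub id_rsa.pub identity.pub; do
    if [ -f "$ssh_dir/$name" ]; then
        found="$name"
        break
    fi
done
echo "${found:-no key found}"          # -> id_rsa.pub
```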
<mjfork> i was working with SpamapS to replicate the OpenStack keynote demo, everything starts ok, but terasort.sh fails with connection refused on port 8020
<mjfork> i believe its a hadoop thing, but not sure
<SpamapS> mjfork: hey
<SpamapS> mjfork: if terasort.sh is failing then something went wrong starting hadoop
<mjfork> what machine should be listening on 8020?
 * SpamapS really needs to finish polishing up those instructions and blogify it
<SpamapS> mjfork: namenode
<SpamapS> m_3: ^^ mjfork is in need of guidance using your hadoop charms.
<mjfork> SpamapS: if i run juju status I can see the data cluster, jobonitor, and namenode
<mjfork> all with a status of up
<mjfork> juju ssh namenode/1
<mjfork> if I run a ps, i don't see any hadoop process
<mjfork> nor anything bound to that port
<SpamapS> mjfork: java
<SpamapS> mjfork: dig around in /var/lib/juju/units/namenode-0/charm.log
<mjfork> on the controller of namenode?
<SpamapS> on namenode/1
<SpamapS> actually make it /var/lib/juju/units/namenode-1/charm.log
<mjfork> unknown host excption
<mjfork> the host name was never reset
<SpamapS> hm?
<SpamapS> mjfork: can you pastebin the log?
<mjfork> but cloud-init set /etc/hosts to have server_252
<SpamapS> ew
<SpamapS> server_252 .. thats no good
<SpamapS> mjfork: should be server-252 .. I wonder if your nova-network is misconfigured
<mjfork> i wonder why cloud-init didn't set hostname
<mjfork> let me fix that
<mjfork> may do it
<mrsipan> I'm getting address 'store.juju.ubuntu.com' not found, when deploying the example, any ideas why?
<mrsipan> it seems a dns resolution error
<mjfork> if your charm is local. prefix it with local:
<mjfork> i ran into this earlier this AM
<mrsipan> mjfork, kthx
<mjfork> SpamapS: i rebooted that node to pick up the hostname change
<mjfork> and it says waiting for unit to come up...but the system is booted
<SpamapS> reboots don't work
<mjfork> doh
<m_3> mjfork, SpamapS: hi guys
<mjfork> ok
<mjfork> so rebooting breaks it?
<mjfork> whats the alternative to get the hostname set right?
<m_3> mjfork: so hadoop had a problem with the 127.0.1.1 entry that cloudinit puts in /etc/hosts
<SpamapS> mjfork: rebooting is a top priority on the production bugs list (for obvious reasons) ;)
<m_3> mjfork: hadoop requires the hostname to resolve to a real (non-loopback) interface
<m_3> mjfork: are you seeing this problem?  i.e., `netstat -lnp | less` and see hadoop binding to localhost only?
<mjfork> hadoop wasn't bound at all
<mjfork> presumably because the host name didn't resolve
<m_3> mjfork: hmmmm.... that's strange
<mjfork> using openstack + oneiric guest + cloud-init
<m_3> mjfork: so let's start on the namenode/0
<SpamapS> mjfork: I would try destroying the namenode service, terminating the machine its running on, and then deploying namenode again.
<m_3> yeah, worth starting fresh
<m_3> SpamapS: hadoop refuses to bind based on ip address
<_mup_> juju/ssh-passthrough r403 committed by jim.baker@canonical.com
<_mup_> Initial commit
<m_3> SpamapS, mjfork: so we need some sort of name that will resolve across the openstack install
<m_3> it's actually a bug I'd like to push upstream
<mjfork> juju must keep VMs around after destroying a service?
<SpamapS> we had no problems with hostnames in our openstack.. they were server-### .. and resolved fine
<mjfork> every VM I log into has ubuntu as hostname
<SpamapS> mjfork: yeah, the reason for that is that its very likely that all services will deploy inside a container in the VM, for quick cleanup/re-use
<SpamapS> mjfork: I say very likely because its not part of the code.. yet. ;)
<_mup_> juju/ssh-passthrough r404 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<SpamapS> Would be cool to use containers that way now.. without network namespaces.
<mjfork> how did you work around it?
<mjfork> i must be missing something for the hostname not to be set right, but the /etc/hosts is
<m_3> mjfork: what's written in /etc/hosts once the unit is up?
<SpamapS> mjfork: _'s in hostnames would be an openstack/DHCP server problem
<SpamapS> mjfork: you may need to look at nova-network's configuration
<mjfork> ec2metadata --local-hostname returns server_221
<mjfork> so it does look like an OS problem
<SpamapS> mjfork: yeah I'm sure there's some default hostname template somewhere that needs changing
<mjfork> yah, found the bug report
<mjfork> its 1 line!
<mjfork> if i could contrib code i would :-)
<mjfork> shoot
<mjfork> still didn't set hostname
<m_3> mjfork: what's `hostname -f` and `hostname -f | xargs dig +short` return?
<mjfork> realized i probably didn't set it in the right spot
<mjfork> trying again
<m_3> gotcha
<mjfork> hostname -f says Name or service not known
<m_3> mjfork: dang...
<m_3> mjfork: what gets written into /etc/hosts by cloudinit?  i.e., typically a 127.0.1.1 entry
<mjfork> yep
<mjfork> shoot...still says server_268
<mjfork> instead of with -
<mjfork> i do see i am running older build
<mjfork> should upgrade
<SpamapS> older build of what?
<mjfork> OpenStack
<mjfork> i see my RPMs have a date of 0727
<mjfork> sigh
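The underscore-vs-dash problem chased above is easy to demonstrate: underscores are not legal in DNS hostnames, which is why hadoop (and hostname -f) chokes on them. A trivial sketch of the substitution the nova hostname template needs to produce, using the broken name from the log:

```shell
bad=server_252                           # what nova handed out
good=$(printf '%s' "$bad" | tr '_' '-')  # what a valid hostname looks like
echo "$good"                             # -> server-252
```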
<_mup_> juju/ssh-passthrough r405 committed by jim.baker@canonical.com
<_mup_> Testing to verify passthrough of args
<m_3> jimbaker: yay, I'll actually use that quite a bit
<rog> where can i view a list of merge requests that i have been asked to review?
<rog> i'm sure i was told a URL recently, but i stupidly didn't write it down!
 * rog wishes that launchpad emails were more easily filterable
<bcsaller> rog: the first link in the channel topic, http://j.mp/juju-florence
<_mup_> juju/ssh-passthrough r406 committed by jim.baker@canonical.com
<_mup_> Refactoring
<rog> bcsaller: that only seems to show the person that's submitted the code, not who's been asked to review it.
<bcsaller> rog: people don't typically get asked, its a pull process
<rog> bcsaller: ok. i must be misremembering.
<jimbaker> m_3, good to know. i think it will be a very nice feature, being able to create a tunnel (no need to expose) or change config options, or do stuff like juju ssh unit/0 cat foo
<hazmat> rog, i explicitly asked on one of the go reviews for you
<hazmat> but typically, we go to the kanban view and pick items off the review queue there
<rog> hazmat: ah, that's what i'm remembering. it's got buried in my email and i can't remember which one it was.
<m_3> jimbaker: tunnels on the fly to unexposed services is the big one (-L8888:localhost:80)
<hazmat> rog, its go-new-revisions
<jimbaker> m_3, sounds good. it should land in review this afternoon, just need to write a few more tests
<rog> hazmat: thanks
<jimbaker> m_3, then i will add support for juju scp
<hazmat> jimbaker, whens the provisioning agent network extraction coming onto your plate?
<_mup_> Bug #877597 was filed: ec2 bootstrap fails when specifying instance type <juju:New> < https://launchpad.net/bugs/877597 >
<jimbaker> hazmat, what do you mean? i can certainly work on that, but i'm not certain what the specific bug/feature is
<hazmat> jimbaker, re the expose-retry review comments
<hazmat> specifically [2]
<jimbaker> hazmat, ok, i think you mean bug 873108
<_mup_> Bug #873108: Move firewall mgmt support in provisioning agent into a separate class <juju:New> < https://launchpad.net/bugs/873108 >
<jimbaker> sounds like a good one, so i put it in WIP
<hazmat> jimbaker, awesome thanks
<m_3> hazmat, bcsaller: let me know if you'd like explicit examples to drive the colocated services features
<m_3> they're probably pretty obvious though
<SpamapS> shouldn't the focus be on the production bugs?
<SpamapS> Like.. handling reboots.. HA for bootstrap.. colo'd services..
<SpamapS> exposing/unexposing seems to work right now.. :p
<rog> i'm off now. travelling to a conference in madrid for thurs and fri, so won't be online much. see y'all monday.
<m_3> later rog... enjoy madrid!
<m_3> SpamapS: so in the course of cleaning up charms...
<m_3> there's a littering of one-off hooks for infrastructure aspects like logging, monitoring, etc
<m_3> we'll split those off into separate charms (munin-node, log-source, rsync-source, nfs-client, etc) once colocation lands
<SpamapS> or packages
<SpamapS> They only need to be charms if they have network duties
<m_3> but they're pretty ugly atm...
<m_3> trying to keep templates inline and stuff so they're easy to move around (i.e., _one_ hook)
<m_3> oh, hmmmm
<m_3> that's what I was getting around to asking.... are there any better ways to do this?
<m_3> but even a single package solution, like munin-node requires config right?
<m_3> does that belong in the "primary" service on that machine?
<hazmat> SpamapS, the focus is on production issues and bugs generally, but co-location is a key missing feature to being able to use juju in the real world imo
<SpamapS> hazmat: I consider co-location production critical. :-D
<SpamapS> hazmat: I didn't see that bcsaller was actually working on co-location when I whined about it. ;-)
<bcsaller> SpamapS: I'll be pushing an updated spec for colo soon (hopefully by end of day)
<SpamapS> w00t
<zul> SpamapS: i think most people do
<m_3> is anyone else having problems bootstrapping from today's ppa in ec2?
<hazmat> m_3, testing..
<m_3> hazmat: digging through the logs on the bootstrap instance to see where it's hanging up
<hazmat> m_3, do you just get an instance that's stuck bootstrapping?
<hazmat> hmm.. no that works
 * hazmat tries a deploy
<m_3> bootstrap comes up, but then status can't get to it
<m_3> is status returning for you?
<hazmat> m_3, it is
<hazmat> m_3, i'm running against trunk
<m_3> damn
<m_3> ppa's 0.5+bzr408-1juju1~oneiric1
<hazmat> m_3, yeah.. same rev
 * m_3 double-checking paths, envs, yamls
 * hazmat does a reset on his env to double check
<m_3> hazmat: thanks for the confirm... I must be on drugs... still can't connect before timeout
<m_3> brb
<hazmat> m_3, can you pastebin the console log or the cloud-init log
<m_3> hazmat: your keys are added ubuntu@ec2-174-129-179-28.compute-1.amazonaws.com
<hazmat> m_3, thanks
<hazmat> m_3, machine looks okay on first glance
<m_3> I know, it totally does =><=
<hazmat> m_3, what's the error on status, do you have multiple envs?
<hazmat> the zk tree is fine as well
<m_3> I do have multiple envs
<hazmat> m_3, possible you booted one and ran status against the other?
<hazmat> m_3, outside of that we're into the realm of tcp issues, ssh is running/working, zk is running/initialized
<hazmat> perhaps and ssh host fingerprint mismatch on the server, but that should have an error message
<m_3> watching from awsconsole too
<hazmat> s/and/an
<m_3> cleaned up s3
<m_3> http://paste.ubuntu.com/712524/
<hazmat> m_3, it looks like cloud-init hadn't finished installing the keys
<m_3> hazmat: same setup (acct,version,env) worked fine from a different VM... dunno what happened to my laptop between yesterday and today
<hazmat> ugh.. watches definitely have no delivery guarantees
#juju 2011-10-19
<SpamapS> hazmat: does that mean that some degree of polling is necessary?
<hazmat> SpamapS, actually it was a bad test, watches are sent reliably in the event disconnection, when the client reconnects
<hazmat> SpamapS, i'm trying to tackle our zk error handling with an eye to solving disconnect problems
<hazmat> i think i've got a good game plan, just writing some tests to verify behavior
<hazmat> i ended up having to create a tcp proxy, else the whole thing is way too timing dependent
<hazmat> MITM FTW ;-)
<SpamapS> nice
<SpamapS> sad reality.. my laptop is slower than EC2 for deploying
<SpamapS> iterating on a 2 node ceph cluster over and over takes about 3 minutes per deploy/add-unit ..
<SpamapS> EC2 can do it in 2
<SpamapS> (assuming my units go from pending to running in the usual 30 seconds)
<jimbaker> bcsaller, if we ever have a sprint in boulder: http://www.yelp.com/biz/zudaka-healthy-latin-food-boulder-2 - all vegetarian (and vegan friendly), and it's south american cuisine, which is the unusual part
<bcsaller> jimbaker: nice
<_mup_> txzookeeper/four-letter-client r44 committed by kapil.foss@gmail.com
<_mup_> four letter command admin client
<_mup_> Bug #878114 was filed: launch multiple bootstrap nodes <juju:New> < https://launchpad.net/bugs/878114 >
<uksysadmin> hi guys
<uksysadmin> anyone used juju with OpenStack - is it as simple as configuring the environments.yaml and away you go or is there more to it?
<fwereade> uksysadmin: I haven't used it myself, so I'm afraid I don't know the details, but I'm pretty sure people have
<fwereade> uksysadmin: if hazmat's around he may be able to tell you more details
<uksysadmin> cheers - I'm about to have a go, but nice to know any gotchas up front
<fwereade> uksysadmin: we've certainly fixed bugs related to openstack compatibility
<fwereade> uksysadmin: please do let us know if you find any more :)
<uksysadmin> will do
<uksysadmin> before I start - I'm using 11.10 - do I need to add the ppas as per the documentation or is it ok to use the ones provided in universe?
<uksysadmin> (ignore - all is going ok so far... :))
<fwereade> uksysadmin, sorry I missed you, the ones in universe should be just fine
<uksysadmin> great cheers
<uksysadmin> one thing (and I'll cross-post) - to do a bootstrap of a server under openstack, do I need 11.04/11.10 and if so where can I get suitable images from?
<HarryPanda> uksysadmin: look at Orchestra
<uksysadmin> Cheers HarryPanda, I am. just getting the basics going first by being able to do the juju bits
<uksysadmin> although... that comes with images...
<uksysadmin> I see your thinking
<uksysadmin> no dice (if that was what you were implying)
<uksysadmin> seeing if these work instead: http://uec-images.ubuntu.com/server/oneiric/current/
<hazmat> uksysadmin, its pretty much as simple as configuring the environment
<hazmat> uksysadmin, use euca-describe-images.. to find an image that you can specify in environments.yaml
<hazmat> uksysadmin, course that has to be set up in openstack glance
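A sketch of what the environments.yaml stanza for OpenStack looked like at this point: juju's EC2 provider is pointed at the OpenStack EC2/S3 endpoints, per jamespage's and hazmat's advice. Every value below is a placeholder, and the key names are an assumption reconstructed from this discussion, not verified against any release:

```yaml
environments:
  openstack:
    type: ec2                     # juju's EC2 provider speaks the OpenStack EC2 API
    ec2-uri: http://cloud.example.com:8773/services/Cloud   # placeholder endpoint
    s3-uri: http://cloud.example.com:3333                   # placeholder endpoint
    access-key: YOUR-EC2-ACCESS-KEY
    secret-key: YOUR-EC2-SECRET-KEY
    control-bucket: juju-some-unique-bucket   # must be unique within the S3 service
    admin-secret: some-private-string
    default-image-id: ami-00000001  # an image registered in glance (see euca-describe-images)
```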
<uksysadmin> sure, cheers hazmat
<uksysadmin> kim0, (or anyone) I'm trying to get the juju charms as per https://www.linux.com/learn/tutorials/495738-conducting-clouds-the-ubuntu-way-an-introduction-to-juju and I get bzr errors
<uksysadmin> not only am I getting a warning under 11.10 about bzrlib being a different version to bzr (fresh installs still get this), it says
<uksysadmin> bzr: ERROR: Not a branch: "http://bazaar.launchpad.net/~charmers/charm/oneiric/mysql/trunk/sql/".
<uksysadmin> if I forget the bzr repos and just do a juju deployment it says it can't find store.juju.ubuntu.com
<kim0> crap .. principia is now gone, this article has been waiting at linux.com for too long
<uksysadmin> if it makes anyone feel better my juju bootstrap worked a charm ;-)
<uksysadmin> d'oh
<uksysadmin> good article though kim0 - is there a slightly modified version you've done elsewhere?
<kim0> uksysadmin: prolly the most updated would be docs: https://juju.ubuntu.com/docs/
<uksysadmin> I followed that too/similar stuff -but failed at: juju deploy --repository=examples mysql
<SpamapS> oh wow it used principia?
<SpamapS> yeah thats been gone for like 6 weeks
<uksysadmin> DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -5] No address associated with hostname.
<uksysadmin> 2011-10-19 16:20:08,676 ERROR DNS lookup failed: address 'store.juju.ubuntu.com' not found: [Errno -5] No address associated with hostname.
<SpamapS> kim0: will they let us edit it?
<SpamapS> uksysadmin: the docs haven't been updated in a while.. its supposed to be   juju deploy --repository=examples local:mysql
 * SpamapS is getting rather annoyed that the website still says the wrong thing
<uksysadmin> ok I'll see if that makes a difference... though strange if it fixes a DNS error...
<SpamapS> uksysadmin: its not a "DNS error" .. the store isn't live yet.
<SpamapS> uksysadmin: though we are pondering changing it so local: is always assumed
<SpamapS> uksysadmin: bug 872164 if you'd like to comment/track it
<_mup_> Bug #872164: [Oneiric] Cannot deply services - store.juju.ubuntu.com not found <juju:In Progress by hazmat> < https://launchpad.net/bugs/872164 >
<uksysadmin> ta - will do
<uksysadmin> in the meantime - how do I get those lucky charms?
<_mup_> Bug #873907 was filed: Security group on EC2 does not open proper port <juju:New> < https://launchpad.net/bugs/873907 >
<hazmat> uksysadmin, easiest way is to apt-get install bzr mr, bzr branch lp:charm-tools, cd charm-tools.. ./charm get-all ../path/to/place-to-store-charms
<hazmat> that will fetch all the charms off lp
<uksysadmin> cheers - no docs mention that (or at least not the ones I'm being referred to)
<hazmat> juju.ubuntu.com/Charms should mention it
<SpamapS> hazmat: we're about to get re-flooded.. linux.com article went out with the same problems.
<uksysadmin> would be good if the tutorial at /docs/ mentioned it then...
<hazmat> uksysadmin, we've been working on a charm store that's directly integrated with the client; it's not expected that folks will normally have to go down the charm-tools road
<hazmat> SpamapS, bummer
<uksysadmin> sorry to be the bearer of bad news - it was good up until the point about principia
<hazmat> uksysadmin, we've got a technical problem getting our docs updated that we're waiting on a sysadmin to fix..
<uksysadmin> oh dear - I'll not pester you guys.  I've probably got enough to get me going, thanks.
<hazmat> uksysadmin, its very good to know where the bad docs are ;-)
<hazmat> but the rest should be good
<uksysadmin> :)
<SpamapS> uksysadmin: for 'lp:principia' you can replace it with 'lp:charm' and that should solve those problems
<SpamapS> I think somebody already said that
<uksysadmin> ok, I'll give that a shot whilst the charms are checking out using the tool
<SpamapS> kim0: btw, juju.ubuntu.com/docs is actually quite out of date at the moment. We're still trying to figure out why its not updating from the bzr tree.
<uksysadmin> on my 11.10 machine I did the tools getall, and in the directory I specified I needed to do a quick symlink to the current dir (ln -s . oneiric) as it was failing with 2011-10-19 16:43:37,007 ERROR Charm 'local:oneiric/mysql' not found in repository ...
<hazmat> uksysadmin, hmm.. yeah. the charms need to prefix with their release series
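The layout hazmat is describing, and that uksysadmin's symlink worked around, can be sketched like this; the paths are illustrative, and the final deploy command is commented out since it needs a bootstrapped environment:

```shell
# Sketch of the on-disk layout juju's local repository expects: each charm
# lives under a subdirectory named for its release series.
mkdir -p /tmp/charm-repo/oneiric/mysql
ls /tmp/charm-repo/oneiric
# With that layout in place, 'local:oneiric/mysql' resolves:
#   juju deploy --repository=/tmp/charm-repo local:oneiric/mysql
```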
<uksysadmin> after that it seems to be working
<uksysadmin> one could say its working like a charm
<uksysadmin> (are there openinings for a PR person? ;-))
<SpamapS> uksysadmin: too easy. ;)
<robbiew> in terms of the docs needing to be updated from bzr....if someone has hosting space and time to set up the docs, we can fix this faster than apparently IS can
<robbiew> at least until they are able to respond
<hazmat> robbiew, sure i can do it
<hazmat> robbiew, i'm tired of waiting
 * hazmat sets up a dns entry
<robbiew> hazmat: cool
<SpamapS> have we tried #is yet?
<hazmat> SpamapS, i tried them last week, someone (?) took a look around didn't see the cron job
<hazmat> or where to update, feel free to try again
<SpamapS> on it
<SpamapS> ticket#?
<hazmat> SpamapS, 48456
<SpamapS> ty
<uksysadmin> thanks all - apart from environmental restrictions (you try sanely running virtual under virtual) all seems to be good!
<SpamapS> LXC isn't virtual. :)
<SpamapS> its just contained
<SpamapS> but yeah, my box hits a load of 12 quite often
<uksysadmin> not running under lxc - vbox
<uksysadmin> temporary issue fortunately
<uksysadmin> hometime here in the good ol' uk. thanks for your help.
<hazmat> ugh.. mongodb randomly restarted on me
<SpamapS> hazmat: chaos monkey support.. disable with --no-chaos-monkey
<hazmat> SpamapS, lol
<fwereade_> I was thinking we should have a chaos-monkey built into juju
<SpamapS> We do, his name is Ben and he's a vegan
<jimbaker> :)
<jimbaker> i'm increasingly vegan. except for honey. or fish. also, i couldn't give up yogurt. and i have a soft spot for cheese, especially parmigiano-reggiano. but perhaps a little more vegan than not ;)
<robbiew> SpamapS: hazmat:  docs fixed with hazmat's workaround until IS can respond
<robbiew> https://juju.ubuntu.com/Documentation  should have the right stuff
<SpamapS> jimbaker: maybe you just hate dairy cows and bees
<hazmat> jimbaker, thats awesome ;-)
<jimbaker> SpamapS, to the contrary, i love them too much ;)
<SpamapS> robbiew: at least we have somewhere to send them now. :)
<SpamapS> robbiew: we should really fix that frame to be an iframe so there aren't two scroll bars.
<robbiew> eh..whatever
<robbiew> SpamapS: now you
<robbiew> are just trying to get cute
<SpamapS> no.. the embedded window is way too small to hold the whole page
<robbiew> SpamapS: I can change that...one sec
<robbiew> SpamapS: reload
<robbiew> ;)
<SpamapS> robbiew: better than before :)
<SpamapS> I wonder if there's a way to say "embed this and make it as big as it wants to be"
 * SpamapS *hates* html
<robbiew> probably
<robbiew> one sec...let me try
<hazmat> the doc html should scale down pretty well
<robbiew> bah! I haven't written html in forever....Spamaps, it's a wiki, so feel free to try :D
 * SpamapS will just use the direct link
<robbiew> lol
<SpamapS> So, my new favorite use for the local provider is to spin up a giant EC2 instance and use it there
<SpamapS> because it just *kills* my laptop
<SpamapS> $0.50/hour for an m2.xlarge is better than 40 $0.08/hour m1.small's per hour ;)
<robbiew> lol
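SpamapS's back-of-envelope cost comparison holds up; in cents per hour:

```shell
# Back-of-envelope check of the instance cost comparison above (cents/hour).
smalls=$((40 * 8))   # forty m1.small units at $0.08/hr each
xlarge=50            # one m2.xlarge at $0.50/hr running them all via the local provider
echo "40 m1.smalls: ${smalls}c/hr; one m2.xlarge: ${xlarge}c/hr"
```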
 * SpamapS wonders if lenovo makes a thinkpad w/ 12GB
<SpamapS> just need 8GB of RAM for charms and the rest for compiz
<SpamapS> heh.. piping 'debug-log' through 'ccze' makes the day a lot more fun I have to say
<SpamapS> oooooooo
<SpamapS> juju status | ccze -m ansi
<SpamapS> *pretty*
<hazmat> interesting
<SpamapS> So, with peer relations.. there needs to be something analogous to 'remove-relation ; add-relation'
<SpamapS> I have all these idempotent hooks, I want to run them again
<hazmat> SpamapS, i don't follow
<hazmat> SpamapS, you can remove a peer relation, and add it
<hazmat> SpamapS, juju auto-activates peer relations is all
<SpamapS> OH
<SpamapS> very useful. :)
 * SpamapS did not actually *try* removing it
<SpamapS> so thats how I can get refreshes after upgrading charms
<SpamapS> Tho that also calls broken.. which will likely take services down
<hazmat> SpamapS, m_3 made a nice suggestion on how to add relation iteration capabilities
<hazmat> before we were sort of blocked on the anonymous relations that a server's provides creates.. but the easy solution is to just qualify them during iteration with the endpoint service name in addition to the local relation name
<SpamapS> EPARSE
<SpamapS> huh?
<hazmat> SpamapS, just thinking of making relations addressable from upgrade hooks
<SpamapS> *that* is 100% necessary
<SpamapS> (you may recall, I suggested very early that upgrade would require re-running every hook)
<hazmat> SpamapS, one of the issues is that for something that provides an interface, it could have multiple relations from things that require to effectively the same named relation interface
<hazmat> ie. provides creates what are effectively anonymous/non-addressable relations
<SpamapS> bug 873116 and bug 767195 may be duplicates.. both I think opened by me. ;)
<hazmat> an easy way to qualify those would be to just suffix the endpoint service name
<hazmat> so they can be iterated and addressed in non-ambiguous fashion
<SpamapS> relation-get variable_name [ unit_name ] [ relation_name ]  ?
<hazmat> mysql charm ... list-relations -> db-wordpress, db-drupal  .. to pass to relation-get
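hazmat's suffixing scheme can be sketched as follows. Note that list-relations and per-relation addressing were proposals at the time, not shipped juju commands, so everything here is hypothetical:

```shell
# Hypothetical sketch of hazmat's proposal: a provider's otherwise-anonymous
# relations get qualified with the requiring service's name (db-wordpress,
# db-drupal), so an upgrade hook could iterate over and address each one.
for rel in db-wordpress db-drupal; do
  echo "would query relation ${rel}"   # e.g. a future relation-get addressed at $rel
done
```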
<SpamapS> Oh the active relations
<SpamapS> yeah thats bug 767195 .. you opened it.. I recall discussing this a while back
<hazmat> yeah...
<SpamapS> Very high order stuff.. important, but can be worked around for now.
<SpamapS> It would be quite useful in the ceph charm. What I have to do there is just store relation data locally and keep regenerating stuff from that.
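The pattern SpamapS describes for the ceph charm, cache what related units report, then regenerate config from the cache, looks roughly like this. relation_get is a stub so the sketch runs standalone; the real tool is juju's relation-get, which only exists inside hooks, and the paths and hostname are made up:

```shell
# "Store relation data locally and keep regenerating stuff from that."
relation_get() { echo "192.0.2.10"; }   # stub: pretend the remote unit reported this hostname

CACHE=/tmp/ceph-mon-cache
mkdir -p "$CACHE"
# In mon-relation-changed: record what the remote unit told us.
relation_get hostname > "$CACHE/ceph-1"
# Later (upgrade-charm, departed, etc.): rebuild config from the cache
# instead of re-querying units that may already be gone.
cat "$CACHE"/ceph-* > /tmp/mon-hosts
cat /tmp/mon-hosts
```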
<hazmat> fwereade_, any interest in looking at that? or do you want to roll with ha stuff, i can comment on your proposal on the bug
<SpamapS> fwereade_: btw, great discussion on the SSH key management stuff. I feel like that is going to be really cool when we get to it.
<hazmat> i just sent out a large mail regarding conn failure and session expiration analysis and what my plan is
<SpamapS> hazmat: oh good. :) I just got hit by it
<hazmat> SpamapS, yeah.. making local provider work through hibernate is a great test scenario ;-)
<jimbaker> that would definitely be nice
<SpamapS> for me it just stops working after lunch
<_mup_> juju/ssh-passthrough r408 committed by jim.baker@canonical.com
<_mup_> PEP8, PyFlakes, docstrings
<fwereade_> SpamapS, thanks :)
<fwereade_> hazmat, sorry, need to catch up on context, was putting laura to bed
<hazmat> fwereade_, no worries
<fwereade_> ah, I just saw that email, and suspected it would be relevant to my interests :)
<hazmat> fwereade_, the context for the relation stuff is in the bug links, just responding to your other email re ha, and then going to try and do some reviews
<hazmat> the review queue is overflowing
<hazmat> fwereade_, bcsaller, jimbaker if you have some time, we really should get some more reviews in
<fwereade_> hazmat, duly noted :)
<jimbaker> hazmat, will do, just want to get this ssh passthrough stuff done first
<bcsaller> hazmat: I'll try for a couple today
<hazmat> thanks guys
<jimbaker> taking too long but almost there (too many things that should be easy in argparse, aren't ;) )
<fwereade_> and, hazmat, I can happily work on whatever seems most sensible to you
<fwereade_> hazmat, the HA stuff is interesting, but whether or not I should be working on it can probably be determined by how much I appear to have been on crack while writing that comment ;)
<hazmat> fwereade_, we should do a round discussion on it, the bug is probably a decent place for it, we can decide after that.. there's another low hanging but critical task, which is upstartifying all the agents
<hazmat> currently only local provider uses upstart for the unit agent, but creating an upstart module that can be used for all the agents would be a huge win on the way to ha
<SpamapS> +100 for that
<jimbaker> it would seem that the two features are hugely related
 * hazmat grabs some coffee
<jimbaker> i still think just relying on the fact that the provisioning agent can be restarted + a leader election would suffice for provisioning agent HA, for now. not the scalable solution of course
<SpamapS> They're related in that they both will help with the resiliency of the system
<SpamapS> jimbaker: ZK is more important
<jimbaker> SpamapS, in terms of upstart of ZK?
<SpamapS> And as long as you're going to run two ZK's, why not run two provisioning agents? ;)
<SpamapS> jimbaker: in terms of HA for bootstrap
<jimbaker> SpamapS, exactly
<SpamapS> for upstart.. thats more about being able to reboot nodes
<jimbaker> every ZK should have a corresponding provisioning agent, just makes sense
<jimbaker> in terms of layout
<SpamapS> there's a second task for rebooting, which is making sure that agents that are disconnected can recover from a long absence gracefully.
<SpamapS> but really.. if you can just reboot them and block zk changes while something is gone.. thats a huge step forward.
<bcsaller> maybe something like: is_bootstrap = fib(num_active_machines +2) while num_active_machines < 4, on all those machines we run a PA and ZK and the lowest machine id is leader
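The "lowest machine id is leader" rule bcsaller proposes is trivially decidable by each agent on its own; a sketch with hypothetical machine ids:

```shell
# Each provisioning agent decides leadership locally from the set of active
# machine ids: the lowest id wins (bcsaller's rule). Ids are hypothetical.
active_ids="4 0 7"
leader=$(printf '%s\n' $active_ids | sort -n | head -n 1)
echo "leader is machine ${leader}"   # prints: leader is machine 0
```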
<jimbaker> i think reboot is fine, certainly that works for the provisioning agent
<hazmat> for multi pa, i think fine grain locks are preferable
<hazmat> there is parallel work to be done
<hazmat> for zk, it's unnecessary. zk does its own leader election
<jimbaker> hazmat, exactly, that would support better scalability
<hazmat> and the clients can connect to all of them and route appropriately
<SpamapS> bcsaller: simple and elegant.. I would have no problem with that solution. :)
<jimbaker> hazmat, leader is only for determining whether a given provisioning agent is active, other than the too simple solution i have in mind :)
<hazmat> upstart is a good first step though
<SpamapS> true, no need for a leader PA
<jimbaker> sorry, under the too simple solution
<hazmat> the end goal i'd like to get to is that the zk service and pa just become another service managed by juju
<hazmat> make it just another service managed through juju
<bcsaller> hazmat: +1
<jimbaker> it just removes the SPOF. but i agree with the end goal
<bcsaller> though like we've mentioned, namespaces for things like status seem even more important then
<bcsaller> because you don't want to see juju internal services by default
<SpamapS> I don't know if its that clear
<SpamapS> some would say they want to see *everything* they are responsible for.
<SpamapS> I mean, every machine is effectively related to bootstrap
<hazmat> bcsaller, yeah.. one step at a time though... namespacing might be a nice alternative to doing strange internal service-name checks for protection... with an option to show all namespaces
<hazmat> it also helps when we start exploring hierarchies of services
<SpamapS> Cars existed a long time w/o seatbelts. :)
<SpamapS> I don't know if you need safety up front. Just sanity.
<bcsaller> did rockets ever not have seat belts though?
<SpamapS> Oh dear.. please don't tell me I've wandered into the rocket science lab? ;)
<jimbaker> again, wouldn't it make more sense to follow this plan: remove the SPOF by just having one active provisioning agent + some number of standbys. the PA is extensively tested to follow its design, that it can always be restarted
<jimbaker> then implement a better provisioning agent, which in fact does parallelize work
<jimbaker> this can also get to the more desirable quality that the PA is just another service
<SpamapS> Honestly
<SpamapS> even the most active site with 1000's of nodes
<SpamapS> I doubt can overwhelm a single PA
<SpamapS> ZK will be the choke point there
<jimbaker> SpamapS, agreed. i think we might have some issues in how we iterate the topology, etc. but not in instructing the cloud provider on what to do next
<jimbaker> SpamapS, for instance, i think we could be smarter about the watch mgmt in the expose logic in the PA. too much duplicate work when it's just operating on the topology node. but that's just a matter of having a better watch setup. (or alternatively, moving expose to the machine agent!)
<SpamapS> I kind of like how we're taking advantage of the provider firewall
<jimbaker> SpamapS, sure, and it is transparent in the usage, so point well taken
<SpamapS> 2011-10-19 19:03:20,032: hook.output@ERROR: + relation-get hostname
<SpamapS> 2011-10-19 19:03:20,222: hook.output@ERROR: Traceback (most recent call last):
<SpamapS> Failure: juju.hooks.protocol.NoSuchUnit: The relation 'mon' has no unit state for 'ceph/11'
<SpamapS> So...
<SpamapS> in a departed hook..
<SpamapS> I can't get the relation data?
<hazmat> SpamapS, the provider firewall is just an optimization for ec2 at this point; implementing it as the sole network security prevents security for other providers
<SpamapS> Yeah seems like both are useful
<hazmat> in the future with a machine level firewalls, the ec2 firewall can be maintained just as a provider specific optimization
<hazmat> i've got another long email in the works on that topic
<hazmat> time for a doctor's appt, bbiab
<evandev> does anyone have any experience with installing hbase on ec2? I am trying to find any documentation. I currently have Juju installed with hadoop-master / slave charms and would like to get Hbase running
<SpamapS> evandev: IIRC, the hadoop master and slave charms only give you HDFS
<evandev> Correct, I was just wondering if anyone had taken it a step further and installed hbase on top of that
<SpamapS> Have not.. but if you want to take a crack at it, I'm sure m_3 would be interested in helping. :)
<SpamapS> m_3: ^^
<_mup_> Bug #878462 was filed: resolved --retry does not retry the hook <juju:New> < https://launchpad.net/bugs/878462 >
<evandev> cool thanks
<SpamapS> evandev: can HBase and HDFS share the same set of namenode/slaves ?
<SpamapS> evandev: if so it might be best implemented as a config option on top of the existing hadoop charms.
<SpamapS> ugh.. departed hook executes in parallel with unit removal...
<SpamapS> no way to gracefully remove a ceph monitor node from another one then. HRM
<SpamapS> I suppose the stop hook would work
<evandev> Yea that was my next question
<evandev> I thought about modify the charms
<SpamapS> evandev: if you have pulled them down with 'charm getall', simplest thing to do is to unbind the charm, commit your changes, and then push to a bzr branch.
<evandev> modifying*
 * SpamapS realizes thats not in a wiki page and remedies the situation
<robbiew> SpamapS: thnx
<evandev> ahh I think ill try that
<evandev> thanks SpamapS
<hazmat> SpamapS, you mean in parallel across different units i assume, yeah..
<hazmat> SpamapS, stop hooks are not executed atm, nother topic for discussion
<SpamapS> hazmat: so, yeah, we need to provide graceful shutdown for clusters
<SpamapS> hazmat: I'm thinking that you should not actually destroy the unit until its departed hooks have finished on all related nodes
<SpamapS> this is already well documented in bugs tho..
<_mup_> juju/ssh-passthrough r409 committed by jim.baker@canonical.com
<_mup_> Test for parse errors
<jimbaker> bcsaller, looks like you meant to do an approve on https://code.launchpad.net/~hazmat/juju/unlocalize-network/+merge/79476
<bcsaller> jimbaker: I thought the second person did the approve
<jimbaker> sure, and i can do that, but your comment was just a normal comment, not an approve comment
<jimbaker> bcsaller, ^^^
<bcsaller> yeah, it should have been an approve then
<jimbaker> ok, i've just approved it, since it's pretty clear in the merge proposal the intent
<jimbaker> bcsaller, do you want to propose your branch for bug 873643? it looks good to me
<_mup_> Bug #873643: config values are re-set to their default values when only one is changed <juju:In Progress by clint-fewbar> < https://launchpad.net/bugs/873643 >
<bcsaller> jimbaker: I never got a reply from SpamapS about what he wanted done, I can propose my extension of his branch or he can merge and repush.
<jimbaker> SpamapS, maybe you can delete your merge proposal for that bug? i don't know what the process should be, but i'm ready to approve bcsaller's work, between your trivial change and the reasonable test, it looks good with just the caveat that there's a grammatical error
<jimbaker> in a comment
<hazmat> bcsaller, you can just unlink the old branch and link yours
<hazmat> its been pending for a while
<bcsaller> hazmat: that might be best then
<jimbaker> and it's high
<hazmat> on life
<jimbaker> it's really impacting actual usage, so we should get it in
<SpamapS> Yeah if bcsaller's is complete please do move forward
<SpamapS> there, handled
<SpamapS> https://code.launchpad.net/~bcsaller/juju/config-do-not-overwrite/+merge/79890
<SpamapS> hazmat: great email about the timeouts. I think I hit that just when my system load gets high because some things take 3+ seconds
<SpamapS> hazmat: I think this may be another "production" bug.. when this hits.. the agents basically are dead in the water.
<hazmat> SpamapS, ?
<SpamapS> hazmat: The weirdness I reported last week seems to be a timeout
<hazmat> SpamapS, what's the defect?
<SpamapS> bug 875903
<_mup_> Bug #875903: Zookeeper errors in local provider cause strange status view and possibly broken topology <juju:New> < https://launchpad.net/bugs/875903 >
<hazmat> SpamapS its two different issues
<hazmat> SpamapS, one the session expired, so the units are dead
<hazmat> SpamapS, two status was reporting based on the recorded state instead of taking into account the presence of the connected agent
<hazmat> the second issue has been addressed by a branch fwereade_ has in the review queue
<hazmat> the first by the timeout email
<SpamapS> Ok
<SpamapS> I have been running into a lot of the weird status..
<SpamapS> not suspending/hibernating/anything
<SpamapS> just using it through the day
<hazmat> SpamapS, hmm.. with high load / swapping?
<SpamapS> some load, no swap
<SpamapS> disk is definitely *slammed*
<hazmat> SpamapS, i'd probably attribute it to the same
<SpamapS> I have no doubt that occasionally some things block for 3 seconds
<SpamapS> which is why I moved my testing to an m2.xlarge for a while today
<SpamapS> with a giant tmpfs volume
<SpamapS> no such issues over there. :)
<SpamapS> at $0.50/hour, its a bargain compared to dealing with my silly laptop
<SpamapS> 2011-10-19 23:48:45,173:480(0x7fa28ae7f700):ZOO_ERROR@handle_socket_error_msg@1621: Socket [192.168.122.1:48263] zk retcode=-112, errno=116(Stale NFS file handle): sessionId=0x1331e917b100004 has expired.
<SpamapS>  16:49:25 up 3 days, 17:12,  2 users,  load average: 9.03, 5.04, 3.26
<SpamapS> hazmat: so .. yeah.. this is frustrating.
 * SpamapS realizes he's late to pick up the little one and signs off for the day
<hazmat> SpamapS, yeah.. the fix is actually pretty small and straightforward, just needs some good tests
<hazmat> i'm in progress on it, but trying to take some time today to reviews
<hazmat> SpamapS, i suspect on ec2 it's the vagaries of virtual and the load from the multiple units
#juju 2011-10-20
<hazmat> bcsaller, that first paragraph in the spec is really hard to parse
<bcsaller> huh, I'll try and re-phrase it
<hazmat> bcsaller, why the extra name qual on a service.. ie. isn't  wordpress.logging sufficient?
<hazmat> ah
<hazmat> nm
<bcsaller> hazmat: service A colocates _charm_ L (logging), service B colocates charm _L_, is A.L the same service as B.L
<bcsaller> but ok
<bcsaller> I'll stop explaining
<hazmat> bcsaller, wouldnt i want to associate the same logging service to multiple co-located
<hazmat> or couldn't i
<bcsaller> their lifecycle is tied to the parent in each case
<bcsaller> it might be they all get their own logging-client interface and those talk to a common service via normal relationships
<hazmat> ie. i want to configure my rsyslogging params once for the environment in the logging service and associate it to each of the other services in the environment
<hazmat> not configure it once per other deployed service
<bcsaller> I think I explained that above
<bcsaller> but if its lacking tell me
<SpamapS> hazmat: no it never happens on my m2.xlarge .. only when doing local on my slow laptop disk
<hazmat> SpamapS, good to know, i'm hoping i'll have a few branches in the queue for it, probably by end of day monday
<bcsaller> hazmat: so wp.wp-logging relates to logging-master and sql.sql-logging relates to logging-master
<SpamapS> hazmat: does seem like 3s is *awfully* low given the intended application of juju
<hazmat> bcsaller, hmm.. i think its better to just have --with-service
<hazmat> bcsaller, i don't want a hundred separate munin-node services that i have to configure.. i want a munin service, and a munin-node service, and i deploy other services with the munin-node service.
<bcsaller> hazmat: I don't follow, --with is still there and it behaves as before, removed the optional service name from the cli
<SpamapS> I think there are two use cases.
<hazmat> the units are getting pair-wise relationships, else its just another service relation
<bcsaller> hazmat: thats a good point
<SpamapS> one is a "policy service" deployed with multiple services with shared configuration and relatable *one time* to other things..
<SpamapS> the other is add-on component services, which are only deployed with one service
<SpamapS> the second though, is a subset of the first, so the first would be the more ideal use case
<hazmat> SpamapS, they individual units of the colo will still be able to establish relations with their parent (colo) service to effect service specific changes
<hazmat> er. parent service unit
<SpamapS> I have to agree with hazmat. Basically you want a generic "web machine" service (in puppet this would be a class) which provides all the usual instrumenting and tweaking that all of your web app nodes get
<SpamapS> so you deploy web nodes --with web-machine
<bcsaller> the dotted name notation can buy us that but wp.logging and mysql.logging have different lifecycles even w/o the name change
<SpamapS> hrm.. I duno.. I think you can effectively consider each co-lo'd service unit its own entity that just happens to go inside those units.. not seeing where one needs to address them separately.
<SpamapS> I see merits in both..
<hazmat> bcsaller, looks like i have a lot of comments, i'll followup on list
<bcsaller> hazmat: great, and maybe at the meeting on Friday :)
<bcsaller> I'm going to sign off, looking forward to the email and optimistic about this new feature :)
<hazmat> bcsaller, awesome!
 * hazmat wants some more optimism
<hazmat> maybe its that beer in the fridge
<SpamapS> I'm optimistic as well btw. :)
 * SpamapS fires up another m2.xlarge to hopefully finish this ceph thing
<hazmat> SpamapS, re 3s, its actually 10s for a session to go awol
<hazmat> and expire
<SpamapS> but the heartbeat fails after 3s right?
<hazmat> the pings happen every 3.3s, but it's only at 6.6s that it's considered an issue, and 10s at which it's fatal
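hazmat's numbers fall out of the ZooKeeper client's convention of heartbeating at a third of the session timeout; with the 10s session in play here:

```shell
# ZooKeeper heartbeat arithmetic for a 10s session timeout, in milliseconds:
# pings at ~1/3 of the timeout, trouble flagged at ~2/3, expiry at the full timeout.
session_ms=10000
ping_ms=$((session_ms / 3))       # 3333ms: hazmat's "every 3.3s"
warn_ms=$((session_ms * 2 / 3))   # 6666ms: "considered an issue"
echo "ping=${ping_ms}ms warn=${warn_ms}ms expire=${session_ms}ms"
```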
<SpamapS> well anyway, I've given up running the local provider
<SpamapS> on my laptop anyway
<SpamapS> its fan-frickin-tastic on this giant instance. :)
<hazmat> SpamapS, fair enough.. at the moment local provider is a perfect playground for failure testing ;-)
<hazmat> so it definitely will get better soon.. testing connection failures on ec2 was always a bit spotty, but suspend/resume clocks jumps are very nice
<SpamapS> to be clear, I have not been seeing this with suspend/resume.. just loads of 8+
<hazmat> SpamapS, but local is always limited to the class of machine for sure in terms of what it can run concurrently.. out of curiosity what were you running in the containers on the laptop when it failed?
<SpamapS> ceph
<hazmat> SpamapS, how many units?
<SpamapS> 3
<SpamapS> doing very little
<hazmat> hmm.. that's very odd
<SpamapS> just the churn of dpkg/apt
<SpamapS> the disk response time goes to hell
<hazmat> i've seen m_3 do a dozen, and i normally get to a half dozen pretty regularly (no ssd)
<SpamapS> 2011-10-19 23:48:40,630:480(0x7fa28ae7f700):ZOO_WARN@zookeeper_interest@1461: Exceeded deadline by 3509ms
<SpamapS> 2011-10-19 23:48:45,173:480(0x7fa28ae7f700):ZOO_WARN@zookeeper_interest@1461: Exceeded deadline by 1210ms
 * SpamapS sets up an 'at' job to shut down his m2.xlarge "now + 3 hours"
<SpamapS> hazmat: ok, just got it on my EC2 instance. :(
<hazmat> SpamapS, how many units?
<SpamapS> adding two units to my ceph ring
<SpamapS> 2011-10-20 06:47:37,873:272(0x7f217bb29700):ZOO_ERROR@handle_socket_error_msg@1621: Socket [192.168.122.1:46463] zk retcode=-112, errno=116(Stale NFS file handle): sessionId=0x133201298a60003 has expired.
<SpamapS> load of 0.84
<hazmat> what's odd is that unit addition is serial in the machine agent
<SpamapS> each unit does have 4 peer relations
<SpamapS> not sure if that plays a factor
<hazmat> that shouldn't matter, overall load and avail mem does
<hazmat> maybe its just because ceph is experimental
<SpamapS> 16G of memory.. :)
<SpamapS> haha
<SpamapS> experimental service somehow dooms zk? ;)
<SpamapS> it is a fairly complex charm
<hazmat> SpamapS, indeed.. but more seriously.. we'll definitely have fixes for this shortly, but the fact that you're triggering it so easily with ceph ..
<SpamapS> 3 separate daemons to manage for ceph
<SpamapS> happens when the new units are setting up during install
<SpamapS> 2011-10-20 06:47:31,529:272(0x7f217bb29700):ZOO_ERROR@handle_socket_error_msg@1528: Socket [192.168.122.1:46463] zk retcode=-7, errno=110(Connection timed out): connection timed out (exceeded timeout by 0ms)
<SpamapS> 2011-10-20 06:47:31,523: hook.output@INFO: using /usr/bin/prename to provide /usr/bin/rename (rename) in auto mode.
<SpamapS> 2011-10-20 06:47:31,523: hook.output@INFO:
<SpamapS> 2011-10-20 06:47:31,530: hook.output@INFO: Setting up libterm-readkey-perl (2.30-4build2) ...
<SpamapS> actually, it happened before the 2nd unit was even added really
<SpamapS> Oct 20 06:47:31 ip-10-82-145-26 dnsmasq-dhcp[8370]: DHCPDISCOVER(virbr0) 92:40:49:b0:a5:45
<SpamapS> I wonder..
<SpamapS> happens when the new containers first request dhcp
<SpamapS> maybe this interrupts networking somehow
<SpamapS> hazmat: indeed, regardless of what I'm doing to trigger it.. resiliency to this, and any other unknown weirdness would be the first order of business
<SpamapS> seems sporadic
 * hazmat wants gluster
<hazmat> an actual production quality clustered file system that works well in virtualized environments, and does data redundancy
<SpamapS> I think ceph will actually basically destroy gluster.
<SpamapS> Already have a few friends who say its impossible to get performance out of it.
<SpamapS> gluster that is
<SpamapS> that said.. juju should make benchmarking them side by side quite easy!
<SpamapS> hazmat: go to sleep!
<hazmat> SpamapS, that's odd i've seen very good benchmarks of gluster
<hazmat> and i've used it in production b4
<SpamapS> hazmat: I used it briefly before leaving the last place. Had to fight with it to get basics working.
<SpamapS> And then a few weeks after leaving, the guys were picking my brain about lockups and inconsistencies after they had tuned it too far toward the dark side.
<SpamapS> It may have been the app and the way it was using the FS..
<SpamapS> but to get the app working right the most careful mode had to be used.
<SpamapS> They switched to MogileFS and never looked back, since all they really needed was object storage
<hazmat> perhaps.. its got lots of tunables.. i've used it for wonky things like svn repo sharing to trac instances and large file storage for media transformation
<hazmat> s/distributed media
<hazmat> anyways time for bed... g'nite
<SpamapS> This was millions of images a day churning (pics of home foreclosures, sadly)
<SpamapS> 'nite!
<SpamapS> we should have a gluster/ceph shootout once there are charms. :)
<hazmat> :-)
<TeTeT> hazmat: hi kapil, just reading the network security and dynamic ports thread. i wonder if a 'conflicts' mechanism like dpkg's would help with this for charms? E.g. two charms both utilize port 80, so they conflict and can never be co-located. While auto-detection of this conflict is nice to have, a manual field might be good enough to start
<SpamapS> TeTeT: heh, thats basically what I think too
<SpamapS> has worked this way quite well in Debian and Ubuntu
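The dpkg-style "conflicts" field TeTeT proposes doesn't exist in juju; a minimal sketch of the idea, assuming each charm declared the ports it needs in a hypothetical `ports` metadata field, might look like:

```python
# Illustrative only: the "ports" field and this check are assumptions,
# not part of any real charm metadata schema.

def conflicting_ports(charm_a, charm_b):
    """Return the set of ports both charm metadata dicts claim."""
    return set(charm_a.get("ports", [])) & set(charm_b.get("ports", []))

wordpress = {"name": "wordpress", "ports": [80, 443]}
nginx = {"name": "nginx", "ports": [80]}

clash = conflicting_ports(wordpress, nginx)
if clash:
    # Both charms claim port 80, so refuse to co-locate them.
    print("cannot co-locate: both claim ports %s" % sorted(clash))
```

As with dpkg's Conflicts, the manual declaration is cheap to start with, and automatic detection could be layered on later.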
<TeTeT> SpamapS: working late night? Or are you in europe nowadays?
<SpamapS> very late
<SpamapS> trying to be prepared for UDS and the surrounding madness. :)
<SpamapS> Also trying to write a CEPH charm so I can remind myself of all the things complex charm writers need fixed. :)
<TeTeT> SpamapS: he he, I still haven't gotten around to write a charm myself. now with 11.10 out I consider Landscape Dedicated Server as a test bed
<SpamapS> the more eyeballs the merrier
<SpamapS> anyway, time to go pass out
<TeTeT> good night
<fwereade> bcsaller: I'm just writing a response to you and hazmat, and I realised I'm unclear about something -- is there a strict distinction between colo and non-colo charms, or are people free to colo whatever they want?
<mrsipan> what happens if for some reason the zookeeper server dies? is it possible to rebuild it and let the unit services/instances know about its new IP, etc.?
<mrsipan> or do I need to redeploy it all again?
<robbiew> mrsipan: I believe presently you have to redeploy
<robbiew> fwereade: ^?
<mrsipan> robbiew: thanks
<fwereade> mrsipan, robbiew is right
<fwereade> mrsipan, we're planning to have high availability by 12.04 but there's nothing there yet
<mrsipan> fwereade, thanks, that makes zookeeper a SPOF for juju
<mrsipan> for now
<fwereade> mrsipan: exactly so
<fwereade> mrsipan: be reassured that the HA story is a big and important one :)
<robbiew> a **VERY** big and important one ;)
<mrsipan> fwereade, cool, thanks
<hazmat> fwereade, people are free like birds, any one can be a co-located jail bird... some however are born in prison and destined for a life there (standalone: false)
<bcsaller> hazmat: how poetic
<hazmat> bcsaller, predestination is rather harsh
<bcsaller> I do like that we relaxed the ideas around what can co-located from the original run
<robbiew> hazmat: ping
<hazmat> robbiew, pong
<backburner> I saw a video demoing juju but now I can't find it. anyone know where one is?
<robbiew> kim0: ^?
<jimbaker> fwereade, i think i'm going to change my mind on the doc stuff related to juju ssh. the command itself is not documented. really we need to incorporate more detailed docs on each subcommand in juju
<jimbaker> fwereade, so that should be done in a separate branch/bug
<fwereade> jimbaker, sounds sensible to me
<fwereade> jimbaker, my approve still stands ;)
<jimbaker> fwereade, perfect :)
<_mup_> juju/ssh-passthrough r410 committed by jim.baker@canonical.com
<_mup_> Fix doc typo
<_mup_> Bug #878948 was filed: need to be able to define a "virtual" or "external" service <juju:New> < https://launchpad.net/bugs/878948 >
<jimbaker> anyone else seeing this problem when building the docs on trunk with make clean && make html? btw, it works fine with make singlehtml, which is normally how i look at the docs
<jimbaker> http://paste.ubuntu.com/714369/
<_mup_> juju/ssh-passthrough r411 committed by jim.baker@canonical.com
<_mup_> Merged trunk
<_mup_> juju/ssh-passthrough r412 committed by jim.baker@canonical.com
<_mup_> Addressed review points
<_mup_> juju/scp-command r412 committed by jim.baker@canonical.com
<_mup_> Merged upstream
<hazmat> SpamapS, relating two things at once to haproxy is not a race condition
<hazmat> SpamapS, the hooks execute serially on haproxy
<hazmat> at least it shouldn't be a race condition
<hazmat> looking at the haproxy charm, it looks fine
<SpamapS> hazmat: it will produce a single service, only pointing to the last one related to
<hazmat> SpamapS, ah.. its not multi service aware for backing
<SpamapS> one listen section
<SpamapS> needs host header stuff .. which would likely have to come from backend service config options
<SpamapS> which did not exist when I wrote it :)
<SpamapS> hazmat: so, about the capistrano renderer..
<SpamapS> hazmat: if we're not going to accept that, how are we going to recommend people integrate with juju?
<hazmat> SpamapS, that would subsume very easily into a misc command we could distribute if we had a REST interface
<hazmat> or that people could innovate on outside the core
<SpamapS> I have misc/status2(gource|hostname).py in several branches already.. would like to be able to just write a simple python module as a plugin that does what they do.
<SpamapS> mine currently poll juju status by execing it (I know they could import juju.control.status but I don't want to assume thats a public API)
<hazmat> SpamapS, the same problem exists for the capistrano renderer, it would need to poll to be accurate
<SpamapS> Which is *fine*
<SpamapS> I see where its not core to what juju wants to do..
<SpamapS> but neither is dot
<hazmat> but there are so many other one off consumers of status, it seems like just having a REST interface would solve this
<hazmat> but that doesn't solve distribution
<SpamapS> Yeah REST to replace execing juju status.
<hazmat> sadly i don't think the python plugin stuff would fly (it does have a distribution model though)
<SpamapS> you don't have to distribute it. you'd want a capfile on the same boxes you'd want to run juju status on
<SpamapS> oh
<SpamapS> distribution of the tool?
<hazmat> yup
<SpamapS> So, yeah, bzr gets this right, and so does git
<SpamapS> put something in your ~/.juju that makes it understand this
<SpamapS> thats what I really want
<SpamapS> and also the ability to put something in thepythonpath that is like a juju plugin so I can package it up
<hazmat> yeah.. i'd be happy with easy_install juju_gource && ./bin juju gource
<SpamapS> err
<SpamapS> we have this distro thing
<hazmat> ;-)
 * hazmat sheds a tear
<SpamapS> but yeah, it should ultimately be easy_installable so that we can spread it wider than Ubuntu
<hazmat> it's just not going to fly, it's not compatible with other impls
<SpamapS> So, anyway, point being, if we're going to say "no more status renderers in juju" then we need to start a juju status renderer project.
<SpamapS> and probably need to rip out dot and png
<hazmat> the question is if we add it, where do we draw the line?
<SpamapS> we don't :)
<SpamapS> we open our arms as wide as they'll go
<SpamapS> and love everybody. :)
<hazmat> i mean all my friends are using fabric.. do we add fabric output..
<hazmat> that's why i was thinking its pretty trivial to replace that with a status2capistrano misc thingy
<SpamapS> your point that this is not something core to juju is well taken. Give me the appropriate integration point... right now it's juju status | mything
<jimbaker> the other thing is, it would be nice if users of status could also watch the /status node introduced by bcsaller's statusd branch, instead of polling
<bcsaller> jimbaker: yeah, thats the plan
<jimbaker> bcsaller, good to hear!
<jimbaker> juju status --watch ?
<bcsaller> yeah
<hazmat> bcsaller, what would that do?
<hazmat> delay status output till its changed?
<bcsaller> not clear how it interacts with --output, but ignoring that it just runs its collect/output when the topo changes
<hazmat> or terminal screen refresh
<jimbaker> ideally it just waits until the change, then immediately returns. long poll. but i suppose it could be convenient if it resets itself again for people just wanting to watch it from a term
<SpamapS> just stream it
<bcsaller> blocking till change is key, otherwise you can just poll with $watch juju status
<SpamapS> give it an additional root yaml tree element of    status_snapshot: $unix_epoch
<jimbaker> bcsaller, agreed. and SpamapS, i think that would be a common case. just want a one-shot option too
<SpamapS> Tho yaml probably doesn't have a SAX like mode
<hazmat> SpamapS, negronjl  so is piping status to an external command an integration problem?
<hazmat> SpamapS, its not a stream for sure..
<SpamapS> hazmat: Its fine with me, I like the unix way. I think we can probably write a single tool that is extensible with plugins for any output desired.
<SpamapS> If the /status node comes into being, then just execing it with --wait-for-change over and over would be fine since it would block until a change
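The `--wait-for-change` flag is hypothetical here, but the exec-and-block pattern SpamapS describes can be approximated today from userspace: run `juju status`, digest the output, and block until the digest changes. A sketch, with the command and interval as assumptions:

```python
import hashlib
import subprocess
import time

def wait_for_status_change(interval=5, command=("juju", "status")):
    """Block until the output of the given command changes, then return it.

    A userspace stand-in for the hypothetical --wait-for-change flag:
    poll the CLI and compare digests of its output rather than watching
    a zookeeper node directly.
    """
    baseline = subprocess.check_output(command)
    digest = hashlib.sha1(baseline).hexdigest()
    while True:
        time.sleep(interval)
        current = subprocess.check_output(command)
        if hashlib.sha1(current).hexdigest() != digest:
            return current
```

This is what `watch juju status` does visually; blocking on the actual /status node (once it exists) would avoid the polling cost.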
<negronjl> At this point, I see enough friction against the output renderer that it would be good to have either a plugin system, API, or unified tool
<jimbaker> negronjl, +1 for a plug in system. i know this is something that m_3 really wants to see as well
<SpamapS> negronjl: how about we stick our heads together in Orlando and produce a single tool that lives outside of juju?
<SpamapS> negronjl: we can make it a Recommends: of juju in Ubuntu. :-D
<negronjl> +1 on the single tool.
<negronjl> Just dump said tool in charm-tools.
<hazmat> SpamapS, negronjl that sounds good, i'd be happy to lend a hand on that, and make it pluggable via pypi
<hazmat> or you could just crib from the cli-plugins.. its pretty trivial
<SpamapS> I want to keep charm-tools to just the bare minimum for maintaining charm
<jimbaker> there's something to be said for juju status supporting a variety of renderers, including dot and capistrano, out of the box. having the ability to plug new output formats into it will make our necessarily limited support there justifiable
<hazmat> and very functional
<negronjl> Ok with me.  Im sure we can all come together and create something awesome :)
<hazmat> https://code.launchpad.net/~hazmat/juju/cli-plugins
<SpamapS> hazmat: !! ?
<negronjl> A happy medium could be to leave Capistrano in but still create the tool for other formats for now.
<SpamapS> this may already have a framework?
<hazmat> SpamapS, just using entry points into python packages for a plugin, its like a half-dozen lines of code
<negronjl> Once the tool is done,  we move the different renderer to the tool
<hazmat> SpamapS, its been veto'd for juju
<SpamapS> True, the work is done.. why not just accept it and make it clear that it will go away when there is an external alternative?
<SpamapS> hazmat: of course it has. :-/
<negronjl> hazmat, what has been vetoed?
<hazmat> python cli plugins for juju
<jimbaker> re cli-plugins, short & sweet, nice. we still need specific support for stuff like status that should support an output format plugin
<jimbaker> but that's also short work too
<hazmat> jimbaker, there already is effectively a registry of outputers, that a plugin could register for
<hazmat> its just we dont have any internal distinction of internal api vs public api
<hazmat> so we'd have to do some versioning around the entry point
<hazmat> the goal is to have language neutral plugins around a rest interface at some point
<jimbaker> hazmat, exactly, it's already there, so just need to support
<negronjl> Spamaps, I think that at this point we can move faster and more efficiently by moving new renderers and such outside of juju and into charm-tools.  At least we could bypass the veto votes :)
<jimbaker> anyway, +1 for plugins, i hope we can do them, since it's so simple
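The "half-dozen lines of code" hazmat mentions refers to setuptools entry points. A sketch of the discovery side, where the group name `juju.status.renderers` is made up for illustration:

```python
# External packages register a renderer under a shared entry-point group;
# the tool enumerates and loads them at runtime. The group name here is
# an assumption, not a real juju registration point.
import pkg_resources

def load_renderers(group="juju.status.renderers"):
    """Map entry-point names to their loaded callables."""
    return dict(
        (ep.name, ep.load())
        for ep in pkg_resources.iter_entry_points(group)
    )

# A plugin package would declare, in its setup.py:
#   entry_points={"juju.status.renderers": [
#       "capistrano = juju_cap.render:render",
#   ]}
```

This is also why `easy_install juju_gource` would be enough to add a renderer: installing the package registers the entry point, and discovery finds it with no changes to the core.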
<negronjl> hazmat, is there an ETA on the REST interface or API?
<hazmat> negronjl, i started it but, its really a backburner, prod tagged issues and colo are the highest priority atm
<hazmat> specifically for myself getting the disconnected/expiration stuff going is priority atm
<negronjl> hazmat, thx
<jimbaker> indeed, one of the things about the current command setup is that it almost wants to do plugins, otherwise why not use a class structure instead of modules
<hazmat> negronjl, i'd definitely like to see it for 12.04 though
<hazmat> there's a lot up in the air on resources and goals for 12.04 though
 * hazmat wanders off to take a break, bbiab
<SpamapS> negronjl: lets call it juju-plus ;)
<SpamapS> jimbaker: true, we could just abuse juju.control and write our own commands :-D
<jimbaker> SpamapS, well, one *could* do that, but i think it's better to avoid some sort of monkey patching. just use a plugin system, that's all
<negronjl> Spamaps, my sentiments exactly.  We can use existing commands, modules, interfaces, etc. and bypass some of the non-technical issues.
<SpamapS> I have to run back to the kid stuff now.. but good talk everybody. :)
<SpamapS> negronjl: lets chat about this next week for sure.
<negronjl> Spamaps: sure thing
 * hazmat scores four touchpads in the mail
<hazmat> xmas shopping is done ;-)
#juju 2011-10-21
<_mup_> txzookeeper/session-and-conn-fail r44 committed by kapil.foss@gmail.com
<_mup_> watch delivery tests and some additional session expire tests
<beber> hi
<beber> I've got some problems with juju and the local provider
<beber> I can't deploy more than a single service
<beber> is this a bug ?
<fwereade> hi beber, sorry I missed you, I was having lunch
<fwereade> it may be a bug; would you describe what you're trying to do please?
<beber> hi
<beber> ok, I'm just trying to use juju with the local provider
<beber> I'm using VirtualBox as a test machine (with Oneiric)
<beber> Once juju is installed, I bootstrap the environment
<beber> and deploy the mysql charm
<beber> everything goes well, the mysql unit is started
<beber> but then, when I want to deploy another charm or add a mysql unit, nothing happens
<fwereade> nothing at all?
<beber> (despite juju tells me "INFO 'deploy' command finished successfully"
<beber> I use watch -n1 "ps afx | tail -n 45"
<fwereade> ah ok, so when you "juju deploy wordpress" it claims to deploy but nothing happens
<beber> exactly
<fwereade> and similarly for "juju add-unit mysql"?
<beber> yes
<fwereade> beber, there are a couple of logfiles that might be helpful
<fwereade> there should be a data-dir defined in your environments.yaml
<beber> could you tell me which one to look at ?
<beber> yes
<fwereade> ah, hmm, just a sec, I have an alternatve "what's-going-on" command
<fwereade> pgrep lxc| head -1| xargs watch pstree -alU
<fwereade> but the interesting logfiles would be
<fwereade> data-dir/units/master-customize.log
<fwereade> data-dir/machine-agent.log
<fwereade> ...and the output of:
<fwereade> /usr/lib/juju/juju/misc/devel-tools/juju-inspect-local-provider
<fwereade> would also be helpful
<fwereade> beber, if you pastebin those over I will gladly take a look
<beber_> sorry, wifi problem...
<fwereade> beber_, np, I'll repaste it privately to avoid spam ;)
<hazmat> good morning
<SpamapS> FYI, today I am going to switch the build recipe to use the packaging from Ubuntu (for natty -> precise) ... Maverick and Lucid had to have changes applied
<jimbaker> fwereade, what if we used mixin instead of colo? to me, colocation sounds much more like placement as an equal on the same machine. which is important, just the other story :)
<fwereade> jimbaker, that's quite nice
<jimbaker> in any event, programming has a whole set of words we can choose from with these sorts of ideas
<fwereade> jimbaker, I spent the afternoon writing an email trying to define a minimal set of changes to get this story up, and I started using the word "buff", which I've grown quite fond of
<jimbaker> buff ;) ?
<fwereade> it sounds magicy and geeky and has about the right meaning ;p
<jimbaker> as in waxing something?
<fwereade> nah, as in games
<fwereade> casting a spell on something to give it more strength or speed or whatever
<jimbaker> i see, this is very jargony however (never heard of that usage before), but it's there on http://www.urbandictionary.com/define.php?term=buff
<fwereade> I have a theory that it's an immediately comprehensible name to a good proportion of our target demographic, but, er, I'm 0 for 1 so far :/
<jimbaker> fwereade, what if we used something like blend?
<jimbaker> or maybe we can find words from potion making
<fwereade> jimbaker, my quibble with that is that mixin/blend implies a two-way relationship, which is not necessarily applicable
<fwereade> I also considered "aura" as a semi-appropriate magicy word, but it didn't seem to work so well
<jimbaker> fwereade, actually that's the good thing about mixin, it's not 2 way at all
<fwereade> jimbaker, in theory :)
<fwereade> but I do take your point
<jimbaker> well, again from its usage in programming, that's the case. but i may be the less target. i have never played a MMORPG
<fwereade> jimbaker, you're not missing all that much (says the man who played WoW for 4 years ;))
<jimbaker> fwereade, reality consumes enough of my time it seems
<fwereade> jimbaker: yeah, RL is OK, but the surrounding of most major cities are totally overrun with farmers
 * fwereade shamelessly steals a joke from somewhere he can't remember
<jimbaker> :)
<jimbaker> fwereade, so mixin sounds good, and if we can find a good charm-oriented synonym that connotes that little extra special (pixie dust, unicorn hair, powdered dragon bone), that would be nice
<fwereade> jimbaker, sounds good :)
<_mup_> txzookeeper/session-and-conn-fail r45 committed by kapil.foss@gmail.com
<_mup_> make the proxy easier to use as for blackholing communications, and verify session expiration with event and exception
 * hazmat ponders failure
<jimbaker> hazmat, sometimes we must fail before we can succeed ;)
<jimbaker> i believe there are successories to consider too
<SpamapS> failure is the only teacher really
<hazmat> indeed it's hard to model/test failure without experiencing it
<hazmat> i guess the team meeting got scrapped
<SpamapS> speaking of failure.. I have been failing at reading email the last 3 days
 * SpamapS implements more auto-labelling for sup
<robbiew> I thought the meeting was punted to next week?
<robbiew> SpamapS: filters...filters...filters
<jimbaker> that's my understanding too
<jimbaker> re meeting
<hazmat> cool
<hazmat> add-buff ;-)
<SpamapS> robbiew: yeah, I'm finally giving in and implementing some
<jimbaker> in my mind, 'juju add-mixin mysql logging' is not as obscure as 'juju add-buff mysql logging', but maybe i'll learn to like the obscurity
<SpamapS> ** huge rocks fall from the sky and kill everyone.
<SpamapS> fwereade: thank you for the LOLs
<_mup_> txzookeeper/session-and-conn-fail r46 committed by kapil.foss@gmail.com
<_mup_> extant watches recieve errors on session expiration
<hazmat> uh oh... exceptions.SystemError: error return without exception set
<hazmat> bindings bug
<_mup_> txzookeeper/session-and-conn-fail r47 committed by kapil.foss@gmail.com
<_mup_> capture a test case that exposes a bug in the libzk python bindings
<careo> gl
<careo> wrong window, sorry
<hazmat> no worries
 * hazmat debates the value of meta programming error handling 
<hazmat> generally not a good idea, but for a retry facade it seems appropriate
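The retry facade hazmat is pondering can be sketched without hand-wrapping every client method: `__getattr__` builds retrying proxies on the fly. This is a minimal illustration of the pattern, not txzookeeper's actual implementation; the retryable exception type and retry policy are assumptions.

```python
import time

class RetryFacade(object):
    """Wrap a client so transient errors are retried transparently.

    __getattr__ intercepts attribute access and returns a closure that
    retries the underlying method on "retryable" errors. In the real
    zookeeper case these would be connection-loss errors.
    """
    RETRYABLE = (ConnectionError,)  # assumption: stand-in error class

    def __init__(self, client, attempts=3, delay=0.1):
        self._client = client
        self._attempts = attempts
        self._delay = delay

    def __getattr__(self, name):
        method = getattr(self._client, name)
        def retrying(*args, **kwargs):
            for attempt in range(self._attempts):
                try:
                    return method(*args, **kwargs)
                except self.RETRYABLE:
                    if attempt == self._attempts - 1:
                        raise
                    time.sleep(self._delay)
        return retrying
```

As hazmat notes, metaprogramming error handling is usually a bad idea, but for a facade whose only job is uniform retry it keeps the policy in one place.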
<SpamapS> hazmat: I ran into a guy at ODS who said he had a lot of problems with libzk
<SpamapS> hazmat: said he had developed a pure python ZK library because of it
<hazmat> SpamapS, yeah.. the error handling is delicate, and without the twisted bindings, using libzk is painful imo, but in general it's been pretty solid as of 3.3.3.. i contributed a few patches/bug reports upstream when we were first getting started
<hazmat> SpamapS, most of the issue is actually not the python zk binding, though there are some, but just understanding libzk itself.. i'd be curious to look at an alternative py lib though
<hazmat> SpamapS, a few weeks ago i started another zk python wrapper (still built on py libzk) using a coroutine greenlet approach.. still in its infancy though..
 * SpamapS imagines hazmat surrounded by bubbling flasks of liquid over bunsen burners and tubes 
<hazmat> SpamapS, at the moment its hp touchpads and hard drives for a nas ;-)
<SpamapS> I need to bite the bullet and buy an SSD
<SpamapS> Keep debating with myself about what size and whether to get two or one SSD and one honking big rotational drive. :-P
<hazmat> SpamapS, i'm planning on  waiting for the new ocz onyx  and samsung 830s, should be out at the start of nov.
<SpamapS> I don't really do the "wait for the best" thing
<SpamapS> I do the "whatever costs me the least amount of time" thing :)
<SpamapS> Right now 5400rpm is costing me time.. so I need an SSD now. :)
<hazmat> SpamapS, talk about breaking backwards compatibility ;-)
<SpamapS> hazmat: which thing?
<hazmat> SpamapS, the commit diff stuff
<SpamapS> oh, well we can turn autocommit on for the first release. ;)
<SpamapS> And it's selectively backward incompatible.. you can turn on the "old mode" if that's what your scripts expect. ;)
<SpamapS> hazmat: we can also just punt that off to a wrapper if we have import/export
<hazmat> SpamapS, why is this more repeatable?
<hazmat> vs. just import/export
<SpamapS> hazmat: because you get a single thing, in VCS, that is the exact way to repeat what you have
<SpamapS> If you've had an env for 2 years, and you want to repeat it, you don't want to replay *every* deploy, add-relation, add-unit, remove-unit .. ;)
<hazmat> so more of an oplog
<SpamapS> the thing in vcs is the exported env
<hazmat> vs. just load this graph
<SpamapS> I think we're agreeing
<SpamapS> give me exports and imports and I can implement this w/o juju's help
<SpamapS> oplog would be a disaster. I want a snapshot. :)
<hazmat> yeah.. if you're building it on the graph, it's not clear to me what the extra value is.. but i think my perspective is long, given the 2 year running env; with import/export, you just load up the export and you're done. the distinction here is being able to verify the evolution of the system,
<hazmat> so effectively a snapshot audit log
<SpamapS> It could be done simply with user discipline
<SpamapS> but I like the idea of being able to edit the local copy with the same commands you would use to edit the live copy
<hazmat> hmm but all of the ops are effectively standalone transactions, ie. its not atomic
<SpamapS> Yeah if one of them fails I understand, you can't back them out
<SpamapS> --dry-run to the rescue? ;)
<hazmat> yeah.. probably not, dry-run is effectively print the dump ;-)
<SpamapS> with the import..
<SpamapS> you'd need a way to tell the user what you're going to do to the env
<hazmat> but actually understanding what it's doing to the units is a non-starter unfortunately.. hooks are binaries
<SpamapS> like, I'm going to destroy service X, and create a new service called Xasdf... etc
<hazmat> you can see what its doing to the env, but what its doing to the machines is a different matter
<SpamapS> yeah thats more of an operational issue
<SpamapS> if stuff fails.. you're going to have to resolve that yourself
<SpamapS> But what the user needs to see is the *diff*
<SpamapS> what is this import going to do to my environment?
<SpamapS> With single commands.. you know.. because you're running them. ;)
<SpamapS> Puppet goes through the same problem. --dry-run tells you its going to edit file X and put value Y in it.. but that may still fail for some reason
<SpamapS> hazmat: anyway, export/import seems the key to repeatability
<SpamapS> Ok well I've muddied the waters enough for today. Back to syncs and merges. :-p
<_mup_> txzookeeper/session-and-conn-fail r48 committed by kapil.foss@gmail.com
<_mup_> a zookeeper client facade that transparently retries on various non fatal connection errors
<hazmat> SpamapS, but the import itself is the diff
<hazmat> i'd rather it just bail before attempting to modify anything existing in the env
<hazmat> and just add a prefix op
<hazmat> for importing it back into a running env that may have those existing services
<hazmat> perhaps the delta application is useful and we can grow that, and a diff op against that
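The "show the user the diff" idea can be sketched against a made-up export shape. Assuming both the exported snapshot and the live environment reduce to a dict of service name to unit count (a stand-in for whatever a real export format would carry), the delta is just set arithmetic:

```python
def diff_environments(exported, live):
    """Summarize what importing `exported` would do to `live`.

    Both arguments are hypothetical {service_name: unit_count} dicts;
    no real juju export format is assumed here.
    """
    added = sorted(set(exported) - set(live))
    removed = sorted(set(live) - set(exported))
    changed = sorted(
        name for name in set(exported) & set(live)
        if exported[name] != live[name]
    )
    return {"add": added, "remove": removed, "change": changed}
```

A `--dry-run` import could print exactly this summary and bail, which matches hazmat's preference for refusing to touch existing services before anything is modified.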
 * hazmat goes back to pondering failures
<hazmat> argh.. this is tricky
<_mup_> txzookeeper/session-and-conn-fail r49 committed by kapil.foss@gmail.com
<_mup_> retry wrapper for watch methods, run the full client test suite against the retry facade via test subclass, disable white box tests
<jimbaker> bcsaller, sorry i'm just re-reviewing statusd. would it be possible to address my review points? it does look like you have fixed things, but obviously a doublecheck would be nice
<jimbaker> the more important thing: watch_status does not sufficiently watch the changes in the environment with respect to expose services and opened ports
<bcsaller> jimbaker: I'll iterate in the proposal, thanks
<jimbaker> these are not in the topology node
<jimbaker> bcsaller, you probably should take advantage of the provisioning agent here. you don't want to redo the watch structure for expose. trust me on this ;)
<jimbaker> bcsaller, this is actually a good requirement for the refactoring in bug 873108; you should be able to use the observer capabilities to register an interest in these changes. right now, they are just there to support testing, and support just one observer. multiple observers might be the right solution here
<_mup_> Bug #873108: Move firewall mgmt support in provisioning agent into a separate class <juju:In Progress by jimbaker> < https://launchpad.net/bugs/873108 >
<bcsaller> jimbaker: thanks, looking into it
<jimbaker> bcsaller, the other thing to consider is the impact of https://code.launchpad.net/~fwereade/juju/dynamic-unit-state/+merge/79560, this will add more ways for the status to change
<bcsaller> jimbaker: I'm weighing the idea of us chasing these down vs. these somehow triggering a status change (touching /status to trigger the watch)
<jimbaker> bcsaller, agent status is an ephemeral node, but its structure is not one easy to watch without recursive watches
<jimbaker> bcsaller, but definitely the expose refactoring can do this triggering, via the observer mechanism
<jimbaker> bcsaller, so agent status will definitely require more thought. this might be a good thing to point on its review, my first impression was that fwereade's branch was too big in scope
<hazmat> woot it works
<_mup_> txzookeeper/session-and-conn-fail r50 committed by kapil.foss@gmail.com
<_mup_> get_children_and_watch test with transparent retry on connection lost
<SpamapS> hah.. wow.. I've been running my blog on 11.04 and now 11.10 for 8 months.. just now realized there's a massive incompatibility between wordpress 3.0.5 and the jquery version that natty and oneiric have
<SpamapS> Just figured my drizzle mods were causing the problems
<_mup_> txzookeeper/session-and-conn-fail r51 committed by kapil.foss@gmail.com
<_mup_> additional transparent retry tests with watchers, remove bad session test based on erroneous upstream faq entry.
<_mup_> txzookeeper/session-and-conn-fail r52 committed by kapil.foss@gmail.com
<_mup_> remove management of the connected attribute on clients from error handler, libzk is going to be transparently reconnecting under the hood, also supports retry much better
#juju 2011-10-22
<_mup_> txzookeeper/session-and-conn-fail r53 committed by kapil.foss@gmail.com
<_mup_> set with transparent retry (using retry_change), correct doc string to note session timeout value is in milliseconds
<_mup_> Bug #879731 was filed: Better error handling for session failure and connection loss <juju:New for hazmat> <txzookeeper:New for hazmat> < https://launchpad.net/bugs/879731 >
<_mup_> txzookeeper/session-and-conn-fail r54 committed by kapil.foss@gmail.com
<_mup_> improve retry delay
<_mup_> txzookeeper/session-and-conn-fail r55 committed by kapil.foss@gmail.com
<_mup_> simplify retry, add retry delay to watch op retries
<_mup_> txzookeeper/session-and-conn-fail r56 committed by kapil.foss@gmail.com
<_mup_> test for core retry & retry watch in isolation
<_mup_> txzookeeper/session-and-conn-fail r57 committed by kapil.foss@gmail.com
<_mup_> version increment, and pep8
<_mup_> txzookeeper/session-and-conn-fail r58 committed by kapil.foss@gmail.com
<_mup_> update license headers, add author tag
<tauren> just discovered juju, looks cool, but have a question
<tauren> can I only deploy on AWS?
<tauren> is there a way to use my own servers and virtualization?
<tauren> nm, found the answer in the FAQ. https://juju.ubuntu.com/docs/faq.html
<hazmat> tauren, thats outdated
<hazmat> tauren, you can deploy on openstack or directly onto physical machines, or deploy entire environments on a single machine with lxc
<tauren> hazmat: sweet! is there a howto somewhere?
<hazmat> tauren, which would you like to do?
<hazmat> tauren, we're having some problems updating our docs atm
<tauren> probably lxc
<hazmat> tauren, i've put up some updated ones at http://charms.kapilt.com/docs/provider-configuration-local.html
<tauren> but when you say directly onto physical machines, is there something that will automatically set them up for virtualization?
<hazmat> tauren, we deploy openstack with juju onto physical machines
<hazmat> tauren, juju uses cobbler/orchestra as a machine provider for that scenario
<tauren> hmmm... here's what I want to do...
<tauren> I have a single physical server
<tauren> and I'd like to configure it to be able to run multiple services.
<tauren> currently it has kvm installed
<tauren> but i plan to start over.
<tauren> what solution would you suggest?
<hazmat> tauren, local provider which uses lxc for isolation
<hazmat> its lightweight virtualization
<tauren> i'm running openvz on another server. i assume it is similar?
<hazmat> well its more properly not virtualization.. more like a solaris zone, freebsd jail, or chroot on steroids.
<hazmat> tauren, very similar to openvz
<tauren> ok
<hazmat> tauren, except in kernel
<tauren> nice
<tauren> so i could deploy charms for wordpress, etc. onto it then.
<tauren> what if i also need to run some java services? my understanding is kvm is preferred over openvz for that. same for lxc?
<tauren> hazmat: also, can I scale only locally, or can I expand out to EC2 if my local resources are exhausted?
<tauren> hazmat: i need to head out. thanks for your help!
<hazmat> tauren, you can only use one provider for a single environment atm
<hazmat> tauren, but you can have multiple environments managed by juju
<hazmat> ie. test local, deploy ec2
<tauren> so I couldn't have a wordpress blog hosted locally and add EC2 resources to scale it?
<hazmat> tauren, cheers
<hazmat> tauren, no, we don't have public/private bursting, and probably won't for a while
<tauren> good to know.
<tauren> is openstack something I run on my servers?
<hazmat> tauren, yes
<tauren> or is that an API that rackspace offers?
<tauren> does openstack use lxc, or something else?
<hazmat> tauren, openstack is a cloud provider with its own api and a minimal ec2 api, it supports multiple virtualization backends
<hazmat> from kvm, xen, lxc, etc ...
<tauren> alright... so the link you sent doesn't show how to get openstack going, just plain local with lxc, right?
<hazmat> tauren, openstack usage is the same as ec2
<hazmat> tauren, google has some answers for juju + openstack + orchestra
<hazmat> https://wiki.ubuntu.com/ServerTeam/UbuntuCloudOrchestraJuju
<_mup_> Bug #880023 was filed: machine agent disconnects from zoopkeeper on heavy loads <juju:New> < https://launchpad.net/bugs/880023 >
#juju 2011-10-23
<_mup_> Bug #880023 was filed: machine agent disconnects from zoopkeeper on heavy loads <juju:Confirmed> <juju (Ubuntu):Confirmed> < https://launchpad.net/bugs/880023 >
#juju 2013-10-14
<kenn> Hi guys. Is there a way I can get the private IP address of my machine? I know unit-get private-address returns the hostname, but I specifically need the IP.
<stub> kenn: In my environments, private-address is an IP address. If you are in Python, socket.gethostbyname(hostname) should convert either to the IP. Hopefully you don't end up with the loopback address or something useless like that.
<stub> Can someone give me the minimal steps to firing up a Swift server? My simple one failed with  Bug #1238660, but I'm hoping it was my fail rather than the charm.
<_mup_> Bug #1238660: Default installation fails <swift-proxy (Juju Charms Collection):New> <https://launchpad.net/bugs/1238660>
<routelastresort> Must verify that any software installed or utilized is verified as coming from the intended source. Any software installed from the Ubuntu archive satisfies this due to the apt sources including cryptographic signing information.
<routelastresort> and Must call Juju API tools (relation-*, unit-*, config-*, etc) without a hard coded path. Should default to use software that is included in the Ubuntu archive, however we encourage that charm authors have a config options for allowing users to deploy from newer upstream releases, or even right from VCS if it's useful to users.
<routelastresort> so, if deploying from VCS, e.g. github.......
<routelastresort> I was just going to see how everyone else was doing it
<routelastresort> and write packages later
<stub> routelastresort: If you are writing your charm in Python, charm-helpers has some support for deploying from VCS. I think only bzr at the moment, but would be easy to add git support too. Most people seem to be using packages wherever possible.
<kenn> stub: thanks, I wrote some quick bash to run it through dig if it wasn't already an IP. I only get a hostname on AWS, not locally
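<editor's sketch> kenn's approach can be sketched as a small shell helper (a minimal sketch; `resolve_ip` is a hypothetical name, and it uses getent rather than dig so it needs no extra packages):

```shell
# resolve_ip: print an IPv4 address for a value that may already be an IP
# or may be a hostname (as `unit-get private-address` returns on some clouds).
resolve_ip() {
    case "$1" in
        *[!0-9.]*) getent hosts "$1" | awk '{ print $1; exit }' ;;  # hostname: resolve it
        *)         printf '%s\n' "$1" ;;                            # already dotted-quad
    esac
}
```

In a hook this would be called as `IP=$(resolve_ip "$(unit-get private-address)")`.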
<rick_h_> marcoceppi: question if you've got a sec. Working on this api endpoint I know deployer files can be json/yaml but do they allow any fancy yaml bits?
<rick_h_> marcoceppi: I want to know if I can ask you to send me json every time or if that'll never fly, you need the option for both to validate properly?
<TheMue> stub: ping
<marcoceppi> rick_h_: I'd rather send the original format, can send with content type, but translation from yaml -> json might not translate 100%
<rick_h_> marcoceppi: yea, sorry. The more I thought about that the more I went "bad plan"
<Anju> rick_h_ : ping
<rick_h_> Anju: pong
<Anju> rick_h_:  can we find the usage of block storage and object storage by juju
<rick_h_> Anju: hmm, outside my wheel house. Assume you mean in openstack/swift or something?
<Anju> openstack
<rick_h_> marcoceppi: any hints on where to point Anju at?
<Anju> rick_h_:  thanks
<rick_h_> Anju: yea, not sure what reporting tools are available in openstack for monitoring stuff like the disk usage
<Anju> rick_h_:  i just want to know about ceph
<Anju> I want to use that in opensatck
<rick_h_> Anju: using the ceph charm then?
<Anju> rick_h_:  means
<Anju> rick_h_:  please help
<rick_h_> Anju: means?
<marcoceppi> Anju: rick_h_: not able to go in to it ATM, you can ask on askubuntu.com
<rick_h_> marcoceppi: cool thanks
<rick_h_> Anju: so yea, in checking out a quick search of the ceph docs http://ceph.com/docs/master/rados/operations/monitoring/#checking-a-cluster-s-status looks like it'd give you some basic usage info
<rick_h_> Anju: so you could juju ssh to your ceph service and maybe that will get you started.
<Anju> rick_h_:  yes rick_h_
<Anju> i followed this one
<Anju> a command ceph -s
<rick_h_> Anju: if you've got more details then marcoceppi makes a great point that it would be a great askubuntu.com question and easier to provide details on what you're running and what you want to get out of it.
<Anju> rick_h_:  some questions I want to know
<Anju> sorry if I am stealing your time
<rick_h_> Anju: not at all, the issue is just I'm not a ceph expert. I'm honestly not going to have the answer for you right now. The best thing to do is to hit up askubuntu.com and write them out. Then others can see/respond better.
<Anju> okk rick_h_
<Anju> thanks
<rick_h_> np, once you write them out make sure to tag them juju and ceph.
<Anju> rick_h_:  thanks
<m_3> does open-port take ranges yet?
<rick_h_> marcoceppi: got a sec?
<crombar> I have a quick question about juju on aws: Is there a way to setup a raid 10 ebs using juju, or do I have to set that up manually? I am thinking about deploying the mongodb charm.
<zradmin> has anyone gotten a  HOOK Host key verification failed. error on charms with brand new hosts?
<kurt_> Anyone - I discovered bug 1276734 was actually not fixed in 1.16.0 as it said it was.  I noted this in the bug tracker, but no one has responded.  What can I do about this?
<davecheney> kurt_: go to #juju-dev, complain loudly
<kurt_> davecheney: ok, thanks
<kurt_> davecheney: were you guys already talking about it?
<davecheney> nope
#juju 2013-10-15
<gnuoy> Is juju-core 1.16 from the stable ppa missing maas-tags support? juju with "--constraints maas-tags=compute" returns
<gnuoy> error: invalid value "maas-tags=compute" for flag --constraints: unknown constraint "maas-tags"
<TheMue> gnuoy: did you try simply "tags=..."
<TheMue> gnuoy: ?
<gnuoy> TheMue, I did not, I went by the wiki
<gnuoy> TheMue, it seems happy with that, thanks !
<TheMue> gnuoy: yw
<Dotted> how do you get the service name for use in hooks?
<mthaddon> hi folks, the local provider doesn't seem to be respecting the "default-series: precise" for 1.16.0-0ubuntu1 - is that the correct parameter?
<mgz> mthaddon: that's the default anyway. it's giving you your machines ubuntu version instead?
<mthaddon> mgz: er, nm, I'm an idiot - the bootstrap node is saucy, but any provisioned nodes are precise
<mgz> right.
<mgz> the "bootstrap" node in the local provider case is just your machine :)
<mthaddon> yeah...
<mthaddon> mgz: agent-state-info: '(error: container "mthaddon-local-machine-1" is already created)' <-- any ideas what I can do about this (I had to do some pretty heavy surgery recently when I was migrating to an encrypted home dir, may have killed/rm-ed some things I shouldn't have)?
<mgz> mthaddon: destroy environment, then cleanup any lingering lxc containers should do it
<mgz> mthaddon: see bug 1227145
<_mup_> Bug #1227145: Juju isn't cleaning up destroyed LXC containers <local> <juju-core:Fix Released by thumper> <https://launchpad.net/bugs/1227145>
<mthaddon> mgz: should I remove the "mongodb" lxc container? not sure if that one's related to juju
 * mthaddon has destroyed mthaddon-local-machine-1
<mgz> we don't put mongo in a container
<mgz> so, can leave that one.
<mthaddon> k, thanks
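<editor's sketch> mgz's recovery steps, condensed (the container name follows the `<user>-local-machine-<n>` pattern from the error above and is just an example; the lxc commands are the lxc1-era tools):

```shell
# Destroy the juju environment first, then remove any container the
# local provider left behind (bug 1227145).
juju destroy-environment
sudo lxc-ls                                   # list leftover containers
sudo lxc-stop -n mthaddon-local-machine-1     # stop it if still running
sudo lxc-destroy -n mthaddon-local-machine-1  # then delete it
```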
<marcoceppi> rick_h_: I do now
<marcoceppi> rick_h_: if it's not too late
<rick_h_> marcoceppi: cool, I've got my api proofing branch in review. I had some question re: opinions on api output and got some second opinions from another person on the team
<marcoceppi> rick_h_: cool
<rick_h_> marcoceppi: I'll see what comes out of review and then get your docs/details. If you've got any feedback then we can go back and tweak.
<jose> marcoceppi: ping
<marcoceppi> jose: pong
<jose> hey, are we having any more charm school on air sessions?
<marcoceppi> jose: we should, there might be one this Friday. jcastro?
<mthaddon> marcoceppi: any chance of a trivial review of a charm for me? it's a config documentation update only - https://code.launchpad.net/~mthaddon/charms/precise/nrpe-external-master/non-root-checks/+merge/191179
<marcoceppi> mthaddon: I'll take a look
<mthaddon> thx muchly
<mthaddon> marcoceppi: cool, thank you
<kurt_> Can someone tell me when updating the agent via juju upgrade-juju - is the switch for "--upload-tools" just circumventing the "sync-tools" process?  If I do a sync-tools after upgrading the juju binary, is using "upload-tools" unnecessary?
<mgz> kurt_: you should not be using upload-tools
<mgz> it's a developer option, for when you're building from a local source copy of juju-core
<kurt_> mgz: I have it from older notes.  Ok, thanks.
<rick_h_> marcoceppi: so http://paste.mitechie.com/show/1045/ is what things are looking like.
<rick_h_> marcoceppi: first branch is landing, will be updating/etc the next two over today/tomorrow. Should be able to start hitting staging.jujucharms.com and testing stuff out in a day/two
<rick_h_> marcoceppi: let me know if any of that doesn't make sense or you're not a fan of
<marcoceppi> rick_h_: what if there are multiple things with a service?
<rick_h_> marcoceppi: so right now there's plans for two
<rick_h_> marcoceppi: can we find it
<rick_h_> marcoceppi: and does the config look ok for it
<rick_h_> marcoceppi: if we can't find it, then there's nothing to do for config since we can't check it
<marcoceppi> rick_h_: multiple things, being multiple errors
<marcoceppi> rick_h_: or does it just stop when an error is found
<rick_h_> marcoceppi: so right now there's a single message and you rerun proof as you fix things
<rick_h_> marcoceppi: well it'll run through as much as it can. But there will be some level of 'run again' unless we make things more complex.
<rick_h_> I'm already a bit unhappy with the nested level of stuff. It seems like it should be simpler, but with multiple bundles per file, relations vs services, etc it has to be more complicated
<marcoceppi> rick_h_: hum, this'll work for a first ver
<marcoceppi> rick_h_: Ideally I'd like to tell the user everything that's wrong up front, so they can fix it all without having to do things in steps
<rick_h_> marcoceppi: so you're saying if 3 config fields are bad. That's the stuff I'm doing now. I wanted to keep it consistent with not found but you're right. As I'm doing this only the last config error will be returned :/
<rick_h_> marcoceppi: so yea, I could do something where the message was generic "Service config failed to validate" and an object of "field": "message on issue" in another key?
<rick_h_> I just feel that parsing this on your end is going to be a bear.
<marcoceppi> rick_h_: I really just want a message for the user, so you could have the message say "Service config failed to validate: key, ..."
<marcoceppi> rick_h_: at the end of the day, I'm going to just show the message to the user
<rick_h_> marcoceppi: ok, I had hoped the debug info would be useful as maybe a -v flag or something.
<rick_h_> if the config is failing to validate because we're looking under the wrong charm I feel like that's important debug info
<marcoceppi> rick_h_: so, probably? Possibly having an extra key per bundle name, say "output" or "messages" with a formatted message of "<service>: full message to user"
<marcoceppi> rick_h_: then you can collect the debug info as you see fit and I can find better ways to expose it as a -v flag
<marcoceppi> though, right now there is no concept of increased verbosity on proof
<rick_h_> marcoceppi: ok, thinking/will look at adjusting it.
<rick_h_> marcoceppi: thanks for looking it over and for the feedback
<marcoceppi> rick_h_: np
<marcoceppi> rick_h_: glad you're doing it and not me :)
<rick_h_> marcoceppi: on the validating of config, do you happen to know if juju will do type coercion?
<rick_h_> marcoceppi: e.g. if an int is passed to a string config field juju will reject, cast to string, or just ignore?
<marcoceppi> rick_h_: I have no idea, rogpeppe hazmat ^
<rick_h_> marcoceppi: meh, since we're proofing might as well play it as strict as possible.
<Guest62958> Hey guys, I just lost contents of my home ~/.ssh and I think that that caused the bootstrap keys to be lost. I can ssh to the bootstrap (or any host) using my user's maas keys...
<Guest62958> But juju can't communicate. I'm trying to avoid having to bootstrap again
<Guest62958> I can run juju status, but deploy or terminate-machine won't work
<marcoceppi> Guest62958: what version of juju are you using?
<Guest62958> 0.6.1
<ahasenack> hi, can someone mark this bug as "low" for me please? https://bugs.launchpad.net/charms/+source/landscape-client/+bug/1235281
<_mup_> Bug #1235281: Landscape-client charm does not distribute SSL cert to clients for LDS installs <landscape-client (Juju Charms Collection):In Progress by ahasenack> <https://launchpad.net/bugs/1235281>
<Guest62958> It has something to do with this ControlMaster setting in ssh
<Guest62958> and ~/.ssh/socket-%r@%h:%p
<Guest62958> probably used as ssh identity
<gary_poster> hey jcastro, could you escalate https://code.launchpad.net/~hazmat/charms/precise/hadoop/trunk/+merge/191278 in the charm review list, please, if that's reasonable?  The charm is broken in core and gui atm without these changes
<jcastro> hey marcoceppi, got time to review this asap? ^^^
<marcoceppi> jcastro gary_poster: yeah
<gary_poster> thank you marcoceppi
<omgponies> nothing happens when I run shift-D in the juju gui ...  it used to work,  but today... .no.   Has the charm changed recently ?
<gary_poster> omgponies, hm.  hasn't changed in at least a week or so.  This is standard charm, not changing the config to use trunk or anything right?
<omgponies> correct,  just ran 'juju deploy juju-gui'
<gary_poster> omgponies, does the export button on top right work?  looks like an open box with an arrow coming out?
<omgponies> tried in both firefox and chrome ...  shift-D does nothing in both
<bac> omgponies: i just tried it on comingsoon.jujucharms.com and it works.
<omgponies> nope that doesn't appear to do anything either
<gary_poster> bac, but that's trunk, not 0.10.1
<gary_poster> omgponies, very weird.  I'll try spinning one up.  Can you check the JS error console to see if it is complaining at us?
<omgponies> I can confirm it works for me at comingsoon, also
<gary_poster> I wonder if it is a GUI bug specific to the charms you have...
<omgponies> I get this in the JS console
<omgponies> http://pastebin.com/kKYazJKA
<gary_poster> ack
<gary_poster> doesn't look related actually :-/
<omgponies> I figure if there's any javascript error it probably breaks all the javascript
<gary_poster> not always, and I'm assuming other things still work?  You can click on a service and see the inspector, and you can browse charms on the left, and so on?
<omgponies> a second console error when I shift-D didn't notice it before - http://pastebin.com/X7G9bEnw
<gary_poster> ah-ha! that looks more like it
<omgponies> yeah I get half-working stuff when I click on the service
<omgponies> the inspector comes up but is blank
<gary_poster> wow.  ok.
<omgponies> hrm,  in chrome it is blank,  in firefox it shows the inspector fine
<gary_poster> (this would be a great time to request an export to see if we can dupe :-P )
<omgponies> I thought I had one from last week but when I import it it complains ... so I went to build a new one
<omgponies> buuuut,  I  do have a .sh that uses the juju cli to build it
<gary_poster> omgponies, if you are willing to share that would be awesome
<omgponies> https://github.com/paulczar/charm-championship/blob/master/monitoringstack.sh
<omgponies> don't steal my contest entry! :P
<gary_poster> thanks omgponies.  No worries, I won't. :-)  Am I right in assuming that you used the .sh to start the environment?
<omgponies> yessir
<omgponies> the gui is painful for doing complicated things ;)
<omgponies> plus I don't want to be mistaken for a windows admin if somebody sees me pointing and clicking
<bac> ha
<gary_poster> :-)
<gary_poster> cool thanks a lot omgponies.  I'll try to dupe and get back to you, though prob'ly won't be till tomorrow.  that ok?
<omgponies> sure
<gary_poster> cool, thanks again
<omgponies> I'll do some messing around on my side too .. .see if I can find anything
<omgponies> I hope I win the contest ...  I think it's the only way I'll be able to pay my amazon bill trying to get the damn thing to work :D
<gary_poster> lol
<omgponies> btw, I do have a .yaml file from the other week in that repo ... but the  juju-deployer errors on it -
<omgponies> 2013-10-15 15:31:50 Deployment name must be specified. available: ('envExport',)
<gary_poster> that sounds like a normal error
<gary_poster> I mean
<gary_poster> it is just telling you to specify what name you want
<gary_poster> I forget the option
<gary_poster> try --help
<omgponies> ahhhh I got it
<gary_poster> omgponies, am I making sense?  I don't have deployer hanging around atm but can get it if you need.  though I have to step out in 5 or 10 for awhile
<gary_poster> oh cool
<omgponies> juju-deployer -c monitoringstack.yaml envExport does it ... I guess the exporter names it as 'envExport'
<gary_poster> right.  I think the deployer makes you specify even if there is only one
<omgponies> yeah ...   and now I find a bug where juju-deployer doesn't know how to deal with subordinate services
<gary_poster> !
<gary_poster> hazmat ^^^ ?
<omgponies> http://pastebin.com/TbrSyFKx
<gary_poster> mmm...may be gui's fault.  I mean, the deployer could be less fragile but...I bet if you omit line 85 from https://github.com/paulczar/charm-championship/blob/master/monitoringstack.yaml it will work, omgponies
<omgponies> trying that
<marcoceppi> uh, hazmat gary_poster jcastro what happens with an already deployed service if you change the name of all the configuration options?
<marcoceppi> seems like that could severely break upgrade-charm
<gary_poster> marcoceppi, I suspect you lose them all
<marcoceppi> I'm pretty conflicted on this change, I get why it's there
<marcoceppi> But seriously, what's the deal with juju not handling periods? first charm names now this?
<gary_poster> bcsaller, I see the exact same qa issues as before.  well, wait, maybe I haven't merged recently enough
<gary_poster> marcoceppi, it's an issue from not protecting ourselves from mongo enough
<bcsaller> gary_poster: I hope not :-/
<omgponies> you can never protect yourself from mongo too much
<gary_poster> heh
<ryanc> Hello all.  I'm looking for anyone that has done work modifying nova-compute and/or quantum-gateway charms to use a separate network interface for quantum networks/OVS instead of piping the GRE tunnels over the default interface
<marcoceppi> gary_poster: so this is an issue with just juju-core? I'm inclined to say this is a bug in juju-core to be sorted. This charm's existed before 1.X and to my understanding has worked back in the ZK days
<gary_poster> ooh better bcsaller.
<marcoceppi> gary_poster: breaking deployed versions of the charm to address a juju/mongo issue doesn't seem like the solution
<gary_poster> marcoceppi, bug is in core and gui.  I think this is a practical resolution for an immediate problem
<gary_poster> bcsaller omgponies marcoceppi I have to run now.  will return later.  bcsaller so far so good
<bcsaller> great, thanks
<marcoceppi> gary_poster: it will break deployments for anyone who has this deployed and runs upgrade charm, not a very practical solution imo
<marcoceppi> gary_poster: will post to the merge request
<omgponies> marco: could you mark the old config names as deprecated in their descriptions, and then have upgrade-charm look for values set on them and config-set the new config names to the same values?
<marcoceppi> omgponies: not really, juju can't see descriptions. It seems like this is only broken in juju-core. I guess we can safely merge this since technically < 0.7 is deprecated already.
<hazmat> omgponies,  gary_poster, marcoceppi deployer definitely knows subordinate services.
<marcoceppi> and you'd need > 1.0  to actually deploy and change config for this charm
<marcoceppi> hazmat: omgponies it definitely can handle subs, amulet makes heavy use of subs and deployer
<marcoceppi> omgponies: what problems are you seeing?
<omgponies> It actually looks to be a bug with the GUI export
<marcoceppi> :)
<omgponies> here's the problem line in the export  .yaml - https://github.com/paulczar/charm-championship/blob/master/monitoringstack.yaml#L85
<hazmat> there are a couple of other issues with the gui export
<hazmat> it exports default config options as though they were explicitly set
<omgponies> I removed that line and am re-running to see if it still deploys fine
<hazmat> but those shouldn't be problematic for deployer usage
<omgponies> here's my deployer output - http://pastebin.com/TbrSyFKx
<omgponies> with the error I get
<hazmat> nice, the config value export seems to be fixed on trunk
<hazmat> omgponies, sorry talking about a different issue if you remove line 85 and 93 from that export you should be good
<hazmat> omgponies, also re elasticsearch.. i'd suggest fixing that charm to not make it ec2 only
<omgponies> it's not ec2 only ...  the ec2 portion is to allow it to use ec2-auto-discover for clustering
<omgponies> because it wants to use multicast by default
<omgponies> running juju-deployer ... it never seems to exit ...   is this on purpose?  or does it suggest something else whacky in the .yaml ?
<marcoceppi> omgponies: is it outputting?
<omgponies> yeah it did a bunch of  'Deploying server .....' lines and then has been sitting for 15mins
<omgponies> with nothing else
<hazmat> omgponies, i always run it with -v -W and it will tell you what its doing / waiting on
<omgponies> ahhh k, will run again later and see
<hazmat> it has some built-in waiting for various things to happen and timeouts as well
<hazmat> omgponies, multicast isn't available by default in most of the public clouds (hpcloud, rackspace, google, etc)
<hazmat> it's pretty easy to set up the discovery with a peer relation
<omgponies> yeah I know ...  EC2 discovery plugin makes it easier ... so I targeted that first
<omgponies> I'll have some stuff in there to do unicast discovery,  but I haven't had the chance to test it properly
<omgponies> in the config.yaml of elasticsearch :-  zenmasters
<omgponies> description field explains how it works
<omgponies> I'll probably do something like the way hadoop or mysql charms set roles via `service groups`
<omgponies> deploy elasticsearch:master,  deploy elasticsearch:slave,   deploy elasticsearch:nodata, etc
<hazmat> it's not really necessary.
<hazmat> it only exists to do introductions, juju can do introductions for you
<hazmat> ie. a peer relation, and any unit of elasticsearch knows the addresses of all other nodes
<omgponies> elasticsearch only needs to talk to one member of the cluster, which will then tell it about all other members.    common pattern is one or two 'masters' which you set as zenmasters in the configs for all other nodes, and they handle the introductions.    setting organizational groups allows for further breakdown of clustering ... for instance if you want to make it rack aware,  or you want to make a search node that holds no data of its own
<omgponies>  ... you can create an organizational group and then have config settings for that group to create the elasticsearch config required to make it perform in the correct way
<hazmat> omgponies, the primary purpose is just a way to get addresses of the other es nodes.
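<editor's sketch> hazmat's point can be sketched as a hypothetical peer-relation hook (relation-list and relation-get are standard juju hook tools; the relation name and config path are assumptions):

```shell
#!/bin/sh
# hooks/cluster-relation-changed (hypothetical): every unit in a peer
# relation can enumerate its peers' addresses, which is all elasticsearch
# unicast discovery needs -- no EC2 auto-discover plugin required.
set -e
HOSTS=""
for unit in $(relation-list); do
    HOSTS="$HOSTS,$(relation-get private-address "$unit")"
done
# A real hook would render this into elasticsearch.yml and restart, e.g.:
#   discovery.zen.ping.unicast.hosts: [${HOSTS#,}]
echo "unicast hosts: ${HOSTS#,}"
```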
#juju 2013-10-16
<hallyn> when i do a juju bootstrap with saucy targets, juju status hangs, and the /var/log/cloud-init-output.log file shows "
<hallyn> The program 'juju-admin' is currently not installed. To run 'juju-admin' please ask your administrator to install the package 'juju'
<hallyn> (but juju is installed, juju-admin is not)
<davecheney> hallyn: i'm confused
<davecheney> where are you running juju status ?
<davecheney> and where are you observing output to /var/log/cloud-init-output.log ?
<hallyn> davecheney: juju bootstrap from my laptop to ec2.  juju status from my laptop.  /var/log/cloud-init-output.log  from the bootstrap node
<hallyn> is default-series: saucy supported?
<davecheney> hallyn: yes, but not recommended
<davecheney> only precise charms are used heavily
<hallyn> davecheney: yeah, just tried precise, it worked.  i can work with that for now.  will look into the saucy bit later :(
<hallyn> davecheney: thanks
 * hallyn out
<davecheney> hallyn: the simple fact is
<davecheney> there are few (i'd almost say no) saucy charms
<synergy_> Help Help!!! :-)
<synergy_>  Import of boot images started on all cluster controllers. Importing the boot images can take a long time depending on the available bandwidth.
<synergy_> It's been almost 24 hours...
<synergy_> How long does it take to import boot images?
<synergy_> Hello?
<jamespage> synergy_, can be quiet here in the mornings :-)
<jamespage> synergy_, which maas version?
<gnuoy>  I'm doing a fresh deployment with juju on a MaaS cluster. I can bootstrap the juju env fine using a tag to specify the bootstrap node, but when I try to deploy a charm I don't see any physical servers getting allocated in the maas UI, and after a minute or so juju reports "error: cannot run instances: gomaasapi: got error back from  server: 409 CONFLICT" as the agent state info for the new machine
<gnuoy> I have 16 servers in the Ready state using juju-core  1.16 and maas 1.2+bzr1373+dfsg-0ubuntu1~12.04.2
<gnuoy> maas.log shows: NodesNotAvailable: No matching node is available.
<gnuoy> When bootstrapping I specified the bootstrap server using a maas tag, if that's relevant
<synergy_> I can check.
<synergy_> juju-core (1.10.0.1-0Ubuntu1~ubuntu13.04.1)
<synergy_> maas 13.04
<synergy_> sorry, the juju is from my laptop...
<synergy_> maas 13.04
<synergy_> (came with Ubuntu Server 13.04).
<jamespage> synergy_, hmm - that message might be a red herring; have you been able to commission and boot nodes?
<jamespage> gnuoy, can you check that the servers are tagged correctly in maas - you can see that through the webui
<gnuoy> jamespage, ~10 have tags and 4 do not
<gnuoy> shouldn't maas just use an untagged server ?
<jamespage> gnuoy, might be that the tag constraint for the bootstrap node is applying to all subsequent deploys of charms
<jamespage> and I guess you only have one marked with the bootstrap tag, right?
<gnuoy> thats correct
<jamespage> gnuoy, OK - check juju get-constraints
<gnuoy> tags=bootstrap
<gnuoy> is that telling me it will only use servers with that tag ?
<gnuoy> for all charms deployments not just for specifying the bootstrap node?
<jamespage> gnuoy, yup
<jamespage> you can unset the constraint
<gnuoy> ok, I'll give that a try but I think this is a bug. I'm not trying to do anything exotic. Specify my smallest server as the bootstrap server and then deploy subsequent charms to any other server
<gnuoy> jamespage, does that seem fair or am I missing the point ? ^
<jamespage> gnuoy, the problem is that when you bootstrap an environment with --constraints, the constraints are applied environment wide
<jamespage> unless you a) override them during charm deploy or b) unset them post bootstrap
<gnuoy> jamespage, ok, how do I remove it post bootstrap ? juju set-constraints "tags=" ?
<jamespage> hrm - probably
<gnuoy> jamespage, that seems to have done the trick. thanks for all your help
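<editor's sketch> The whole exchange, condensed into a sketch (the tag name is gnuoy's; the deployed charm is just an example):

```shell
# Constraints passed at bootstrap become environment-wide defaults:
juju bootstrap --constraints "tags=bootstrap"
juju get-constraints              # -> tags=bootstrap, applied to every deploy

# Clear the inherited constraint so charms can land on any Ready node:
juju set-constraints "tags="
juju deploy mysql                 # no longer pinned to the bootstrap tag
```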
<FourDollars> Hi, I followed the instructions at https://juju.ubuntu.com/docs/getting-started.html for the LXC local provider (Linux), but it fails after upgrading to 1.16.0-0ubuntu1~ubuntu13.04.1~juju1. It does work on 1.14.
<gnuoy> Does juju support booting  instances from a ceph volumes (on Openstack grizzly) ?
<jamespage> gnuoy, yup
<jamespage> the charms should support that
<gnuoy> jamespage, it's not a question of the charms supporting it, is it ? juju would need a way of specifying a volume when bringing up the VMs ?
<jamespage> gnuoy, oh - I see
<jamespage> in which case no
<gnuoy> jamespage, are there any plans to support it that you know of ?
<jamespage> no idea -  sorry
<gnuoy> ok, np
<gnuoy> jamespage, does this mean that when using openstack the root volumes for your instances are always going to be the local disk on the compute host ?
<jamespage> yes
<gnuoy> thanks
<hallyn> davecheney: the charms aren't an issue.  a saucy host won't bootstrap.  There is a packaging issue in saucy juju
<adeuring> bac: could you please have a look at this MP: https://code.launchpad.net/~adeuring/charmworld/fix-config-yaml-linting/+merge/191391 ?
<bac> adeuring: sure
<bac> adeuring: i think you have a typo in the MP description s/charmworld tarball/charmtools tarball/.  Could you fix that just to avoid confusion?
<adeuring> bac: argh... yes, that should be "charmtools tarball", sorry
<bac> adeuring: approved, thanks.
<adeuring> bac: thanks!
<sidnei> marcoceppi: seems you're the one doing charm reviews this week? i got 3 that have been through multiple reviews over almost a year and should *really* get landed
<marcoceppi> sidnei: yes, I'm on review this week and will be going through them today/tomorrow
<marcoceppi> sidnei: link them here and I'll peek at them first
<sidnei> https://code.launchpad.net/~sidnei/charms/precise/squid-reverseproxy/trunk/+merge/190500
<sidnei> https://code.launchpad.net/~sidnei/charms/precise/apache2/trunk/+merge/190504
<sidnei> https://code.launchpad.net/~sidnei/charms/precise/haproxy/trunk/+merge/190501
<sidnei> have fun *wink*
<jcastro> jamespage, I missed this: "Juju 1.16.0 is also available for Ubuntu Server 12.04 LTS in the Ubuntu Cloud Tools Archive."
<jcastro> congratulations/thanks!
<jcastro> jamespage, I added some bullets to the release notes, I miss anything major? https://wiki.ubuntu.com/SaucySalamander/ReleaseNotes
<sinzui> bac, benji_ : https://bugs.launchpad.net/charmworld/+bug/1229179 is killing me with hate mail. I think the root problem is that routing doesn't know how to select tip when it does not find a version in the URL
<_mup_> Bug #1229179: Revisionless bundle requests raise ValueError <oops> <charmworld:Triaged> <https://launchpad.net/bugs/1229179>
 * bac looks
<rick_h_> sinzui: just filed a bug for that on the charm side. Adding cards to the board for those.
<bac> sinzui: i can confirm 'gui' is not a base-10 number!
<rick_h_> :)
<bac> thanks rick_h_ for the cards
<arosales> Hello, we are kicking off the weekly charm sync if anyone would like to join us
<arosales> Taking notes @ http://pad.ubuntu.com/7mf2jvKXNa
<arosales> Google G+ URL: https://plus.google.com/hangouts/_/683a5a7220f041d63d29ffd87cbe2e8a031ce20b?authuser=0&hl=en
<arosales> also being broadcast @ ubuntuonair.com
<marcoceppi> https://code.launchpad.net/~hazmat/charms/precise/hadoop/trunk/+merge/191278
<jose> marcoceppi: have a min?
<marcoceppi> jose: I will in about 30
<jose> k
<marcoceppi> jose: o/
<jose> hey marcoceppi, I'm having a problem with this: http://paste.ubuntu.com/6246832/
<marcoceppi> jose: try re-installing lxc
<jose> will do
<marcoceppi> jose: make sure juju-local package is also installed
<jose> it is
<jose> reinstalled lxc and same prob
<hazmat> sidnei, have you used the lxc thin provisioning bits you added?
<sidnei> hazmat: i have, locally. the patch hasn't landed in lxc yet, need to polish it a little bit.
<sidnei> and of course the branches in juju didn't land either because of that.
<gary_poster> omgponies, hey.  fwiw, I landed a fix for the gui problem that caused the deployer to be upset about subordinates in the exported file.  I'm trying to dupe your other gui issues now, using both gui 0.10.1 and trunk.  we should have a new release tomorrow with at least the first fix; we'll see on the second
<omgponies> cool thanks :)
<Guest62958> I have a problem with relation-joined or -changed not firing between services. In particular it's in my hue charm that i'm writing and trying to relate to hadoop namenode and jobtracker
<Guest62958> I do see  hook.output DEBUG: Cached relation hook contexts on 'hive:122': ['jobtracker:120', 'namenode:119']
<Guest62958> Not sure if something is preventing their hooks from firing
<Guest62958> If I remove the relation between hive and jobtracker, the -departed hook fires
<Guest62958> but when I add the relation, no hooks fire whatsoever
<Guest62958> Basically I can remove the relation
<Guest62958>  relationworkflowstate: transition complete depart (state departed) {}
<Guest62958> then when adding it again,  relationworkflowstate: transition start (None -> up) {}
<Guest62958>  relationworkflowstate: transition complete start (state up) {}
<Guest62958> but no hooks
<Guest62958> Can someone help?
<Guest62958> juju 0.6.1
<davecheney> Guest62958: are your hooks executable ?
<Guest62958> Hooks are symlinks to an executable file
<Guest62958> They indeed work if I redeploy BOTH of the services
<Guest62958> davecheney: But hadoop master gets somehow "stuck" and I can't get the relation hooks to fire on the existing node without bringing it down
<Guest62958> So basically something happened to the relation state so that this particular service is prevented from firing hooks or something
<davecheney> Guest62958: hmm
<davecheney> i don't have any useful suggestions apart from upgrading to Juju 1.14.1
<davecheney> but that is quite an upgrade jump
<Guest62958> yeah, can't do that any time soon. The machines in question are used by the development team...
<Guest62958> davecheney: I did find an interesting detail about the problem though
<Guest62958> there was a "service" in my environment that was called just hadoop (not hadoop-master or hadoop-slave)
<Guest62958> but it was not up... not sure for what reason. and when I tested some stuff I accidentally tried to relate to it, rather than hadoop-master
<Guest62958> When I destroyed that service, I got a bunch of fired repeated events  unit.lifecycle DEBUG: processing relations changed when i tried to create the relation from hue to master
<Guest62958> as if something got released
<Guest62958> from some stuck queue
<marcoceppi> davecheney: 1.16.0 *
<davecheney> hallyn: oh crap
<davecheney> is there a bug for saucy not working ?
<hallyn> davecheney: i didn't open one
<hallyn> davecheney: you've reproduced?
<davecheney> hallyn: no, i am childless
<sidnei> lol
<hallyn> davecheney: good to know :)  on a different note, have you run into the bug yourself?
<davecheney> no, i did not attempt to reproduce
#juju 2013-10-17
<thumper> marcoceppi: still around?
<thumper> jose: ping
<jose> thumper: pong
<thumper> jose: hey, you were having problems with the local provider?
<jose> thumper: yep, that was me
<thumper> jose: I have good news and bad news
<jose> what is it?
<thumper> well, when lxc is installed it sets up a bridge by default
<thumper> normally this is lxcbr0
<thumper> but you can change it
<thumper> now you can configure the bridge for the local provider to use
<thumper> jose: can you pastebin an ifconfig?
<jose> sure thing, just give me a minute
<thumper> jose: the bad news, I have found this morning that the local provider is broken in 1.16 (and trunk)
<thumper> and just submitted fixes a few hours ago
<jose> yay!
<jose> but when will the next release be done?
<thumper> well, I'm pushing for a 1.16.1 ASAP
<thumper> so, maybe tomorrow even
<thumper> we have hoops to jump through
<jose> http://paste.ubuntu.com/6248564/
<thumper> I may push for  a 1.17 into the juju ppa
<thumper> jose: hmm... no bridge there
<jose> yeah, I was about to say that
<thumper> jose: here's mine http://paste.ubuntu.com/6248565/
<thumper> jose: apt-cache policy lxc ?
<thumper> jose: also, what's in /etc/lxc/default.conf ?
<jose> http://paste.ubuntu.com/6248566/
<thumper> jose: I take it you are on raring
<jose> yep
<thumper> I wonder why it didn't create the bridge for you...
<jose> bug!
<thumper> hallyn: any idea why lxc wouldn't create the lxcbr0 bridge when installed?
<thumper> I may be being hopeful that hallyn is online
<thumper> jose: however there is another bit of bad news...
<jose> thumper: which is...
<thumper> jose: I found out today that lxc isn't working in precise just now
<thumper> due to another bug
<hallyn> thumper: jose: does your kernel have bridges built-in?  what does 'brctl show' say?
<thumper> this is either in progress or in a review queue
<jose> wasn't that on quantal? :P
<jose> hallyn: no bridges
<hallyn> jose: your kernel doesnt' support them?
<jose> http://paste.ubuntu.com/6248592/ is what I get
<thumper> jose: when juju starts the precise ubuntu-cloud image in lxc, there is a package configuration problem that causes most charms to fail install hooks
<hallyn> jose: does 'brctl addbr br0' work?
<jose> hallyn: operation not permitted
<hallyn> sudo !!?
<jose> ah, right!
<hallyn> uh, !! in the history sense, not in the i'm shouting sense :)
<jose> done
<thumper> :)
<jose> thumper: it explains why it failed when I demoed it at a conference months ago :P
<thumper> a month ago it worked
<thumper> although that is sub optimal
<jose> then, that wasn't it
<jose> hallyn: done, btw
<hallyn> jose: and that worked?
<jose> http://paste.ubuntu.com/6248605/
<hallyn> is anything in /var/log/upstart/lxc-net.log ?
<thumper> ah... I wondered where to look for logs
<jose> http://paste.ubuntu.com/6248609/
<jose> hmm, /me checks the dhcp server, maybe?
<thumper> jose: you use 10.0.3.0/24?
<jose> no, not at all
<hallyn> hm
<thumper> that's very weird then
<hallyn> so you have dnsmasq itself installed...  but lxc should have made it - oh, did you *just* install lxc?
<jose> even my dhcp for the occasional ad-hoc network I enable is 10.42.0.0/24
<hallyn> maybe /etc/init.d/dnsmasq stop; /etc/init.d/dnsmasq start would fix it
<jose> hallyn: I think yes
<hallyn> jose: is there a /etc/dnsmasq.d/lxc file?  there should be...
<hallyn> my guess is dnsmasq hasn't been restarted to read that
<jose> there is
<jose> yeah, let me restart it
<hallyn> "have you tried turning it off and on again" would have solved this then :)
<thumper> haha
<hallyn> (then to 'stop lxc; start lxc' to have it try to restart lxc-net)
<jose> oh wait dnsmasq was not installed
<hallyn> well then what's pinning 10.0.3.x?
<jose> should I install dnsmasq?
<hallyn> nope
<jose> I have bind9 installed, though
<hallyn> can you do 'netstat -na' and look for 10.0.3' ?
<jose> found, using port 53
<hallyn> bind9?
<jose> state is blank, though]
<jose> bind9 is a dns server
<hallyn> no, i mean that's the one holding it
<hallyn> ?  just to make sure all is making sense :)
<jose> then, any ideas?
<hallyn> what do you mean by state is blank?
<hallyn> i guess a 'listen-on' statement in bind9 config.
<jose> on the table, it says proto udp, recv-q 0, send-q 0, local address 10.0.3.1:53, foreign address 0.0.0.0:*, state [here, this is completely blank]
<hallyn> waiting for packages to install so i can test (and see the stock config)
<jose> sure
<hallyn> jose: a stock bind9 install followed by stock lxc install doesn't prevent lxcbr0 from being created in saucy (for me)
<hallyn> how custom is your config?
<jose> erm, just enabled a couple domains
<jose> let me check, though
<hallyn> jose: can you try adding 'listen-on { your ip addr }' in /etc/bind/named.conf.options ?
<jose> hallyn: wait, there is no problem if I turn it off as I'm not using it right now
<jose> and won't for the rest of this session
<hallyn> oh, you can do listen-on { ! 10.0.3.1 };
<hallyn> jose: ok, in the meantime can you open a bug against lxc?  we should do this automatically probably
<jose> sure thing
<hallyn> (though bind9 configs tend to be custom so automating it may be scary)
<hallyn> thx - /me heads afk
<jose> thanks to you!
<jose> hallyn: wait, turning bind9 off will solve the prob?
<jose> looks like it did
<hallyn> yeah, long as htat doesn't mean you can't resolve anything locally :)
<jose> it's all going good :)
<jose> thumper: then, problem solved until the fix is pushed to repos, thanks! :)
<thumper> :)
<thumper> if you are a fanatic and want to run from trunk...
<thumper> you might get it working
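For anyone hitting the same thing later, a condensed sketch of the sequence hallyn walked jose through above. The 10.0.3.1 address, bind9, and the upstart job names are taken from this log; treat the exact commands as assumptions for your own system:

```shell
# 1. is the lxc bridge there at all?
brctl show                            # lxcbr0 should be listed
cat /var/log/upstart/lxc-net.log      # says why bridge setup failed, if it did

# 2. what already holds the lxc network's address?
sudo netstat -lnp | grep '10\.0\.3\.'   # here: bind9 listening on 10.0.3.1:53

# 3. either stop the conflicting service, or (per hallyn) tell bind9
#    not to listen there in /etc/bind/named.conf.options:
#        listen-on { ! 10.0.3.1; };
sudo service bind9 stop

# 4. restart lxc so lxc-net gets another chance to create the bridge
sudo stop lxc; sudo start lxc
```

Step 3 is the fork in the road: stopping bind9 is fine if you don't need local DNS, while the listen-on change keeps bind9 running but off the lxc subnet.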
<marcoceppi> thumper: I am now
<thumper> marcoceppi: hey, it was just about the local provider and jose's problem
<thumper> marcoceppi: but we got it mostly sorted
<marcoceppi> thumper: ah, cool
<thumper> marcoceppi: however we have a few problems
<thumper> marcoceppi: the local provider is broken in 1.16
<thumper> marcoceppi: we need to get 1.16.1 out to fix that, sinzui is in the loop
<marcoceppi> thumper: that's what I've heard, any idea if you guys will be fixing and addressing that in 1.16.1 ?
<marcoceppi> thumper: awesome
<thumper> marcoceppi: the juju issue is fixed, but there is another lxc/precise issue
<thumper> which means that while juju might "work", you can't deploy anything
<marcoceppi> awesome
<thumper> as the install hooks fail
<thumper> there is another group looking at that
<thumper> marcoceppi: you are there next week?
<marcoceppi> thumper: I will be, should be touching down Saturday evening local time
 * thumper nods
<thumper> I arrive Sunday just after lunch I think
<thumper> 11:15am landing actually
<thumper> so after noon at the hotel certainly
<marcoceppi> thumper: cool
<jose> hey guys, any idea why while trying to deploy rails it tells me ERROR no settings found for "rack"?
<marcoceppi> jose: how are you deploying it?
<jose> marcoceppi: juju deploy rails --config rails.yaml
<marcoceppi> jose: what does rails.yaml look like?
 * jose pastebins
<jose> http://paste.ubuntu.com/6248944/
<marcoceppi> jose: that's not a properly formatted --config yaml file for what you're doing
<jose> how should it be, then?
<marcoceppi> jose: https://juju.ubuntu.com/docs/charms-config.html see "Configuring a service at deployment"
<jose> ah, got it, thanks
<jose> marcoceppi: if I don't specify a value it'll use default, correct?
<marcoceppi> jose: Correct
<jose> thanks! :)
<marcoceppi> using --config is completely optional to begin with
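For reference, the shape marcoceppi is pointing at: the file handed to --config is a YAML map keyed by the service name, holding only the options you want to override. The repo option and URL below are made-up placeholders:

```shell
# rails.yaml: the top-level key must match the service name
cat > rails.yaml <<'EOF'
rails:
  repo: https://github.com/example/myapp.git   # placeholder option and value
EOF

# then:  juju deploy rails --config rails.yaml
# any option left out of the file keeps its charm default
grep -q '^rails:' rails.yaml && echo "keyed by service name"
```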
<jose> just to confirm, are we having a charm school this week?
<marcoceppi> jose: no
<jose> then the cal was right :)
<marcoceppi> We'll have a new schedule figured out after the sprint
<jose> ok!
<jcastro> marcoceppi, hey
<jcastro> cache-relation fails with wordpress/memcached
<marcoceppi> jcastro: this has been a known issue for months
<jcastro> oh
<jcastro> is there a bug?
<jcastro> why yuo no fix memcached?
<marcoceppi> because plugin author + time
<jcastro> oh this is the plugin one!
<jcastro> I remember now
<marcoceppi> https://bugs.launchpad.net/charms/+source/wordpress/+bug/1170034
<_mup_> Bug #1170034: integration with memcached broke <wordpress (Juju Charms Collection):Confirmed> <https://launchpad.net/bugs/1170034>
<jcastro> ok well wordpress keeps running
<jcastro> so I think I'll use it as an example
<jcastro> http://ec2-54-221-127-143.compute-1.amazonaws.com:7777/
<jcastro> this charm is in the incoming queue
<jcastro> it's a nodejs pastebin, it runs hastebin.com
<marcoceppi> jcastro: link?
<marcoceppi> jcastro: I'll try to fix WP charm on the plane to san fran
<marcoceppi> no promises
<jcastro> nah
<jcastro> I am doing the demo today
<jcastro> https://jujucharms.com/~davidolf/precise/haste-server-0/
<marcoceppi> jcastro: k
<jcastro> marcoceppi, free time is for the discourses
<marcoceppi> oic ;)
<marcoceppi> jcastro: I don't see haste in http://manage.jujucharms.com/tools/review-queue
<jcastro> huh, the guy mailed me and asked me what to do
<jcastro> I'll follow up
<jcastro> but that was like 2 days ago, I would have assumed he would have submitted it
<jcastro> unless it's for the contest maybe and he's waiting?
 * marcoceppi checks bug list
<jcastro> marcoceppi, it's pretty slick
<jcastro> the software itself I mean
<marcoceppi> jcastro: cool
<marcoceppi> jcastro: time to replace paste.ubuntu.com ;)
<marcoceppi> jcastro: charmers was not subscribed https://bugs.launchpad.net/charms/+bug/920797
<jcastro> yeah
<_mup_> Bug #920797: Charm needed: Haste server <Juju Charms Collection:Fix Committed by davidolf> <https://launchpad.net/bugs/920797>
<jcastro> I'll mail him
<jcastro> or I can just fix it
<marcoceppi> jcastro: already did it
<jcastro> this is the one jcsakket was mailing us about
<jcastro> ones like this I mean
<marcoceppi> cool
<jcastro> I'm going to have it so that if it's a bug in charms and there's a branch attached with no resolution, it gets added to the queue anyway
<jcastro> that way when people don't follow step 23434 of our amazing launchpad submission process we still catch them
<marcoceppi> jcastro: yeah, branch attached, fix committed, bug on charms? show in the queue
<jcastro> yup
<jcastro> ok I've got wp, mediawiki, discourse, and their db's, haste
<jcastro> anything else I can demo for the ubuntu on air today?
<marcoceppi> jcastro: oh, for the demo - not sure
<marcoceppi> rails?
<jcastro> is there a way for a charm to tell the gui "don't show green, I am still doing stuff"
<jcastro> like the discourse charm shows up green right away
<jcastro> but needs like another 5 minutes to finish stuff up
<marcoceppi> jcastro: no, because there's no way to provide feedback
<jcastro> but if you look at the gui it looks ready to go
 * marcoceppi really wants to be able to have juju describe its state
<jcastro> hazmat, is that something we can talk about in SFO?
<marcoceppi> pending, installing, configuring, starting, started, relating
<marcoceppi> for when those events are firing
 * marcoceppi dreams on
<jcastro> mysql has passed 10k downloads!
<ehw> marcoceppi: I have a jujucharms.com question: why does 'openstack' never show up in the "providers" list?
<ehw> could be that I'm misunderstanding what the providers list does
<marcoceppi> ehw: Possibly, could you link to a page where you're seeing "providers"?
<ehw> https://jujucharms.com/precise/mysql-27/#bws-code
<ehw> marcoceppi: ^^ but really any charm that I look at usually doesn't say 'openstack'
<marcoceppi> ehw: Ah, that might be a bug. Those are showing the results of our testing environment
<ehw> another example, https://jujucharms.com/precise/ceph-16/
<marcoceppi> ehw: right, we test "openstack" against HP Cloud, so it might show as that. We're still working out the kinks for testing stuff; just because it doesn't say it explicitly doesn't mean it's not supported
<ehw> marcoceppi: ok, was curious about that
<ehw> marcoceppi: thanks
<marcoceppi> ehw: we'll be working over the next several weeks to make those results (both testing and supported providers) better
<jose> hallyn: ran apport, it collected info from bind9, but doesn't seem to appear on launchpad
<jose> oh, there we go
<jose> no, not in there yet
<hazmat> jcastro, it sounds good to bring up, the notion of steady state is effectively a poll/time period with no active
<jamespage> marcoceppi, I got asked that at the ceph day last week as the ceph charm does not advertise as tested on MAAS
<marcoceppi> jamespage: none do at the moment, since we don't have testing yet for MAAS, I think we should just hide those results for now, they're really misleading
<jamespage> marcoceppi, yeah - agreed
<jamespage> the assumption was that it was not tested on MAAS so did not support it
<marcoceppi> I think that's part of charmworld, sinzui are you guys working on these pages? https://jujucharms.com/precise/ceph/
<sinzui> marcoceppi, juju-gui is the pages, the data is from manage.jujucharms.com (charmworld)
<marcoceppi> sinzui: thanks! gary_poster thoughts on above ^ ?
<sinzui> jcsackett, are we deploying charmworld?
<jcsackett> sinzui: not at the moment, but we can. that's via RT, yes?
<sinzui> jcsackett, I was just inquiring as it would be factor if a jujucharm.com page had an issue
<rick_h_> marcoceppi: ? I'm confused. We don't run tests on maas so we can't say the charm is tested on maas. What's the goal?
<jcsackett> sinzui: ah, yeah, to my knowledge there's nothing going on with production right now.
<rick_h_> marcoceppi: the goal is that we've had charms that we KNOW fail on ec2 before and wanted to provide info to the user
<marcoceppi> rick_h_: it's misleading, since testing isn't strong, to say that "Providers" and not list providers because there are no tests
<sinzui> jcsackett, if we release today, charmers get askubuntu and github integration
<marcoceppi> rick_h_: had two reports today of this causing end user confusion
<marcoceppi> just passing along the info
<rick_h_> marcoceppi: yea, it's been around 100 times. at one point I got only failures to show, but then others wanted all listed.
<rick_h_> marcoceppi: yea, understand. It's been a long running point of fun discussion
<marcoceppi> rick_h_: cool, here's more data for your pocket then :)
<jcsackett> sinzui: yes, as long as we're releasing to revno 421 on charmworld and use charmworld-79 tag for the charm.
<marcoceppi> rick_h_: maybe even just saying "Tested Providers" with a caveat to mention that you can attempt to deploy charms on any provider; we only have definitive results for XYZ
<jcsackett> wait, we don't need all the way through 421.
<jcsackett> many things have landed. :-P
<jcsackett> sinzui: through revno 415 for github. it looks like there have been several bundle-related landings as well--i don't know the QA status of all of those.
<rick_h_> marcoceppi: yea, you can toss feedback out to Juju-gui-peeps if you want. I'm not sure where to take it at this point.
<marcoceppi> rick_h_: cool, I'll take this discussion to their court
<rick_h_> jcsackett: yea, we've got proof api stuff and things in progress that are not complete for release.
<sinzui> jcsackett, can you do a quick review of staging? The app.log will show any misadventures with new proof. We can look at some charms to verify we see warning icons.
<jcsackett> sinzui: popping into staging now.
<jcsackett> sinzui: app.log shows a few tracebacks for bad YAML as IngestErrors, and some no branch, but that's all logging normal for charms with errors, iirc.
<jcsackett> sinzui: i'm not sure i followed the second part; what charms and what about warning icons?
<sinzui> jcsackett, tabs about smoke
<sinzui> ?
<jcsackett> sinzui: yup.
<sinzui> I wrote the branch owner. He replied that he will fix them when he has time
<rick_h_> jcsackett: sinzui so in checking no reason to not go to 421 if you want.
<sinzui> jcsackett, We want to confirm that charmworld is showing the proof that we include.
<sinzui> jcsackett, i looked at a few charms and I see sensible proof warning
 * sinzui thinks the underlining of them is ugly and a bug
<jcsackett> sinzui: ok. my lunch finished cooking a moment ago. right after food, i'll start the release process.
<sinzui> fab
<sinzui> thank you rick_h_
<gary_poster> for those following along, marcoceppi caught up with me in #juju-gui .  His point made sense to me, and he filed https://bugs.launchpad.net/juju-gui/+bug/1241075 , with my thanks.
<_mup_> Bug #1241075: Wording of "Providers" confusing on charm information page <juju-gui:Triaged by lucapaulina> <https://launchpad.net/bugs/1241075>
<rick_h_> marcoceppi: ping
<marcoceppi> rick_h_: pong
<rick_h_> marcoceppi: got http://paste.mitechie.com/show/1048/ as the new setup for you. You can just use the root error messages. They're prefixed by the bundle name.
<rick_h_> marcoceppi: the details are nested and they're lists now so that you can support multiple errors per service (e.g. multiple config issues)
<marcoceppi> rick_h_: it doesn't indicate severity - or are these all always error?
<rick_h_> marcoceppi: what I'd suggest for a first cut is that, if there are errors in here, that you provide a link at the least to the api call for a "for full details go to:" until some sort of verbosity can come up
<rick_h_> marcoceppi: right now, all the things in that blueprint we're checking are errors
<rick_h_> we can expand, but currently it's all we have use cases for
<marcoceppi> rick_h_: cool, I'm fine with that just need to know how to prefix these messages
<marcoceppi> thanks! This looks good
<rick_h_> https://blueprints.launchpad.net/charm-tools/+spec/charm-bundle-support says they're all critical
<rick_h_> so error == critical and if we get to a warnings then we'll add "warning_messages", "warnings"
<rick_h_> (note that's a typo, it should be error_messages (with an s))
<marcoceppi> rick_h_: right, missed "error" in the error_messages key
<rick_h_> marcoceppi: cool, I've got some tests to write around this new stuff for the config checking and will get it pushed up. I'll let you know when it's on staging and you can poke at it with fury
<marcoceppi> thanks for working on this, looks good!
<rick_h_> marcoceppi: probably sometime tomorrow
<CaptainTacoSauce> hi guys, still trying to understand some basics, with juju-local, I should be able to run precise charms on a saucy "host" right?
<CaptainTacoSauce> assuming lxc
<marcoceppi> CaptainTacoSauce: yes, despite your host being saucy, all LXC containers launched will be precise (unless you specify otherwise)
<marcoceppi> CaptainTacoSauce: however, I think there's a bug in 1.16.0 that breaks the local provider, so if you're experiencing an issue, it might be related to that
<CaptainTacoSauce> marcoceppi: ah, indeed I am, but I've been playing with it for all of 5 minutes now, thanks
<marcoceppi> CaptainTacoSauce: let me find the bug for you
<marcoceppi> CaptainTacoSauce: https://bugs.launchpad.net/juju-core/+bug/1240709
<_mup_> Bug #1240709: local provider fails to start <local-provider> <juju-core:Fix Committed by thumper> <juju-core 1.16:Fix Committed by thumper> <https://launchpad.net/bugs/1240709>
<paulczar> ^ I was seeing that bug when I installed on my laptop yesterday
<paulczar> how does the juju-gui charm get the admin-secret from environments.yaml?    I would like to be able to use the same for my own apps
<marcoceppi> paulczar: you have to manually enter it last I tried
<paulczar> when I do juju deploy juju-gui it somehow gets the admin-secret out of my juju environment
<paulczar> but I'm not seeing how in the charm hooks
<marcoceppi> paulczar: hum, I'll ping gary_poster but also poke at the charm if he doesn't answer
<paulczar> thx
<gary_poster> paul czar, it does not--it just remembers in browser for a single url in a single browser session.  However, https://launchpad.net/juju-quickstart gets out the admin secret with Python (and eventually we have talked about doing what you describe)
<gary_poster> (that is, starting you logged in to the GUI)
<gary_poster> paulczar, sorry ^^^
<jcsackett> sinzui: release has finished, heartbeat is coming up good.
<sinzui> I just checked the review queue
<jcsackett> sinzui: be a bit before askubuntu items show up in it.
<sinzui> jcsackett, I expect new things to show by tomorrow
<jcsackett> sinzui: that would be accurate.
<jcsackett> i have a card for the release in assessment, but have moved released cards to done.
<jcastro> jcsackett, about how long until questions show up?
<jcastro> we talking hours or ... ?
<jcastro> jcsackett, thanks for working on this btw, it will be awesome!
<thedac> Just upgraded from 1.14.x to 1.16.0 and now getting "ERROR cannot start bootstrap instance: index file has no data for cloud" What needs changing?
<jcsackett> jcastro: the job runs daily, so should be up by tomorrow morning.
#juju 2013-10-18
<gnuoy> jamespage, do you think it makes sense to add functionality to the quantum-gateway charm to configure the IP, gateway etc of the ext-port ?
<Roconda> Hey, can anyone point me in the right direction with JuJu? Looking for a solution to use juju on multiple servers with lxc (similar to multiple local env.). Let's say I've got multiple local JuJu environments with lxc, how do they communicate with each other?
<marcoceppi> Roconda: They don't at the moment. We have an item on the roadmap to allow for "cross-environment relations" it's not implemented yet
<marcoceppi> Roconda: we have limited container support for deployed servers/services
<Roconda> marcoceppi: what do you recommend me, based on my situation?
<Roconda> marcoceppi: good to know this issue is something being worked on :-)
<marcoceppi> Roconda: so you can create LXCs in deployed environments and co-locate/create more dense services
<Roconda> marcoceppi: using JuJu or without?
<marcoceppi> Roconda: so containers are a great way to achieve what you want to do, and there is support for this currently, however there are still networking issues with this solution
<Roconda> marcoceppi: Okay, I guess I will use lxc at the moment and keep an eye on the cross-env. relation. Once cross-env. works I can migrate to JuJu
<marcoceppi> Roconda: if you're not already, subscribe to the juju mailing list and watch the [ANN] emails
<marcoceppi> Roconda: https://lists.ubuntu.com/mailman/listinfo/juju
<Roconda> marcoceppi: Thanks! Will do. Thanks for your help
<marcoceppi> Roconda: while I don't have exact dates, we're looking to address these two features (and many others) within the next 6 months
<Roconda> marcoceppi: Ah, in that case I'll currently use the local environment so I can easily migrate to the cross-environment. In case I do need to scale out I could migrate to Amazon. The reason why I would like to use LXC is about costs. Working on a startup so I can't afford to run many servers on amazon atm.
<marcoceppi> Roconda: there are more affordable cloud providers that Juju supports available. Last time I checked HP Cloud was pretty low in cost compared to AWS
<kurt_> jcastro: ping
<jcsackett> sinzui: so we're hitting the request limit on the askubuntu API, thus no items.
<sinzui> yay!
<jcsackett> sinzui: there's a way to get more requests, requiring auth fun. thought we could avoid it, but evidently even once a day is too often.
<sinzui> We disable staging and give prod a chance to do its job
 * jcsackett nods
<jcsackett> that's a thought.
<jcsackett> we'll disable it on staging, see if prod picks it up tomorrow, and i'll work on the better version of api querying.
<sinzui> lets do that for now. We saw it work on staging. We can discuss this with charmers next week
 * jcsackett nods
<jcsackett> sinzui: disabling on staging is done, and a card has been added.
<sinzui> thank you
<gary_poster> paulczar, last night's release of the GUI should have fixes for the bugs you reported.  please let me know if you encounter other issues.
 * gary_poster working on announcements
<paulczar> yessir appears to have fixed my issues with the gui
<paulczar> now I have bug with juju-deployer :)
<paulczar> https://bugs.launchpad.net/juju-deployer/+bug/1241721
<_mup_> Bug #1241721: juju-deployer never finishes <juju-deployer:New> <https://launchpad.net/bugs/1241721>
<gary_poster> paulczar, :-P looking
<paulczar> it's still in bug-state ...so I can dig out extra debug info if you want anything
<gary_poster> paulczar, plus small internal "yay" for gui ;-)
<gary_poster> paulczar, hazmat probably is person to investigate eventually.  Out of curiosity though, do I understand correctly that, according to status and gui, you have all expected services but no relations?
<paulczar> looks like rabbit is in pending state
<gary_poster> huh
<paulczar> so that might be holding it up
<gary_poster> that smells more like a juju issue, yeah :-/
<hazmat> odd.. still deployer should timeout w/ an error
<paulczar> oh I found it!
<paulczar>   "4":
<paulczar>     agent-state-info: '(error: invalid URL "http://cloud-images.ubuntu.com/releases/streams/v1/index.sjson"
<paulczar>       not found)'
<paulczar> I'll add the whole juju status
<paulczar> added as a comment
<paulczar> but that url works fine for me
<gary_poster> paulczar, I'm fishing for help for you...
<paulczar> thanks :)
<paulczar> finally hit some sort of timeout and juju-deployer continued through
<paulczar> but obviously still a failed deploy
<gary_poster> paulczar, I guess that's good, in that you can probably work around it after the deployment.  So far everyone in the juju-dev pool I asked was at their EoD.
<paulczar> yeah, just going to suck if I submit a charm contest entry which can't follow the contest instructions
<paulczar> will need big walls of text in the readme :)
<marcoceppi> paulczar: 1.16.0 ?
<paulczar> yessir
<paulczar> appears to be intermittent
<paulczar> I ran again and it went through
<paulczar> so guessing it's not handling errors well when a URL briefly doesn't respond
<paulczar> the juju-gui export doesn't seem to export which services are exposed.  is this on purpose? or should I file bug ?
<gary_poster> paulczar, hm.  that sounds like a bug, yeah, assuming normal deployer formats support this. :-/ thank you
<gary_poster> should be easy to do, but inconvenient timing
<paulczar> I'll test through again to make sure ... I didn't miss setting it
<paulczar> then will file
<gary_poster> ack
<adam_g> is it possible to configure the network used by the local provider?
<marcoceppi> adam_g: not that I know of at this time
<adam_g> that stinks
<paulczar> in my experience if you can get the local provider to work at all you should be happy :)
<adam_g> paulczar, its working good for me.. maybe too good. i have a local provider running on my local system and a remote system, but would like to be able to address them both
<paulczar> mine doesn't seem to be able to get working ssh keys in the lxc containers ... so I can't juju debug-log or juju ssh or anything
<adam_g> paulczar, oh, maybe you're hitting this? https://bugs.launchpad.net/ubuntu/+source/lxc/+bug/1236577
<_mup_> Bug #1236577: container's /home/ubuntu/ spawns with incorrect permissions, preventing SSH access <theme-oil> <lxc (Ubuntu):Fix Released> <https://launchpad.net/bugs/1236577>
<adam_g> paulczar, i had the same issue
<paulczar> it says they fixed ...  did you have to pull something from a master branch?   or was it fixed through to the apt-repos ?
<adam_g> paulczar, it was fixed in LXC, not juju
<sidnei> hazmat: patch landed in lxc, so progress
<hazmat> sidnei, awesome
<ahasenack> is there a way to use, say, the "ubuntu" charm, but on saucy?
<ahasenack> the charm seems to exist only in the precise namespace
<sidnei> ahasenack: if you check it out locally, put into a saucy directory in your local repo, then --repository=... local:ubuntu?
<ahasenack> that I didn't try yet
<ahasenack> I was looking for command-line options I may have missed
<ahasenack> weird
<ahasenack> I had to use this:
<ahasenack> root@server-1e8400be-2e1c-469a-b794-62a5c587da47:~# juju deploy  --to 2 --repository $(pwd)/charms local:saucy/ubuntu ubuntu-saucy-container
<ahasenack> without the saucy bit in local:saucy/ubuntu it would still try precise
<ahasenack> I have charms/saucy/ubuntu/rest-of-the-charm
<sidnei> ahasenack: probably from default-series in the environments.yaml defaulting to precise
<ahasenack> could be. There was no default-series under the "local" environment though, only the others, so i assumed it didn't apply
<ahasenack> since local will use whatever I'm running
<ahasenack> and indeed, the bootstrap node is saucy
<sidnei> i think that was the case in pyjuju
<ahasenack> hm, I wanted to relate landscape-client to this one, but doesn't work, also because landscape-client only exists in the precise namespace
<ahasenack> bummer, not friendly to non-precise workloads
<sidnei> file a bug! i think this could well be a command-line option
<ahasenack> the question is if all charms are supposed to work on all series
<ahasenack> probably not
<sidnei> nope
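Recapping ahasenack's working recipe: a local repository is laid out as series/charm, and spelling the series out in the charm URL overrides default-series. The bzr source in the comment is a placeholder, and whether a given charm actually runs on saucy is up to the charm:

```shell
# local repository layout: $REPO/<series>/<charm-name>
mkdir -p charms/saucy/ubuntu
# put the charm content there, e.g. (placeholder source):
#     bzr branch lp:charms/precise/ubuntu charms/saucy/ubuntu
# then deploy with the series named explicitly, otherwise
# default-series (usually precise) is assumed:
#     juju deploy --repository "$(pwd)/charms" local:saucy/ubuntu ubuntu-saucy
test -d charms/saucy/ubuntu && echo "repository layout OK"
```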
<Nik_> does anyone know if juju clients newer than 0.6.1 (precise) would be backwards compatible with agents running on machines running precise. Since I can't easily move up the release for juju (using MAAS), I wonder if I can upgrade the clients for deploy --to functionality and some bugfixes
<cotton> how does os x juju work.. does it create containers directly inside OS X?  or does it require a linux VM for an LXC host..
<sarnold> cotton: there are a variety of "providers" that actually speak to whatever "cloud" backend you have. it might be aws or azure or openstack or "local" (lxc, linux-only) or "null" (bad name, hope that changes, that uses ssh)
<cotton> ah ok
<sarnold> cotton: the ssh-based provider may be able to host workloads on OS X alright, but I'm not sure how far that has been tested.
<sarnold> cotton: using the juju frontend on OS X ought to be well-tested, or at least intends to be well tested. :)
<cotton> so i can run juju on my mac via brew, then control a 'local' lxc cloud inside an ubuntu vm also?
<cotton> also, does juju have charms that take care of HA clusters?  aka corosync/drbd/etc
<marcoceppi> cotton: yes, so for the local provider on Mac/Windows, we utilize Vagrant to spin up a VM and run a special Ubuntu image inside that
<cotton> yea ok i see HAcluster http://d.pr/i/8QEH
<cotton> ah ok
<cotton> cool
<marcoceppi> cotton: so you need to vagrant up/vagrant ssh, then run juju commands inside the VM
<marcoceppi> cotton: it's not a perfect story yet, just a stop gap :)
<cotton> ok hah
<marcoceppi> cotton: but outside of the local provider, all other providers/commands work on OSX with the native OSX juju client from brew
<cotton> nice thanks
<sarnold> marcoceppi: ha! you've got vagrant support going already? damn you guys are busy :)
<marcoceppi> sarnold: the other way around
<sarnold> marcoceppi: ha! vagrant has juju support already? damn those guys are busy! :)
<marcoceppi> sarnold: we have a juju vagrant image which has local provider dependencies. So you can up a vagrant instance and get juju all installed with a deployed local environment and the gui
<marcoceppi> sarnold: hah!
<sarnold> marcoceppi: that's pretty cool :D
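The Mac workflow marcoceppi describes can be sketched as a short command sequence. This prints the commands rather than running them (it assumes a Vagrantfile already pointing at the juju box, which is not shown in the conversation):

```shell
# Sketch only: the OS X local-provider workflow described above.
# Commands are printed, not executed, so nothing here needs vagrant installed.
cat > /tmp/osx-juju-workflow.txt <<'EOF'
vagrant up       # boot the Ubuntu VM that carries juju + the local provider
vagrant ssh      # shell into that VM
juju bootstrap   # juju commands then run inside the VM
EOF
cat /tmp/osx-juju-workflow.txt
```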
#juju 2013-10-19
<stokachu> in a juju-local setup the storage port 8040 is the default for what? mongo server?
<stokachu> i installed juju-core first in saucy then saw the config-local doc and installed juju-local. do i have to remove juju-core at this point?
<stokachu> reason i ask is this error message shows up doing a fresh juju-local setup: ERROR Get http://10.0.3.1:8040/provider-state: dial tcp 10.0.3.1:8040: connection refused
<stokachu> the comments from generate-config dont really tell me what storage-port 8040 is
<stokachu> https://bugs.launchpad.net/juju-core/+bug/1240709, nm
<_mup_> Bug #1240709: local provider fails to start <local-provider> <juju-core:Fix Committed by thumper> <juju-core 1.16:Fix Committed by thumper> <https://launchpad.net/bugs/1240709>
<adam_g> marcoceppi, FYI you can in fact change the network the local provider ends up using by updating the lxc network configuration in /etc/default/lxc-net and running sudo restart lxc-net
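adam_g's tip can be sketched against a sample of the stock `/etc/default/lxc-net` (values mirror the Ubuntu defaults; the 10.0.4.x target range is purely illustrative). The edit is done on a temp copy here; on a real host you would edit the file in place and then `sudo restart lxc-net`:

```shell
# Work on a throwaway copy; the real file is /etc/default/lxc-net.
cat > /tmp/lxc-net.sample <<'EOF'
LXC_BRIDGE="lxcbr0"
LXC_ADDR="10.0.3.1"
LXC_NETWORK="10.0.3.0/24"
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
EOF
# Move the whole bridge network from 10.0.3.x to 10.0.4.x (illustrative).
sed -i 's/10\.0\.3\./10.0.4./g' /tmp/lxc-net.sample
grep LXC_NETWORK /tmp/lxc-net.sample   # → LXC_NETWORK="10.0.4.0/24"
```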
<synergy___> You can boot this node using Avahi-enabled boot media or an adequately configured DHCP server. See https://maas.ubuntu.com/docs/nodes.html for instructions.
<synergy___> What about the idea of an Ubuntu Tarot deck written in Java like this one: http://www.osho.com/Main.cfm?Area=Magazine&Sub1Menu=Tarot&Sub2Menu=OshoZenTarot&Language=English
<synergy___> The user could just choose a card, and it might say "Try editing your DHCP settings." :-)
<Azendale> I'm having trouble with getting juju to connect to the environment after I run juju bootstrap. The first time, it asks if I want to confirm the SSH key, which I do. Then it just hangs
<Azendale> I've tried sshing to the address using the username ubuntu, and that connects just fine and I can run commands on the bootstrap node
<Azendale> how should I go about debugging/trying to figure out where the problem lies?
<Azendale> the bootstrap node is a saucy VM running on a KVM virtual machine (for testing purposes) that is managed by MaaS. The computer I'm running the "juju status" command from is the actual host computer, running raring. I've looked at the console of the VM, and it looks like everything completed, I see stuff like "Cloud-init v. 0.7.3 finished" on the VM display
#juju 2013-10-20
<bladernr_> Hey, anyone around with ideas on how to destroy a unit in agent state down that failed to deploy in ec2?
<bladernr_> via juju :)
<marcoceppi> bladernr_: the agent-state is down?
<bladernr_> marcoceppi: yep...
<bladernr_> http://pastebin.ubuntu.com/6273454/
<jcastro> kurt__, around?
<bladernr_> what happened is I did a juju-deploy quantum-gateway.  The instance started to spin up in ec2 but failed to completely start
<bladernr_> so now I have no way to access the node, and juju won't actually destroy it since it's agent-state is down.
<bladernr_> :(
<marcoceppi> bladernr_: if agent-state is down then it'll be hard to communicate with the node. What you can try to do is run `juju resolved quantum-gateway/0` a few times, to see if that moves this process along
<bladernr_> already tried :( I was hoping for a way to just tell juju to forget about that unit and machine manually
<bladernr_> juju should have a --force option to forcibly remove things
<bladernr_> IMO
<marcoceppi> bladernr_: there's probably a bug on that, let me check
<bladernr_> thanks… I tried looking but my search-fu is weak
<marcoceppi> bladernr_: https://bugs.launchpad.net/juju-core/+bug/1089289
<_mup_> Bug #1089289: destroy-unit --force <destroy-unit> <doc> <juju-core:Triaged> <https://launchpad.net/bugs/1089289>
<bladernr_> marcoceppi: you are awesome :)
<bladernr_> subscribed and commented on.  Thanks for finding that :)
<marcoceppi> bladernr_: np! Thanks for the update, it'll help to show more and more people encountering this issue
#juju 2014-10-13
<gnuoy> jamespage, unicast support for hacluster if/when you have a moment https://code.launchpad.net/~gnuoy/charms/trusty/hacluster/unicast-support-take2/+merge/238124
#juju 2014-10-14
<mwhudson> er
<mwhudson> what might cause bootstrap to hang?
<mwhudson> local provider
<thumper> mwhudson: hey
<thumper> mwhudson: which version?
<thumper> mwhudson: unfortunately it can be one of several things
<thumper> it may have the appearance of hanging
<thumper> or it may really be hanging
<thumper> we have fixed the latter in recent releases
<mwhudson> this was pretty hung
<mwhudson> but it was an automated test, so i just killed it and started again
<mwhudson> also: not local provider, manual provider
<gnuoy`> jamespage, neutron-api nsx mp https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/nsx/+merge/238237
<jamespage> gnuoy, ack - looking
<jamespage> gnuoy, I think we should probably switch to using nsx-* for the config options
<jamespage> gnuoy, so that its all inline with upstream naming
<gnuoy> jamespage, but not inline with nsx transport charm
<jamespage> gnuoy, hmm
 * jamespage muses this issue
<gnuoy> jamespage, I think it makes sense to switch to nsx
<jamespage> gnuoy, hmm
<jamespage> so I started a renamed charm - https://code.launchpad.net/~openstack-charmers/charms/trusty/nsx-transport-node/trunk
<jamespage> but I've held off publishing that due to lack for 14.04 support from upstream as yet
<jamespage> gnuoy, lets switch neutron-api to nsx-* now otherwise we are stuck with it for ever
<jamespage> and as its >= icehouse it really is nsx!
<gnuoy> agreed, will do
<jamespage> gnuoy, added a comment to the mp - one of the config options is obsolete
<gnuoy> ta
<gnuoy> jamespage, mp updated
<jamespage> gnuoy, https://code.launchpad.net/~james-page/charms/trusty/nova-compute/fixup-rbd-libvirt/+merge/238296
<gnuoy> jamespage, did you see dosaboys comment on that ?
<gnuoy> jamespage, approved
<jamespage> gnuoy,
<jamespage> ta
<bladernr_> Hey, I have a question I haven't been able to find an answer for yet.  How would I use Juju to deploy into multiple physical zones on a single MAAS cluster?  e.g., lets say I have one maas server (region and cluster on one machine) and two physical zones with some nodes on each.
<bladernr_> Any docs on how to tell juju to bootstrap them separately, or to bootstrap and then deploy one workload to zone A and one workload to Zone b?
<bic2k> I upgraded to juju 1.20.7 two weeks ago, new machines are now reporting private ip's as their public ips. An attempt to upgrade to 1.20.9 appears to have left the cluster in a very invalid state.
<marcoceppi_> bic2k: sorry to hear about that, what cloud is this?
<bic2k> marcoceppi_: AWS. It looks like it got part way through upgrading from 1.20.7 to 1.20.9
<bic2k> marcoceppi_: I'm finishing that by updating the /var/lib/juju/tools symlinks and restarting the services
<bic2k> once I get machine-0 running I hope it can resolve the rest of machines
<bic2k> any way to tell how far along the juju-upgrade is? We can now access juju again, and all but two machines are updated. So that looks like it worked.
<marcoceppi_> bic2k: not really, in the next version of juju, after 1.21, we'll be adding better state exposure like upgrading, installing, etc
<bic2k> marcoceppi_: cool, I'm wondering if there is an official guide related to manually finishing an upgrade.
<bic2k> I have done it a few times now, it isn't really hard, but it does help if things get stuck.
<bic2k> I could write something up someplace or blog it.
<marcoceppi_> bic2k: please write something up!! we don't have anything. If you do blog it up (or place it in the docs) that would be fantastic
<marcoceppi_> https://github.com/juju/docs is where the docs live, this would be super helpful in the troubleshooting section
<mwhudson> is there some way i can prep a disk image so that after installing it, juju deploying to a container on it will be faster?
<mwhudson> well, obviously there must be
<mwhudson> but can someone tell me what it is? :)
#juju 2014-10-15
<bradm> anyone able to help debug a maas node that's been added to juju that's hanging around in agent-state: pending?
<thumper> mwhudson: which provider?
<mwhudson> thumper: manual
<thumper> mwhudson: ok, you just need to have preseeded the image with the lxc ubuntu-cloud template
<mwhudson> thumper: lxc-create -n foo -t ubuntu-cloud; lxc-destroy -n foo?
<thumper> mwhudson: that'd probably work
<mwhudson> cool
<thumper> mwhudson: with caveats around differing series...
<thumper> but if you are just using the defaults
<thumper> I think that'll work
<mwhudson> all trusty all the time
<mwhudson> (also arm64, trying to create a precise lxc not going to go well)
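thumper's preseeding idea, as a sketch. The commands are printed rather than run (lxc and root access aren't assumed here); the container name `warmup` is arbitrary, the series flag is an assumption, and `lxc-destroy` removes the container while leaving the downloaded image cached under /var/cache/lxc:

```shell
# Sketch: warm the ubuntu-cloud template cache in a disk image so a later
# juju deploy to a container skips the big download. Printed, not executed.
cat > /tmp/lxc-warmup.txt <<'EOF'
sudo lxc-create -n warmup -t ubuntu-cloud -- -r trusty
sudo lxc-destroy -n warmup
EOF
cat /tmp/lxc-warmup.txt
```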
<bradm> so no ideas on why a juju unit would be stuck in pending?
<thumper> bradm: a unit would be stuck in pending if the unit agent never started
<thumper> bradm: which meant the machine agent didn't deploy the unit properly
<thumper> bradm: which means, most likely, the machine agent isn't running... ?
<bradm> thumper: no, it has them installed and running
<thumper> bradm: the machine agent is running?
<bradm> /var/lib/juju/tools/machine-31/jujud machine --data-dir /var/lib/juju --machine-id 31 --debug
<bradm> thats what you mean? ^^
<thumper> bradm: yeah
<thumper> bradm: is there a jujud unit running?
<bradm> oddly out of 16 hosts, only one succeeded
<bradm> thumper: I've only done a juju add-machine so far
<bradm> we're trying to work out whats going on here, we're having some fun with bug #1263181
<mup> Bug #1263181: curtin discovers HP /dev/cciss/c0d0 incorrectly <canonical-bootstack> <curtin:Triaged> <https://launchpad.net/bugs/1263181>
<bradm> thumper: but I can confirm all 16 nodes have the machine agent running
<bradm> thumper: and only one is in a started state
<thumper> bradm: but is there a unit agent running?
<thumper> bradm: can you ssh to the machine and see if the charm has been deployed?
<bradm> thumper: no, none of them have a unit agent
<bradm> thumper: we haven't deployed charms yet, we're having booting issues
<thumper> bradm: also, does juju status show a machine for the unit?
<bradm> thumper: at this point we're just doing a juju add-machine
<thumper> bradm: ok, I'm confused
<thumper> you did say "unit stuck in pending"
<thumper> did you mean machine?
<bradm> ah, I did say unit too, sorry
<bradm> I should have said machine
<bradm> I was using unit in the generic sense, didn't remember that it had a juju specific meaning
<thumper> bradm: ah, ok...
<bradm> thumper: the other fun part is we're using a 3 node HA bootstrap, so maybe somethings going on there
<thumper> bradm: so the machines have been deployed, and the machine agent is running, but showing pending?
<bradm> thumper: correct, on all but one of the hosts
<thumper> bradm: so this would occur if the machine agent can't contact the api server
<thumper> bradm: check the local logs on the machines
<bradm> thumper: they're not particularly enlightening
<bradm> thumper: let me pastebin one for you
<thumper> kk
<bradm> thumper: http://pastebin.ubuntu.com/8562185/ <- one that didn't work
<bradm> thumper: foo-os-[123].maas is the HA'd bootstrap nodes
<thumper> bradm: and that's it?
<bradm> thumper: yup
<bradm> thumper: like I said, not particularly enlightening
<thumper> bradm: looks like the websocket handshake is failing for some reason
<thumper> bradm: that's the first thing it does
<thumper> bradm: if you look at the logs for the state servers, do you see the incoming connection from the other machines
<thumper> ?
<thumper> bradm: also, make sure the state servers have a decent logging-config
<thumper> like DEBUG
<thumper> bradm: by default the logging-config is set to WARN if you don't specify
<thumper> if you bootstrapped with --debug, it should stay debug
<bradm> thumper: http://pastebin.ubuntu.com/8562198/ <- one that did work
<bradm> thumper: ok, let me see..
<bradm> thumper: so its definitely in debug
<bradm> thumper: there's a lot going on, have you got an example of what an incoming connection would look like in the logs?
 * thumper looks
<bradm> debug is pretty noisy, as you'd expect, its hard to tell what to look for
<thumper> grr
<bradm> HA bootstrap nodes doesn't help either, it throws enough of its own logs too
<thumper> trying to get some info for you bradm
<thumper> but hitting other local issues
<thumper> gimmie a few minutes
<bradm> thumper: sure, thats fine - I'm probably going to have some lunch soon anyway
<bradm> I can hear my wife making something in the kitchen now..
<bradm> thumper: in fact, lunchtime now!  will be back in a bit
<thumper> kk
<thumper> brad: 2014-10-15 02:07:01 DEBUG juju.apiserver apiserver.go:156 <- [1] machine-0 {"RequestId":<id>, ... Entities":[{"Tag":"machine-1"}]}}
<thumper> 2014-10-15 02:07:01 DEBUG juju.apiserver apiserver.go:163 -> [1] machine-0 311.032us {"RequestId":<id>,"Response":{...}}
<thumper> bradm: that was for machine-1
<thumper> bradm: use whichever number you have
<thumper> I thought there was a login logged, but seems not
<bradm> thumper: righto, let me see..
<bradm> thumper: curious, I have juju.state.apiserver, not juju.apiserver
<thumper> bradm: oh... which version of juju?
<thumper> we did move it
<thumper> but perhaps that was 1.21
<bradm> thumper: juju 1.20.9
<bradm> machine-1: 2014-10-15 00:34:45 INFO juju.state.apiserver apiserver.go:165 [32] machine-40 API connection terminated after 3m0.104868462s
<bradm> aha, here we go
<bradm> thumper: http://pastebin.ubuntu.com/8562529/
<bradm> thumper: so it looks like machine-40 does a whole bunch of requests, there's a response sent back, and then it just times out
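Picking one machine agent's traffic out of a noisy DEBUG log, as done above, is just a grep on the machine tag. This sketch runs it against abbreviated sample lines modeled on the quoted format (1.20.x logs under `juju.state.apiserver`; real entries carry full RequestId/Response JSON):

```shell
# Abbreviated sample state-server log; the machine number is whatever
# `juju status` shows for the stuck machine (machine-40 in this conversation).
cat > /tmp/apiserver.sample <<'EOF'
2014-10-15 00:34:40 DEBUG juju.state.apiserver apiserver.go:156 <- [32] machine-40 {"RequestId":1}
2014-10-15 00:34:45 INFO juju.state.apiserver apiserver.go:165 [32] machine-40 API connection terminated after 3m0.104868462s
2014-10-15 00:34:50 DEBUG juju.state.apiserver apiserver.go:156 <- [33] machine-41 {"RequestId":2}
EOF
grep 'machine-40' /tmp/apiserver.sample
```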
<thumper> bradm: definitely looks like a bug... :-(
<bradm> thumper: this is on a customer site too :(
<thumper> bradm: do you see the same if not doing HA?
<bradm> thumper: I can try that out easily enough.
<thumper> bradm: please
<bradm> huh, what, that just failed
<bradm> maybe I tried a bit too quickly
<bradm> ah, this is better, it was far too quick last time
<bradm> thumper: will let you know when its up, this is all HP kit and maas, so its not exactly fast
<thumper> kk
<bradm> thumper: ok, bootstrap is done, doing the add-machine now
<bradm> thumper: right, all 16 have started, now its waiting time.
<bradm> thumper: well look at that, a lot of them are coming back as started
<thumper> bugger
<thumper> bradm: looks like the HA bit is screwing things up
<bradm> thumper: sure does - I'd like to try this again with HA once all these either hit started or we decide to give up
<thumper> bradm: can I get you to file a bug and one of us will mark it critical
<thumper> bradm: when you test it again, that is
<thumper> bradm: something is horribly wrong
<thumper> bradm: please include all the version information you can
<thumper> and provider info
<bradm> thumper: sure, is there any particular logs or anything you need?  other than it not working with HA?
<thumper> grab the logs from the state servers (the various HA machines), and at least one failing log from a machine showing as pending
<bradm> thumper: interestingly we've done something similar in a staging environment, and using HA worked fine - although its not as many hosts, and its with softlayer rather than physical HP kit
<bradm> so its SeaMicro kit there, I think
<thumper> bradm: either way, something is wrong...
<thumper> bradm: and I'm not sure what
<bradm> thumper: all 16 hosts are now in a agent-state of started.
<thumper> bradm: when you did the test with HA, were all three HA machines up and running?
<thumper> and stable?
<bradm> thumper: yes
<bradm> I'll quickly grab juju status output from this, and fire up the HA again
<thumper> if you can confirm again, I bet it is something about the HA-ness of it all
<thumper> not something I've had anything to do with I'm afraid
<thumper> but lets get the bug filed and someone on it
<bradm> right
<thumper> cheers
<bradm> thumper: this will be pretty nice once its working though, we have openstack deployed in HA mode to the ha bootstrap nodes into LXC, its working fairly well in testing
<bradm> I've just filed bug #1381340 as per discussion with thumper, some kind of HA bootstrap node bug
<mup> Bug #1381340: HA bootstrap mode causes machines stuck in agent-state pending <canonical-bootstack> <juju-core:New> <https://launchpad.net/bugs/1381340>
<mattrae> hi, i'm doing debug-hooks and according to the docs, 'exit 1' will halt the queue. When i run 'exit 1' i see it moves on to the next queued hook, rather than halting. is there a step i'm missing? juju 1.18.4
<marcoceppi_> mattrae: exit in debug-hooks is ignored in 1.18
<mattrae> juju remove-relation doesn't appear to be completing. i still see the relation in juju status, and re-adding the relation says 'relation already exists'. what's the best way to debug?
<mattrae> marcoceppi_: thanks for confirming the issue. sigh i feel like it should be fixed in the juju versions we ship with trusty.
<catbus1> Hi, I juju deploy ceph on 3 physical nodes, each has 1TB on sdb. How long should I expect for nodes to finish the install?
<catbus1> it's been at least 2 hours.
<marcoceppi_> catbus1: shouldn't take more than a few mins
<catbus1> ok, I will start over
<marcoceppi_> catbus1: what is the status?
<marcoceppi_> pending?
<catbus1> pending
<marcoceppi_> are the machines stuck?
<catbus1> I can juju ssh in the unit and no busy process seen via top
<marcoceppi_> catbus1: yeah, something is stuck
<marcoceppi_> catbus1: can you `ps -aef | grep hooks` ?
<catbus1> just one entry returned: ubuntu   13681 13661  0 12:29 pts/1    00:00:00 grep --color=auto hooks
<marcoceppi_> catbus1: yeah, no hooks are running, something is borked
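marcoceppi_'s stuck-hook check, sketched against a canned ps listing (the unit path below is illustrative). Piping through `grep -v grep` drops the grep process itself, which is the only line catbus1 saw:

```shell
# Canned `ps -aef` output: one running install hook plus the grep itself.
cat > /tmp/ps.sample <<'EOF'
root      9001     1  0 12:00 ?        00:00:02 /var/lib/juju/agents/unit-ceph-0/charm/hooks/install
ubuntu   13681 13661  0 12:29 pts/1    00:00:00 grep --color=auto hooks
EOF
# Filter out the grep process so only real hook processes remain.
grep hooks /tmp/ps.sample | grep -v grep
```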
<catbus11> marcoceppi_: I got ceph installed successfully and started now.
<catbus11> it took about 5 minutes or so.
<marcoceppi_> catbus11: sweet!
<marcoceppi_> catbus11: rule of thumb, if nothing has happened for 10-20 mins, get suspicious
<catbus11> ok
#juju 2014-10-16
<bloodearnest> sanity check - can you relate subordinate charms to one another?
<jcastro> hey lazyPower
<bloodearnest> looks like you can relate two subordinates in latest 1.20. Any one know if that works in 1.18?
<bloodearnest> ftr, it does
<lazyPower> hey jcastro o/
<jcastro> lazyPower, hey we need a 2nd opinion on the xpcc charm wrt. the install hook working
<jcastro> can you check it out tomorrow?
<jcastro> iirc that's when your shift is
<lazyPower> xpcc?
<lazyPower> oh
<lazyPower> hpcc!
<jcastro> oh yeah, sorry
<lazyPower> ack. I can take a look now, i'm in between tracks @ hadoopworld
<lazyPower> lemme pull it and take a look
<jcastro> oh that's this week!
<jcastro> thought it was next week!
<lazyPower> no sir, i'm out of town, attending big data talks :)
<lazyPower> jcastro: i cant run a deploy test, my wifi is crapping out when i attempt to deploy the charm. I can give it a cursory code review however, there are a few things in here that raise an eyebrow - but i dont want to just add another log on the fire that is this review chain.
<lazyPower> jcastro: are you OK with me taking a look at this later when i've got more bandwidth available to do a proper review?
<jcastro> lazyPower, yeah I just wanted to get it on your radar
<lazyPower> ack. I'll try to get this out before EOW - there's a lot to cover here.
<jcastro> heh, everyone is either on swap, conference, on holiday this week
<lazyPower> yeah, its a crazy time in the eco sphere.
<lazyPower> i thought you were still in brussels
<tvansteenburgh> jb
<jhobbs> juju ssh <unit> is failing for me if my system can't resolve the hostname MAAS has given the unit - http://paste.ubuntu.com/8574665/
<jhobbs> this used to work and I'm wondering what's different now
<ayr-ton> When I create a charm with juju charm create -t bash, are there other templates different than bash?
<tvansteenburgh> ayr-ton: yes
<ayr-ton> tvansteenburgh, Where I can found them?
<ayr-ton> find*
<tvansteenburgh> ayr-ton: `charm create -h` should list them
<ayr-ton> thanks
<ayr-ton> tvansteenburgh, No. ;x
<tvansteenburgh> ayr-ton: paste the output
<ayr-ton> tvansteenburgh, http://paste.ubuntu.com/8574729/
<tvansteenburgh> ayr-ton: "Installed templates: python, bash"
<ayr-ton> Hmmm. Not too friendly, but helps (:
<ayr-ton> Actually it is a strange place to see the charms list.
<tvansteenburgh> ayr-ton: agree, could be friendlier, patches welcome :)
<ayr-ton> tvansteenburgh, Thanks (: I will file an enhancement.
<ayr-ton> tvansteenburgh, I will do that (:
<ayr-ton> tvansteenburgh, juju-core, right? https://github.com/juju/juju
<tvansteenburgh> ayr-ton: https://launchpad.net/charm-tools
<ayr-ton> tvansteenburgh, Should I file a blueprint or bug first? Or no problem to submit the merge directly?
<tvansteenburgh> ayr-ton: you can file a bug first, or just submit a merge proposal directly, either is fine, thanks!
<ayr-ton> Ok o/
<ayr-ton> tvansteenburgh, https://bugs.launchpad.net/charm-tools/+bug/1382127
<mup> Bug #1382127: Theres no specific command to show charm templates list <Juju Charm Tools:In Progress by ayrton> <https://launchpad.net/bugs/1382127>
<ayr-ton> tvansteenburgh, WIP at: https://code.launchpad.net/~ayrton/charm-tools/charm-list-command
<tvansteenburgh> ayr-ton: great, thanks
<ayr-ton> tvansteenburgh, other question. How to create new templates?
<tvansteenburgh> ayr-ton: create a class that extends charmtools.generators.CharmTemplate and add it to setup.py as a new entrypoint
<tvansteenburgh> ayr-ton: examples are in charmtools/templates
<ayr-ton> tvansteenburgh, okay \o/
<catbus1> marcoceppi_: Hi Marco, the ceph node I am deploying now keeps printing "ERROR juju.worker runner.go:218 exited "api": unable to connect to "wss://01.maas:17070/" . The node can ping 01.maas fine.
<catbus1> 01.maas is the juju bootstrap node.
<marcoceppi_> catbus1: sounds like a DNS issue, can you dig 01.maas from that node?
<catbus1> marcoceppi_: it could find the ip address of the 01.maas. ping gets response.
<marcoceppi_> catbus1: is the bootstrap node running?
<marcoceppi_> can you juju status, etc?
<catbus1> marcoceppi_: http://pastebin.ubuntu.com/8574929/
<catbus1> marcoceppi_: http://pastebin.ubuntu.com/8574918/
<marcoceppi_> catbus1: well the error seems clear, it can't dial home
<marcoceppi_> catbus1: on bootstrap node, can you run `sudo initctl list | grep juju` ?
<catbus1> jujud-unit-juju-gui-0 start/running, process 15886
<catbus1> juju-db start/running, process 15731
<catbus1> jujud-machine-0 start/running, process 15791
<marcoceppi_> catbus1: netstat -lt on the bootstrap node
<catbus1> marcoceppi_: http://pastebin.ubuntu.com/8574960/
<marcoceppi_> catbus1: interesting, it's only listening on the IPv6 address
<marcoceppi_> that's a new one for me
<catbus1> yeah
<marcoceppi_> what version of juju ?
<catbus1> juju-core:
<catbus1>   Installed: 1.20.9-0ubuntu1~14.04.1~juju1
<marcoceppi_> catbus1-afk: that's weird
<cory_fu> Is there any documentation, or a reference implementation for a "proxy charm"?
<marcoceppi_> cory_fu: not really
<cory_fu> No examples at all?
<cory_fu> Do we even have a definition of what a "proxy charm" is?
<marcoceppi_> I mean, there's the aws-elb charm
<marcoceppi_> and a few others
<marcoceppi_> they exist but it's not really documented except in theory
<cory_fu> I've heard the term thrown around as The Way to integrate with existing, non-Juju solutions, but have never heard any more detail than the term itself
<marcoceppi_> cory_fu: that's basically it
<marcoceppi_> using configuration you can expose parts of your infrastructure and then implement the relationships they would provide in the charm
<marcoceppi_> nagios-external-master is another example
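A proxy charm in the sense marcoceppi describes boils down to a relation hook that republishes operator-supplied config instead of data from a locally-run service. This is a hedged sketch only: the config keys and address are made up, and the juju hook tools are left as comments so the snippet runs standalone:

```shell
# Sketch of a proxy charm's relation hook. In a real hook the values come
# from the juju tools:
#   HOST=$(config-get external-host)
#   PORT=$(config-get external-port)
# and are published with:
#   relation-set host="$HOST" port="$PORT"
HOST="db.example.internal"   # made-up address of the existing, non-juju service
PORT="5432"
echo "relation-set host=$HOST port=$PORT" > /tmp/proxy-hook.out   # stand-in for the real call
cat /tmp/proxy-hook.out
```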
<jcastro> jose, around?
<jcastro> natefinch, check your email please, I need your address for your juju shirt.
<natefinch> jcastro: thanks
<natefinch> jcastro: is this American sizes or UK sizes?  It makes a difference :)
<jcastro> natefinch, UK size
<natefinch> jcastro: good to know
<ayr-ton> tvansteenburgh, Problems to install dependencies from requirements.txt from charms-tools: http://paste.ubuntu.com/8575344/
<ayr-ton> Could not find any downloads that satisfy the requirement requirements.txt, after a pip install with --allow-external
<tvansteenburgh> pip install -r requirements.txt --allow-external launchpadlib
<tvansteenburgh> ayr-ton: try that ^
<tvansteenburgh> ayr-ton: in general it's probably better to just use the make targets
<tvansteenburgh> ayr-ton: i.e. `make clean && make`
<ayr-ton> tvansteenburgh, Okay, trying now
<mburleigh> hello, have newbie question, can I switch environments while using the juju-gui? I have it running a "local" lxc but want to manage my "MaaS" environment from the other?
<catbus1-afk> marcoceppi_: I started over, and it's still the same
<marcoceppi_> catbus1-afk: I'm looking to see if there's a way to disable ipv6 in juju
#juju 2014-10-17
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/openstack-zeromq/tidyup/+merge/238712
<lazyPower> hey jcastro, another follow up on yesterdays request
<lazyPower> i got HPCC reviewed, looks good to go, i need to finish doing the cross cloud deploy test
<jcastro> rock
<gnuoy> jamespage, https://code.launchpad.net/~gnuoy/charms/trusty/neutron-api/zmq-disable-notif-enforce-juno/+merge/238724
<bic2k> my juju mongod instance keeps locking agents out, I forced a manual agent update earlier this week to 1.20.9 (and used update-juju to 1.20.10 yesterday). Any way to tell if that service is out of date and needs an update as well?
#juju 2014-10-18
<mwak> hey
#juju 2015-10-12
<blahdeblah> Re: "Juju on multiple public clouds" thread on the mailing list a couple of weeks back, what's the current recommendation if we need to communicate in an environment which is a roughly 50/50 split between two different providers (in my case openstack + bare metal maas)
<blahdeblah> ?
<stub> blahdeblah: Your choices are using 2 environments without relations between them, or bringing up a single environment and using manual provisioning for the other provider (where we have done this, it is an OpenStack environment with the MaaS provisioned servers plugged in with the manual provider)
<blahdeblah> stub: And if using the former, then work out some sort of manual replacement for relations that passes the required data?
<stub> yeah, which is why the latter is usually the better option, I think.
<stub> With the latter you just have to handle the firewall stuff yourself.
<blahdeblah> cool - thanks
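stub's single-environment approach can be sketched as a command sequence (printed rather than run; the address, user, and environment name are placeholders). `juju add-machine ssh:user@host` is the manual-provisioning form that enlists an already-provisioned host into the environment:

```shell
# Sketch: bootstrap on one provider, then pull hosts from the other in via
# manual provisioning. Names and addresses below are illustrative only.
cat > /tmp/cross-cloud-plan.txt <<'EOF'
juju bootstrap -e openstack
juju add-machine ssh:ubuntu@10.1.2.3   # a MAAS-deployed bare-metal host
juju deploy mysql --to 1               # place a unit on that machine
EOF
cat /tmp/cross-cloud-plan.txt
```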
<blahdeblah> lazypower: there seem to be 3 versions of the dns charm in jujucharms.com - any plans to make one of them official at some point?
<blahdeblah> lazypower: also, what are your plans for autogeneration?
<sg> I want to use apache http server for load balancing, can any one tell me how to configure apache server for loadbalancing using apche2 charm?
<jamespage> gnuoy`, morning
<gnuoy`> hi jamespage
<jamespage> gnuoy`, thanks for the reviews and merging of my charm branches from over the weekend
<gnuoy`> np, thanks for the mps!
<jamespage> gnuoy`, :)
<bdx> openstack-charmers, charmers, core, dev: hows it going everyone? Can someone give a little feedback on whether or not I am going about parsing the network-interfaces yaml/string/dict in a sane way? I feel like it could be done cleaner, but am having trouble understanding how the input could be parsed in a cleaner fashion...Thanks!
<bdx> heres the MR I'm referencing: https://code.launchpad.net/~jamesbeedy/charms/trusty/nova-compute/add-iface-support/+merge/274159
<bdx> jamespage: ^^
<jamespage> bdx, hey - I have you on my TODO for today re that merge proposal
<jamespage> bdx, completely ack the requirement, but I need to fill you in on where that feature will come from
<jamespage> (i.e. its not the charm)
<jamespage> bdx, in meetings all day so it will be a few hours before i respond in full
<bdx> jamespage: ok, no worries...I'll leave her rest then
<jamespage> bdx, please - don't want you to waste cycles on this when its going to be resolved in a different way
<bdx> jamespage: ok, thanks!
<lazypower> blahdeblah, i'm not sure if my last messages sent. so i'll repeat:  It will get official once we have some actual tests around it that properly validate the charm. It presently requires the ability to store and ship with secrets in order to do that (infra keys for rt53) - and we dont have an answer for storing secrets. I've been toying with the idea of charming up vault and adding relations between CI and Vault to do secret storage/recovery/configuration but thats pie in the sky atm
<lazypower> In terms of the bind provider, thats stable enough that we could probably omit that, deprecate the current namespace charm across the board, and move to a generation pattern. The charm is constructed in a way that would lend itself nicely to that.
<lazypower> all the "providers" are plugin based, and fairly arbitrary
<blahdeblah> lazypower: I didn't understand a single thing in that line about the bind provider. :-(
<blr> Have an issue with juju deployer - if a deployment fails for any reason, before payloads are copied to /var/lib/juju/agents/unit-foo-0/files subsequent runs will not recopy payloads to the unit afaict
<blahdeblah> lazypower: So is your version of the DNS charm the closest thing to official at present?
<blr> ok possibly not juju deployer, but juju rather.
<blr> nevermind, a problem with my understanding, rather than mojo/juju.
<nottrobin> you know when you do "juju bootstrap" it asks for sudo password? what command is it actually trying to run? I'd like to use the sudoers file to allow a user to bootstrap juju but nothing else
<rick_h__> nottrobin: it's starting the lxc that needs it. lxc-create?
<nottrobin> rick_h__: hmm I got "sudo lxc-create" to work without password, but "juju bootstrap" still asks for password. Also when I look in auth.log I see "command not allowed ; .. COMMAND=/bin/bash -s". Which suggests I'd need to let that user run bash as sudo. Which is obviously too permissive. Maybe there's no solution.
<rick_h__> nottrobin: what version of juju?
<rick_h__> nottrobin: and lxc?
<rick_h__> nottrobin: there was a point where that switched over to not need sudo perms but not sure when that flipped
<nottrobin> rick_h__: juju: 1.22.6-trusty-amd64. lxc: 1.0.7
<nottrobin> maybe if I was on utopic?
<rick_h__> nottrobin: looking
<rick_h__> nottrobin: https://github.com/juju/docs/blob/master/src/en/config-LXC.md#bootstrapping-and-destroying
<nottrobin> rick_h__: Isn't that about "juju bootstrap" itself needing to be run with sudo, which was a long time ago. I don't need that. This is a completely clean install of juju-local on an up-to-date version of trusty.
#juju 2015-10-13
<matt_dupre> How long does it usually take for updates to a charm to make it from launchpad through to the charm store?
<rick_h__> matt_dupre: 2hrs
<matt_dupre> rich_h__: thanks - is there any way I can check on the status of a change?  It's been over 4 hours
<matt_dupre> rick_h__*^^
<rick_h__> matt_dupre: which url?
<rick_h__> ping urulama_ and the team can look into it.
<urulama_> matt_dupre: sure thing, which charm/bundle?
<matt_dupre> urulama_: https://code.launchpad.net/~project-calico/charms/trusty/neutron-calico/trunk and https://jujucharms.com/u/project-calico/neutron-calico/trusty
<matt_dupre> former is at revision 63, latter is still at 61
<urulama_> matt_dupre: sometimes bzr revisions get out of sync, i'll take a look shortly
<matt_dupre> thanks
<urulama> marcoceppi: "Id":"cs:~project-calico/trusty/neutron-calico-6","PublishTime":"2015-10-13T10:46:48.591Z"
<urulama> ah, sorry, marcoceppi
<urulama> matt_dupre: "Id":"cs:~project-calico/trusty/neutron-calico-6","PublishTime":"2015-10-13T10:46:48.591Z"
<urulama> matt_dupre: seems that the charm there is the latest version, but the bzr info is out of sync. we're working on improved charm publishing and solving that issue with it
<matt_dupre> urulama_: Ahh OK, thanks
<urulama> matt_dupre: np. you can check the charm's BZR digest here https://api.jujucharms.com/charmstore/v4/~project-calico/neutron-calico/meta/extra-info/bzr-digest
<urulama> matt_dupre: this API provides more accurate info on the latest charm in the store
<bdx> core, dev, charmers, juju: hey whats going on guys? juju-deployer is throwing some errors I can't seem to work around...any ideas? ....here is my juju-deployer.yaml : http://paste.ubuntu.com/12776352/ .....
<bdx> the error I'm getting is: http://paste.ubuntu.com/12776367/
<bdx> I don't have 'branch' specified for any services anywhere in my juju-deployer.yaml ..... does juju-deployer cache things??
<marcoceppi> bdx: one min
<bdx> marcoceppi: ok
<marcoceppi> bdx: remove series: trusty
<bdx> marcoceppi: from everywhere?
<lazypower> stupid bouncer lag...
<marcoceppi> bdx: the key, on line two
<bdx> marcoceppi: when I do I get another error... http://paste.ubuntu.com/12776730/
<bdx> marcoceppi: I am trying to simplify things....here's my updated charmconf.yaml: http://paste.ubuntu.com/12776764/
<bdx> marcoceppi: does juju-deployer cache configs anywhere on the os?
<marcoceppi> bdx: not really
<bdx> or in the state server somewhere
#juju 2015-10-14
<stub> tvansteenburgh: Have you seen the apt failures in the bundletester CI system? Something odd going on inside the charmbox container, best I can tell from here.
<tvansteenburgh> stub: yes, this is the root cause https://bugs.launchpad.net/ubuntu/+source/python-urllib3/+bug/1500768
<mup> Bug #1500768: python3.4.3 SRU break requests <regression-update> <trusty> <verification-done> <python-urllib3 (Ubuntu):Invalid> <python-urllib3 (Ubuntu Trusty):Fix Committed by doko> <https://launchpad.net/bugs/1500768>
<stub> tvansteenburgh: ta
<suchvenu> hi
<suchvenu> In my juju local container I am getting an issue like the one shown below:
<suchvenu> Err http://security.ubuntu.com trusty-security Release.gpg   Could not resolve 'security.ubuntu.com'
<suchvenu> Err http://archive.ubuntu.com trusty Release.gpg   Could not resolve 'archive.ubuntu.com'
<suchvenu> Can anyone help pls
<suchvenu> Begin Install.
<suchvenu> 2015-10-14 13:48:04 INFO install Err http://security.ubuntu.com trusty-security InRelease
<suchvenu> 2015-10-14 13:48:04 INFO install Err http://archive.ubuntu.com trusty InRelease
<suchvenu> 2015-10-14 13:48:04 INFO install Err http://archive.ubuntu.com trusty-updates InRelease
<suchvenu> 2015-10-14 13:48:04 INFO install Err http://security.ubuntu.com 
<suchvenu> This is the section from debug-log
<marcoceppi> suchvenu: o/
<marcoceppi> I was just reading your email
<marcoceppi> suchvenu: this seems to be a networking/dns resolution issue. Do you have an HTTP proxy set?
<suchvenu> No
<marcoceppi> suchvenu: can you log into the machine and run `ping 8.8.8.8` ?
<suchvenu> ok
<suchvenu> Its pinging
<kwmonroe> hey aisrael, have you gone up to vbox 5 yet?  and if so, any gotchas doing vagrant/juju stuffs with that?
<aisrael> kwmonroe: I've been running virtualbox 5 for a while. No issues that I've run into.
<kwmonroe> cool, thx aisrael!
<shilkaul> Hi, I have a scenario in which a user is deploying service A and service B simultaneously, but service B should be executed only after service A is deployed successfully.
<shilkaul> Is there some way to delay service B so that it runs after A?
<cory_fu> shilkaul: Yes.  You can use relations to model the dependency between services.  Specifically, service A can use relation-set to set a value on the relation interface, and service B can block its actions until it sees that flag from service A.
<cory_fu> shilkaul: I should note that these sorts of cross-service dependencies can be tricky to keep track of for multiple services, or when combined with dependencies on config values, etc.  That is a big part of the motivation behind the reactive & layers pattern that we developed: https://jujucharms.com/docs/stable/authors-charm-composing
<cory_fu> The idea there being that you depend on relation interface layers to set states, and you use @when and other decorators to respond to combinations of states from whatever interfaces you depend on.
<cory_fu> It's still under development, though, so consider that pattern in the "early adopter" or "public beta" phase, but it should hopefully give you an idea of how this problem can be solved
<cory_fu> You can also solve the problem with traditional style hooks, especially with simpler charms, but you have to think carefully about what the overall state of your service might be during any given hook invocation (since they can be called in many possible orders)
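cory_fu's state-gating idea can be sketched with a toy dispatcher. Note this is a plain-Python illustration of the pattern, not the real charms.reactive API — the `when` decorator, `dispatch` loop, and the `db.ready` state name here are all invented for the example:

```python
# Toy simulation of the flag/state dispatch cory_fu describes: handlers are
# gated on named states, and service B's work only runs once service A has
# set its "ready" flag.  NOT the real charms.reactive API.

states = set()      # states that have been set so far
handlers = []       # (required states, handler) pairs

def when(*needed):
    """Register a handler that runs only when all named states are set."""
    def register(fn):
        handlers.append((set(needed), fn))
        return fn
    return register

def dispatch():
    """Run every handler whose required states are all present."""
    ran = []
    for needed, fn in handlers:
        if needed <= states:
            fn()
            ran.append(fn.__name__)
    return ran

@when('db.ready')          # service B blocks until A sets this flag
def configure_b():
    pass

# Before A signals readiness, B's handler does not fire.
assert dispatch() == []

# Service A sets its flag (via relation-set in a real charm); now B runs.
states.add('db.ready')
assert dispatch() == ['configure_b']
```

The real library adds state persistence between hook invocations and re-dispatches whenever states change, but the gating logic is the same shape.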
<shilkaul> thanks cory, the issue I am facing is that these two hooks are not participating in any joined relation; they are unrelated. In hook A I am mounting some directory. In hook B I want to run some services when successful mounting happens in hook A.
<cory_fu> I'm a bit confused.  If there is no relation between the services, then why does B care whether a directory is mounted on A?
<shilkaul> I have a storage service A and another service B which is my master of the cluster. To add a slave node, i.e. service C, I am adding relations between A and C and between B and C. So when C joins A, it will mount directories on C from A, and when C joins B, I need to run some services from the bin folder of the mounted folder.
<cory_fu> Ah.  Hrm.  I guess you would have to propagate state along the relations that exist.  Specifically, A should tell C that its storage is ready to use, then, once it has that info, C should tell B that it is ready to act as a slave.  We do something much like this in the big data charms, though those use the older "services framework" to manage the inter-dependencies, which is not nearly as easy to follow as the reactive + layers pattern
<shilkaul> can you please share some big data example which I can explore for this type of scenario?
<cory_fu> Certainly.  The bundle that contains the core set of charms is: https://jujucharms.com/apache-core-batch-processing/
<cory_fu> In that case, apache-hadoop-hdfs-master (and apache-hadoop-yarn-master) depends on apache-hadoop-compute-slave to do useful work.  Then, apache-hadoop-plugin depends on apache-hadoop-hdfs-master (and apache-hadoop-yarn-master) to tell it when it is ready to do work (i.e., has at least one compute-slave)
<shilkaul> ok , thanks
<cory_fu> shilkaul: The specific bits of code you're interested in are the "requires" blocks in common.py in each charm, such as: http://bazaar.launchpad.net/~bigdata-dev/charms/trusty/apache-hadoop-plugin/trunk/view/head:/hooks/common.py#L87
<cory_fu> In the parlance of the services framework, those say "don't do anything in callbacks unless these relations have set their ready flag"
<cory_fu> The relation classes are a bit more complicated than that due to other constraints in big data, but that's the core of it.  They're also hosted in a shared library:
#juju 2015-10-15
<mbruzek> Can someone explain how the certificates work in Juju?
<mbruzek> I am getting an error in the Juju debug-logs:
<mbruzek> machine-0: 2015/10/15 10:40:19 http: TLS handshake error from 127.0.0.1:47964: remote error: bad certificate
<mbruzek> According to the .juju/kvm/server.pem the certificate expired today:  Not After : Oct 15 15:31:39 2025 GMT
<mbruzek> I don't remember generating this certificate, how can I get a new one?
<lazypower> mbruzek, i'm fairly certain the KVM provider generates that certificate when you bootstrap...
<lazypower> thats weird that its expired today
<mbruzek> lazypower: I used openssl x509 -text -in server.pem to display the details
<mbruzek>         Validity
<mbruzek>             Not Before: Oct  8 15:31:40 2015 GMT
<mbruzek>             Not After : Oct 15 15:31:39 2025 GMT
<lazypower> ah so its not expired
<mbruzek> lazypower: The way I read that it is expired.  today
 * admcleod- confused. 
<admcleod-> mbruzek: have you tried checking /var/lib/juju/server.pem on the bootstrap node?
<mbruzek> admcleod-:  Sorry I stepped out for lunch.
<mbruzek> admcleod-: I can't ssh to the bootstrap node, on kvm that is the localhost
<mbruzek> (my laptop)
<mbruzek> admcleod-: And there is no pem in /var/lib/juju
<plars> 2015-10-15 20:07:39 ERROR juju.worker.resumer resumer.go:69 cannot resume transactions: document is larger than capped size 1939208 > 1048576
<plars> I found https://bugs.launchpad.net/juju-core/+bug/1454891 which seems to be the same error I'm getting, but supposedly it's fixed a long time ago?
<mup> Bug #1454891: 1.23.2.1, mongo: document is larger than capped size <landscape> <mongodb> <juju-core:Fix Released by wallyworld> <juju-core 1.23:Fix Committed by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1454891>
<plars> I'm on 1.24.6-0ubuntu1~15.04.1~juju1 from the stable ppa
#juju 2015-10-16
<jamespage> gnuoy`, can you point me at the mojo spec used for bug 1506287
<jamespage> ?
<mup> Bug #1506287: ceph-disk: Error: Device is mounted: /dev/sdb1 (Unable to initialize device: /dev/sdb) <openstack> <uosci> <ceph (Juju Charms Collection):New for james-page> <https://launchpad.net/bugs/1506287>
<gnuoy`> jamespage, sure if you look at the spreadsheet it's column B
<jamespage> gnuoy`, branch for specs?
<gnuoy`> jamespage, ~ost-maintainers/openstack-mojo-specs/mojo-openstack-specs
<jamespage> brain still in seattle atm
<jamespage> gnuoy`, beisner: fwiw I suspect some sort of systemd race type thing for the ceph errors in vivid and wily
<plars> Good morning
<plars> 2015-10-15 20:07:39 ERROR juju.worker.resumer resumer.go:69 cannot resume transactions: document is larger than capped size 1939208 > 1048576
<plars> I found https://bugs.launchpad.net/juju-core/+bug/1454891 which seems to be the same error I'm getting, but supposedly it's fixed a long time ago?
<plars> I'm on 1.24.6-0ubuntu1~15.04.1~juju1 from the stable ppa
<mup> Bug #1454891: 1.23.2.1, mongo: document is larger than capped size <landscape> <mongodb> <juju-core:Fix Released by wallyworld> <juju-core 1.23:Fix Committed by wallyworld> <juju-core 1.24:Fix Released by wallyworld> <https://launchpad.net/bugs/1454891>
<plars> any ideas?
<admcleod-> mbruzek: any luck with your cert problem?
<mbruzek> admcleod-: yeah I got it working.  The cert I posted in the channel here was not expired, but I was getting tls errors because of an expired cert
<mbruzek> admcleod-: I don't know how I got it, and I don't know how I fixed it.
<admcleod-> oh. weird.
<mbruzek> yeah thanks for following up
<Muntaner> hello guys
<Muntaner> I'm having a bootstrap problem
<Muntaner> anyone can help me?
<Muntaner> I get this error
<Muntaner> caused by: Get http://controller:8774/v2/1e20e71ea2ee4ed2baa337e147c211df/servers/750311f9-b2f5-4300-be14-95a824fd23b4: dial tcp: lookup controller: no such host
<Muntaner> seems like the vm isn't able to resolve controller
<Muntaner> which is a 192.168.0.62 in my LAN
<Muntaner> how can I workaround this?
<lazypower> Muntaner, this sounds like a DNS issue. which provider is this?
<Muntaner> it is openstack lazypower
<Muntaner> it is a private openstack installation, all-in-one, running on my 192.168.0.62 address
<Muntaner> (server address)
<Muntaner> can I pass dns infos to juju bootstrap so that it solves controller with that ip?
<lazypower> Not that I'm aware of
<lazypower> 1 sec, let me get my coffee and ponder on this for a moment
<lazypower> Muntaner, can your workstation resolve the hostname controller?
<Muntaner> lazypower, sure
<Muntaner> it can
<lazypower> weird
<lazypower> it shouldn't have any issues bootstrapping then
<lazypower> can i see the full output of your bootstrap command?
<Muntaner> lazypower, sure
<Muntaner> lazypower, http://paste.ubuntu.com/12800082/
<Muntaner> lazypower, any ideas? :) i'm quite lost
<lazypower> Muntaner, 1 sec i'm in a meeting
<Muntaner> ok lazypower sry
<lazypower> no worries :) context switching like crazy today
<Muntaner> lazypower, solved it by changing API endpoints into openstack ;)
<lazypower> Muntaner, :thumbsup: glad you got it sorted
<cholcombe> in reactive how does leader election work?
<lazypower> cholcombe, the same as it does in non-reactive
<lazypower> cholcombe, is_leader and leader_set are in charmhelpers.core.hookenv
<cholcombe> lazypower, ok cool i was hoping that was the case
<Icey> are there good documentation on starting a new charm that will install something from a deb and then setup config when a relation changes?
<Icey> with the reactive pattern?
<luqas__> hi lazypower, I'm apuimedon colleague, I've just finished testing the midonet charms, they should be ready for review, the last patches of https://code.launchpad.net/~lezbar will be merged into https://code.launchpad.net/~celebdor
<lazypower> Icey, the vanilla charm is a great example and its in the docs
<lazypower> 1 sec and i'll get you a link
<Icey> http://pythonhosted.org/charms.reactive/
<Icey> and https://jujucharms.com/docs/stable/authors-charm-composing
<lazypower> Icey, https://jujucharms.com/docs/stable/authors-charm-composing
<Icey> are what I'm looking at :)
<Icey> thanks lazypower
<lazypower> Icey, stay tuned to the list, i'll be posting vlogs on reactive pattern charming next week
<Icey> I'm trying to write a really basic charm to go with some monitoring stuff cholcombe and I are working on :)
<lazypower> OH
<lazypower> YOUR ICEY
<Icey> :)
 * lazypower lightbulbs
<lazypower> was nice hangin with ya last week virtually :D
<Icey> one and the same :)
<lazypower> er, this week
<Icey> heh yea
 * lazypower is fried from sprints
<Icey> no worries :)
<Icey> building a subordinate charm for https://github.com/influxdb/telegraf
<Icey> basically, you can attach it to any other charm and get system metrics  into influxdb :)
<Icey> is the idea anyways :)
<lazypower> i like this
<lazypower> +1
<Icey> but, I figure I should try to write it in the reactive pattern :)
<lazypower> the only 2 things you need to start is a metadata.yaml and a composer.yaml (or layers depending on branch of charm tools)
<lazypower> the rest is just reactive/foo.py
<lazypower> and "magic"
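The minimal skeleton lazypower describes (assuming the newer `layer.yaml` spelling and the common `layer:basic` base layer; names are illustrative) looks roughly like:

```
my-charm/
├── metadata.yaml      # name, summary, description, series
├── layer.yaml         # e.g.  includes: ['layer:basic']
└── reactive/
    └── foo.py         # @when/@when_not decorated handlers
```

`charm build` then assembles the included base layers and your reactive handlers into a deployable charm.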
<lazypower> Icey, i can hop on a hangout real quick to get you started if you want
<lazypower> i'm basically EOW at this point
<Icey> sure
<lazypower> so, whats another 10 minutes :)
<Icey> that would be awesome :)
<lazypower> linky?
<lazypower> https://plus.google.com/hangouts/_/canonical.com/reactive-overlords
<lazypower> cholcombe, ^
#juju 2015-10-17
<stokachu> with charms.reactive should i be able to log into the juju machine and run ./hooks/{hook} directly?
#juju 2016-10-17
<blahdeblah> So hallyn was telling me on Friday about using KVM with the local provider; is it also possible to use this with the manual provider, i.e. to control a remote KVM host?
<blahdeblah> Or alternatively, change the container type on a manual host after the environment has been bootstrapped?
<spaok> if you put maas in a container and use the maas provider, setup ssh keys for the maas user, you can specify the qemu url to a remote machine for the power control, just need ssh
<spaok> juju add-machine would spin up a KVM instance on that server
<blahdeblah> spaok: Yeah - I've already done that; but I'd like to avoid having to add KVM instances to the MAAS controller
<kjackal_> Good morning Juju world!
<SDBStefano> hi  kjackal
<kjackal> hi SDBStefano, what's up?
<SDBStefano> how could I create a charm for Xenial instead of trusty ?
<kjackal> in the metadata.yaml you set the proper series, let me find an example
<kjackal> SDBStefano: https://github.com/juju-solutions/layer-apache-bigtop-base/blob/master/metadata.yaml#L10
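The series stanza kjackal links looks roughly like this (the charm name, summary, and description are illustrative):

```yaml
name: my-charm
summary: Example multi-series charm
description: |
  Minimal sketch of the series stanza discussed above.
series:
  - xenial   # the first entry is treated as the default deploy series
  - trusty
```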
<kjackal> SDBStefano: although the charm build command, when there is a single series, decides to put it under the trusty build path (but that might be some misconfiguration I have on my side)
<SDBStefano> so, I have added into the yaml file  - series:   - xenial
<SDBStefano> and I done the build
<SDBStefano> are you saying that the stuff under the trusty directory is now for Xenial ?
<kjackal> SDBStefano: when you do a charm build first line should say where the output directory is
<kjackal> SDBStefano: go ahead and open the <build-output-dir>/metadata.yaml . It should say that the series is xenial, right?
<SDBStefano> ok, it creates a new directory named 'builds'; there the metadata.yaml contains "series": ["xenial"]
<SDBStefano> so, now I should deploy using 'juju deploy $JUJU_REPOSITORY/builds/use  --series xenial'
<kjackal> SDBStefano: awesome! then this charm is xenial. Yes, you can deploy it now!
<SDBStefano> I have to remove the previous app, but it has an error:
<SDBStefano> ude                       error        1  ude      local         6  ubuntu
<SDBStefano> so the 'juju remove-application ude' is not removing the app
<SDBStefano> is it possible to force the removal ?
<kjackal> you can remove-machine where the unit is with --force
<kjackal> SDBStefano: ^
<SDBStefano> yes, it worked, I'm deploying, thanks for helping
<kjackal> you can also deploy the same application with a different name like so: juju deploy $JUJU_REPOSITORY/builds/use myappname --series xenial
<kjackal> SDBStefano: ^
<neiljerram> jamespage, good morning!
<jamespage> morning neiljerram
<jamespage> how things?
<neiljerram> Quite well, thanks!
<neiljerram> You suggested that I ping you here in your review of https://review.openstack.org/#/c/382563/
 * magicaltrout knows from experience, anyone who greats another user specifically whilst saying good morning in reality has an issue or favour to ask..... "quite well" or not ;)
<jamespage> neiljerram, yes!
<magicaltrout> s/greats/greets
<neiljerram> So doing that.  I think your proposal is basically right, so just looking for pointers on how to start reworking the code into the correct form...
<neiljerram> magicaltrout, I just thought that 'good morning' was a little more friendly than 'ping'!
<jamespage> neiljerram, so since the original calico integration was written, we've done quite a bit of work to minimize the amount of vendor specific code that is required in the core principle charms
<jamespage> neiljerram, you already have a subordinate for nova-compute
<jamespage> neiljerram, that's been complemented with the same approach for neutron-api
<neiljerram> jamespage, yes, agreed, and makes sense.
<jamespage> neiljerram, so that all of the SDN specific bits can reside in an SDN specific charm
<jamespage> neiljerram, right and the nice bit of this is that gnuoy's just completed some work to make writing those a whole lot easier
<neiljerram> jamespage, Ah, nice.
<jamespage> neiljerram, we've done quite a bit of work on making reactive and layers work for openstack charming this cycle
<jamespage> neiljerram, so your charm can be quite minimal
<jamespage> neiljerram, https://github.com/openstack/charm-neutron-api-odl has been refactored as an example for reference
<jamespage> neiljerram, but we also have a template for charm create
<jamespage> gnuoy`, ^^ is that live yet?
<jamespage> neiljerram, I'd love to get a neutron-api-calico charm up and running, so we can deprecate the neutron-api bits for ocata release and remove them next cycle
<gnuoy`> jamespage, the template ? That is very nearly ready. I just need to run the guide using the template to check they are both in sync and work
<neiljerram> jamespage, Just as a thought, might it even make sense to have a single 'neutron-calico' charm that provides both the compute and the server function?  I assume it can detect at runtime which charm it is subordinate to?  If it's subordinate to nova-compute, it would provide the compute function; if it's subordinate to neutron-api, it would provide the server side function.
<jamespage> neiljerram, that's absolutely fine
<jamespage> +1
<jamespage> neiljerram, neutron-calico already exists right?
<neiljerram> jamespage, Thanks.  So, is there an example of another SDN subordinate charm that already uses gnuoy's new facilities?
<neiljerram> jamespage, Yes, neutron-calico already exists (for the compute function).
<jamespage> neiljerram, https://github.com/openstack/charm-neutron-api-odl does
<jamespage> neiljerram, so approach re neutron-calico as a 'does both' type charm
<jamespage> neiljerram, right now its not possible to upgrade from a non-reactive charm to a reactive charm
<jamespage> neiljerram, neutron-calico is an older style python charm I think
<gnuoy`> neiljerram, I'm still smoothing of the rough corners but https://github.com/gnuoy/charm-guide/blob/master/doc/source/new-sdn-charm.rst may help
<neiljerram> jamespage, TBH we've never really tested for upgrading yet at all.
<jamespage> neiljerram, how many live deployments are you aware of using calico deployed via juju?
<gnuoy`> haha good point
<jamespage> neiljerram, in which case I'd take the hit and move to the new layers+reactive approach now
<neiljerram> jamespage, Just two: OIL, and a Canonical customer that I'm not sure I can name here.
<jamespage> neiljerram, ok OIL is manageable
<jamespage> neiljerram, I'll poke on the other one :-)
<jamespage> gnuoy`, what do you think to the single subordinate doing both roles approach discussed above?
<neiljerram> jamespage, I think you're right about neutron-calico being an older style charm.  So perhaps it would be a simpler first step to make a separate neutron-api-calico, in the most up-to-date style (reactive)
<gnuoy`> jamespage, I'm fine with that
<jamespage> neiljerram, ack
<jamespage> gnuoy`, that charm-guide update is for the hypervisor integration - do we have the equiv for API?
<gnuoy`> jamespage, yep, https://review.openstack.org/#/c/387238/
<jamespage> gnuoy, sorry I mean't the neutron-api subordinate charm version
<gnuoy> jamespage, no, not atm.
<jamespage> gnuoy, ok that's what neiljerram will be after
<gnuoy> ack
<jamespage> neiljerram, you might need to give us a week or two to pull that bit into shape
<gnuoy> jamespage, then https://github.com/openstack/charm-neutron-api-od is the best bet
<gnuoy> * https://github.com/openstack/charm-neutron-api-odl
<neiljerram> jamespage, But I could start by looking at https://github.com/openstack/charm-neutron-api-odl for inspiration, and ask any questions here?
<jamespage> neiljerram, ^^ yeah that by-example doc is our current doc - but we'll be working on that
<jamespage> neiljerram, yeah that's fine - but we've move openstack charm discussion over to #openstack-charms
<jamespage> but either is still fine
<neiljerram> jamespage, Ah OK, I'll go there now...
<jamespage> neiljerram, ta
<SDBStefano> Hi kjackal, I deployed a local charm, but it's stuck at:
<SDBStefano> UNIT    WORKLOAD  AGENT       MACHINE  PUBLIC-ADDRESS  PORTS  MESSAGE
<SDBStefano> ude/11  waiting   allocating  20       10.15.177.80           waiting for machine
<SDBStefano> MACHINE  STATE    DNS           INS-ID          SERIES  AZ
<SDBStefano> 20       pending  10.15.177.80  juju-19dc1e-20  xenial
<SDBStefano> ok, it has just changed into :
<SDBStefano> MACHINE  STATE    DNS           INS-ID          SERIES  AZ
<SDBStefano> 20       started  10.15.177.80  juju-19dc1e-20  xenial
<SDBStefano> so it seems very slow
<icey> bdx, do I remember you trying to get Juju working with Digital Ocean?
<Spaulding> Hmm...
<Spaulding> Why is my "juju status" telling me:
<Spaulding> Unit    Workload     Agent  Machine  Public address  Ports  Message
<Spaulding> sme/5*  maintenance  idle   5        10.190.134.241         Enabling Apache2 modules
<Spaulding> But as far as I can see, all of my tasks have completed.
<Spaulding> So the last one should set the state to "active".
<Spaulding> How can I track it?
<kjackal> Spaulding: is there any info in juju debug-log
<Spaulding> kjackal:
<Spaulding> unit-sme-5: 14:31:32 INFO unit.sme/5.juju-log Invoking reactive handler: reactive/sme.py:95:enable_apache_mods
<Spaulding> Hm... it looks like reactive ran this task again
<Spaulding> that's why my status changed...
<lazyPower> Spaulding - thats the rub. you cannot guarantee ordering of the methods in reactive unless its a tightly controlled flow through states. if you want something to be run only once, it would be a good idea to decorate with a @when_not, and then subsequently, set the state its decorating against.    do note, that you'll have to handle removal of that state if you ever want to re-execute that body of code again.
<Spaulding> lazyPower: yeah, i already noticed that
<Spaulding> and I'm using when & when_not
<Spaulding> it's like puppet
<Spaulding> lazyPower: I found it!
<Spaulding> @when_not('http_mods_enabled')
<Spaulding> it should be sme.http ...
<lazyPower> Spaulding  - Sounds like you're on the right track :)
<Spaulding> because of you guys! :)
<lazyPower> o/ magicaltrout
<fbarilla> The 'juju machines' command does not list any machines, but trying to add one leads to the following error message: ERROR machine is already provisioned
<rick_h_> fbarilla: ah, is the machine you're trying to add in use in another model?
<fbarilla> I've two models, 'controller and default' . None of them list the machine I want to add
<fbarilla> In the 'controller' model I've the LXD container where juju has been bootstrapped
<Spaulding> lazyPower: i broke it! :(
<Spaulding> https://gist.github.com/pananormalny/765622f3d2c332bd9dece6f35b9ff267
<Spaulding> maybe someone can spot an issue?
<Spaulding> it's running in a loop - again... :/
<Spaulding> the main idea was to run those tasks one by one...
<Spaulding> from the top to the beginning... in that order
<Spaulding> to the bottom**
<lazyPower> Spaulding - i left a few comments on the gist, nothing stands out other than the state of sme.ready being commented out, and instead being set on 331
<lazyPower> Spaulding i dont see a remove_state however, and thats the other tell-tale, as removing states causes the reactive dispatcher to re-test the state queue to determine what needs to be run, which makes it possible to enter an inf. loop if you're not careful with how you're decorating the methods.
<lazyPower> but that doesn't appear to be the case, so that last message is more FYI than anything else.
<Spaulding> lazyPower: about jinja2 I'll probably use it
<Spaulding> but basically i would like to have any "prototype" of juju - before OpenStack Barcelona
<Spaulding> so I'm trying to do this ASAP
<Spaulding> After that I'll have more time to do it properly... now it's just proof-of-concept
<lazyPower> right, just a suggestion
<lazyPower> its not wrong to put heredocs in there, but its not best practice.
<Spaulding> I know
<Spaulding> It's a dirty way - but in this case - it's working
<lazyPower> Spaulding - get me an update on that gist with the output of charms.reactive get_states
<lazyPower> i imagine whats happened is this isn't a fresh deploy, and you've modified state progression and run an upgrade-charm and now its misbehaving - is that consistent with whats happened?
<Spaulding> basically - i'm trying every time to deploy it from scratch
<lazyPower> Spaulding - `juju run --unit sme/0 "charms.reactive get_states" `
<lazyPower> assuming sme is the name of your charm and its unit 0
<Spaulding> lazyPower: will do
<Spaulding> lazyPower: is there any other way to remove application if it fails at install hook?
<lazyPower> juju remove-machine # --force
<Spaulding> cause right now I'm destroying the model
<lazyPower> that'll strip the underlying machine unit from the charm and the charm should finish removal on its own
<Spaulding> i tried with force with 2.0rc3... couldn't get that working...
<Spaulding> maybe it's fixed(?) now...
<lazyPower> http://paste.ubuntu.com/23339079/
<lazyPower> seems like its functioning fine in 2.0.0
<lazyPower> notice that it leaves/orphans the application/charm for a short while before reaping the charm.
<cory_fu> petevg: I added a bunch of comments in reply to and based on your review on https://github.com/juju-solutions/matrix/pull/2
<cory_fu> bcsaller: Our feedback awaits you.  :)
<bcsaller> cory_fu, petevg: thank you both
<petevg> np
<petevg> cory_fu: thx. Reading your comments ...
<aisrael> cory_fu: Do you know what might be causing this? http://pastebin.ubuntu.com/23339455/
<aisrael> From running charm build
<cory_fu> aisrael: Not sure, but I'd guess that the log message contains utf8 encoded data and should maybe be .decode('utf8')ed before being logged?
<cory_fu> aisrael: Can you drop a breakpoint into /usr/lib/python2.7/dist-packages/charmtools/utils.py and see what the output var holds?
<aisrael> cory_fu: Not yet. I'm doing a charm school/training and they hit it. I'll dig in deeper, though. Thanks!
<cory_fu> aisrael: I'd look for unicode characters in their yaml files, then.  metadata.yaml and layer.yaml specifically.  Otherwise, I'm not really sure
<aisrael> Weird. We haven't touched those at all.
<aisrael> cory_fu: looks like it may be related to an actions.yaml that's UTF-8-encoded
<cory_fu> aisrael: Strange.  Well, we should definitely handle that better
<lazyPower> is there any particular reason we're limiting that to ascii? (just curious)
<aisrael> cory_fu: definitely. Once I confirm, I'll file a bug
<cory_fu> Thanks
<aisrael> It definitely looks like a locale/encoding issue. I had them send me the charm and I built it locally with no problem
<cory_fu> Hrm
<cory_fu> Is anyone having an issue with 2.0 and lxd where the agents never report as started?  Just started for me with the GA release
<cory_fu> Seems to be an issue with the ssh keys
<bdx> cory_fu: I've been having successful deploys so far
<bdx> cory_fu, cmars: should we also set status to 'blocked', or 'error' here -> http://paste.ubuntu.com/23339654/
<cory_fu> bdx: I'd say "blocked", yeah
<bdx> cory_fu, cmars: or is setting the supported series in metadata enough?
<bdx> probably both for good measure?
<cory_fu> bdx, cmars: Setting the supported series in the metadata could (probably would) be overwritten by the charm layer.
<cmars> bdx, cory_fu please open an issue. iirc think you can force series with juju deploy --series
<cmars> cory_fu, i've noticed something with that.. if you have multiple layers that both specify the same series in metadata, the series gets output twice. and the CS rejects that
<cmars> cory_fu, i think I opened a bug..
<cmars> (but i forget where.. so many projects)
<cmars> bdx, interested to get your thoughts on https://github.com/cmars/layer-lets-encrypt/issues/1 as well
<cmars> i think we might be able to make this a little more reusable without introducing too much complexity
<cory_fu> cmars: https://github.com/juju/charm-tools/issues/257  I do think the series list needs to be de-duped
<cmars> cory_fu, thanks :)
<cory_fu> Odd.  It seems to only be trusty lxd instances that get stuck in pending.  Guess I'll try torching my trusty image
 * magicaltrout hands cory_fu the matches
<cory_fu> magicaltrout: Thanks, but it didn't help
<magicaltrout> awww
<cmars> cory_fu, xenial host and trusty container?
<cory_fu> cmars: Yes
<cory_fu> cmars: It looks like the trusty image isn't getting the lxdbr0 interface for some reason, but I can't find anything obvious in any of the logs I could think of to check
<cmars> cory_fu, one thing it could be, is a known systemd issue
<cory_fu> Oh?
<cmars> cory_fu: 22-09-2016 15:23:18 < stgraber!~stgraber@ubuntu/member/stgraber: cmars: if you mask (systemctl mask) the systemd units related to binfmt (systemctl -a | grep binfmt) and reboot, then the problem is gone for good. This is related to systemd's use of automount and not to binfmt-misc (which just ships a bunch of binfmt hooks)
<cmars> cory_fu, a telltale symptom is that the trusty container gets stuck in mountall
<cory_fu> cmars: Huh.  Worth a shot.  FWIW, this only started with the GA.  It was fine on the last RC
<cory_fu> cmars: How do I tell where it's stuck?
<cmars> cory_fu, a ps -ef would show a mountall process, and /var/log/upstart/mountall.log shows some binfmt errors
<cmars> cory_fu, a quickfix is this (pardon my language): https://github.com/cmars/tools/blob/master/bin/lxc-unfuck
<cmars> but masking the systemd units is a better solution
<cmars> cory_fu, unless this is a completely different issue, in which case, forget everything I've said :)
<cory_fu> cmars: Yep, that seems to be exactly the issue!
<bdx> cory_fu: http://paste.ubuntu.com/23340056/
<bdx> cory_fu: can't seem to replicate :-(
<cory_fu> bdx: That looks like what I'm seeing.  The trusty machine never goes out of pending
<cory_fu> cmars: I'm not really familiar with systemd mask.  Do you have the exact commands handy, or can you point me to some docs?
<bdx> cory_fu: it started .... http://paste.ubuntu.com/23340072/
<cory_fu> bdx: Oh, well, it never does for me.  But it sounds like cmars has the solution for me
<bdx> strange ... nice
<cmars> cory_fu, this *should* do it: sudo systemctl mask $(systemctl -a | awk '/binfmt/{print $2}')
<cmars> cory_fu, this might break binfmt on your host, if you run .NET exe binaries directly, for example, that might stop working
<cory_fu> cmars: I don't think I do, but good to know.  No way to have both work, I take it?
<cmars> cory_fu, ideally, someone would fix this issue in systemd or binfmt or wherever it needs to be done
<cory_fu> cmars: I don't think that awk is right.  It just gives me "loaded" four times
<cmars> cory_fu, what do you get from: systemctl -a | grep binfmt ?
<cory_fu> I assume it should be $1 instead
<cory_fu> cmars: http://pastebin.ubuntu.com/23340090/
<cmars> cory_fu, on my machine, $1 is a big white dot for some reason. could be my terminal type?
<cmars> cory_fu, yep, $1 for you
<cory_fu> Could be.  I just have leading whitespace.
<cory_fu> cmars: In case this causes issues, what would be the command to undo this?
<cmars> cory_fu, systemctl unmask the units that were masked
<cory_fu> Ok, thanks
<troontje> hi all :)
<troontje> I followed this instruction : https://jujucharms.com/docs/stable/getting-started
<troontje> its pretty clear, except what exactly does that 20GB mean?
<marcoceppi> troontje: during the lxd init step?
<troontje> exactly
<troontje> when setting the loop device
<marcoceppi> troontje: the size of the loopback device to use for storage on LXD machines
<marcoceppi> when LXD boots machines, they'll all be allocated a slice of that storage
<troontje> marcoceppi: ah ok, so that 20GB will be shared among all the machines together?
<marcoceppi> troontje: exactly
<troontje> marcoceppi: I was trying to install openstack base with that 20GB; although it does not fail, it just does not continue the process
<troontje> I first thought that value was per machine
<marcoceppi> troontje: yeah, it's for all machines, and each machine boots with like 8GB of storage from that 20
<troontje> haha, well I cant blame it that it sort of failed
<troontje> marcoceppi: ok, so I set everything back and the canvas is clean again
<troontje> How can I increase that value for the storage
<troontje> destroying it and making it new is no problem btw
<cory_fu> bcsaller, petevg: Resolved merge conflicts and added TODO comment re: the model connection.  Going to merge now
<bdx> cmars: what happens when you don't have an 'A' record precreated -> http://paste.ubuntu.com/23340240/
<bdx> cmars: or created pointing at another ip :/
<cmars> bdx, pretty sure in that case that the standalone method will fail
<bdx> yea
<bdx> we should call that out in layer le readme
<cmars> bdx, ack, good call
<cory_fu> bcsaller, petevg: Merged
<petevg> bcsaller, cory_fu: Just pushed a small fix to master: glitch now grabs context.juju_model rather than context.model (good name change; just needed to update glitch's assumptions).
<troontje> how can I increase that zfs disk for JUJU?
<bdx> cmars: it would be worth noting that layer le cannot be used on lxd/container deploys
<cmars> bdx, true, true. wish there was a way to expose containers through the host machine..
<kwmonroe> cory_fu: can you have a hyphen in auto_accessors?  https://github.com/juju-solutions/interface-dfs/blob/master/requires.py#L25
<cory_fu> kwmonroe: Hyphens are translated to underscores
<kwmonroe> cool, thx cory_fu
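The hyphen-to-underscore translation cory_fu describes can be sketched in plain Python. This is a toy illustration of the idea behind charms.reactive's auto_accessors, not the actual implementation; the class and field names are assumptions:

```python
# Toy sketch: for each listed relation key, generate an accessor method whose
# name has hyphens replaced by underscores, so 'private-address' on the wire
# becomes .private_address() in Python (hyphens aren't valid in identifiers).
class AutoAccessors(type):
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        for field in namespace.get('auto_accessors', []):
            attr = field.replace('-', '_')  # hyphen -> underscore
            def accessor(self, field=field):
                return self._data.get(field)
            setattr(cls, attr, accessor)
        return cls

class DFSRequires(metaclass=AutoAccessors):
    auto_accessors = ['private-address', 'port']

    def __init__(self, data):
        self._data = data
```

So `DFSRequires({'private-address': '10.0.0.2'}).private_address()` returns the relation value even though the key itself contains a hyphen.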
<cory_fu> cmars: That systemctl mask fix worked perfectly.  Thanks!
<jrwren> bdx, cmars, you could use it with some tweaking of juju-default lxd profile, but its not likely to do what you want.
<cmars> cory_fu, sure thing
<cmars> jrwren, ah, that's true. i do some lxd bridging at home, but that doesn't always work on public clouds. ipv4 addresses are in short supply
<bdx> cmars, jrwren: I'm thinking layer:letsencrypt might be best consumed by a public facing endpoint
<bdx> the public endpoint (nginx/haproxy), could also be a reverse proxy
<bdx> cmars, jrwren: so it could be used as an ssl termination + reverse proxy
<cmars> bdx, a frontend that negotiates certs for its backends, and then does a passthrough to them (haproxy's tcp mode)... interesting
<bdx> cmars: exactly
<bdx> ^ is a skewed model bc it takes for granted that a user would be deploying lxd containers on the aws provider :/
<bdx> but not in all cases
<bdx> just the one I want
<cmars> bdx, i think i'd like to keep layer:lets-encrypt lightweight and usable in standalone web apps like mattermost.. but also usable in such a frontend charm as you describe
<bdx> entirely
<cmars> bdx, with the layer:nginx part removed, i think we'll have something reusable to this end
<bdx> right
<arosales> bdx: did you see the fixes for lxd/local
<arosales> it was indeed to tune kernel settings
<arosales> bdx: ping if you are still hitting the 8 lxd limit
<cmars> bdx, it'd be really cool if the frontend could operate as a DNS server too. then it could register subdomains AND obtain certs
<bdx> arosales: no I haven't .... just the warning to reference the production lxd docs when bootstrapping .... is this what you are talking about?
<bdx> cmars: thats a great idea ... just what I've been thinking too
<bdx> arosales, rick_h_: I've seen and heard bits and pieces about the rbd lxd backend, and also that getting rbd backend compat with nova-lxd isn't on the current roadmap. Can you comment on the status/progress of lxd rbd backend work to any extent?
<arosales> bdx: yes, you have to tune settings on the host to spawn more than 8 lxd
<arosales> bdx: re nova-lxd rockstar may have more info
<arosales> bdx: but for general bootstrapping with lxd make sure to tune your host settings per the bootstrap info
<arosales> if you want > 8
<arosales> :-)
<jhobbs> is there a way to rename a model?
<rick_h_> jhobbs: no, not currently
<jhobbs> rick_h_: ok, thanks
 * magicaltrout went to production with a model called Jeff recently.....
<arosales> magicaltrout: even for testing I would have thought of a more whimsical model name than Jeff
<magicaltrout> ah well, when you're trying to get LXD containers running inside Mesos my ability to come up with names fails me
<arosales> mesos is taxing :-)
<icey> arosales: I deployed the entirety of openstack (16 units?) onto LXD at the charmer's summit without tuning lxd at all...
<bdx> icey: you were using ubuntu desktop which has different default /etc/sysctl
<icey> touche bdx :)
<bdx> :)
<arosales> icey: I have as well. It depends on your host settings. We do know on stock ubuntu server that the host settings need to be adjusted
<anastasiamac> lazyPower: ping
#juju 2016-10-18
<kjackal_> Good morning Juju world!
<Spaulding> morning kjackal_
<magicaltrout> https://jujucharms.com/docs/2.0/reference-releases <- not sure what the state of doc updates is, but that one is out of date
<kjackal_> Thank you magicaltrout. Just opened an issue ticket to track this: https://github.com/juju/docs/issues/1475
<magicaltrout> what a treat
<magicaltrout> kjackal_: you going to apachecon?
<kjackal_> magicaltrout: nope, I do not think so!
<magicaltrout> thank god
<kjackal_> magicaltrout: god had nothing to do with it, it was all skill!
<magicaltrout> lol
<cargill> hi, I'm trying to destroy an old controller so that I can start fresh with a clean 2.0, but destroy-controller --destroy-all-models just waits, and kill-controller eventually says "unable to retrieve hosted model config: no such request - method Controller(3).HostedModelConfigs is not implemented (not implemented)"
<cargill> it's local LXD, is there any way I can clear this up manually?
<magicaltrout> just remove the LXD containers via the LXD commands and then remove the controller from ~/.local/share/juju/controllers.yaml
<cargill> magicaltrout: what are the lxd commands? man lxd is not much help on this
<magicaltrout> lxc stop ...
<magicaltrout> lxc delete
<magicaltrout> sorry, lxc list first
<cargill> oh, lxc tools, sorry
<magicaltrout> to find the controller id
<cargill> looks like it's gone now, thanks
<SDB_Stefano> hi kjackal, can I ask your help ?
<kjackal_> SDB_Stefano: hi, sure go ahead
<kjackal_> SDB_Stefano: whats up?
<SDB_Stefano> I work for www.scaledb.com and I'm creating a charm for our product, ScaleDB (same name as the company).
<kjackal_> Awesome!
<SDB_Stefano> the objective is to instantiate an app/unit and install our product on it.
<SDB_Stefano> so I created the unit and then, logging into it using 'juju ssh ude/..', I installed the product manually (2 deb pkgs)
<SDB_Stefano> the installation went fine, but when I tried to start our engine it failed with the error:
<SDB_Stefano> 2016-10-18 10:41:42 ScaleDB Message: [Warning] [IO Error] [File open failed] [OS error:] [22] [Invalid argument] [File Name:] [/usr/local/scaledb/data/direct_io_sector_size_data.dat.10127]
<SDB_Stefano>  
<SDB_Stefano>     ******************************** IO Error *********************************
<SDB_Stefano>  
<SDB_Stefano>     * 2016-10-18 10:41:42 File open failed                                    *
<SDB_Stefano>     * File name: /usr/local/scaledb/data/direct_io_sector_size_data.dat.10127 *
<SDB_Stefano>     * OS Error: 22 - Invalid argument                                         *
<SDB_Stefano>     ***************************************************************************
<SDB_Stefano> so it seems it's not able to write into the file /usr/local/scaledb/data/direct_io_sector_size_data.dat.10127
<SDB_Stefano> the permissions are fine
<SDB_Stefano> and we usually install on Ubuntu without problems
<SDB_Stefano> any suggestions ?
<SDB_Stefano> I tried using Ubuntu 16; I'm trying right now with 14 (just to try a different way)
<kjackal_> SDB_Stefano: That is a strange error. Never seen it before.
<kjackal_> SDB_Stefano: Lets see... do you have the charm somewhere so that I can try it myself?
<SDB_Stefano> I'm testing/developing locally, I could send you a tar file
<kjackal_> SDB_Stefano: why did you have to login into the unit and issue deb install?
<kjackal_> SDB_Stefano: is the charm closed source?
<kjackal_> I would prefer it to be somewhere on github, but if there is no other options I guess a tar would be fine
<kjackal_> SDB_Stefano: ^
<magicaltrout> also is local testing inside LXD? I assume so, if so can you just launch a LXD image and deb install it like usual?
<SDB_Stefano> yes, I'm using a local dev environment on Ubuntu 16 running in VBox, so I'm using LXD
<magicaltrout> looks to me like it might be open file limit or something like that imposed by the underlying OS or container
<SDB_Stefano> about logging in and doing the manual installation:
<SDB_Stefano> 1- I'm at the first stage of using Juju, so it's a first step before automating all the steps in the install hook
<SDB_Stefano> 2- I'm using the previous version of our software, which requires interactive input during the installation
<kjackal_> SDB_Stefano: I see! Where can I find the installation instructions you are following?
<SDB_Stefano> our pkg sets the ulimit during the installation:
<SDB_Stefano> /etc/security/limits.conf
<SDB_Stefano> scaledb soft nofile 65536
<SDB_Stefano> scaledb hard nofile 65536
<SDB_Stefano> so after the installation it is:
<SDB_Stefano> ulimit -n
<SDB_Stefano> 65536
<SDB_Stefano> I'm going to prepare everything needed, give me a few minutes.
<kjackal_> BTW SDB_Stefano you do not have to fight by yourself. If you sign a partnership http://partners.ubuntu.com/programmes/charm you will get "Tailored training on Charm creation and best practices", and I do not think it costs anything
<magicaltrout> it does not
<kjackal_> SDB_Stefano: But that's another story, let's see what we can do with the charm first :)
<SDB_Stefano> ok, I will talk about that to my manager, thanks for the advice.
<rock> Hi. We developed a "cinder-storage driver" charm. Our charm depends on GitHub: during execution it goes and gets the latest files from Git and keeps those files on the cinder node. We deploy our charm after deployment of the OpenStack setup, so during execution of our charm it was giving a "git ERROR" [like git is not there].
<rock> For some juju openstack setups, the cinder node already has git installed.
<magicaltrout> you don't want to add your files as charm resources rock?
<magicaltrout> anyway, i dunno how you're pulling your stuff but I guess you need to install the git package during the install hook
<rock> magicaltrout: Hi. Thanks. I have that option. But apart from that do we have any other option?
<neiljerram> Afternoon all, I have a question about running Juju (2.0) on GCE: is it possible to configure Juju so that it does _not_ allocate a GCE External IP for each machine?
<magicaltrout> apart from installing git?
<rock> magicaltrout: Yes. Apart from installing "git" as part of the install hook. [If "git" is already there it will not install it, otherwise it has to install "git".]
<magicaltrout> I don't know what you're asking any more. If you need to use git, you have to install git, in which case you can use layer-apt to install git... or not.
<rock> magicaltrout: Do we have juju charm for "git"?
<magicaltrout> https://github.com/jamesbeedy/layer-git-deploy.git
<magicaltrout> that probably does what you want
<magicaltrout> and if not, you can install git using layer-apt
<magicaltrout> https://git.launchpad.net/layer-apt
<rock> magicaltrout: Hi. If I go for "layer:apt", I just need to create a layer.yaml file and mention "git" under apt: packages:, right?
<rock> Then when we deploy the charm it will automatically install that git package, right?
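A minimal layer.yaml for the layer-apt approach magicaltrout suggested might look like this (a sketch following layer-apt's documented conventions; the package list is illustrative):

```yaml
# layer.yaml for a charm that needs git installed via layer-apt
includes:
  - layer:basic
  - layer:apt
options:
  apt:
    packages:
      - git
```

With that in place, layer-apt queues the listed packages for installation when the charm runs, so there is no need to apt-get install git by hand in the install hook.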
<kjackal_> SDB_Stefano: there?
<kjackal_> SDB_Stefano: where do you see the error you reported above?
<SDB_Stefano> the deploy is ok - the manual installation should be executed after the deploy
<SDB_Stefano> I have attached a readme in the email with the steps
<SDB_Stefano> at the step : scaledb-one init
<SDB_Stefano>  
<SDB_Stefano> it fails
<SDB_Stefano> the log is available : cat /usr/local/scaledb/tmp/cas*
<Spaulding> Hm, how can I set up ports to expose for juju? Is there a variable for that?
<kjackal_> Spaulding: There is a call you have to do to tell juju what to expose. Just a sec
<kjackal_> Spaulding: What language? Python?
<kjackal_> Spaulding: for bash there should be open-port like they do here: https://jujucharms.com/docs/1.25/authors-charm-writing
<Spaulding> kjackal_: thanks, but I've found that there is a variable provides in metadata.yaml
<Spaulding> provides:
<Spaulding>   website:
<Spaulding>     interface: http
<Spaulding> y
<Spaulding> something like that
<magicaltrout> thats the relations Spaulding
<magicaltrout> you wont' expose anything with that
<kjackal_> Spaulding: https://pythonhosted.org/charmhelpers/api/charmhelpers.core.hookenv.html?highlight=port#charmhelpers.core.hookenv.open_port
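Under the hood, charmhelpers' open_port shells out to the open-port hook tool that juju provides inside hooks. A hedged sketch of the command it would build (the helper below is illustrative, not charmhelpers' actual code):

```python
def open_port_args(port, protocol='TCP'):
    """Build the argv for the open-port hook tool.

    In a real charm you'd execute this with subprocess.check_call() from
    inside a hook, where the open-port tool is on the PATH; here we only
    construct the command for illustration.
    """
    return ['open-port', '{}/{}'.format(port, protocol)]

# e.g. exposing a web app's HTTP port, and a UDP service
print(open_port_args(80))         # ['open-port', '80/TCP']
print(open_port_args(53, 'UDP'))  # ['open-port', '53/UDP']
```

Note that open-port only marks the port; traffic is not reachable until the operator also runs `juju expose` on the application.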
<Spaulding> magicaltrout: kjackal_ thx
<Spaulding> why are the Juju docs so confusing? :/
<magicaltrout> better confusing docs than lack of docs
<Spaulding> magicaltrout: sad but true
<magicaltrout> the boss of the docs is evilnickveitch have a word with him :)
<evilnickveitch> Spaulding, I agree that some of the charm developer docs are confusing! But we moved to a completely new way to write charms
<Spaulding> Is this new way a good way?
<evilnickveitch> Spaulding, you should start here: https://jujucharms.com/docs/2.0/developer-getting-started
<magicaltrout> better than the old way! ;)
<evilnickveitch> well, it's a better way :)
<Spaulding> I hope it's not dead end... :)
<magicaltrout> the reactive charms are infinitely better than the non reactive charms
<magicaltrout> but of course the old stuff still exists, so the docs do as well
<Spaulding> btw. Anyone of you guys are going to OS Barcelona?
 * magicaltrout isn't hipster enough
<Spaulding> reactive is for sure more flexible
<Andrew_jedi> http://askubuntu.com/questions/777475/ceph-drive-failure-and-replacement-procedure
<Andrew_jedi> Are there any Ceph folks here ?
<magicaltrout> you're probably better off in #openstack-carhsm Andrew_jedi
<magicaltrout> er
<magicaltrout> openstack-charms
<Andrew_jedi> magicaltrout: Thanks :)
<cargill> is there a way yet to find which charms use a given layer?
<Spaulding> ok, so 1st part of my charm is done...
<Spaulding> i got lamp+appliance...
<Spaulding> now i need to handle mysql relation...
<magicaltrout> don't think so cargill you can look up interfaces but I've not seen a layer search
<Spaulding> to add / fill the db
<lazyPower> Spaulding congrats on completion :D
<Spaulding> any tips how to do it in a good way?
<lazyPower> well, first step completion
<Spaulding> I'll try to see how it's done on "wordpress" or any other PHP app charm
<lazyPower> Spaulding - there are a few examples. typically you just need the mysql-client package, and then run your migrations or use whatever language tools to run migrations.
<lazyPower> not sure if you're using Laravel or Symfony or Cake - i know those three have utilities to aid in db migrations.
<shruthima> hi kwmonroe, for the IBM-IM charm we added a simple series check and tried to test it, but the trusty machine is not getting created on s390x; it stays in pending state. How should we go ahead?
<lazyPower> cargill - you can pull the charm and run `charm layers`
<lazyPower> cargill if its a reactive charm, it'll read the build manifest and tell you what layers assembled the charm
<cargill> lazyPower: yes, but I'm looking at the reverse
<cargill> given a layer, ask "what charms use this layer?"
<magicaltrout> Spaulding: https://github.com/johnsca/juju-relation-mysql
<lazyPower> cargill - ah, no i dont think we have any graphs of that put together.
<cargill> mostly laziness at the moment, trying to find a charm that uses the nginx layer to serve a few http endpoints from the same server
<kwmonroe> interesting shruthima, i hadn't considered that the base OS would need to be present before the charm can assert "series / architecture mismatch".  i'll need to think about that one a bit more..
<cargill> lazyPower: is there a service that lets you examine the metadata/layer/.. yamls for all charms in the store?
<cargill> or plans for one?
<cargill> that would provide exactly that
<lazyPower> cargill - the charm store api will give you a file list of whats in the charm, and you can then use the API links to pull the metdata/layer.yaml from the API.   the charm store already provides this functionality
<lazyPower> but you would need to write the glue code to make it as concise as you're outlining.
<shruthima> kwmonroe: ok, if there is any other way please mail us, thank you
<lazyPower> cargill - as an example, http://jujucharms.com/u/containers/kubernetes-master   -- there's a file list on the right side you can click on and view the contents of that charms files using the API
<lazyPower> https://api.jujucharms.com/charmstore/v5/~containers/kubernetes-master/archive/metadata.yaml
<cargill> lazyPower: that's a lot of scraping first, isn't it? as in: you can't ask "show me all layer.yaml's provide keys" without downloading all those files first?
<lazyPower> nope
<lazyPower> at least not as the api exists today
<Spaulding> magicaltrout: wow, that looks nice!
<Spaulding> thx again!
<magicaltrout> thats why reactive stuff exists ;)
<cargill> lazyPower: are there rate limits in the charm store or another reason why setting up a scraper wouldn't be advisable?
<lazyPower> cargill - it may be prudent to mail the list so we can start that conversation, and filing a bug would be a good start to having that added to the team's roadmap
<Spaulding> so i just need to add this to includes
<Spaulding> as "interface:juju-relation-mysql" ?
<lazyPower> cargill let me poke the store maintainer to see if there's any feedback there, one moment
<magicaltrout> Spaulding: you include it in layer.yaml
<magicaltrout> then use something like the snippet on the readme to get the username/password/url etc
<magicaltrout> when a relation to mysql is joined
<Spaulding> I'm checking that! :)
<magicaltrout> at which point its then up to your charm to action on that
<magicaltrout> so like lazyPower said, you might plumb it into mysql client
<magicaltrout> you might write it to a config
<magicaltrout> you might do both
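Once the interface:mysql layer hands your handler the connection details, "write it to a config" is mostly string templating. A toy sketch (the @when decorator and accessor names in the comment follow the interface's README snippet mentioned above; the render function and PHP-style config format are made up for illustration):

```python
def render_db_config(host, database, user, password):
    """Render a minimal PHP-style DB config from mysql relation data.

    In a reactive charm this would run inside a handler roughly like:
        @when('database.available')
        def configure(mysql):
            config = render_db_config(mysql.host(), mysql.database(),
                                      mysql.user(), mysql.password())
            # ...then write `config` into the app's config file
    """
    return (
        "define('DB_HOST', '{}');\n"
        "define('DB_NAME', '{}');\n"
        "define('DB_USER', '{}');\n"
        "define('DB_PASSWORD', '{}');\n"
    ).format(host, database, user, password)
```

Plumbing the same values into the mysql client for migrations (lazyPower's other suggestion) would reuse exactly the same four accessors.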
<lazyPower> cargill i'm having some trouble getting a straight answer, so still investigating.
<cargill> lazyPower: I've sent an email to the mailing list
<cargill> so you have something to reference as well
<lazyPower> ok, thats probably for the best in terms of visibility. Thanks
<cargill> (at least I hope it's reached the list)
<cargill> it has
<skay> I'd like to have a charm that is able to configure lxd on the machine it creates. https://github.com/checkbox/pmr-configs/tree/master/charms/xenial/cert-pmr
<skay> I'd like to add the steps to do that to that charm
<skay> it is possible to set up lxd automatically like I want?
<skay> that charm sets it up so that when we merge branches a hook can run that starts a container and runs some tests
<skay> previously people used old lxc. I'd like to use the new lxd stuff instead
<skay> I've got it working manually on my laptop and want to automate it
<skay> I want to charmify it
<lazyPower> Cynerva ryebot - re: kubectl get pv --all-namespaces, was this 1.4.0 or 1.4.1?   that historically worked on 1.4.0.....
<Cynerva> lazyPower: looks like kubectl 1.4.0
<ryebot> marcoceppi lazyPower standup?
<marcoceppi> ryebot: no
<Baqar> opps sorry for spamming
<Baqar> opps sorry for spamming
<lazyPower> Baqar if its any help, i didn't see any spam from you.
<Cynerva> Is there a way to set default model constraints? I see there's a `juju set-model-constraints` command for one model, but I'd like to set a default for newly created models as well.
<marcoceppi> Cynerva: yes, `juju model-defaults`
<marcoceppi> Cynerva: if you provide key=val to that it'll set the defaults for new models
<marcoceppi> Cynerva: you can also provide defaults when bootstrapping
<Cynerva> marcoceppi: cool thanks
<cory_fu> petevg: Correction to running test_prog: You have to actually activate the .tox env, not just run matrix from the bin dir
<cory_fu> I think
<cory_fu> Hrm.  No, that's not working for me now, either
<petevg> cory_fu: well, yes. Or build a virtualenv with all the stuff installed. The .tox environment is convenient, though.
<petevg> cory_fu: what error are you getting?
<cory_fu> petevg: I needed to rebuild my tox env.  Doing that made it work without full activation
<cory_fu> petevg: Error I was getting was: pkg_resources.DistributionNotFound: The 'matrix' distribution was not found and is required by the application
<petevg> cory_fu: cool. It works for me, too.
<petevg> ... and errors show up nicely in the log. Yay.
<petevg> cory_fu: hrm. Deploy doesn't seem to be actually deploying anything when I run test_prog. Either that, or glitch isn't waiting for it to finish ...
<cory_fu> petevg, bcsaller: It looks like the updated libjuju got dropped during my merge conflict resolution, so the deploy probably isn't working for you
<cory_fu> ha
<petevg> Aha.
<cory_fu> petevg: Let me update the wheelhouse in master real quick
<petevg> cory_fu: cool.
<cory_fu> petevg, bcsaller: Pushed
<bcsaller> thanks
<petevg> thx :-)
<cory_fu> bcsaller, petevg: Damnit.  That doesn't seem to have fixed it
<petevg> cory_fu: huh. I get mysql deployed, but not the rest of the mediawiki bundle.
<cory_fu> petevg: Oh, ok.  I forgot to rebase my branch to the master I just pushed.  >_<
<petevg> Yeah. Rebasing would be important :-)
<petevg> (I did rebase, but I only see mysql deployed ...)
<cory_fu> Yeah, also seeing that
<cory_fu> Not sure why
<petevg> cory_fu: AttributeError: module 'juju.client.connection' has no attribute 'rpc'
<petevg> Want me to post the rest of the traceback?
<cory_fu> petevg: This is what I'm seeing: http://pastebin.ubuntu.com/23345387/ (tvansteenburgh)
<cory_fu> petevg: So, the error you got means the connection didn't get established.  Let me test on master.  One sec
<petevg> cory_fu: Interesting. That's completely different than what I'm seeing.
<petevg> cory_fu: http://paste.ubuntu.com/23345392/
<petevg> (I also get errors from glitch about there not being a valid model, which suggests that it's not waiting for deployer to do stuff.)
<cory_fu> Yeah, like I said, that means the juju_model doesn't have a connection, but I thought I fixed that
<cory_fu> wth.  I swear this worked yesterday
<cory_fu> Hrm.  And it does on my branch.  Let me see what I changed.
<cory_fu> Ugh.  Nothing relevant
<cory_fu> And, now it's failing on my branch too
<petevg> Exciting!
<cory_fu> tvansteenburgh: It looks like even after calling `await model.connect_current()`, `model.connection` is still sometimes None, but not consistently.  Any advice?
<cory_fu> bcsaller: Just to confirm, deploy should never get triggered until `await self.run(context)` is called in `RuleEngine.__call__` right?
<tvansteenburgh> cory_fu: i have never seen that. where's the code that's causing this?
<bcsaller> cory_fu: I don't see how it could
<cory_fu> tvansteenburgh: https://github.com/juju-solutions/matrix/blob/master/matrix/rules.py#L309
<ryebot> What are best practices for shutting down services when applications are stopped/removed? My concern is that I'll have two or more applications running on the same machine using the same background service - if a charm stops that service on its stop hook, that would conflict with other charms depending on it.
<cory_fu> tvansteenburgh: After that, during the `await self.run(context)` bit, it attempts to call `context.juju_model.deploy()` and it fails with petevg's stack trace
<cory_fu> tvansteenburgh: I just realized that the stacktrace was very similar to the one I was getting when model.connection was None, but not the same
<tvansteenburgh> cory_fu: one sec
<tvansteenburgh> cory_fu, petevg: see how it does now (after pulling latest commit)
<cory_fu> :)
<cory_fu> One sec, rebuilding and trying
<cory_fu> tvansteenburgh: Strange that it was inconsistent for me
<cory_fu> tvansteenburgh: That fixed that one error, but I'm still getting the KeyError from my pastebin
<cory_fu> petevg, bcsaller: matrix master updated with new libjuju
<cory_fu> tvansteenburgh: It seems to be a timing issue, as it goes away as the services come up
<cory_fu> tvansteenburgh: Full log: http://pastebin.ubuntu.com/23345468/
<tvansteenburgh> cory_fu: ah, yeah, that logic is wrong, i'll need to do that a different way
<cory_fu> Ok
<cory_fu> petevg: With latest master, if you re-run it with the bundle already deployed, it should go w/o error
<petevg> cory_fu: I'll try it w/ the bundle deployed (I see the KeyError when it isn't.)
<tvansteenburgh> cory_fu, petevg: pushed fix for KeyError
<petevg> tvansteenburgh: sweet. ty
<tvansteenburgh> it should be done a little differently, but that'll get you unblocked for now
<cory_fu> petevg, bcsaller: matrix master updated with tvansteenburgh's KeyError fix
<cory_fu> tvansteenburgh: Thanks!
<petevg> cory_fu: cool. thx. :-)
<cory_fu> bcsaller: So, I'm now seeing the health rule stuck in "pending"
<cory_fu> bcsaller: Also, how can I make the log portion of the TUI "window" taller?
<cory_fu> nm, found it
<bcsaller> cory_fu: also you can tail -F matrix.log, as well as use -s raw when running matrix
<cory_fu> bcsaller: Yeah, but I like the fancy UI.  :)
<bcsaller> cory_fu: oh, good :)
<cory_fu> Also, the task_view is helpful
<lazyPower> ryebot - i tend to agree with you. I feel thats a good question to ask on the list
<ryebot> lazyPower: cool, thanks
#juju 2016-10-19
<Guest50105> I have an input.yaml file like this:
<Guest50105> http://pastebin.ubuntu.com/23346821/
<Guest50105> When I use the above yaml file in 'juju deploy' I get the following error
<Guest50105> ubuntu@juju-api-client:~$ juju deploy --series trusty --to Compute-Storage-3.maas --config input.yaml ./contrail-tsn Deploying charm "local:trusty/contrail-tsn-35". ERROR unknown option "install-sources"
<Guest50105> What is wrong with yaml format?
<Guest50105> The above mentioned yaml passes the online yaml parser
<Guest50105> Never mind. I fixed the issue
<kjackal_> Good morning Juju world!
<deanman> Hello, using xenial with juju 2.0 and the juju bootstrap command halts without completing. I'm running behind a proxy. Is there a way to set up a proxy for the LXD containers before running the bootstrap command?
<suchvenu> Hi..
<suchvenu> I have an issue while deploying a service using the MAAS provider. Anyone here who can help me with this?
<cargill> if a service goes down, it's the job of update-status hook to identify and report it if it affects the unit, right? does it do that using status_set or returning something?
<cargill> and what would be the appropriate status to set? the docs seem to imply that error state is only for hooks failing
<kjackal_> cargill: yes, update-status is a good hook to report service failures. Setting the status is the right way to go. Of course the update-status hook could try to restart the service. The appropriate status is "blocked" in case user action is required, or "waiting" in case there is an automated recovery on the way
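kjackal_'s rule of thumb can be captured in a tiny decision helper. This is a hedged sketch: in a real charm you would call charmhelpers' status_set from within the update-status hook; the function below only picks the status tuple, and its name and signature are made up for illustration:

```python
def assess_status(service_running, auto_recovery_in_progress=False):
    """Pick a workload status per the convention discussed above:
    'blocked' when user action is required, 'waiting' when automated
    recovery is underway, 'active' when all is well.
    """
    if service_running:
        return ('active', 'service running')
    if auto_recovery_in_progress:
        return ('waiting', 'service down; restart in progress')
    return ('blocked', 'service down; operator action required')

# In the hook you'd then do something like:
#   status_set(*assess_status(service_running('myapp')))
```

Note the error state is deliberately absent: as the docs imply, error is reserved for hooks that fail, not for workload problems the charm detects itself.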
<MrDan> hi, is there documentation, besides the charm descriptions, on how to deploy openstack with neutron-gateway external network, and tunneling data network between neutron and compute nodes?
<MrDan> with OVS
<MrDan> i've read that i need os-data-network
<MrDan> but how does it corelate with bridge-mappings and data-port?
<zeestrat> MrDan: You might get some more action in #openstack-charms
<MrDan> ah, ok, did not know about that channel, thanks
<zeestrat> MrDan: The guys that developed the charms are there :)
<cargill> can I combine the @hook and @when decorators?
<cargill> seems quite useful for the update-status hook
<lazyPower> cargill - it's not obvious why you can't, but mixing @hook and @when will yield bad behavior. We don't currently list that as a supported way to use the reactive decorators
<lazyPower> you can use either/or, but not both on the same method.
<lazyPower> cargill if you have need of introspecting state in that method body, i can recommend decorating with the @hook decorator, and you can import is_state and use that to check in the method body if a state is set.
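lazyPower's suggestion — decorate with @hook and guard the body with is_state — can be modeled with a toy state registry. This only mimics the set_state/is_state semantics for illustration; in a real charm those functions come from charms.reactive, and the state name below is an assumption:

```python
# Toy model of the reactive state registry (the real one lives in
# charms.reactive and persists states between hook invocations).
_states = set()

def set_state(name):
    _states.add(name)

def remove_state(name):
    _states.discard(name)

def is_state(name):
    return name in _states

def update_status():
    """Body of an @hook('update-status') handler: do nothing unless the
    charm has reached the state we care about, as lazyPower advises."""
    if not is_state('app.installed'):
        return 'skipped'
    # ...check the workload and set a status here...
    return 'checked'
```

Before installation completes the handler is a no-op; once something calls set_state('app.installed'), the same hook starts doing real checks.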
<cargill> yeah, that's what I see, nothing really happening, but I think that just guarding on the desired state will do the right thing
<cargill> I'm not sure the charms even see hooks anymore as such
<lazyPower> or just decorate for the state, and handle the cases where its being called.
<lazyPower> right, the author's intent with reactive has been to fully abstract the hook mechanism and urge developers to only use reactive states.
<cargill> yeah, is what I ended up doing (well, waiting for the test to finish)
<cargill> seems super useful to think of the reactive states in terms of a petri net, makes it quite simple to alter the design (add new states or separate processes/relations)
<Spaulding> kjackal_: did you recommend to me yesterday juju-relation-mysql?
<Spaulding> cause i've got problem with it
<lazyPower> Spaulding - yep, that was mentioned by magicaltrout. What can i do to help you?
<kjackal_> Spaulding: I do not think it was me but what is the problem, I might be able to help
<Spaulding> oh yes, sorry - my bad... too many nicknames to remember!
<Spaulding> One sec, I'm creating gist :)
<Spaulding> https://gist.github.com/pananormalny/58ee2c54acf127dc279bf34990b94238
<Spaulding> After I deployed my charm - I've " juju add-relation mysql sme"
<Spaulding> In debug-log I can see that hook is executed
<magicaltrout> your includes is incorrect
<magicaltrout> interface:mysql
<Spaulding> So I need to add mysql interface as well or I should change last line?
<magicaltrout> i don't define it in yaml
<magicaltrout> but in mine
<magicaltrout> i put
<magicaltrout> includes: ['layer:basic', 'interface:mysql', 'interface:pgsql']
<magicaltrout> so i assume yours should say interface:mysql
<magicaltrout> s/yaml/yaml list
<Spaulding> ok... so how to use juju-relation-mysql?
<Spaulding> I think I should provide it as well?
<Spaulding> But I'm not sure about that
<magicaltrout> i have no idea what juju-relation-mysql is
<magicaltrout> so i think you've made it up :)
<magicaltrout> oh its just the name of the github repository
 * lazyPower smirks
<magicaltrout> you are allowed to ignore that
<Spaulding> https://github.com/johnsca/juju-relation-mysql
<Spaulding> magicaltrout you dirty devil!
 * Spaulding hides
<magicaltrout> https://github.com/johnsca/juju-relation-mysql/blob/master/interface.yaml#L1
<magicaltrout> thats the bit you're interested in Spaulding
<Spaulding> exactly
<Spaulding> you proposed that yesterday
<lazyPower> Spaulding - i think he's calling out that in line25 of your gist, you're including 'interface:juju-relation-mysql'
<lazyPower> which will never resolve
<lazyPower> it should instead be 'interface:mysql'
<Spaulding> surprisingly it's resolved to mysql
<Spaulding> build: Processing interface: mysql
<Spaulding> build: Processing interface: mysql
<magicaltrout> its like magic
<lazyPower> magicaltrout - did you see the recent announcement that elastic is teaming up with prelert for machine learning against operational data?
<lazyPower> https://www.elastic.co/blog/welcome-prelert-to-the-elastic-team -- for context
<magicaltrout> very interesting
<lazyPower> this is what merlijin was talking about a while ago right?
<magicaltrout> this is the future for log analytics stuff, where you have beats, splunk, graylog whatever
<magicaltrout> just pumping logs through an indexer is okay, but where is the added value?
<magicaltrout> it needs some magical powers to enhance the logging to do more and tell you stuff you didn't already know
<lazyPower> doing deep machine-learning analysis on log data to identify system issues and auto-diagnose problems... seems like the first step to yet another layer of auto-healing
<magicaltrout> yup
<lazyPower> s/auto/self/
<lazyPower> kwmonroe cory_fu - that article above is probably of interest to you two as well.
<magicaltrout> the only articles kwmonroe is interested in are swimming pool enhancements
<lazyPower> well, i used to install those too. if he wants a discount on hayward/jandy hardware i can probably put him in touch with the right people
<lazyPower> http://www.choicepoolsandhardscapes.com/ -- some of my previous work (both the site and the construction projects used as illustration)
<jrwren> lazyPower: sounds awesome. with enough security data it would make a sweet platform for a SIEM or ESM system.
<lazyPower> i dont know what those acronyms are, but they sound very official
<jrwren> Security information and event management / Enterprise Security monitoring.
<lazyPower> oh right, i knew that
<lazyPower> indeed jrwren, indeed
<jrwren> sorry lazyPower, i previously worked in security.
<magicaltrout> lazyPower previously worked in swimming pools
<magicaltrout> and very nice they look too!
<lazyPower> ^
<lazyPower> thanks :)
<lazyPower> i'm happy i'm done with what we dubbed as hebrew labor, carting stones across someone's back yard to build those retaining walls.
<lazyPower> not sure if thats offensive, so if it is, i apologize in advance
<magicaltrout> probably :P
<magicaltrout> i dug out my telescope foundations by hand, which were a 3m cubic hole, that was the hardest thing I've ever done
<lazyPower> yeah, thats hard work man. anybody that said ditch digging is easy is either a liar or has a bobcat
 * magicaltrout on NASA call "we've written our UI and middleware in Python... do you mind writing the backend in python"..... "sure.... not a problem"
 * magicaltrout furiously googles python stuff
<lazyPower> magicaltrout welcome to the painnnnnnnn ;) i kid i kid, python isn't so bad
<magicaltrout> na
<magicaltrout> this flask stuff looks pretty straightforward
<lazyPower> flask is very simple, and very extensible
<magicaltrout> have to write a dockerfile generator for machine learning toolkits
<lazyPower> yeah, combine that with some jinja templating logic and you've got yourself a web service to spit out dockerfiles
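The "templating logic that spits out dockerfiles" idea can be sketched in a few lines. This version uses stdlib `string.Template` in place of Jinja so it runs as-is; the template fields and package names are illustrative. In a Flask app, a function like this would sit behind a route that accepts the toolkit parameters and returns the rendered text:

```python
# Render a Dockerfile from a template, as discussed above.
# string.Template stands in for Jinja here to keep the sketch
# dependency-free; the fields ($base, $packages, $entrypoint) are
# made-up examples.
from string import Template

DOCKERFILE = Template(
    "FROM $base\n"
    "RUN pip install $packages\n"
    'CMD ["python", "$entrypoint"]\n'
)

def render_dockerfile(base, packages, entrypoint):
    return DOCKERFILE.substitute(
        base=base, packages=' '.join(packages), entrypoint=entrypoint)
```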
<skay> I need advice to know if it is illadvised to try and make a charm that configures lxd for me
<skay> and also, how the heck I'd do it
<magicaltrout> well the lovely lazyPower maintains layer-docker i believe
<skay> I've got lxd configured by hand on my laptop and I have some scripts that make containers for me and run through tests to check if my code is good to merge
<magicaltrout> so I don't see why you wouldn't have layer-lxd
<skay> and a coworker has a charm that they use for checking out repos and running scripts that happen to use old lxc commands
<lazyPower> thats tricky, its fine if the charm is going to maintain its own lxd container and its setup, but wholesale reconfiguring the lxd daemon would likely disrupt the native juju lxd components
<skay> this is the charm that works with all the old lxc stuff https://github.com/checkbox/pmr-configs
<skay> my other coworker just uses travis, but his repo is on github
<skay> mine's on lp
<skay> the lp teammates have that charm to set up a little ci environment for us
<lazyPower> skay - i think its fine to manage a set of lxd containers on a host with a charm
<lazyPower> skay i just wouldn't wholesale reconfigure the lxd service on the unit if you ever plan on using juju's native lxd bits on that unit, eg: --to lxd:#
<lazyPower> marcoceppi or others may disagree. but i'm in favor of not limiting what you can do, just so long as you're aware of the potential caveats
<skay> lazyPower: magicaltrout: thank's y'all.
<skay> whoops that ' cloned its self and hoped back 2
<skay> needs more coffee
<hml> is there a way to remove charms where the agent is "lost"?  remove-unit and remove-machine are not doing the job.  i do want to remove it, i've already replaced the "lost" unit with a new one
<ryebot> hml: don't suppose you tried remove-machine with --force?
<hml> ryebot: trying now, i missed the --force option
<hml> ryebot: that did it - thanks.
<Spaulding> could someone look at it and tell me why it stuck on "Please add relation"
<Spaulding> https://gist.github.com/pananormalny/aa3be95bfb285ecaa2d0b3e2486101e4
<ryebot> hml: awesome! np
<Spaulding> Even when I've added a relation...
<Spaulding> I followed the layer-vanilla...
<bdx> jose: ping
<natefinch> marcoceppi: pretty sure your response to my email distills down to "don't use colocation or manual provider" :)  Which, btw, I agree with :)
<marcoceppi> natefinch: yeah, but we're forced to do these things because not everything works in lxd machines
<natefinch> marcoceppi: yeah, I really wish we could just default to tossing everything in a contain
<marcoceppi> lazyPower mbruzek I wonder if we can explore using --to kvm: for some of these workloads?
<natefinch> er
<marcoceppi> natefinch: ultimately, that would be the best thing
<lazyPower> marcoceppi - we're going to have the same networking issues.
<marcoceppi> lazyPower: ack, ta
<lazyPower> which is why we've modeled our dense deployment as is today
<natefinch> marcoceppi: btw, on the stop hooks, the docs are pretty clear.  They might possibly be wrong, but they're clear at least: https://jujucharms.com/docs/2.0/authors-charm-hooks#stop
<marcoceppi> natefinch: I just looked them up, that's not what they were always set to
<marcoceppi> maybe this is just a failure to name things appropriately
<marcoceppi> but stop =/= uninstall to me
<lazyPower> i wrote that docpage
<lazyPower> if its wrong, its my fault
<natefinch> marcoceppi: certainly.  start and stop vs. install and uninstall
<natefinch> I guess we have an install hook
<marcoceppi> natefinch: I also think stop hook is run on all containers where the parent machine is about to be rebooted because of `juju-reboot`
<marcoceppi> natefinch: I will experiment
<natefinch> marcoceppi: would be good to know. I'd be surprised if it ran on reboot, but only slightly surprised :)
<Spaulding> marcoceppi: could you help me with interface:mysql? I can't get it working... :(
<marcoceppi> Spaulding: absolutley, what's not working?
<Spaulding> it stuck on "add relation"
<Spaulding> but relation is there... and i can see creds in debug-log as well
<Spaulding> https://gist.github.com/pananormalny/aa3be95bfb285ecaa2d0b3e2486101e4 my reactive
<Spaulding> database stuff is at the bottom of file
<marcoceppi> Spaulding: what does juju status say atm?
<marcoceppi> Spaulding: and can I see your metadata.yaml?
<Spaulding> blocked "Please add relation to mysql"
<Spaulding> sure
<Spaulding> here you have: https://gist.github.com/pananormalny/68680a0a104c881506ee0f6881103ca8
<marcoceppi> Spaulding: so, this is just a mismatch in naming
<marcoceppi> Spaulding: you named the relations "db" but you refer to it as "database." in reactive. Either change metadata.yaml to be database or change your decorators to be db.connected and db.available
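The naming rule marcoceppi describes — reactive state prefixes come from the relation name declared in metadata.yaml (here "db"), not from the interface name (here "mysql") — can be sketched as a tiny helper. This is an illustrative function, not a charms.reactive API:

```python
# For a metadata.yaml relation declared as
#   requires:
#     db:
#       interface: mysql
# the reactive states are 'db.connected' and 'db.available' -- the
# prefix is the relation name, never the interface name.
def relation_states(relation_name):
    return ['%s.%s' % (relation_name, phase)
            for phase in ('connected', 'available')]
```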
<Spaulding> ok... so this db should be mysql?
<Spaulding> yeah
<Spaulding> now i got it!
<marcoceppi> Spaulding: yeah, that's probably not highlighted very well in the readme, I'll make sure we update that
<marcoceppi> Spaulding: everything else looks great!
<Spaulding> and i spent like 3 hrs
<Spaulding> changing when and when_not
<Spaulding> thx again marcoceppi !
<marcoceppi> Spaulding: sorry :( hopefully this gets you sorted
<Spaulding> yeah, now I can proceed
<Spaulding> I don't have much time today but tomorrow I hope I'll be able to restore sql dump finish this charm finally
<Spaulding> and finish***
<marcoceppi> Spaulding: awesome, feel free to ping here if you have any more questions!
<Spaulding> will do - for sure!
<jose> bdx: yes?
<bdx> jose: https://github.com/jamesbeedy/charm-mailman
<jose> bdx: sorry, I don't work with layers, but if you want to push that, feel free to
<bdx> jose: oh, I thought you wrote the OG mailman charm
<bdx> ?
<jose> yes, but again, I don't work with layers, if you want to push a trusty/xenial version for it, feel free to
<jose> I probably won't get around to it until a month from now at least
<jose> maybe even two, considering my schedule
<bdx> no worries, just thought you would like to have a looksie
<bdx> I'm not pressing anything lol .... just thought I would share w/you :-)
<jose> cool
<neiljerram> Hi all, a question about running Juju (2.0) on GCE: is it possible to configure Juju so that it does _not_ allocate a GCE External IP for each machine?  (I also asked this yesterday morning, but hopefully there are different folks around now...)
<rick_h_> dimitern: any idea? ^
<dimitern> neiljerram: I don't think so, but let me double check..
<bdx> hey guys, are the bootstrap configs/constraints enumerated anywhere in the docs .... can't seem to find anything really ...
<neiljerram> rick_h_, dimitern - thanks!
<rick_h_> bdx: not at the moment. We need to build one. Most of it is there if you dump the list of controller config.
<bdx> rick_h_: thanks
<rick_h_> bdx: or honestly I poke at code most of the time. We've got a todo to dump things out into an official doc list
<bdx> rick_h_: I remember hearing about a new ssl config that can be specified at bootstrap to allow proxying to the gui ..... I'm about to bootstrap for the long haul ... trying to make sure I have all of the relevant configs in there
<dimitern> neiljerram: no way to disable this at the moment, sorry :/
<bdx> rick_h_: from the user perspective, its a bummer I have to go searching to this extent to find the pieces I need to get up and going
<neiljerram> dimitern, Thanks for checking that.  Would it be of useful or of interest for me to register a wish for that somewhere?
<dimitern> neiljerram: of course! :) I see a bunch of TODOs in the source about "let's make this configurable", so it was always planned to be possible, just not yet
<neiljerram> dimitern, Do you mean in network.go after "Type: NetworkAccessOneToOneNAT"?  I was just looking there too :-)
<dimitern> neiljerram: yeah, that's just one of many apparently  :)
<neiljerram> dimitern, So where is the right place to request this?
<dimitern> neiljerram: I'd suggest filing a bug against lp:juju https://bugs.launchpad.net/juju/+filebug
<dimitern> neiljerram: that way we can schedule it and track it better
<neiljerram> dimitern, Thanks, I'll do that.
<dimitern> neiljerram: awesome! thank you ;)
<ahasenack> when using the openstack provider, how do I tell juju at bootstrap time which neutron network to use for the instance, if there are multiple?
<ahasenack> error info: {"conflictingRequest": {"message": "Multiple possible networks found, use a Network ID to be more specific.", "code": 409
<ahasenack> what's the parameter name for --config?
<bdx> ahasenack: --config network=xxx.xxx.xxx.xxx
<ahasenack> bdx: thx
<bdx> ahasenack: np, too bad its not an available bootstrap config for aws ;-(
<aluria> hi, in juju-deployer, I used to use "overrides" for config variables that existed in multiple services (now applications) -- I haven't found such an option on juju2 bundles -- would I need to repeat the same key-value on all charms that have it as config param?
<jhobbs> that's what i've done aluria
<jhobbs> afaik that's the only way
<aluria> jhobbs: ack, asking just in case -- thanks
<bdx> hey guys, I'm getting this error intermitently when sending off deploys -> http://paste.ubuntu.com/23349938/
<bdx> not sure if its been noted yet
<bdx> rick_h_: ^^?
<rick_h_> bdx: looking
<rick_h_> bdx: hmm, looks like a header auth issue.
<rick_h_> bdx: need to file a bug on that to get the charmstore folks looking at it please. github.com/canonicalltd/jujucharmscom
<rick_h_> bdx: quick fix is probably to logout/back in and maybe to remove cookies.
<rick_h_> jrwren: where are the go cookies these days?
<rick_h_> jrwren: it's not .go-cookies any more right?
<jrwren> rick_h_: it is still that.
<rick_h_> jrwren: ah ok
<rick_h_> bdx: clear ~/.go-cookies
<bdx> rick_h_: -> http://paste.ubuntu.com/23349960/
<bdx> jwren: ^
<rick_h_> bdx: juju login?
<bdx> oooh
<bdx> rick_h_: wow, .local/share/juju/accounts.yaml doesn't have my most current controller in there ... I may as well go back to sleep today
<bdx> now that I've logged out
<bdx> I have no record of the password
<bdx> foolish
<bdx> oh well
<bdx> redo
<rick_h_> bdx: ?!
<bdx> rick_h_: http://paste.ubuntu.com/23349979/
<rick_h_> well that seems scary. if you can dupe that, I'd be curious what happened to lose a controller from there w/o an unregister command.
<bdx> rick_h_: neither of those accounts/controllers exist anymore
<bdx> rick_h_: and my recent bootstrap to aws isn't listed
<rick_h_> bdx: oh, so those were there and have been torn down or the like? that's what unregister is for cleaning up
<bdx> rick_h_: yeah, but accounts.yaml doesn't show my aws controller
<bdx> the one I just logged out of trying to get charmstore to work
<bdx> ahhh ..... logout removes the entry from accounts.yaml
<bdx> I didn't previously know that
<rick_h_> bdx: hmmm, but it's still in the controllers list?
<rick_h_> bdx: e.g. you can juju controllers and see ones you're not logged in with?
<bdx> rick_h_: yes -> http://paste.ubuntu.com/23349994/
<rick_h_> bdx: ah ok, that makes me feel less freaked out
<bdx> but since I neglected to grab my account details before I logged out .... I'm out of luck
<bdx> not that I had anything deployed yet
<bdx> but good thing to note for the future
<rick_h_> bdx: :/ yea that sucks. /me needs to figure out how we should do that better...
<bdx> rick_h_: possibly, if an admin logs out of the controller we should add a message that alerts them that they will not be able to log back in unless they have their password saved somewhere outside of Juju
<rick_h_> bdx: yea, and seems like a documented way for admins to get their password from the controller maybe
<rick_h_> not sure if it's available over there, if you can ssh there can you get the password out.
<bdx> rick_h_: I'm in
<rick_h_> bdx: can deploy?
<bdx> rick_h_: no, I was able to ssh into the controller node
<rick_h_> bdx: ah ok
<rick_h_> bdx: yea, I have no idea if the password is retrievable
<rick_h_> perrito666: any ideas? ^
<bdx> ooho, I thought you were saying "if I could get in, there is a way"
<rick_h_> bdx: sorry, I was saying "I wonder if we can document some path that allows you to get your password if ssh works"
<bdx> rick_h_: aaah
<bdx> rick_h_: its not a biggie, I can just tear down and redeploy
<bdx> rick_h_: thx
<rick_h_> bdx: k, sorry. Thanks for bringing up the issue
<petevg> cory_fu, bcsaller: Work in progress PR for the glitch plan/actions refactor here: https://github.com/juju-solutions/matrix/pull/4
<petevg> (I merged the two dicts, but haven't actually attempted to execute the code just yet -- it is very much a WIP!)
<petevg> https://github.com/juju-solutions/matrix/pull/4
<bcsaller> petevg: thanks :)
<petevg> np
<perrito666> rick_h_: bdx no
<lazyPower> ryebot Cynerva  - bite sized one here https://github.com/juju-solutions/interface-sdn-plugin/pull/4
<ryebot> lazyPower: looking
<lazyPower> ryebot Cynerva less bite sized review here: https://github.com/juju-solutions/charm-flannel/pull/20
<ryebot> lazyPower: +1 also looking
<lazyPower> ryebot Cynerva - and finally - the consumer of those changes https://github.com/juju-solutions/layer-docker/pull/92
<ryebot> lazyPower: cool looking
<lazyPower> that should be it though, thanks for taking a look
<ryebot> lazyPower: commented!
<lazyPower> ryebot good shout, seems odd but it is indeed possible
 * lazyPower amends
<bdx> rick_h_, perrito666: when I pass the debug flag to `juju bootstrap`, I see the subnets in my vpc/region listed as constraints, see http://paste.ubuntu.com/23350343/
<bdx> rick_h_, perrito666: does this mean I can somehow add a constraint to select one of the subnets on bootstrap?
<bdx> to bootstrap to?
<perrito666> bdx: wouldn't know sorry :(
<mgz> bdx: you may be able to if you create some spaces that encapsulate them, I'm not quite sure of the current state of that on ec2
<bdx> mgz: you have to have a controller first though, to be able to create spaces
<mgz> hm, not in the maas model
<mgz> but yeah, that's an issue with ec2
<bdx> from the context of juju though
<mgz> so, from the context of juju on maas, you can create a maas space that includes a subnet, then bootstrap with a constraint that the machine must be in that space
<bdx> mgz: totally
<bdx> and the openstack provider accepts a 'network' config on bootstrap
<mgz> that's mostly a hack to deal with multiple neutron networks existing
<bdx> mgz: hack or no hack it provides a solution
<mgz> so, I'm not actually sure what the finished solution should look like for ec2/openstack
<bdx> mgz: is there a similar hack for ec2 that you know of?
<bdx> :-)
<mgz> I added the openstack one, I did not add an ec2 one at the same time, let me just check the provider
<mgz> hm, no, but you can force a particular vpc
<mgz> so, you can maybe create a separate vpc with what you want and make juju use that? but pretty hard limits on a bunch of these network bits in ec2
<bdx> yeah, now thats a hack
<mgz> sorry bdx
<bdx> mgz: I feel it would be such an easy add
<bdx> just filter the list of subnets returned on bootstrap and ensure the one selected exists and is in the region and vpc
<bdx> is there something I'm missing here?
<mgz> I think we probably don't want to do that without a good model that works across providers, is the holdup
<mgz> but I'm still somewhat out of the loop on networking plans currently
<bdx> meh
<bdx> mgz: thanks for your input here
<mgz> bdx: it may be worth a message to the list just to spell out your use case and find out what the plans are
<bdx> mgz: entirely
<bdx> mgz, rick_h_: so, a new environSchema field for 'subnet' here -> https://github.com/juju/juju/blob/staging/provider/ec2/config.go#L28
<bdx> and a 'getUserSubnet()' here -> https://github.com/juju/juju/blob/staging/provider/ec2/environ_vpc.go#L198
<bdx> or possibly 'getBootstrapVPC'
<bdx> rick_h_: again, http://paste.ubuntu.com/23350619/
<bdx> rick_h_: `charm list -u createivedrive` shows my charm -> http://paste.ubuntu.com/23350624/
<bdx> but I can't seem to deploy it
<bdx> I have a feeling this is highly related to the reason I can't see my charms for any other namespace other than my user namespace in the store
<bdx> via juju-gue
<bdx> *juju-gui
<bdx> while logging into jujucharms.com, I am still able to view namespaces other than my personal namespace, see -> https://s18.postimg.org/skemexkk9/Screen_Shot_2016_10_19_at_2_16_15_PM.png
<bdx> when logged into the charmstore via juju-gui, I don't have access to any of the charms in team namespaces, see -> https://s17.postimg.org/thm3ooapb/Screen_Shot_2016_10_19_at_2_18_15_PM.png
<magicaltrout> bdx: i've started work on a Juju cookbook, when you finish up your networking/subnetting and any other crazy stuff, i'll be picking your brains
<magicaltrout> content for the "crazy and advanced" section
<bdx> magicaltrout: ha, thanks, hopefully I can have some good adds for you! ... (harsh segue) do you have a launchpad team, or charmstore namespace other than your personal namespace?
<magicaltrout> i don't, i caused enough havoc in the charmstore by renaming my user in launchpad
<magicaltrout> that made enough shit break so i leave it well alone ;)
<bdx> bahahah
<bdx> your next mission, should you choose to accept it
<bdx> create a launchpad team, push some charms up to different channels, add 50 users, have the users start pushing charms, rename the launchpad team, remove and re-add 1/2 of your users to the newly named team, change team name again, submit bug
<MrDanDan> juju set/set-config, and get as well, was removed in 2.0.0?
<anastasiamac> MrDanDan: not removed. renamed - feel free to browse release notes for 2.0 for changes :D https://jujucharms.com/docs/2.0/reference-release-notes
<MrDanDan> thanks
#juju 2016-10-20
<kjackal_> Good morning Juju world
<junaidali> Guys I'm trying to bind shared-db on internal space, but it gives error "ERROR cannot add application "mysql": unknown space "public-api" not valid"
<junaidali> here are my spaces defined http://paste.ubuntu.com/23352337/
<junaidali> I'm running the following command to deploy juju deploy cs:xenial/percona-cluster mysql --config=openstackha.cfg --to lxd:13 --bind "shared-db=public-api
<junaidali> sorry this one:
<junaidali> juju deploy cs:xenial/percona-cluster mysql --config=openstackha.cfg --to lxd:13 --bind "shared-db=internal-api"
<junaidali> is there any way to delete a space that we defined via juju?
<junaidali> or update an existing space
<aluria> hi -- on juju2 status output, what does an asterisk at the end of a unit name mean? ie. "unitname/0*  active  idle ..."
<magicaltrout> its the elected leader
<aluria> ah, cheers
<rick_h_> bdx: :/ yea need to file a bug and get the UI team to chase it down.
<rick_h_> bdx: they might want to get an idea of how large your cookie files are and see what's up
<bryden> Hello, I just started using juju and previously I was able to load juju-gui on a local controller and access it through my browser no problem. For some reason all of a sudden it's now showing status "unknown" and workload unknown.. i can still access the gui through my browser but it hangs before I can login, saying "connecting to the juju model" with a spiraling circle. If anyone has any advice what I should investigate in order
<bryden> to fix this please let me know!
<bryden_> a bit more info.. in the command line 'juju models' hangs as does 'juju controllers --refresh'
<bryden_> juju controllers shows me a controller but juju destroy-controller <controller-name> hangs
<tvansteenburgh> bryden_: what version of juju?
<bryden_> 2.0-rc3-xenial-amd64
<tvansteenburgh> bryden_: was the controller created with that version?
<bryden_> @tvansteenburgh : 'juju controllers' returns controller with version listed as 2.0-rc3
<tvansteenburgh> bryden_: ok, not sure then. if it were me i'd upgrade to 2.0 stable and then bootstrap a new controller and use that
<petevg> cory_fu, bcsaller: I rebased my feature/glitch-more-actions branch, deleted the remote, and repushed. New PR here: https://github.com/juju-solutions/matrix/pull/5
<petevg> If you've got the branch checked out locally, please delete it and refetch it.
<petevg> (Apologies for the bit of git evil.)
<hml> i have a charm whose config-changed hook is failing on read-only file system - trying to write to a file but that device is out of space - it feels like the issue you get if you don't set up lxd correctly with zfs - however i'm using dir with lxd not zfs - any ideas?
<bryden_> tvansteenburgh ok, thanks I'm trying that out now
<devops> hi everyone, can we change dns-name being displayed in juju status
<devops> dns-ip*
<devops> public-address*
<bryden_> tvansteenburgh: juju upgrade-juju --debug complains that no model is in focus
<rick_h_> devops: no, it's not mallable from the outside after the fact.
<rick_h_> bryden_: juju switch controller?
<bryden_> rick_h_ hangs
<rick_h_> bryden_: try with a --debug and see if it's got anything interesting in there?
<bryden_> https://paste.gnome.org/pip5jsct0
<bryden_> last line keeps repeating
<rick_h_> bryden_: are you able to reach that network?
<rick_h_> bryden_: not sure about your setup there, but seems like a network issue not able to reach the controller
<rick_h_> bryden_: or that the controller went down for some reason, can you ssh to the controller machine?
<bryden_> rick_h_: ssh to the controller gives me connection refused
<rick_h_> bryden_: is this maas or something?
<rick_h_> bryden_: I would expect the machine to be down in that case
<rick_h_> bryden_: so curious if someone pulled a plug on you or something
<bryden_> rick_h_: this is all on my localhost,
<rick_h_> bryden_: so the lxd provider?
<rick_h_> bryden_: can you run sudo lxc list
<rick_h_> bryden_: and see the status of the containers there?
<bryden_> https://paste.gnome.org/pwwmcia6s
<bryden_> rick_h_:  also this is the logfile for lxd not sure if anything from this is interesting...
<bryden_> https://paste.gnome.org/ptldqvfzi
<rick_h_> bryden_: so that seems to say you've got no lxd containers running which means whatever you had was removed
<rick_h_> bryden_: so the only thing from here would be to juju unregister the controller and remove it from your list of controllers and rebootstrap
<bryden_> rick_h_: working on a new bootstrap now thanks
<rick_h_> bryden_: let me know if you can see what happened there. The only two ways those containers should go away is if you destroy the controller or manually remove them with lxc tools
<hml> rick_h_: hi.  a while back i had an issue where I setup zfs for lxd by accident and ran out of space on charm units - i have a new issue that feels like that one - with openstack-novalxd bundle - lxd/0 is finding a readonly filesystem on /sys - however i double check not using zpool - any ideas?
<rick_h_> hml: so I know we landed a fix that tried to cap things at 90%, however if you were attacking too fast then it was still possible to trigger.
<rick_h_> hml: but should be much harder to do
<rick_h_> hml: as far as sys being read-only I'm not sure. It seems odd that only /sys would be read-only? is the whole filesystem read-only or just sys is on a different partition/etc?
<hml> rick_h_: iâm deploying the openstack-novalxd bundle
<hml> rick_h_: hereâs the error message from the unit log: config-changed IOError: [Errno 30] Read-only file system: '/sys/module/ext4/parameters/userns_mounts'
<rick_h_> hml: oh hmm, so no zfs here? ext4?
<hml> rick_h_: I hope so - i ran zpool list and the containers didn't show up.
<hml> rick_h_: only lxd/0 seems to be having a problem
<bryden_> rick_h_: I'm back to the beginning of my issue.. i did juju deploy juju-gui; juju expose juju-gui and when i go to the public address in my browser it just hangs with a turning circle and a message: "Connecting to the juju model".
<rick_h_> bryden_: k, can you juju status --debug and connect?
<bryden_> https://paste.gnome.org/ppcjs5unn
<rick_h_> bryden_: is there a reason you're deploying the gui vs using the built in gui?
<bryden_> rick_h_: ignorance of the builtin gui
<rick_h_> bryden_: ah ok, guessing the charm gui might need some love or something
<rick_h_> bryden_: but running juju gui should get you one
<rick_h_> bryden_: it's built in these days so you don't need to deploy it
<bryden_> rick_h_:that's brilliant
<bryden_> rick_h_: and it works
<bryden_> rick_h_: thank you for spending time with me working on this
<rick_h_> bryden_: awesome, glad that's useful and let us know how it goes from there
#juju 2016-10-21
<devops>  /query junaidali
<stokachu> guessing there isa  problem with the charmstore?
<stokachu> http://paste.ubuntu.com/23357102/
<bdx> https://github.com/juju/charmstore/issues/686
<kjackal_> Good morning Juju world
<cargill> when following the getting started document from docs, are you supposed to end up with ~/.juju/environments.yaml configured? 'charm test' is confused that it doesn't exist
<cargill> yet juju (2.0) is fine with everything
<magicaltrout> cargill: where did you read that?
<magicaltrout> thats 1.2.x stuff
<magicaltrout> a) the location these days is ~/.local/share/juju
<magicaltrout> b) environments.yaml doesn't exist any more
<cargill> $ charm test
<cargill> juju-test INFO    : Starting test run on None using Juju 2.0.0
<cargill> juju-test CRITICAL: /home/oku/.juju/environments.yaml file does not exist
<cargill> this is what I get
 * magicaltrout defers to kjackal_ 
<magicaltrout> it may not have been revamped for 2.x I dunno
<kjackal_> hello!
<cargill> I'll check for updates just in case
<cargill> kjackal_: hi!
<kjackal_> hi cargill, magicaltrout
<kjackal_> hm... charm test I thought this is not anailable on the recent juju+charm tools
<magicaltrout> hammer and nailable?
<kjackal_> cargill: where did you read about "charm test"
<cargill> oh, it's meant to be bundletester now?
<cargill> just tried it, thinking charm build, charm proof, charm test would be the sequence :)
<kjackal_> https://jujucharms.com/docs/2.0/tools-charm-tools look at this it is here!!
<magicaltrout> seems to suggest charm test is indeed valid terminology
<cargill> so do I use charm test or bundletester? and is bundletester in the repos/ppa?
<kjackal_> cargill: I would suggest you use bundletester, at least this is what i am using here
<kjackal_> bundle tester should be installable by pip
<kjackal_> sudo pip install bundletester
<cargill> hmm, pip means you don't get updates unless you remember to ask for them...
<cargill> (the joys of system package managers :))
<neiljerram> Morning all!
<neiljerram> In my tests, I sometimes see 'juju ssh ...' failing with "Permission denied (publickey)." for just one or two units in the deployment; it's fine for all the other machines.
<neiljerram> There are google hits for this ("juju ssh permission denied (publickey)"), but they're quite old.
<neiljerram> So it looks like Juju is failing to copy my public key onto those machines...
<neiljerram> Does anyone else see this, and is there a workaround?
<junaidali> neiljerram, are you able to manually ssh to those nodes?
<junaidali> ssh ubuntu@<node-ip>
<junaidali> from maas
<neiljerram> junaidali, I'm not sure, because this happens in the middle of an automated test.  I can try modifying the script to add that in, if you think it will help.
<neiljerram> junaidali, I'm trying that now, let's see...
<neiljerram> junaidali, Unfortunately I am still seeing 'Permission denied (publickey)' with one of the units, even when I do 'ssh ubuntu@<ip>' first.  I get 'Permission denied (publickey)' for 'ssh ubuntu@<ip> ls', and then also for 'juju ssh <unit> ls'.
<SDB_Stefano> hi Hjackal
<icey> is it possible for a juju action to take a large JSON blob as a value?
<magicaltrout> icey: its a pain in the balls
<magicaltrout> you can, but you need to escape it all
<icey> wtf -_-
<icey> escape it even though it's wrapped in single quotes ?
<magicaltrout> yeah, I found for a charm where the save format is json, I couldn't get it to ingest the stuff unless it was escaped
<magicaltrout> this was a while ago, but I doubt its changed
<icey> magicaltrout: yeah, giving errors like: ERROR json: unsupported type: map[interface {}]interface {}
<magicaltrout> yeah it tries to parse it.....
 * icey cries
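The workaround magicaltrout describes can be sketched in Python: serialize the blob with `json.dumps` first, so the action parameter arrives as a single escaped string rather than a nested mapping that juju's parser rejects. This is a sketch of the idea from the chat, not a documented juju API; the parameter shape is hypothetical.

```python
import json

# Toy illustration of the escaping workaround discussed above: pass the
# JSON blob as one flat, escaped string instead of a nested mapping.
blob = {"servers": [{"host": "10.0.0.1", "port": 8080}]}
escaped = json.dumps(blob)        # a plain string; inner quotes escaped
print(escaped)

# The charm side can decode it back to the original structure.
roundtrip = json.loads(escaped)
assert roundtrip == blob
```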
<SDB_Stefano> Hi, I'm creating a charm for our product ScaleDB UDE, I have some questions, could I propose them ?
<magicaltrout> go wild SDB_Stefano
<SDB_Stefano> ok, I have to create a plain/easy charm that the only thing we need is to execute a install/hook to install our software, so :
<SDB_Stefano> I started using https://github.com/juju-solutions/layer-basic as template
<SDB_Stefano> about the  layer.yaml  :
<SDB_Stefano> should it contains ' includes: ['layer:basic']'  ? or it's not mandatory, I don't need it
<icey> SDB_Stefano: it should
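For reference, a minimal `layer.yaml` for a charm built on layer-basic looks roughly like this (a sketch based on the layer-basic repo linked above; any extra entries shown are illustrative):

```yaml
# layer.yaml at the root of the charm source
includes: ['layer:basic']
```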
<magicaltrout> when you said "use layer basic as a template" you mean you git cloned it?
<magicaltrout> or did `charm build` ?
<SDB_Stefano> yes git clone
<magicaltrout> wrong answer
<SDB_Stefano> I :
<marcoceppi> SDB_Stefano: the better way to start, is to run `charm create`
<magicaltrout> sorry yeah create
<marcoceppi> SDB_Stefano: charms incur a compilation process, which means things like layer basic will be included in the final, compiled product
<magicaltrout> https://jujucharms.com/docs/stable/authors-charm-writing <- SDB_Stefano
<SDB_Stefano> the current charm is working, I think I only need to refine it. Do I need to restart using 'charm create'?
<marcoceppi> SDB_Stefano: I'd be happy to help shape your charm into a layer if it's available somewhere
<SDB_Stefano> ok, great , I'm going to deploy the current version in github, just a moment
<magicaltrout> marcoceppi is like a charm rewriting ninja
<kjackal_> Hello SDB_Stefano, what's up?
<SDB_Stefano> Hi kjackal, I have some questions about the charm
<SDB_Stefano> could we have a short skype call, maybe it could be easier ?
<kjackal_> Sure
<junaidali> neiljerram, looks like your node is not up
<cargill> hmm, great, using bundletester complains about python linting issues in the pgsql layer and that makes it fail
<cargill> not that I have control over that layer in the first place...
<neiljerram> junaidali, I'm sure that the node was up; juju status had given a successful report for it.
<neiljerram> junaidali, you can see this in the transcript of the bug that I just reported, at https://bugs.launchpad.net/juju-core/+bug/1635622
<mup> Bug #1635622: 'juju ssh <unit> ...' fails with Permission denied (publickey), for only one or two machines in a deployment <juju-core:New> <https://launchpad.net/bugs/1635622>
<kjackal_> cargill: if you run bundletester with -F it will run all the tests and not stop on the first error
<cargill> how is that handled when I submit the charm for review?
<kjackal_> cargill: we usually ping the author of the other layer to fix the problems, but that might block you as well since we might not be able to verify the correctness of your charm either.
<cargill> if it's just python linting issues?
<kjackal_> cargill: linting errors will most likely not block you
<BlackDex> If i want to set config-flags for nova-compute i can't set nfs_mount_options=nfsvers=3
<BlackDex> because there are 2 = signs
<marcoceppi> BlackDex: quote the value
<marcoceppi> BlackDex: furthermore, I don't see nfs_mount_options as config on the charm https://jujucharms.com/nova-compute/
<BlackDex> marcoceppi: tried that, not working
<marcoceppi> BlackDex: what's the error you're getting, exactly?
<BlackDex> its juju set nova-compute config-flags='nfs_mount_options="nfsvers=3"'
<BlackDex> that is what needs to be done
<BlackDex> the error says that the config-flags value is not correct
<marcoceppi> BlackDex: taking a look
<marcoceppi> BlackDex: you may want to ask in #openstack-charms this might be a bug with how commands are being parsed
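A quick sketch of the quoting issue being discussed: the value itself contains `=`, so the whole `key=value` pair has to be quoted so the shell and juju's parser see a single token. Only the `echo` actually runs here; the juju invocation (taken from the chat above) is shown as a comment.

```shell
# The value contains '='; quote the whole pair so it survives as one token.
flag="nfs_mount_options='nfsvers=3'"
# juju set nova-compute config-flags="$flag"   # invocation from the chat
echo "config-flags=$flag"
```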
<cargill> how do I propose a charm to the store again?
<BlackDex> marcoceppi: thx i will go there :)
<marcoceppi> cargill: http://review.jujucharms.com/
<cargill> marcoceppi: thanks
<cory_fu> mbruzek: Your wish is my command: https://github.com/juju-solutions/charms.reactive/pull/86
<cory_fu> (And thank you for keeping me honest)
<mbruzek> Sorry to be so pedantic
<mbruzek> cory_fu: merged
<mbruzek> Cynerva: ping
<Cynerva> mbruzek: pong
<mbruzek> Cynerva: or ryebot: I am getting this error with a defined action ERROR invalid params schema for action schema restore-snapshot: bool is not a valid type
<mbruzek> type: bool
<Cynerva> mbruzek: i believe the correct type is boolean
<ryebot> mbruzek: hmm I think the type needs to be 'boolean'
<mbruzek> I knew you guys would know.
<mbruzek> Thank you
<Cynerva> mbruzek: cool, glad to help!
<mbruzek> I should have tried that before asking but I gave up after I searched the documentation and found no boolean options
<Cynerva> mbruzek: I've found it useful to refer to json-schema docs for action stuff. The primitive types (including boolean) are here for example: http://json-schema.org/latest/json-schema-core.html#anchor8
<mbruzek> Thanks Cynerva
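The fix mbruzek needed, sketched as a hypothetical `actions.yaml` entry: juju action schemas use JSON-Schema type names, hence `boolean` rather than `bool` (the param name and default here are illustrative):

```yaml
restore-snapshot:
  description: Restore the cluster from a snapshot.
  params:
    force:
      type: boolean
      default: false
```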
<suchvenu> Hi
<suchvenu> I am trying to deploy Openstack charm from the store : https://jujucharms.com/openstack-base/
<suchvenu> I have a KVM and MAAS is configured in one guest from that box. Juju is installed and could deploy a sample charm as well.
<suchvenu> However when i deploy Openstack charm from store I get "Failed deployment" as the status
<suchvenu> The MAAS log shows:
<suchvenu> Oct 20 15:38:32 maascontroller maas.node: [INFO] vm3: Status transition from DEPLOYING to FAILED_DEPLOYMENT
<suchvenu> Oct 20 15:38:32 maascontroller maas.node: [ERROR] vm3: Marking node failed: Node operation 'Deploying' timed out after 0:40:00.
<suchvenu> Oct 20 15:38:37 maascontroller maas.node: [INFO] vm4: Status transition from DEPLOYING to FAILED_DEPLOYMENT
<suchvenu> Any idea on this error ? Will increasing the timeout fix the issue ?
<gaurangt> Hi, for the storage support in MAAS for juju version 1.25.. the documentation says: This storage provider is static-only; it is currently only possible to deploy charms requiring block storage to a new machine in MAAS, and not to an existing machine.
<gaurangt> What exactly new machines mean here?
<deanman> Trying to learn Juju 2 by setting up a local cloud (lxd). Controller gets created just fine and i can see it with `lxc list` but when i try to deploy a charm it just hangs. Any hints?
<rick_h_> deanman: check out juju status --format=yaml and see if there's anything fishy
<rick_h_> deanman: or juju debug-log
<deanman> ok it seems its downloading a trusty image to deploy the charm by default where i would have expected that it would use a xenial image, same as with controller.
<deanman> thanks rick_h_
<rick_h_> deanman: hmm, something in the config set trusty as the default?
<deanman> just following getting started guide from stable docs
<rick_h_> deanman: you on xenial?
<deanman> xenial 64 VM on MacOS host
<deanman> the deployed charm was mysql
<rick_h_> deanman: oic, ok so that's probably because mysql charm isn't xenial ready it looks like
 * rick_h_ notes it's less crazy then
<deanman> ok rick_h_ fair enough, so in general having a xenial64 VM and then using local as a cloud controller to deploy consequent charms in LXD is OK for dev. workflow?
<rick_h_> yep
<skay> just when I want to create a charm, github is down. charm-create fails due to trying to get templates from there
<skay> :(
<skay> is that mirrored anywhere else?
<deanman> rick_h_: is juju gui deployed with controller by default?
<kwmonroe> cory_fu: this world wide dns problem just got real!  "unit-namenode-0: 18:05:54 INFO unit.namenode/0.install Error: Could not connect to https://forgeapi.puppetlabs.com".  i didn't care before, but now i'm inclined to write a letter to congress.
<kwmonroe> (and hope al gore sees it)
<rick_h_> deanman: yes
<rick_h_> deanman: just run 'juju gui'
<cory_fu> kwmonroe: lol
<cory_fu> But yeah, it's a pretty significant attack.  I think python.org is being affected as well
<kwmonroe> good news is that the ddos people failed to remember never to release on a friday.  we can all just go home.  if they really wanted havoc, they would have waited till monday.
<suchvenu> Can anyone please tell me on how to get charm-tools 2.1.4 ? Mine is in 2.1.3
<suchvenu> I already did sudo apt update && sudo apt upgrade
<brad_nokia> anyone can help with charm storage? using a cinder volume and the charm hangs in a pending mode awaiting attachment
<mbruzek> suchvenu: you can sudo pip install charm-tools
<mbruzek> to get the latest version
<cory_fu> suchvenu: Best I can tell, from https://github.com/juju/charm-tools/blob/master/setup.py#L11, a version newer than 2.1.2 doesn't exist
<cory_fu> pypi would disagree with me https://pypi.python.org/pypi/charm-tools
<lutostag> question... so with reactive how many reactive functions can trigger in a file for a single hook?
<cory_fu> lutostag: As many as match
<cory_fu> As long as their preconditions match, they will be triggered.  There's no limit otherwise.
<lutostag> cory_fu: ok, that's what I thought, if I'm in debug-hook I assume I can list the states, so I'll try that, must not have a state I should
<cory_fu> lutostag: Easy way to see the states from the CLI: charms.reactive -y get_states
<lutostag> cory_fu: that was easy, thanks
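cory_fu's point ("as many as match") can be illustrated with a toy dispatcher. This is a sketch of the semantics only, not the real charms.reactive implementation; the state names and handlers are hypothetical.

```python
# Toy model of reactive dispatch: every handler whose precondition
# states are all set gets invoked; there is no one-handler-per-hook limit.
handlers = []

def when(*states):
    """Register a handler to fire when all the named states are active."""
    def register(fn):
        handlers.append((set(states), fn))
        return fn
    return register

@when('db.connected')
def configure_db():
    return 'configure_db'

@when('db.connected', 'config.changed')
def rewrite_config():
    return 'rewrite_config'

def dispatch(active_states):
    # Run each handler whose required states are a subset of the active set.
    return [fn() for needed, fn in handlers if needed <= set(active_states)]

print(dispatch({'db.connected', 'config.changed'}))
# -> ['configure_db', 'rewrite_config']
```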
<lutostag> charm publish -> charm release... that was recent I'm not crazy right?
<kwmonroe> yeah lutostag, you're not crazy.  it was right around the beta->rc timeframe
#juju 2016-10-22
<abhay> i am running functional testing for murano charms using bundletester.... while running the command juju status murano, i could see the below message: "waiting for agent initialization"
<abhay> http://paste.ubuntu.com/23363205/
<abhay> when i run the command juju debug-log ..i could see message as below:
<abhay> http://paste.ubuntu.com/23363204/
<abhay> i am running with juju version..1.25.6-xenial-amd64..
<abhay> please help me here ...as i am stuck ....
<bdx> consistently, my machines will start assuming private ips -> http://paste.ubuntu.com/23366879/
<bdx> such a piss off
<bdx> ooohh
<bdx> notice the region has changed
<bdx> ^^
<bdx> everything is in us-east-1a, and the black sheep is in 1b
<bdx> also coincidentally happens to have a different ip then the rest ....
<bdx> errrrg
<bdx> even though I deployed to my space "common-infrastructure" the instance deploys to some other subnet
<bdx> http://paste.ubuntu.com/23366898/
<bdx> http://paste.ubuntu.com/23366903/
<bdx> rick_h_: possibly you've heard me squacking about this the last few weeks?
<bdx> rick_h_: huge show stopper
<bdx> rick_h_: trying to take things to production, ^ needs to be top priority
<bdx> `juju deploy ./fiche --constraints "spaces=common-infrastructure" --constraints "instance-type=t2.small"`
<bdx> thats all I'm trying to do, nothing complex
<bdx> anyways
<bdx> I'll be afk till monday, just wanted to get some eyes on this
<bdx> thanks
<bdx> squawking*
#juju 2016-10-23
<junaidali> Hi guys, I'm trying to deploy openstack over openstack using juju but I'm getting this error http://paste.ubuntu.com/23368673/. Any idea what might be the issue?
#juju 2017-10-16
<skay> juju attach thinks my 0 byte file is a negative byte file. https://paste.ubuntu.com/25752774/
<stub> skay: You have a bright future in compression technologies
<skay> HAHA
<skay> don't know if I should open a bug for that or not
<skay> I expect so, but I don't have time to debug it right now
<stub> skay: Its actually kind of important to be able to attach 0 byte files, since we need them to fake optional resources
<skay> I know, definitely I know
<skay> has anyone decided that a detach command is worthwhile?
<skay> I went ahead and filed it. lp:1723970
#juju 2017-10-17
<bdx> SOS
<bdx> oops
<bdx> for #MAAS^
<cnf> hmm, juju is giving me: panic: cannot determine host series: unknown series ""
<Spads> We're seeing what may be a bug: juju on ps4.5 seems to be demanding an already-deleted image from nova/glance despite the stream endpoints no longer listing it
<Spads> Is there an easy way to refresh the controllers' listings there?
<Mmike> Hello, lads. How do I specify, in the juju2 bundle, that I want to get charms from bzr or github?
<rick_h> Mmike: it doesn't support that unfortunately. The idea is to git clone ... and then the bundle can reference the local path on disk
<Mmike> rick_h, eh :(
<Mmike> rick_h, thnx
<rick_h> Mmike: charm push on git commit? :)
<rick_h> Mmike: with channels/etc can totally do an edge channel that tracks source
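rick_h's suggestion as a hypothetical bundle fragment: clone the charm source first, then point the bundle at the local checkout instead of a store URL (the paths and application name here are illustrative; older juju 2 bundles use a `services:` key instead of `applications:`):

```yaml
applications:
  mysql:
    charm: ./charms/mysql   # local checkout, e.g. from git clone / bzr branch
    num_units: 1
```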
<Mmike> rick_h, it's actually for openstack charm testing - with 1.xx we had juju-deployer and then inside a bundle you can use the 'branch:' to tell it from where to deploy - so the deployer would then do 'git clone' or 'wget' or 'bzr branch' or whatnot, and blablabla
<Mmike> now I'm trying to replicate this with juju2
<Mmike> I have a customer that's running old charms and I want to test how the upgrade will fare
<Mmike> but, I'll script all of that out, just felt I'm missing something with juju2 bundles
<rick_h> cory_fu: can I steal a couple when you get a minute? I'm hitting reactive trying to load charm files and missing deps and there must be something that's changed/I've got off here.
#juju 2017-10-18
<andreas_s> Hi, does anybody know if it's possible to have a single node lxd cloud and then to add the localhost as bare metal machine via ssh?
<andreas_s> I tried it, the add-machine command succeeds, but the agent on the localhost (the lxd host) is stuck in pending
<andreas_s> seems like it does not have the api ip set for some reason
<fallenour> o/
<fallenour> got an issue thats pretty weird, haproxy working fine, giving UP status, /var/log/haproxy.log showing things are good, initial landing page loads just fine, generic http currently, go to log in, processes request, returns with 502 bad gateway error. Only recent changes from last known good was updates to pike from ocata, openstack-charmers model
<rick_h> bdx: https://photos.app.goo.gl/WAD6pZtkjt74AqsP2 cmr telegraf -> prometheus with network_get finally working
<fallenour> any thoughts rick_h bdx jamespage
<rick_h> fallenour: nothing in the haproxy logs about the 502? what's it hitting behind haproxy? which api endpoints?
<bdx> rick_h: no way!!! I was just going to ping you and see how that was going
<fallenour> rick_h: no, thats the crazy thing, /var/log/haproxy.log shows everything fine, and that it can reach the openstack dashboard no issues.
<fallenour> rick_h: I checked all three haproxy instances just to ensure I was crazy.
<fallenour> rick_h: also, keystone and mysql systems both show as fine.
<Fallenour> Sorry rick_h what did I miss? Am afk but still listening
<bdx> fallenour: its hard for anyone to say without diving into your setup
<fallenour> o/
<fallenour> having a weird issue with juju xenial openstack model, I cant log in after applying pike updates. HAproxy loads landing dashboard login, but I cant log in after submitting creds, getting a 502 bad gateway error. Checked haproxy, haproxy is loading portal on its side properly.
<magicaltrout> fallenour: I can't help, but more importantly do you own a kettle?
<fallenour> magicaltrout: actually I do, but alas, I too, cannot help. I have no teleporter in which to send my kettle :9
<rick_h> magicaltrout: but does it need to be electric or stovetop?
<magicaltrout> rick_h: you go camping
<magicaltrout> we all know you own a kettle
<rick_h> magicaltrout: yea, electric in the house but stovetop in the camper so you can heat it over a fire/gas grill
<rick_h> only way to make pour over coffee while camping :)
<fallenour> is there a way to test functionality of keystone?
<magicaltrout> i do love a stovetop kettle, sadly i have electric hobs which suck
<magicaltrout> you tried summoning any of the openstackers fallenour ?
<rick_h> fallenour: sorry, will need a more openstack expert than myself on diagnosing the openstack bits
<fallenour> magicaltrout: If I only had the spellpoints my good friend, if only. I am but a meager lvl 14 wizard. Openstackers require Summon Monster lvl 7 :(
<magicaltrout> try prodding blindly in  #openstack-charms
<magicaltrout> see if anyone is alive
<magicaltrout> or thedac he was complaining about the time difference when i was sat in the openstack room
<magicaltrout> so  logic dictates he's working somewhere
 * thedac waves
<fallenour> thedac: 8D
<fallenour> any ideas my friend?
<thedac> reading backscroll now
<magicaltrout> don't trust thedac though
<magicaltrout> he doesn't own a kettle
<thedac> :)
<fallenour> Ill build a CA then, as I can be implicitly trusted, as I own a kettle
<magicaltrout> there you go
<fallenour> Im sure Mozilla will accept this sound logic
<magicaltrout> damn  right
<fallenour> thedac: The gist of the current situation: prior to updating the xenial-charmers openstack model, the system works fine. I update to pike packages, horizon loads the dashboard login screen, but after login I get a 502 bad gateway error. I checked the HAproxy systems, all say UP. I check keystone, keystone says up.
<thedac> fallenour: oh, did you manually install the packages or did you let the charms update to pike?
<fallenour> thedac: I used juju to run the update command on the system
<thedac> ok
<fallenour> juju run --unit <all the units 1 by 1> 'sudo apt-get update'
<fallenour> thedac: do I need to update the model?
<thedac> ah, ok, so that is manual
<fallenour> thedac: screwed the pooch I did, didnt I?
<thedac> fallenour: so, the charms do a lot of work to make sure the whole stack works on upgrade.
<thedac> The charm is looking at the openstack-origin config value and will get very confused if that does not match what packages are installed
<thedac> So the way to upgrade is to change that value.
<thedac> One sec let me get you some docs
<fallenour> thedac: ahh I see, so future lesson, update model? not packages manually?
<thedac> Exactly
<fallenour> ooh shit, I pulled from the wrong repos didnt I?
<thedac> fallenour: See https://jujucharms.com/keystone/269#charm-config-openstack-origin and https://jujucharms.com/keystone/269#charm-config-action-managed-upgrade
<thedac> fallenour: the problem is any future hook that runs will get conflicting information about what release of OpenStack is in play
<fallenour> thedac: so I guess my question is, what is my course of action to fix the issue? I have a feeling it's a juju config command?
<fallenour> thedac: right now it reads as cloud:xenial-pike
<thedac> Oh, well that is correct
<fallenour> thedac: do I need to confirm that for every system in question?
<thedac> For each charm 'juju config $CHARM openstack-origin=cloud:xenial-pike'
<fallenour> thedac: openstack-dashboard (horizon) also reads as cloud:xenial-pike
<thedac> You can check it by not adding the =cloud:xenial-pike
<fallenour> thedac: im not, im simply typing in juju config <application> openstack-origin and pressing [ENTER]
<thedac> ok, yes, that shows what it currently set to.
<fallenour> thedac: do you think it might be mysql that is the issue by chance?
<fallenour> I dont think it is, but at this point, im at a loss.
<thedac> fallenour: I guess I need to know what steps you took and in what order to attempt to help
<thedac> at some point you did set openstack-origin was that before or after running apt commands?
<fallenour> thedac: no, I just ran the update commands. Im actually curious myself how it got set to pike. I built it as ocata initially
<thedac> hmm, so, I suspect this will take a fair amount of manual restarting of services. It is very difficult to know what state things are in
<fallenour> thedac: Do I need to restart every service manually?
<thedac> that is only a guess. What you might want to do is check each service one by one from cli.
<fallenour> thedac: what should I check for?
<fallenour> thedac: and should I reset in any particular order?
<thedac> fallenour: I would start by checking the services. openstack catalog list, openstack image list, openstack server list. etc and see if any of those fail
<fallenour> thedac: um...odd question, but where would I run that? Normally when im working with openstack id run that on a controller, a nova system, or a neutron box, but I dont have openstackpythonclient installed anywhere. Im assuming ill need to install that first and grab the adminrc file?
<thedac> fallenour: yes, you'll need the openstack client, and adminrc file and a host that has access to all the APIs
<fallenour> thedac: ok, next challenge. This has actually been an issue for me for a while. How do I get my ssh key added to the systems? ive tried juju add-ssh-key with no success
<thedac> fallenour: I don't think there is an automated way. You can juju ssh into everything and manually add an ssh authorized key
<fallenour> thedac: I try: juju add-ssh-key admin/conjure-openstack-base-29c /dir/to/id_rsa.pub ; but all I get is a invalid ssh key error
<fallenour> thedac: cant do that. I already tried to juju ssh , but i get a permission denied (public key) error when I try the juju ssh command
<thedac> I am afraid I have never used add-ssh-key command. Any other juju peeps want to run with that?
<fallenour> thedac: what user do you normally log in with?
<fallenour> thedac: I tried the generic "juju" and "ubuntu" as well, since I use MAAS, neither work
<thedac> Usually juju ssh $APP/$UNIT logs in as the ubuntu user
<fallenour> thedac: ideas on using scp to get the key there, then cat it to authorized file?
<thedac> I am looking on how to use the add-ssh-key now. One sec
<thedac> looks like you need to send it as a string: juju add-ssh-key "$(cat ~/mykey.pub)"
<fallenour> wuh?
<fallenour> hang on, I just tried that
<thedac> I just confirmed that works
<thedac> You can list the ssh keys with: juju ssh-keys
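The shape of the working command, with a throwaway key so the expansion itself is runnable here; the juju lines (real invocations from the chat above) are shown commented. The point is that `juju add-ssh-key` takes the key text, not a file path, so the file contents are expanded inline with `"$(cat ...)"`.

```shell
# Throwaway key file standing in for ~/.ssh/id_rsa.pub (hypothetical content).
printf 'ssh-rsa AAAAB3Nza... demo@example' > /tmp/demo_key.pub
key="$(cat /tmp/demo_key.pub)"       # expand the file contents into a string
# juju add-ssh-key "$key"            # adds the key text, not the path
# juju ssh-keys                      # lists keys now authorised on the model
echo "$key"
```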
<fallenour> thedac: now im getting error : "duplicate key" when I use the -m option >.>
<fallenour> thedac: I think it hates me Q___Q
<thedac> That would imply your key is already there
<fallenour> thedac: it says it has my key listed, but when I try to ssh, it says denied
<thedac> fallenour: are you doing "juju ssh $APP/$UNIT" or trying to ssh directly?
<fallenour> thedac: tried: juju ssh <$app/$unit> as well as: juju ssh <username>@<$app/$unit> , permission denied both times
<thedac> fallenour: and this is on the host you performed the deploy?
<fallenour> thedac: yeap. im on my juju controller now
<thedac> hold on :)
<fallenour> thedac: just confirmed the key in my /home/<username>/.ssh/id_rsa.pub was also the sam....permissions o.o
<fallenour> thedac: permissios seem fine o.o
<thedac> If you run 'juju status' where you are now does that work?
<fallenour> thedac: yeap
<thedac> I am very confused why you are not able to juju ssh then
<fallenour> thedac: I know right?!
<fallenour> thedac: item of interest;  according to juju run --unit openstack-dashboard/0 'ls -lisa /home/<username>/.ssh' the directory doesnt exist for me
<fallenour> thedac: but "ubuntu" does exist
<thedac> right, it is not going to create a user for you.
<thedac> juju ssh  is going to log in as ubunut
<thedac> ubuntu even
<thedac> fallenour: so juju run works?
<fallenour> thedac: I think I found the issue for the ubuntu user as well, its only got 600 perms on the authorized_keys file
<fallenour> thedac: yea
<fallenour> thedac: gonna try to change the perms on ubuntu user ssh key, then try to log in as it
<thedac> I think that is pretty normal
<thedac> confirmed 600 is ok
<fallenour> thedac: not to the best of my knowledge. if I understand correctly, you need it to have 644 perms, not 600. It would mean only the user, not the system, could read the file, making it unable to auth your key?
<thedac> I am just saying that is what all mine are set to and I can juju ssh into them fine
<fallenour> thedac: huh...thats never worked for me, but then again, im not a linux god, and shall not argue XD
<fallenour> thedac: o.O
<fallenour> thedac: it works with ubuntu o.O
<thedac> ok
<fallenour> thedac: I feel so lawst x.x
<thedac> fallenour: do you have an .ssh/config that could be getting in the way? Maybe setting user or something
<fallenour> thedac: not that im aware of. note: do NOT OPEN, the /var/log/juju/mysql files, not the droids you are looking for
<fallenour> thedac: mysql error logs look clean, successfully syncing with group
<fallenour> thedac: what systems aside from Haproxy, keystone, and horizon are used during login on the dashboard?
<thedac> That is basically it.
<fallenour> thedac: then what could the issue be? and where at?
<thedac> Although it is going to make a bunch of api calls after that
<thedac> so again, I would verify each api one at a time. Start with 'openstack catalog list' which will validate keystone
<fallenour> thedac: yea, checking that now
<fallenour> thedac: also, I found that the systems do a daily apt download, lesson learned the hard way :(
<fallenour> thedac: idea, how to scp a file from a remote system to your system?
<thedac> Do you have keys setup on the remote system?
<fallenour> YAHTZEEE
<fallenour> thedac: nope, password. always a saving grace, at least one password based system
<fallenour> scp <username>@<ip address>:/path/to/target/file .
<fallenour> worked like a charm
<fallenour> thedac: /sigh
<fallenour> thedac: now its telling me auth url error, even though in the admin rc file its included >.>
<thedac> ok, and the correct creds for an admin user on the cloud?
<fallenour> thedac: yea, keystone is still on v2 right?
<fallenour> and its still OS_AUTH_URL= right?
<thedac> Depends on how you configure it. juju config preferred-api-version
<thedac> juju config keystone preferred-api-version
<fallenour> thedac: yea, its 2
<thedac> ok
<fallenour> thedac: the port wrong maybe?
<thedac> what is the error exactly?
<thedac> and RC looks similar to this? http://pastebin.ubuntu.com/25768571/
<fallenour> thedac: whooo, just spat a lot of angry at me, gimme a moment
<fallenour> thedac: is openstack-dashboard a bad idea place to be running this from? maybe nova instead?
<thedac> I would not install other packages on those units if you can avoid it. What about the machine where you ran juju commands. That is a natural fit.
<fallenour> thedac: yea thats my hope as well
<fallenour> thedac: Ive already also made that system my saltmaster.
<fallenour> thedac: hold the pony! it just kicked me out of my container, and dropped me onto the machine? Did this seriously just do a container escape?
<fallenour> thedac: holy shit batman, it sure did!
<fallenour> thedac: bug issue for later. Current question, im getting python output issues, ideas?
<thedac> We are all over the place. I have no idea where you are at at this point :)
<fallenour> thedac: moved back to controller, side lined the container escape issue for later, moved the admin rc file to the controller, chmod +x on the file to make it executable, now getting same python traceback output when I run: . admin-openrc.sh && openstack catalog list
<fallenour> thedac: juju controller, specifically
<thedac> what does the traceback look like? can you pastebin
<fallenour> thedac: yeap, un momento
<fallenour> thedac: https://paste.ngx.cc/ba
<thedac> try 'env |grep OS_'
<thedac> I don't think your RC is exporting
<thedac> again, it should look something like http://pastebin.ubuntu.com/25768648/
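thedac's check, sketched with a minimal hypothetical RC file: the variables must be `export`ed, and the file must be sourced (`.` / `source`) rather than executed, for child processes like the openstack client to see them. The values here are made up for illustration.

```shell
# Hypothetical minimal openrc file; real ones carry more OS_* variables.
cat > /tmp/demo-openrc.sh <<'EOF'
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=secret
EOF
. /tmp/demo-openrc.sh       # source it; running it in a subshell would not help
env | grep '^OS_' | sort    # thedac's verification step from the chat
```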
<fallenour> thedac: it gave me a lot of OS_ info found in my admin openrc file
<thedac> Then 'source RCFILE'
<thedac> ok
<fallenour> thedac: it took the source command
<fallenour> thedac: same issue on openstack catalog list
<fallenour> thedac: do you think there is an issue with keystone specifically?
<fallenour> are there local commands we can try on keystone directly?
<thedac> I don't think so.
<fallenour> thedac: just an fyi, keystone/0 is refusing ssh connections
<thedac> ?
<thedac> How so?
<fallenour> thedac: yea, getting a "cannot connect to any address: <keystone IP:22>
<thedac> ok, let's take another step back. Can you pastebin 'juju status' for me
<fallenour> thedac: sure
<fallenour> thedac: https://paste.ngx.cc/11
<fallenour> thedac: just an fyi, ignore apache and nfs servers, currently experimenting with those. As for haproxy, ive directly checked each of them individually, according to all of them, they are working properly.
<fallenour> more specifically, they are getting the openstack-dashboard-70, with UP status, on restart, and as current as last restart less than an hour ago.
<thedac> so haproxy as an external charm is not a regular part of our deploy. The api charms themselves install haproxy (i.e. cinder) to use internally
<fallenour> thedac: yea, I added an external one. its pointed at the current haproxy systems.
<thedac> What address are you using to access horizon?
<fallenour> horizon.eduarmor.com/horizon
<thedac> right but what IP does that resolve to? an haproxy or openstack-dashboard?
<fallenour> thedac: it resolves to openstack-dashboard
<thedac> Have you tried going directly to the horizon ip?  10.0.0.51 from the pastebin
<fallenour> thedac: on login though, its the same issue as with DNS, 502 bad gateway
<thedac> ok, can you check the apache logs on the openstack-dashborad unit?
<fallenour> thedac: sure. also, idea, odd one though. wget the api path? on keystone?
<fallenour> thedac: oo
<thedac> Sure, you will get an unathorized but that will tell us it is up
<fallenour> thedac: actually, I take that last back, I got a slightly different error this time.
<fallenour> thedac: actually, I take that last back, I got a slightly different error this time. "this page isnt working ; 10.0.0.51 didnt send any data" on chrome , "Connection reset" on firefox
<fallenour> thedac: checking apache now
<thedac> oh, from the browser that will be less useful. You also need to specify the port 5000
<fallenour> thedac: ok this is weird. check this out: 10.0.0.51 - - [18/Oct/2017:22:35:21 +0000] "POST /horizon/auth/login/ HTTP/1.1" 200 0 "http://10.0.0.51/horizon/auth/login/?next=/horizon/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:52.0) Gecko/20100101 Firefox/52.0" 10.0.0.51 - - [18/Oct/2017:22:37:31 +0000] "GET / HTTP/1.0" 200 946 "-" "-" 10.0.0.51 - - [18/Oct/2017:22:37:34 +0000] "GET / HTTP/1.0" 200 946 "-" "-"
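The confusion here comes from reading raw Apache combined-format lines. A quick sketch of pulling just the method, path, and status out of lines like the ones pasted above (the helper name is made up, not part of any deployment):

```shell
# Hypothetical helper: summarize Apache combined-format access log
# lines as "METHOD PATH STATUS". In that format, field 6 is the quoted
# request method, field 7 the path, and field 9 the HTTP status code.
summarize_access_log() {
  awk '{ gsub(/"/, "", $6); print $6, $7, $9 }' "$@"
}

# Usage: summarize_access_log /var/log/apache2/access.log
# or pipe lines in:  tail -f access.log | summarize_access_log
```

Reading the status column this way makes it easier to see that the unit itself served 200s even while the browser reported a reset.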
<fallenour> thedac: https://paste.ngx.cc/b3
<thedac> hmm, that looks fairly normal
<fallenour> thedac: except that thats my traffic right now
<fallenour> thedac: with the connection reset error
<fallenour> thedac: how can it be a connection reset error and a http 200 ok code?
<thedac> The time stamp matches up with when you were trying to connect?
<fallenour> thedac: looks to be the case
<fallenour> thedac: yea, exactly. I just changed the page I was aiming at, matches with the logs, loading a http 200 946 "-" "-"
<fallenour> thedac: in the logs. Im so confused.
<thedac> But you get a connection reset in firefox?
<fallenour> thedac: because according to this, it loaded the entire page just fine
<fallenour> thedac: yea. this is nuts
<thedac> Let me ask some more questions. Before you upgraded things, this was all working as is?
<fallenour> thedac: yea, it was working perfectly
<thedac> I have not used haproxy in front of openstack-dashboard. So that is the only thing I have some doubts about.
<thedac> It *sounds* like you are hitting the haproxy and it doesn't think any back ends are valid
<thedac> or something like that
<fallenour> thedac: I have an idea, and it looks like you are on the same page as me. I think it might be the haproxy to haproxy system, although it wasnt an issue before, and im curious if that can be the case, specifically because I current....the router o.o
<fallenour> thedac: its gonna go to the switch, on the same lan, or the router, which is the gateway, to load the horizon page?
<fallenour> 10.0.0.0/24 network
<fallenour> thedac: im thinking it might be dropping my connection with a connection reset on the 2nd haproxy system on the return, what are your thoughts?
<fallenour> thedac: OOO JACKPOT!
<thedac> my thought is let's remove complexity and try to go directly to horizion (without the external haproxy)
<fallenour> thedac: theres a pile of errors in the error.log file, all for the keystone system, all failing to login
<thedac> ok, so keystone may be in a bad state.
<fallenour> thedac: https://paste.ngx.cc/56
<fallenour> thedac: what is the chance the system is refusing all connections, not just ssh?
<fallenour> thedac: just tried a reboot command on the keystone system
<fallenour> thedac: its not responding? o.O
<thedac> how did you do that?
<thedac> Let's try 'juju status keystone'
<fallenour> thedac: yeap, already got it:  Unit         Workload  Agent      Machine  Public address  Ports     Message keystone/0*  active    executing  2/lxd/2  10.0.0.4        5000/tcp  (config-changed) Unit is ready
<thedac> And can you ping 10.0.0.4?
<thedac> How about 'nc -vz 10.0.0.4 5000'
<fallenour> thedac: yea, ms just dropped from 18+ down to 2-3.
<thedac> fallenour: just fyi, I am running out of road here
<fallenour> thedac: yea, me too. ok. another idea. juju config keystone/0 ; and then just simply reset the api port to 5000 again?
<thedac> ?
<fallenour> im still getting the same, failed user login, connection timed out
<thedac> did you try the nc command?
<fallenour> thedac: didnt try that. not very familiar with syntax, can you give me an example?
<thedac> nc -vz 10.0.0.4 5000
<thedac> :)
<thedac> That will check if that ip is listening on that port
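If `nc` isn't installed on the unit, bash's `/dev/tcp` pseudo-device can do the same probe with no extra packages. A sketch (the function name is made up; `timeout` guards against a silently dropped SYN that would otherwise hang):

```shell
# Rough equivalent of `nc -vz HOST PORT`: succeed if something is
# accepting TCP connections on HOST:PORT, fail otherwise.
check_port() {
  local host=$1 port=$2
  # open fd 3 to the target; refused/filtered connections fail
  timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Usage: check_port 10.0.0.4 5000 && echo open || echo closed
```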
<fallenour> thedac: its non-responsive to the command. me thinks the system keystone/0 isnt responding to inquiries from other systems?
<thedac> hmm, ok
<thedac> I am really confused because juju status should be telling us it is down
<fallenour> thedac: I know, this is insane
<fallenour> even tried juju config keystone/0 service-port=5000 ; got a warning that it was already configured to 5000
<fallenour> still not working thedac
<thedac> yeah, we are down to machine-level problems at this point. You cannot ssh to the unit at all?
<fallenour> thedac: yea, cant use juju ssh keystone/0
<thedac> How about the host on machine 2.   juju ssh 2
<fallenour> thedac: on it now
<thedac> Try that nc command from on machine 2
<fallenour> thedac: lxc list shows 3 containers
<fallenour> thedac: they are on the same machine together 8O
<thedac> They are contained as lxcs
<fallenour> thedac: keystone and openstack-dashboard, yea, found it with lxc list ; I didnt know they were on the same box together, never realized.
<thedac> That should still work
<fallenour> thedac: no response yet from the nc -vz 10.0.0.4 5000 command
<thedac> ok
<fallenour> thedac: ping works
<fallenour> thedac: firewall issue on the system?
<thedac> no
<fallenour> thedac: yea, dont think so either, tons of inbound established from 10.0.0.46 on a variety of ports.
<thedac> seems that container is not responding. Again I can't explain why juju is unaware of this
<fallenour> nothing for port 5000 though
<fallenour> thedac: lxc restart syntax to restart container ?
<thedac> We could try that. I was holding off yet
<thedac> Try an lxc exec and see if we can get a shell
<thedac> lxc exec juju-492c8a-2-lxd-2 bash
<thedac> Try that
<fallenour> thedac: we got shell
<thedac> ok see if you have network connectivity from within the container
<thedac> ping out or something
<fallenour> thedac: no ping to google
<fallenour> thedac: ping to dashboard good
<thedac> try the db 10.0.0.66
<fallenour> thedac: ping to google working now
<thedac> ok
<fallenour> thedac: ping to db working
<thedac> let's check keystone logs /var/log/keystone/keystone.log
<fallenour> thedac: wait...its intermittent
<thedac> That would be a problem
<fallenour> the 78% packet loss?
<fallenour> thedac: working fine now, really weird
<thedac> Well that indicates we may have bigger problems. If the network is unreliable
<fallenour> thedac: we have logs, massive amounts of logs, check this out: https://paste.ngx.cc/d7
<thedac> ok, but no ERRORS
<thedac> So keystone thinks it is running fine
<fallenour> thedac: yea, looks that way.
<thedac> Any chance we have a duplicate IP for 10.0.0.4? That would explain some of the intermittent behavior
<thedac> or for the DBs ip
<fallenour> thedac: maybe? id have to check maas, but Im not even sure were to look really for that
<thedac> If there are no manually configured hosts and maas controls everything then I am not worried about it. It is if we have any manually configured hosts that it could be a problem
<thedac> So as a couple of baby steps. We could restart apache on the keystone unit. Test. Maybe restart the keystone unit container. Test. But if this is a network issue none of that will help
<fallenour> thedac: its non responsive to the service apache2 restart command
<thedac> It is just hanging?
<fallenour> thedac: super slow, says active(running) now
<fallenour> thedac: its taking a long time
<fallenour> thedac: whats the command again for cpu util?
<thedac> Can you check how many keystone processes are running?  ps -ef |grep keystone |wc -l
<fallenour> thedac: 7
<thedac> ok, that is reasonable
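The `ps -ef | grep keystone | wc -l` count above also catches the `grep` process itself. A common trick is a single-character bracket class, which makes the grep's own command line stop matching. A sketch (the helper name is invented):

```shell
# Count processes matching a name, excluding the grep itself:
# "grep [k]eystone" shows up in ps as "[k]eystone", which does not
# match the pattern "keystone" built from the character class.
count_procs() {
  ps -ef | grep -c "[${1:0:1}]${1:1}"
}

# Usage: count_procs keystone
```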
<thedac> This feels like a performance issue somewhere
<thedac> fallenour: I need to head out. My parting advice is just to check for system load on the physical servers and see if you can isolate where there is load
<fallenour> thedac: alright, ill check it out. also just looked at the haproxy status on the container itself, its fine as well.
#juju 2017-10-19
<fallenour> what api runs on port 17070?
<sri_> I am exploring the possibility of setting k8 cluster using juju/conjure-up on aws. could someone help me find a working solution to set kubernetes cloud-provider as aws?
<sri_> I tried below commands and reboot the containers but containers does not come online anymore
<sri_> juju ssh MACHINE-NUMBER "sudo snap set kube-apiserver cloud-provider=aws"; juju ssh MACHINE-NUMBER "sudo snap set kube-controller-manager cloud-provider=aws"; juju ssh MACHINE-NUMBER "sudo snap set kubelet cloud-provider=aws"
<tvansteenburgh> sri_: if you deploy to aws with conjure-up, the cloud provider is set for you
<tvansteenburgh> sri_: in other words, you don't have to set it manually
<sri_> tvansteenburgh, i tried juju ssh MACHINE-NUMBER "sudo snap get kube-apiserver"; which returns nothing
<tvansteenburgh> sri_: cat /var/snap/kube-apiserver/current/args
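To check just one flag rather than eyeballing the whole args file, a small sed sketch works, assuming the args file holds one `--flag=value` per line (that format, and the helper name, are assumptions here):

```shell
# Hypothetical helper: print the value of --cloud-provider from a snap
# args file such as /var/snap/kube-apiserver/current/args.
get_cloud_provider() {
  sed -n 's/^--cloud-provider=//p' "$1"
}

# Usage: get_cloud_provider /var/snap/kube-apiserver/current/args
```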
<sri_> tvansteenburgh: https://ghostbin.com/paste/xq63r is the output for juju status
<sri_> tvansteenburgh: added the output of juju ssh 1 'cat /var/snap/kube-apiserver/current/args' also in the same link
<sri_> tvansteenburgh: sorry, pastebin is down hence using ghostbin, not sure if it is the best alternate :-)
<ryebot> sri_: Can you share the output of `kubectl get po --all-namespaces` ?
<sri_> ryebot: Actually i just pasted very detailed log of yaml files, pod description etc in the same link
<ryebot> sri_: excellent, taking a look
<sri_> ryebot: just added the output of kubectl get po --all-namespaces at the end of the doc...please help me here
<ryebot> sri_: any idea what's going on with "postgres_pv"?
<sri_> ryebot: i created the volume vol-0445bc1b6acae0622 in aws and referred to it here
<sri_> ryebot: Is it not correct?
<ryebot> sri_: I'm not sure. It looks like it's failing to mount, doesn't it?
<ryebot> namely, "...vol-0445bc1b6acae0622 does not exist"
<sri_> ryebot: thats exactly where i could not trace down the root cause
<sri_> ryebot: when i deploy postgres using ebs via juju, it works fine...it does not work only through kubernetes
<ryebot> sri_: alright, let me look at this a moment
<sri_> ryebot: Thanks a lot...A quick google search results points out that juju is not setting cloud-provider as aws w.r.t https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/346
<ryebot> sri_: I see, that's why you're trying to add the cloud-provider flag
<sri_> ryebot: but that did not help me :-(
<sri_> ryebot: After i set this flag manually, reboot of containers fails and they never successfully start
<sri_> ryebot: Not sure if there is a good way to reboot the cluster using juju
<ryebot> hmm
<ryebot> were you actually able to set the flag?
<sri_> yes
<ryebot> sri_: where do you see that it has been set?
<sri_> ryebot: let me paste those results
<ryebot> sri_: thanks
<sri_> ryebot: pasted those commands and output at the end of the doc
<ryebot> sri_: thanks, looking
<ryebot> sri_: How about kubelet?
<sri_> as per https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/346, i need to reboot those machines after setting these flags but if i do, i could not start anymore
<sri_> I did that for kubelet also in machine 0
<ryebot> alright, hmm
<sri_> ryebot: is there a good way to restart juju machines?
<ryebot> sri_: This is a bit confusing, because if you deployed with the latest conjure-up (e.g., http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/), this should all just work for you.
<ryebot> sri_: you should be able to juju ssh in and `sudo reboot`, at the very least
<sri_> ryebot: Let me do that but i am sure it fails to start...but at least i can show you the error it gives
<ryebot> sri_: that'd be good to see, thanks.
<sri_> ryebot: now, it stuck at `kubernetes-master/0*  waiting   executing  1        13.58.179.90    6443/tcp        (update-status) Waiting to retry addon deployment`
<sri_> ryebot: This happens only after i add cloud-provider=aws and reboot the machine
<ryebot> okay
<ryebot> one moment
<sri_> ryebot: I am not sure which addon it is trying to deploy...shall i add new juju status, juju debug-log to the pastebin?
<ryebot> sri_: juju debug-log would be great
<ryebot> thank you
<sri_> ryebot: juju debug-log only shows that it cannot connect to kubernetes-master
<sri_> ryebot: could be because it is not ready
<sri_> ryebot: any other juju commands to get detailed logs per machine and per app?
<ryebot> sri_: you can try ssh'ing into the relevant machine and looking in /var/log/juju
<ryebot> that should bypass any erroring agents
<sri_> ryebot: kindly check https://ghostbin.com/paste/xq63r now
 * ryebot clicks
<ryebot> sri_: one moment
<ryebot> sri_: Can you check the logs for apiserver?
<sri_> ryebot: Sure take your time... I see failure logs at `  File "/snap/cdk-addons/169/apply", line 93, in <module>`
<sri_> sure
<ryebot> sri_: on the maste node, `journalctl -u snap.kube-apiserver.daemon`
<ryebot> master*
<sri_> ryebot: Pasted new log at the bottom...i think we have the root cause there...
<sri_> Oct 19 15:51:27 ip-172-31-25-174 kube-apiserver.daemon[4246]: error setting the external host value: "aws" cloud provider could not be initialized: could not init cloud provider "aws": error fi
<ryebot> sri_: can you do that with `journalctl -xn --no-pager`
<skay> I'm not seeing a flag I expect from a charm that provides the rabbitmq interface. from reading the src, anything that provides rabbitmq should be setting myname.connected, but I'm not seeing it
<skay> https://github.com/openstack/charm-interface-rabbitmq/blob/master/requires.py#L38
<sri_> ryebot: output added at the bottom
<ryebot> sri_: thanks
<ryebot> dangit, journalctl, I just want untruncated output.
<ryebot> okay, let's try journalctl -o cat -xn -u snap.kube-apiserver.daemon
<ryebot> sri_: or if you know a better way, by all means
<sri_> okay, let me add it
<sri_> its actually truncating and giving only last few lines :-(
<sri_> I redirected output to a tmpfile but its huge...
<sri_> ryebot: untruncated error message seems to be: `Oct 19 16:00:38 ip-172-31-25-174 kube-apiserver.daemon[12360]: error setting the external host value: "aws" cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-08e7af80c7f9dc876: "error listing AWS instances: \"NoCredentialProviders: no valid providers in chain. Deprecated. \\n\\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors
<ryebot> sri_: okay, that's gross
 * ryebot thinks
<ryebot> sri_: Did you deploy with conjure-up, as is done here? http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/
<sri_> ryebot: i-08e7af80c7f9dc876 is actually on us-east-2b and logs shows as us-east-2
<sri_> is that an issue?
<ryebot> sri_: yeah that's weird
 * ryebot thinks
<sri_> I am on macOS and i got conjure-up via brew
<ryebot> ahhh okay one sec
<sri_> ryebot: I deployed using just "conjure-up" command but the UI does not look same as the one shown in the link
<sri_> ryebot: I dont think i see addons support
<stokachu> sri_: https://github.com/conjure-up/conjure-up/issues/1195 is that you?
<sri_> stokachu: yes
<stokachu> sri_: ok let me look at your files you attached
<sri_> ryebot: I am on mac and my version is: conjure-up 2.3.1
<ryebot> sri_: ack, thanks
<sri_> ryebot: not sure it has addons and aws native integration in it or not
<ryebot> Looks like it errored out looking for juju wait?
<sri_> ryebot: sorry, i didn't get you
<ryebot> sri_: nvm, just thinking out loud
<stokachu> sri_: give me a few minutes im working on something then ill look at your issue
<sri_> Sure thanks alot for your time ryebot, stokachu
<sri_> ryebot: You want me to send full journal file in some way?
<ryebot> sri_ I think that's enough for now
<sri_> ryebot: Okay, do you know any workaround for me to use PVs?
<ryebot> sri_: Well, it looks to me like you're on the right track. The problem appears to be the aws credentials got mixed up, and also that you're somehow not getting the native cloud integrations. I suspect stokachu will be able to help with that when he takes a look at your issue.
<sri_> ryebot: Thanks. BTW, what do you mean by `aws credentials got mixed up`? anyway I can dig further?
<ryebot> sri_: Yeah, it seems strange to me that your instances are coming up in different regions. It makes me think there's a couple sets of credentials pointing to different regions.
<sri_> ryebot: But when i do conjure-up, I only give one set of credentials
<sri_> BTW, conjure-up only let us choose us-east-2 or us-east-1
<ryebot> sri_: Ah dang, I'm sorry, I misread that - -2b and -2 are different AZs, not two regions. I think that should be fine, actually.
<sri_> but aws creates instances with a letter at the end, right
<sri_> all my machines are in us-east-2 but with us-east-2a as one us-east-2b as another
<ryebot> sri_: Yeah, I think that should be okay.
<ryebot> Still, the error breaking us looks like some credentials got mucked up somewhere.
<sri_> ryebot: is there a way i can cleanup all the conjure-up history?
<sri_> Even though i delete the .cache folder, it still remembers somehow
 * ryebot thinks
<ryebot> sri_: I think you want to `snap remove conjure-up`, `rm -rf ~/snap/conjure-up`, and `rm ~/.local/share/juju` if it's there, at the least
<ryebot> ah sheesh, you're not using snaps, sorry
<sri_> ryebot: i am on mac so snap wont work right
<ryebot> I'm not sure what the brew cleanup looks like, tbh. stokachu might know. I'd also look for ~/.local/share/juju and wipe it (if you're not otherwise using it)
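The cleanup ryebot describes can be sketched as a small function. The paths are assumptions (the snap-style locations mentioned above plus the conjure-up cache); the `prefix` argument exists so it can be pointed at a scratch directory first rather than at a real `$HOME`:

```shell
# Sketch: wipe local conjure-up/juju client state under a prefix.
# Paths are illustrative; run against a scratch dir before $HOME.
cleanup_conjure_state() {
  local prefix=${1:-$HOME}
  local d
  for d in "$prefix/snap/conjure-up" \
           "$prefix/.cache/conjure-up" \
           "$prefix/.local/share/juju"; do
    if [ -e "$d" ]; then
      rm -rf "$d"
      echo "removed $d"
    fi
  done
}
```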
<sri_> okay
<sri_> ryebot: if i use conjure-up edge, cloud-provider config should be aws right
<ryebot> I would expect edge to have it, though I don't know why stable wouldn't.
<sri_> ryebot: I mean, If i follow the version from http://blog.astokes.org/conjure-up-dev-summary-aws-cloud-native-integration-and-vsphere-3/ and setup cluster and run `juju ssh 1 "sudo snap get kube-apiserver cloud-provider"` it should return aws?
<ryebot> sri_: I think that's correct, though I'd replace `1` with `kubernetes-master/0`
<sri_> Yes got you...thanks
<ryebot> Does juju create new subnets when it's deployed to a vpc?
<wpk> No, it uses the ones that are in the VPC
#juju 2017-10-20
<fallenour__> o/
<fallenour__> juju didnt build my openstack ceph-osd systems as expected
<fallenour__> I ran the juju config ceph-osd=/dev/sdb,/dev/sdc,/dev/sdd , etc command, but I dont see a change yet for the application. does it require a reboot, or do I need to rebuild?
<fallenour__> changed the config file with juju config ceph-osd osd-devices=/dev/sdb,/dev/sdc,/dev/sdd, etc, but I dont see the changes yet. do I need to reboot something, or am I going to have to rebuild?
<burton-aus> wallyworld ping
<burton-aus> axw ping
<axw> pong
<axw> burton-aus: what's up?
<burton-aus> axw would you help to take a look at this PR? https://github.com/juju/juju/pull/7949
<burton-aus> axw so I can merge it.
<axw> burton-aus: LGTM
<burton-aus> axw OK, and do I have the green light to proceed the release process, given that the unit test failed again in a re-run, as described in my email sent out?
<burton-aus> axw I'm about to launch the release CI job now, so this is the last chance to abort.
<axw> burton-aus: yeah, that failure just needs a backport of the fix for go 1.9
<axw> burton-aus: it only affects tests
<burton-aus> axw OK, release CI job triggered.
<Soumaya> Hi, I have a query related to Juju deployment. We are working on integration of an EPC bundle on opnfv and we have to run the juju client on alpine docker.
<Soumaya> Is there any way to run the juju client on alpine, or any api available to bootstrap a new controller from alpine
<Soumaya> I have gone through the python juju api but I have not seen any bootstrapping api.
<junaidali> Hi everyone, is it necessary to set python3 as default for juju-wait to work?
<junaidali> I'm facing the following error when `juju wait` is run: AttributeError: 'module' object has no attribute 'which'
<junaidali> at this line -> https://git.launchpad.net/juju-wait/tree/juju_wait/__init__.py#n102
<stub> junaidali: How are you installing juju-wait? The deb package and snap should not have that problem. juju-wait does require Python3.
<junaidali> stub: I'm installing with pip
<junaidali> Haven't tried with deb/snap. I will try it now
<junaidali> Deb is working fine. Thanks
<junaidali> stub: I have installed the package with this ppa -> 'ppa:stub/juju'. Let me know please if there is any other ppa to use.
<stub> junaidali: yes, that is the correct PPA. Maybe it should go into one of the Juju ones one day.
<stub> junaidali: Pip will work, but you need to use pip3 to install it with the correct Python version. Or into a Python3 virtualenv.
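The `AttributeError: 'module' object has no attribute 'which'` above is the signature of juju-wait running under Python 2: `shutil.which()` only exists in Python 3.3+. A quick way to confirm what a given interpreter provides:

```shell
# Under Python 3 this prints the path to sh; under Python 2 the same
# import would raise AttributeError on shutil.which.
python3 -c 'import shutil; print(shutil.which("sh"))'
```

This is why installing with `pip` (Python 2) broke while the deb, snap, or `pip3` into a Python 3 virtualenv works.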
<junaidali> Thanks stub:
<EnvY> hello, i am working on juju to scale my application. i want new nodes to be deployed depending on incoming traffic to my charm. can anybody help please
<pmatulis> EnvY, you want to be able to add more units if necessary?
<EnvY> while following the documentation i find that using a load balancer we can scale up or down
<EnvY> but it means that those instances are deployed on lxd containers and the load balancer is balancing those resources
<EnvY> haproxy
<EnvY> i have followed this link to deploy
<EnvY> https://jujucharms.com/docs/1.25/charms-scaling
<EnvY> and with add-unit we are adding instances
<pmatulis> EnvY, eww, you are using juju 1.25 ?
<EnvY> juju add-unit -n5 mediawiki
<EnvY> 2.0.2
<pmatulis> EnvY, what operating system? if Ubuntu, and you're starting out, best go with 2.3, which will soon be released
<EnvY> yeah it ubuntu 16.04
<EnvY> is this option available in juju or do we have to use OSM
<EnvY> i think i get the answer
<magicaltrout> hello lovely jujuers
<magicaltrout> k8s stack manual deployment all green but kubectl get nodes is empty
<pmatulis> EnvY, what option? 2.3?
<magicaltrout> any ideas?
<EnvY> pmatulis, i will definitely switch to latest stable release. i get the solution of my question of scaling
<pmatulis> EnvY, ok good
<pmatulis> EnvY, 2.3 will be stable soon so you might want to start with the beta, at least for testing
<EnvY> ok
<pmatulis> EnvY, it has improvements in storage (dynamic) and the ability to relate applications across different models/controllers/clouds
<EnvY> Oh, thats very good
<admcleod_> is there a variable for timeout for machine provisioning?
<fallenour> getting an error: backups are not supported for hosted models. havent seen that before. any thoughts?
<fallenour> Potential Bug: https://bugs.launchpad.net/juju/+bug/1576184    Is there a status update on this bug other than whats posted here?
<mup> Bug #1576184: "juju create-backup" fails if you're not operating on the admin model  <docteam> <ensure-availability> <juju-release-support> <usability> <juju:Triaged> <https://launchpad.net/bugs/1576184>
<fallenour> Im operating as admin, with admin on all models
#juju 2017-10-21
<bdx> conjure-up is having ultimate failures with lxd/localhost provider right now
<bdx> it will only accept a lxd from snap
<bdx> and the darn snap lxd is not really ready yet
<bdx> well, I mean, I havent been able to get it to work
<bdx> tried all day
<bdx> and conjure-up forces you to use it
<bdx> blah
<bdx> this https://imgur.com/a/xMyRC
<bdx> wasnt working for me for some reason earlier
<bdx> trying on a fresh system now
<bdx> things seem to be going well so far
<bdx> but I'm not to the point where I was hitting failure earlier, I'll file a bug if it persists this round through
<stokachu> bdx: uhm what?
<stokachu> You can use cloud providers or openstack
<stokachu> And stgraber would love to see a bug report on why snap lxd isn't working
<bdx> stokachu: yeah, I wasnt able to debug as much as I wish I could have
<bdx> Im gathering the remaining scraps trying to put something together
<bdx> http://paste.ubuntu.com/25782412/
<bdx> ^ was one of the issues
<bdx> while trying to debug one of the others
<bdx> something tells me I didnt clean out system lxd and something was interfering/overlapping
<bdx> but yeah ... I just couldn't seem to get snap lxd working
<bdx> it works fine for you?
<bdx> stokachu:^?
<bdx> this is the other one I was hitting http://paste.ubuntu.com/25782424/
<bdx> I tried totally removing all the snaps, and all the system level stuff
<bdx> and start from 0
<bdx> but i kept having issues
<stokachu> hmm
<stokachu> yea i just launched a container
<bdx> ok
<stokachu> bdx: im in #ubuntu-server and just asked stgraber to look at the pastebin
<bdx> guessing it had something to do with cruft somewhere getting in the way
<stokachu> yea that's a new error ive not seen before
<bdx> Im not hitting it anymore
<bdx> on my new machine
<bdx> I was just able to successfully launch one too
<bdx> gd
<stokachu> bdx: out of disk space?
<stokachu> thats from stgraber
<EnvY> i want to design a charm. And it contains a snort IDS tool. I did not find any charm that installs an application using apt-get. can anybody help
<zeestrat> EnvY: check out charmhelpers, they have a bunch of nice tools for just such things. On mobile so don't have the docs, but something like "from charmhelpers.fetch import apt_install"
<EnvY> zeestrat, thank you. can you share a charm that i can build, so i can then look at the charm files to learn how to write one? In the charm store they have built charms for different applications
<EnvY> zeestrat, thank you seem like i am getting direction
<EnvY> i am following https://pythonhosted.org/charmhelpers/getting-started.html
<EnvY> My directory structure from "charm create" is different from the one available on the link. e.g. there is no file called hooks.py once i create the charm
<zeestrat> EnvY: the charmhelpers have moved their repo to GitHub recently and it looks like the docs on pythonhosted arent up to date. I'm out, but can probably point you in the right direction later.
<zeestrat> Basically, the current framework for developing charms uses something called reactive and layers. The basic layer has some info https://github.com/juju-solutions/layer-basic/
<zeestrat> If I forget, I'm sure you can ping rick_h to point you to some intros to charm development
#juju 2017-10-22
<siraj> while deploying a custom charm i get an error
<siraj> cannot parse URL "./wcharm": series name "." not valid
<siraj> Although in metadata.yaml has "series": defined as xenial
<siraj> got the solution. I provided a wrong path to the local charm
<saadi> unable to remove an application that has an error status using remove-application or remove-unit. Finally i deleted the container, and juju status is still showing that application with the error. how do i remove it?
#juju 2019-10-14
<timClicks> new (draft) guide up for anyone interested in what happens under the hood when `juju deploy` is executed (https://discourse.jujucharms.com/t/2209). critique very welcome - please add comments to the thread in discourse.
<babbageclunk> wallyworld: looks like the snap build didn't work because the built tarball wasn't ready yet? I
<babbageclunk> oops
<babbageclunk> I'm kicking it off again
<stickupkid> CR anyone, this ensures that we can reuse a bootstrapped controller without having to create one https://github.com/juju/juju/pull/10729
<wallyworld> babbageclunk: any luck narrowing down the user permission? good to get confirmation that the approach works. we can close that other PR now right?
<babbageclunk> wallyworld: I don't think it can be any narrower unfortunately - because the extend task has no target I think it's the top level permission that gets applied.
<babbageclunk> yes, I'll close Pedro's PR
<wallyworld> ok. i guess it's only relevant if they use a root disk size constraint and it's out of our hands. were you able to find the api to query to see if that perm has been granted?
<babbageclunk> bit of a head-desk moment when we realised it was creating the VMs with 4M of ram. But good to have it solved.
<wallyworld> yeah
<wallyworld> always use units of measure
<babbageclunk> There's a govc command that lists permissions, so I'm cribbing from that,
<wallyworld> \o/
<anastasiamac> wallyworld: as discussed yesterday when U get a chance PTAL https://github.com/juju/juju/pull/10730 - renaming option to --client-only and introducing --controller-only
<wallyworld> ok
#juju 2019-10-15
<wallyworld> thumper: PR to move proxy update worker to common manifold set for use in caas and iaas https://github.com/juju/juju/pull/10731
<anastasiamac> wallyworld: and PTAL https://github.com/juju/juju/pull/10732 - fixes for 'regions' command
<wallyworld> ok, real soon
<wallyworld> anastasiamac: worries about the yaml change since we try not to do that. remind me, was that something we said was ok this time?
<anastasiamac> wallyworld: so yaml has to change coz we previously did not have controller side info
<anastasiamac> wallyworld: the most yaml changes that r concerning r the ones that feed back into a correpsonding command
<anastasiamac> for example if we had to update regions using this output
<wallyworld> true, although people do script stuff also
<anastasiamac> however... this is more of a show than a list
<wallyworld> but yeah, not much choice here
<anastasiamac> wallyworld: i srsly doubt many ppl scripted that but we are adding more info that did not exist so yes, have to change yaml...
<wallyworld> anastasiamac: lgtm, ty
<anastasiamac> wallyworld: \o/
<wallyworld> babbageclunk: what's status on vsphere role checK? still having "fun with groups"? i prefer "fun with flags"
<babbageclunk> wallyworld: still fun with groups - I think I've nearly cracked it
<wallyworld> \o/
<babbageclunk> wallyworld: ok, I think I've got it - went down a few wrong alleys first. Also there's the weird thing where if you don't have the read permission you also don't have permission to check to see if you have the permission, so you have to use the nopermission fault as the answer.
<babbageclunk> I'll finish coding it up tomorrow - have to go to choir.
<wallyworld> ok, ty
<anastasiamac> wallyworld: m inclined to lmit lists and show to just list and show rather than also do a diff... i feel like it's a different concern
<wallyworld> porbs, just on a call atm
<anastasiamac> wallyworld: besides we r only doing it in one, human readable format as it makes no sense in other formats...
<anastasiamac> wallyworld: ack
<stickupkid> is there a way to make the `--debug` flag be true all the time via an env var? I want it for some CLI integration testing, and it's very verbose to do that in every place
<stickupkid> jam, thoughts?
<achilleasa> stickupkid: how about 'alias juju=juju --debug' ?
<stickupkid> sick
<stickupkid> that might work
<stickupkid> achilleasa, nice, that worked
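One caveat with the alias approach: aliases are only expanded in interactive shells, so it won't help inside test scripts. A shell function behaves the same way and works in both contexts (a sketch; `command` bypasses the function so it can call the real binary):

```shell
# Wrap juju so every invocation carries --debug, in scripts as well as
# interactive shells. `command juju` skips this function and resolves
# the actual juju executable on PATH.
juju() {
  command juju --debug "$@"
}
```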
<stickupkid> anyone want to review the left overs from CI day, this ensures that we handle logging and bootstrap reuse correctly https://github.com/juju/juju/pull/10729
<stickupkid> it actually fixes an issue in one of the test suites, where underscores in model names failed to bootstrap
<nammn_de> manadart: controllers do count as agents? If I want to run upgrade steps on the controller and machines, in general all agents, do I add those to targets? []Target{AllMachines, DatabaseMaster} Or would "AllMachines" be correct? I know we talked about this briefly yesterday..
<manadart> nammn_de: AllMachines should do it.
<nammn_de> got it thanks
<nammn_de> did any of you ever come across this err message? 'juju.worker.dependency "uniter" manifold worker returned unexpected error: subnet'
<achilleasa> manadart: can you take a look at https://github.com/juju/juju/pull/10734? I don't think it causes any conflict with hml's changes but I will wait till her PR lands to merge this in.
<manadart> achilleasa: Yep. Just grabbing a bite.
<manadart> achilleasa: Approved. One comment.
<hml> manadart:  can you take a look at a test for me pls?  https://jenkins.juju.canonical.com/job/github-make-check-juju/1563/consoleText  TestReplaceSpaceNameWithIDEndpointBindings - the data isn't getting into the db for endpointbindings so it fails.  I'm just not sure what's wrong with the bson insert.
<hml> manadart: https://github.com/juju/juju/pull/10718/files#diff-dd4ab00ea08f5a35293ab48af332ba38R3958
<hml> manadart:  another set of eyes would be great
<manadart> hml: Looking.
<nammn_de> manadart do you know what "JUJU_K8S_MODEL" in the makefile refers to?
<manadart> nammn_de: It looks like an env var that needs to be set as the model that "make local-operator-update" will run against.
<rick_h> manadart:  heads up, in talking with thumper last night this topic came up and might be of great interest to the ATOS conversations https://discourse.jujucharms.com/t/little-known-feature-workload-juju-communication/2218
<rick_h> heh looks like you've already seen/replied in it. cool
<manadart> rick_h: I also posted it to the Atos Slack.
<rick_h> manadart:  <3
 * manadart got you yo.
<manadart> hml: I think you have to add a model-uuid to the test docs that you create. None of them are found by the test.
<hml> manadart:  okay, i'll take a look,
<hml> I thought _id was sufficient
<nammn_de> manadart: just to make sure for bug report https://bugs.launchpad.net/juju/+bug/1848149 , does an error occur while deploying? Was the err msg.: "version string generation failed"?
<mup> Bug #1848149: Charm Deployed or Upgrade from Dir Ignores "version" <juju:Triaged> <https://launchpad.net/bugs/1848149>
<hml> stickupkid: reworded the CMR scenario, hopefully more clear
<manadart> nammn_de: There is no error. One way gets a version and the other does not.
<stickupkid> ta
<nammn_de> manadart: ahh, okay interesting. Maybe both bug reports were of a different nature. I will update that, thanks
<nammn_de> hml: do you have the PR at hand which I should run QA?
<hml> nammn_de: they should be linked from the trello cards in general.   here's the specific one in this case: https://github.com/juju/juju/pull/10718
<nammn_de> hml: ah right, thanks!
<nammn_de> @jam you were saying we should add an acceptance test because of pathing. Isn't there one used? https://github.com/nammn/juju/blob/088736ee1fea11a6d85eb4511e26239cc688dc43/tests/suites/cli/use_local_charm.sh#L43
<rick_h> nammn_de:  the idea was to test deploying a charm now in the cwd, e.g. ./path/to/charm
<rick_h> nammn_de:  along with the test for juju deploy .
<rick_h> nammn_de:  so we'd have both scenarios vs just the one
<nammn_de> rick_h: you mean adding a test with a relative path? Current test uses an absolute one
<rick_h> nammn_de: yes
<nammn_de> rick_h: ahhh I misunderstood initially. Got it
<nammn_de> rick_h hml: i just configured guimaas and would love to help QA the PR but a lot of things are new to me (Space bindings, CMR, bindings) someone has time to give me an introduction/give me links to some things to read up on?
<nammn_de> just ping me if that's the correct documentation for the current system: https://jaas.ai/docs/deploying-advanced-applications#heading--deploying-to-network-spaces
<killcity> Hi all. Does anyone know what else i need to do when trying to set config values to env vars on a centos box via an install hook? they dont seem to be populating. works fine on ubuntu.
<killcity> ie: MYVALUE=$(config-get myvalue)
<killcity> the rest of the hook that doesnt rely on config values/vars works fine. ie: yum installs, etc
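A hedged sketch of what that hook fragment could look like with logging added, so the centos behaviour can be diagnosed; `myvalue` is the example key from above and `log_config` is a made-up helper name. `config-get` and `juju-log` are standard charm hook tools, available only while a hook is running:

```shell
# Hypothetical helper for an install hook: read a charm config key,
# log the raw value to the unit log, and echo it for assignment.
log_config() {
    key="$1"
    val=$(config-get "$key")
    juju-log "config-get $key returned: '${val}'"
    printf '%s' "$val"
}

# usage inside the hook:
# MYVALUE=$(log_config myvalue)
```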
<rick_h> nammn_de:  will make time tomorrow we can run through stuff
<pmatulis1> when i use option `--metadata-source` with the 'bootstrap' command i don't see where this value is exposed. should it not show up in the output of the 'show-controller' command?
<rick_h> pmatulis1:  I think it's controller-config?
<killcity> is there a chat log anywhere we can reference?
<killcity> anyone know if they are planning on moving to slack instead?
<killcity> irc is cool and old school, but...
<rick_h> killcity:  we're not looking to move to slack anytime.
<rick_h> killcity:  let me see if we have logs. We used to but I can't recall the url.
<rick_h> killcity:  https://irclogs.ubuntu.com/2019/
<rick_h> hml:  looks like jenkins is back up and happy-ish
<hml> rick_h: ty
 * rick_h rekicks last build
<killcity> any reason in particular you dont want to move to slack? not having to ask the same question over again because someone already asked it (because there is a convo history) is huge.
<killcity> rick_h thanks
<rick_h> killcity:  mainly not a fan of slack myself and I've avoided it so far personally. I understand not everyone agrees. I think that the main thing is it's standard for canonical projects to have a public IRC channel on freenode and it would be a large undertaking to migrate everyone/everything.
<killcity> looks like #juju isnt being logged.
<pmatulis1> rick_h, i forgot about controller-config. these two commands have always been confusing. anyway, metadata-source is in neither of them
<rick_h> pmatulis1:  :( ok
<rick_h> killcity:  oh, I saw it one one of the days.
<rick_h> killcity:  https://irclogs.ubuntu.com/2019/10/15/%23juju.txt (today)
<killcity> i just dont see the benefit to irc over slack, but fair enough. this is coming from a guy that used to work at concentric.net
<rick_h> killcity:  understand. It's just not something I've pushed on because of my own slack reluctance I guess. I'm sure there are others that would <3 it
<rick_h> killcity:  but hey, what can I help you with?
<rick_h> oic, you've got a charm hook question above. Hmmm, there's nothing specific on that I can think of. Just bash at that point.
<hml> pmatulis: doesnât metadata-source get turned into agent-metadata-url and image-metadata-url in model-config?
<killcity> Thanks Rick, I appreciate the response and thoughts. Im excited to move all of our provisioning over to maas and juju from stacki. The one thing Im trying to figure out is how to set get-config values to an env when deployed onto a centos box. seems to work fine with ubuntu (tested on bionic). not so much on centos7. im sure im doing something
<killcity> wrong.
<rick_h> killcity: gotcha, welcome to the party
<killcity> hmmm. ill have to test a little more then. just wanted to make sure there wasnt anything special i needed to do for centos :)
<killcity> thanks
<pmatulis> hml, no. i get null values for each of those
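One way to check hml's theory across both models (a sketch; `agent-metadata-url` and `image-metadata-url` are real model-config keys, and the model names are those of a default bootstrap, so adjust as needed):

```shell
# Hypothetical helper: print the two metadata-related model-config
# keys for each model, to see whether --metadata-source populated them.
check_metadata_urls() {
    for m in controller default; do
        for k in agent-metadata-url image-metadata-url; do
            echo "$m $k: $(juju model-config -m "$m" "$k")"
        done
    done
}
```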
<rick_h> killcity:  hmm yea, I can't think of anything. I guess can you log/dump debug traces in there to see if it's set, if the value is coming out correctly from the config-get call?
<killcity> ah good idea. ill add some debugging in there. thanks :)
<killcity> btw, this channel seems a lot more active than #maas.
<killcity> which is great.
<rick_h> killcity:  cool, glad we're alive and kicking here. MAAS folks are definitely a bit quieter as they're a smaller team in a tighter set of timezones (we go across almost all of them)
<rick_h> killcity:  but I will say the MAAS stuff is awesomesauce and they've got a discourse going as well which is worth hitting up if you hit questions/etc.
<killcity> saw that, so far its looking really nice and stable. the biggest hurdle is understanding the workflow, etc. which isnt a huge deal. most of the docs look very thorough.
<nammn_de> rick_h: yeah thanks!
<hml> pmatulis:  weird... you checked all models?
<rick_h> nammn_de:  k, have a good night and feel free to ask around of EU folks in the morning and when I come online we can sync up and catch up on any gaps/knowledge share that's helpful
<nammn_de> rick_h: you as well, will do!
<pmatulis> hml, i checked both the 'default' and 'controller' models
<pmatulis> https://bugs.launchpad.net/juju/+bug/1848241 <--- hml
<mup> Bug #1848241: [bootstrap] metadata-source option is not exposed <juju:New> <https://launchpad.net/bugs/1848241>
<hml> pmatulis:  can you please add the version of juju to the bug
<pmatulis> hml, done
<killcity> does anyone know how to set the vlan and create bond via juju commands + maas?
<killcity> so when i deploy a specific charm, any unit under that application gets the same network configuration?
<rick_h> killcity:  so the idea is that you create the network setup in MAAS. There's docs around creating bonds/etc
<rick_h> killcity:  as the underlying cloud it knows the network topology/etc
<rick_h> killcity:  then in Juju, it'll read that data from MAAS and you'll end up doing things like using spaces, and endpoint bindings onto spaces, to manage which networks applications are talking through/over
<rick_h> killcity:  welcome to the "deep end"
<rick_h> killcity:  some reading material from the Juju perspective - https://discourse.jujucharms.com/t/network-spaces/1157 and https://discourse.jujucharms.com/t/deploying-applications-advanced/1061
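As a flavour of what rick_h's description ends up looking like on the CLI (illustrative only; the space names `db-space` and `public` and the endpoint names `website`/`db` are made up for the example):

```shell
# Bind all endpoints of an application to one space, or bind
# individual endpoints to different spaces, at deploy time:
deploy_examples() {
    juju deploy mysql --bind db-space
    juju deploy haproxy --bind "website=public db=db-space"
}
```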
<killcity> Excellent, thanks Rick :)
<killcity> The last hurdle we have is understanding how to set specific software raid configurations on certain models of hw. some of our nodes dont have raid controllers
<rick_h> killcity:  yea, there's some disk management config you can do on machines in MAAS (though I've not tweaked that myself)
<killcity> I saw some custom stuff done in a single posting somewhere, but theres not any reference material ive found concerning software raid in particular.
<killcity> Btw, I ran my test deploy again and vars were working fine :)  It was an issue with the script itself
<rick_h> killcity:  woot, glad to hear
<killcity> *facepalm...
 * killcity facepalms
<rick_h> killcity:  interesting google search https://docs.mirantis.com/mcp/q4-18/mcp-deployment-guide/deployment-customizations-guidelines/maas/custom-disk-layout.html
<rick_h> has a section "To define software raid"
<rick_h> might be worth a bug to the maas docs if that's what you're looking for and couldn't find it in the maas docs
 * rick_h has to run and get his son from school, back in a bit
<killcity> rick_h thanks! i will definitely look at those docs.
<roadmr> this is a software raid! put all your floppies and CDs in the bag and don't get any funny ideas about downloading any apps... we'll know.
<pmatulis> gulp
<killcity> does anyone know if maas/juju require machines to be set on a specific space before they can be used or can you have the space set at the time of deployment?
<killcity> i would prefer to have a bunch of hw sitting there unallocated and not explicitly added to a specific space so they can land on whatever subnet they need to be on.
<thumper> no, I don't think that is a prerequisite
<killcity> im trying to better understand how to automatically enable bonding and a vlan subinterface on deployment. all our nodes receive trunked/tagged interfaces, so we can put them on whatever vlan we choose when they are built.
<killcity> i can configure nodes via MAAS ui for this, but not sure how to make it happen automatically during deployment...cant seem to find any docs on it.
<killcity> (also maas cli)
<thumper> my understanding is that you configure how you want the network devices set up in maas, and then when you deploy with juju they get set up that way
<thumper> effectively it is the maas instantiation of the machine that sets up the networking
<thumper> there isn't another step
<thumper> as far as I understand it
<babbageclunk> wallyworld: tested out the precheck, all working - just writing a unit test now.
<thedac> thumper: rick_h: Nevermind on the juju run problem. Test on latest 2.7 edge looks good. I'll let you know if I ever see it again.
<wallyworld> babbageclunk: just got out of morning meetings. i'll review once done, ty
<anastasiamac_> wallyworld: PTAL https://github.com/juju/juju/pull/10736 - list clouds changes
<anastasiamac_> wallyworld: yes all is the breakage and so is non display of clouds that do not have creds registered. both were discussed and insisted on in paris by both jam and rick_h
<anastasiamac_> wallyworld: the removal of color was discussed and agreed on in paris too coz it was obscure for users and has no value now
<killcity> thumper thanks. ill see if i can figure out how to create the different profiles for interface configs in maas.
<rick_h> thedac:  ok cool ty
<anastasiamac> wallyworld: rick_h : updated PR to only filter public clouds based on whether creds are present... PTAL?
<anastasiamac> really want to land this asap as there is more in pipeline and no time left
#juju 2019-10-16
<wallyworld> kelvinliu: hpidcock: free now
<kelvinliu> yep
<thumper> wallyworld: https://github.com/juju/juju/pull/10738
<wallyworld> looking
<wallyworld> another one down, many to go
<thumper> at least the number is finite
<babbageclunk> quick, everyone start adding more workers
<kelvinliu> wallyworld: got this pr to enable `cluster` scope rbac, +1 plz thx https://github.com/juju/juju/pull/10739
<wallyworld> ok, looking in 5
<timClicks> Feature Highlight: Cross-model relations https://discourse.jujucharms.com/t/feature-highlight-cross-model-relations/2228
<jam> rick_h: nammn_de: I would actually say "../relative/path/not/under/cwd"
<wallyworld> thumper: https://github.com/juju/juju/pull/10740
<jam> I could go either way, but the goal is to make sure we are looking up "what .git is containing the charm" and *not* "what .git contains CWD"
<wallyworld> kelvinliu: sorry for delay, lgtm
<kelvinliu> wallyworld: np thx
<thumper> babbageclunk: thanks for fixing my typo in PR
<thumper> wallyworld: I'm assuming you tested against IAAS models ?
<babbageclunk> :) I figured it was simpler to fix it than point it out
<thumper> wallyworld: code changes look good
<wallyworld> thumper: yeah, sorry, i did but forgot to mention, will update PR
<thumper> I'm sure this will be helpful in tracking issues with k8s controllers
<wallyworld> yup, just had to get shit working in the first place
<kelvinliu> wallyworld: still there?
<wallyworld> depends who's asking
<kelvinliu> wallyworld: aha, can i get some idea from u about the addrs of binding stuff
<wallyworld> sure, standup
<kelvinliu> thx
<timClicks> jam: thanks for responding to that bug report (I didn't mean to imply that I didn't know what juju trust does!)
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10742 - 'juju credentials' changes
<nammn_de> manadart achilleasa mornin guys, let's say I have a file in packages (core, apiserver, worker) which uses a common function to create a logfile, where would you guys put the function? Afaict currently those things are in the juju utils. Not sure about adding one additionally there though
<manadart> nammn_de: Jump on HO?
<nammn_de> manadart: sure
<nammn_de> jam rick_h stickupkid does this test make sense? https://github.com/juju/juju/pull/10744/files Wanted to open that one quick to discuss whether i understood correctly what was wanted
<rick_h> nammn_de:  close, I think the test_deploy_local_charm_revision is missing the sha/version check though
<nammn_de> rick_h: great, added. Let's wait for stickupkid's pr first, makes testing easier
<rick_h> nammn_de:  rgr
<stickupkid> just need someone to OK it :D
<rick_h> stickupkid:  done
<nammn_de> stickupkid: rick did
<nammn_de> :D
<rick_h> stickupkid:  sorry, started but didn't finish it last night :(
<rick_h> stickupkid:  looking at the refactor out deb one now
<rick_h> stickupkid:  but have questions on what a property.file is
<stickupkid> rick_h, it's like a env var file that we can pass around
<stickupkid> rick_h, it allows them to be passed around between jobs
<rick_h> stickupkid:  ok
<achilleasa> manadart: if I juju add-machine, I do hit the provider address change path in the instance poller (going from 0->1 addr); no need to tweak lxd profiles ;-)
<manadart> achilleasa: Great.
<hml> achilleasa:  I won't be able to get rid of NewBindings completely without ripping apart the bundle export code which uses EndpointBindings() from the description package.  Also, not convenient in the upgrade step.  I will enhance the constructor to handle the cases you've brought to light
<stickupkid> nammn_de, when you get back, my PR is landed :D
<manadart> hml: Can you mark any of achilleasa's comments as resolved if they are addressed?
<hml> manadart: I did that yesterday... there was one i couldn't for some reason, regarding the file name change
<manadart> hml: OK. Ta.
<hml> manadart: updating a few more with current status, though they are not resolved
<achilleasa> hml: sure thing. We can create a followup card to patch description at a later stage
<manadart> hml: Made a first pass over the space ID patch. Got some thoughts on the new Bindings type which I'll elucidate tomorrow.
<hml> manadart:  i'm in the middle of changes now... that's a bit late from my point of view.  any chance of a conversation quickly?
<hml> manadart: i know itâs late for you
<manadart> hml: I have a ride waiting downstairs. There can always be a follow up if you think you'll see to landing it in your day. It's just re-org stuff.
<hml> manadart:  I have more Bindings changes almost done.  EndpointBindings() is becoming a constructor based on conversations with achilleasa
<hml> manadart:  so if there's a big change on top of that... i might be wasting my time today
<hml> manadart: understand the ride part.  we'll see how it goes tomorrow
<manadart> hml: That'll be fine. Got to run.
<nammn_de> stickupkid: great thanks :D
<nammn_de> stickupkid: still around? Got a minute?
<nammn_de> rick_h: coming back to the acceptancetest https://github.com/juju/juju/pull/10744 , it shows that we expect charm version to be empty if no git was found. Right now it is working as expected. Given the bug report from manadart, we can tell it is not. Should we change the debug message to be of warning/error?
<rick_h> nammn_de:  no, I don't think we want to stop the user on that
<nammn_de> rick_h: if thats the case we could elevate the message to be a warning (?)
<nammn_de> right now its showing on debug level "no version system found, charm version is empty"
<rick_h> nammn_de:  right, but do we know for sure if it's that way because we don't see a .git?
<rick_h> nammn_de:  I mean do we know it's an error vs we got it wrong somehow?
<nammn_de> rick_h: i cannot say 100% but from the acceptance tests and debugging i found this to be the case in the cases i tested.  AFAICT we use this function to generate the archive https://github.com/juju/charm/blob/v6/charmdir.go#L249 and then in there we create the version string https://github.com/juju/charm/blob/v6/charmdir.go#L419
<nammn_de> if one could add a test where we would be outside of the expected boundaries we can be more sure
<nammn_de> I could be wrong though, if we use another lib/method instead of ArchiveTo, but could not find so. At least not for the use-case of deploy local charm "juju deploy ~/charms/ntp-nogit --debug"
<rick_h> nammn_de:  no, I think you're on track. I just wanted to make sure that we're not erroring when we "don't know" vs we're sure it's a real error
<nammn_de> rick_h: Yea makes sense, I would say elevating the log level in this case is in any case a win
<rick_h> nammn_de:  sounds good
<nammn_de> If a version could not be generated, it is a warning, not error, right?
<nammn_de> rick_h: ^
<rick_h> nammn_de:  not if there's no .git though is what I was wanting to double check.
<rick_h> nammn_de:  not every charm will have to have it
<rick_h> nammn_de:  let me know if you're free to chat and we can walk through it
<nammn_de> rick_h: yeah, HO?
<rick_h> nammn_de:  k, meet you in daily
<nammn_de> rick_h: yeah just  tested the deploy. Debug message is: "20:35:11 DEBUG juju.charm charmdir.go:429 charm is not in revision control directory" and "charm-version" does not even exist. Could it be that we never parsed those local "version" files? :D
<rick_h> nammn_de:  I would hope not, keep going though and see if we load it in a different path perhaps
<nammn_de> rick_h: tested : relative path (deploy . | deploy ./designate), absolute path (deploy ~/workspace/charms/designate) https://pastebin.canonical.com/p/qVjYqK2dVd/
<nammn_de> rick_h: no charm_version
<rick_h> nammn_de:  ok, so there's our bug then I guess
<rick_h> I thought that worked...
<nammn_de> rick_h: Good, now I learned that charms are using that. I thought vcs was the only way. I can work on this next (?)
<nammn_de> rick_h: added additional information to the related launchpad bug
<rick_h> nammn_de:  k, ty
<thumper> morning
<rick_h> morning thumper
<thumper> https://github.com/juju/juju/pull/10745
<babbageclunk> thumper: approved
<timClicks_> babbageclunk, thumper: we need a rubber stamp
<timClicks_> wallyworld, kelvinliu, hpidcock: is there a spec/reference for k8s bundles?
<babbageclunk> timClicks_: that's thumper's job! ;)
<kelvinliu> timClicks_: found this https://discourse.jujucharms.com/t/bundles-now-supported/365
<babbageclunk> I mean, any indication of what needs stamping would be good too.
<timClicks> kelvinliu: thanks - I'll update the jaas.ai/docs bundle page
<kelvinliu> and here https://discourse.jujucharms.com/t/charm-bundles/1058
<kelvinliu> np
<wallyworld> timClicks: sorry was on a call, what we have in discourse is the best source of info
<timClicks> wallyworld: all good
<wallyworld> babbageclunk: i don't quite get your question on the PR? what do you mean by job?
<wallyworld> when you call a charm function, the default is sync (blocking). the progress messages for the resulting task are printed as they occur
<wallyworld> when the task ends, showing the task also has all the log messages in the result yaml
<babbageclunk> I mean, will the logging be both in the stdout of the task and the list of log messages? It looks like no, from reading the code more.
<babbageclunk> wallyworld: sorry - I didn't see your question until now.
<wallyworld> babbageclunk: no, stdout is what the actual action/function code itself writes to stdout
<babbageclunk> cool
<wallyworld> juju tracks the log progress messages separately
<babbageclunk> wallyworld: I'm confused by the log message handling - shouldn't Run wait for processLogMessages to finish?
<babbageclunk> Maybe I've missed it.
<dmitri-s> Hello fellow Jujuers. :) I am new to the tool, interested in trying it on LXD. Running into the `ERROR Get https://10.208.133.1:8443/1.0: Unable to connect to: 10.208.133.1:8443`. However, when I connect to the IP in my browser, it outputs JSON status. Only thing I am thinking is that I don't have a certificate, as the browser shows a cert warning. Is there a way to pass a parameter to juju to ignore the cert warning or is this issue related to
<dmitri-s> something else. I will be very thankful for any advice.
<timClicks> dmitri-s: out of interest, what does `curl https://10.208.133.1:8443/1.0`do?
<dmitri-s> curl: (60) SSL certificate problem: unable to get local issuer certificate
<dmitri-s> Browser returns: {"type":"sync","status":"Success","status_code":200,.... But I had to click ignore cert warning. My understanding (or lack thereof) is that 10.208.133.1 is not the Juju controller, but the LXD network bridge.
<babbageclunk> dmitri-s: can you pastebin the full output you're seeing?
<babbageclunk> I think I've seen it when the juju code can't get to the port from inside the container (which it needs to be able to launch new ones).
<babbageclunk> If you launch a container yourself, can you curl that address?
<dmitri-s> Full output of the bootstrap process or the JSON from LXD?
<babbageclunk> dmitri-s: from bootstrap (ideally with --debug)
<dmitri-s> Bootstrapping again with debug will post in a minute.
<babbageclunk> thanks
<babbageclunk> how about launching a container with `lxc launch ubuntu:`, then going into it with `lxc exec <name> bash` and running `curl --insecure https://10.208.133.1:8443/1.0`?
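Those three steps gathered into one throwaway check (a sketch; the container name `probe` is arbitrary, and 10.208.133.1:8443 is the bridge address from the log above):

```shell
# Launch a scratch container, try the LXD API endpoint from inside
# it, then clean up; the curl exit status tells you whether the
# container can reach the bridge address.
probe_lxd() {
    url="$1"
    lxc launch ubuntu: probe
    lxc exec probe -- curl --insecure --max-time 10 "$url"
    rc=$?
    lxc delete --force probe
    return $rc
}

# probe_lxd https://10.208.133.1:8443/1.0
```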
<dmitri-s> Considering how long the output is, especially with debug I put it on github: https://github.com/dmitri-s/juju-learn/blob/master/jujubootstrap.log
#juju 2019-10-17
<babbageclunk> right - that confirms that it's code running inside the container that can't get to the lxd port.
<babbageclunk> dmitri-s: ^
<dmitri-s> So my LXD config is incorrect?
<babbageclunk> I think so
<dmitri-s> Just tried curl from inside a container and it times out (curl: (7) Failed to connect to 10.208.133.1 port 8443: Connection timed out)
<kelvinliu> wallyworld: got this PR to add pod addresses to binding addresses for CaaS, could u take a look? thanks! https://github.com/juju/juju/pull/10741
<wallyworld> i can, just need to get some stuff done first
<kelvinliu> np, not urgent at all
<babbageclunk> dmitri-s: sorry, just trying to work out what config needs to change...
<dmitri-s> Here is my current lxd config (lxd init --dump) https://github.com/dmitri-s/juju-learn/blob/master/lxd_config
<babbageclunk> oh thanks
<babbageclunk> comparing that to mine...
<babbageclunk> dmitri-s: the only difference I can see is that I have `core.proxy_ignore_hosts: 127.0.0.1,::1,localhost` in the config section.
<babbageclunk> what does `route -n` show inside the container?
<dmitri-s> root@pleasant-bonefish:~# route -n
<dmitri-s> Kernel IP routing table
<dmitri-s> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
<dmitri-s> 0.0.0.0         10.208.133.1    0.0.0.0         UG    100    0        0 eth0
<dmitri-s> 10.208.133.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
<dmitri-s> 10.208.133.1    0.0.0.0         255.255.255.255 UH    100    0        0 eth0
<babbageclunk> and obviously you can get to the internet from the container since we can see things installing in the juju log...
<babbageclunk> (the route output looks the same as mine)
<dmitri-s> interesting
<babbageclunk> Sorry, this is about the point I'd ask in #lxcontainers
<dmitri-s> thanks. I will do that. BTY, was I supposed to post this in discourse.jujucharms.com rather than ask here?
<dmitri-s> just so I know next time.
<dmitri-s> Just from the server message "https://jujucharms.com, general chat on https://discourse.jujucharms.com"
<dmitri-s> seems like it is where it point me
<babbageclunk> No, this is a good place too, especially for back-and-forth. Although the advantage of discourse is that it's persistent - if people aren't in the same timezone then they won't see irc chat but they can still answer posts.
<dmitri-s> cool. thanks for your help.
<babbageclunk> no worries - sorry I couldn't sort it out fully
<babbageclunk> (or at all)
<dmitri-s> haha... movement in any direction is progress as I see it. I figured out that "x.x.133.1" is a gateway, so it is a progress.
<dmitri-s> With your help, nothing less. :)
<babbageclunk> :)
<wallyworld> kelvinliu: free now in standup
<kelvinliu> yep
<wallyworld> kelvinliu: lgtm with some suggested cleanup to make things more logical now that the behaviour has changed
<kelvinliu> wallyworld: yep thx
<anastasiamac> wallyworld: for post-lunch digestion, PTAL https://github.com/juju/juju/pull/10746 - 'show-cloud' ask-or-tell
<wallyworld> ok
<wallyworld> just getting coffee
<anastasiamac> oh good! so the timing is right for light reading :D
<wallyworld> anastasiamac: i asked a question, see if i am being dumb
<wallyworld> babbageclunk: i pushed a change to address the race, see what you think?
<anastasiamac> wallyworld: kind of ;) i'll reply on pr
<wallyworld> anastasiamac: s/kind of/most definitely. lgtm
<anastasiamac> wallyworld: :) it was existing code anyway that I have just prettied up :)
<wallyworld> yeah, i didn't see that from the review diff, just mainly saw the comment etc
<anastasiamac> wallyworld: thank you for the review tho \o/ really appreciate another set of eyes
<wallyworld> more eyes always good
<babbageclunk> wallyworld: approved
<wallyworld> \o/ tyvm
<wallyworld> babbageclunk: call?
<babbageclunk> oops
<thumper> babbageclunk: another https://github.com/juju/juju/pull/10747
<thumper> or wallyworld ^^
<wallyworld> oooh instancepoller, my favourite
 * thumper EODs
<anastasiamac> wallyworld: PTAL https://github.com/juju/juju/pull/10748 - 'show-credential' part
<wallyworld> ok
<stickupkid> nammn_de, good catch last night, yes if you hold it differently it breaks, I fixed it so both ways work https://github.com/juju/juju/pull/10749
<nammn_de> stickupkid: cool, approved :D
<nammn_de> rick_h I just scrolled through the systemd code and found out that we do verify on a basic level. So I don't think we need to add additional checks. So IMO the pr is ready to be reviewed https://github.com/juju/juju/pull/10696
<manadart> wallyworld: Is there a resolution for all apps in a CAAS model getting "already exists" errors for generating secrets?
<nammn_de> manadart: regarding the bug you mentioned before with the charm version. I did add some additional test, which does not include the bug yet, but still include some other cases. https://github.com/juju/juju/pull/10744 , when we "fix the bug" we could add the other cases as well.
<nammn_de> Additionally I updated the bug description with my debugging output https://bugs.launchpad.net/juju/+bug/1848149 . I planned to take this on later
<mup> Bug #1848149: Charm Deployed or Upgrade from Dir Ignores "version" <juju:Triaged> <https://launchpad.net/bugs/1848149>
<manadart> nammn_de: Ack.
<manadart> wallyworld: for example: "creating secrets for container: tensorboard: secret "tensorboard-tensorboard-secret" already exists"
<nammn_de> achilleasa stickupkid : im trying the qa steps from hml https://github.com/juju/juju/pull/10718 there are some things I don't know. Got some time to walk me through a little qa? Doesn't have to be now
<achilleasa> jam: shouldn't this (https://github.com/juju/juju/blob/develop/worker/instancepoller/updater.go#L140-L145) check use the instance status (which can be running) instead of the machine status (AFAICT would be created)? Or are statuses working differently for manual machines?
<nammn_de> manadart: if you have some free time https://github.com/juju/juju/pull/10691 pr regarding spaces. Sadly I do have some weird network issues and my internal upgrade bootstraps got stuck (qa steps)
<manadart> nammn_de: Reviewed. Upgrade step needs another look.
<nammn_de> manadart: Just read it, good to know for me! Thanks
<achilleasa> nammn_de: for 10691 you probably also need to patch this: https://github.com/juju/juju/blob/develop/cmd/juju/application/upgradecharm.go#L757
<nammn_de> manadart: added your suggestions, qa steps worked for me!
<nammn_de> achilleasa, manadart: ohh, nvm me then, I'll take a look
<nammn_de> achilleasa: are we fine to use the default if none is set, which should be set in any case with my patch?
<stickupkid> anyone know where the simple streams url is in the source code? I know we have a model config value, but it's empty, so we must fill it in with something if it's empty.
<achilleasa> nammn_de: I 'd say leave it for now to be safe in case someone is testing the new client against a controller from a sha before your PR landed
<nammn_de> achilleasa: not patching this function at all or you mean adding a check if it exists, use it?
<achilleasa> nammn_de: if the model (it's passed an an arg to that function) defines it use that or use the network.DefaultSpaceName as a fallback
<nammn_de> achilleasa: thanks. Similar to that? https://github.com/juju/juju/blob/ef08716950173d3a388d675b5c49c1210137c050/cmd/juju/application/upgradecharm.go#L761-L760
<jam> achilleasa: IIRC, MachineStatus is the status of the machine agent, vs the status of the machine itself
<achilleasa> jam: that's my understanding as well. So can this be ever running?
<achilleasa> (in running state)
<jam> achilleasa: I'm really not sure what it is trying to check there. It may just be trying to say "set it to running if it isn't running"
<achilleasa> yes, but I would assume that the check would be more like "if my *instance* status is not running and I am a manual machine then set me as running"
<achilleasa> stickupkid: can you please check if this works on develop for you? go test -check.v -check.f TestAgentSetsToolsVersion (in cmd/jujud/agent)
<stickupkid> achilleasa, why is that?
<achilleasa> stickupkid: https://pastebin.canonical.com/p/pf5wXMFmtD/
<stickupkid> wtf is that
<achilleasa> I will retry on current HEAD
<stickupkid> achilleasa, https://pastebin.canonical.com/p/J86Vg3z53B/
<achilleasa> this fails on HEAD for me: "FAIL: machine_test.go:607: MachineLegacyLeasesSuite.TestAgentSetsToolsVersionManageModel"
<achilleasa> could it be deps? brb
<achilleasa> nope... still failing
<nammn_de> manadart: time for a quick review? I added your and achilleasa suggestions https://github.com/juju/juju/pull/10691/files
<achilleasa> stickupkid: wonder if that test is actually pulling stuff from somewhere...
<stickupkid> achilleasa, we should have a look at what the test does
<achilleasa> stickupkid: L598
<stickupkid> achilleasa, ho?
<achilleasa> omw
<manadart> nammn_de: Needs another change. I have to run.
<achilleasa> hml: are you on eoan?
<hml> achilleasa:  not yet
<stickupkid> hml, go test -check.v -check.f TestAgentSetsToolsVersion
<stickupkid> hml, ho in daily
<hml> stickupkid: omw
<achilleasa> rick_h: are you on eoan?
<achilleasa> also nammn_de are you on eoan?
<nammn_de> achilleasa: yesss
<nammn_de> manadart: thanks, damnit
<achilleasa> can you join daily please?
<achilleasa> nammn_de: ^
<achilleasa> nammn_de: actually can you please run go test -check.v -check.f TestAgentSetsToolsVersion (in cmd/jujud/agent) and let me know if it fails for you?
<rick_h> achilleasa:  yes, I'm on eoan
<achilleasa> rick_h: can you try the above command ^^
<rick_h> achilleasa:  the test command?
<achilleasa> yeap. I think it fails on eoan
<achilleasa> (only)
<rick_h> achilleasa:   I'll see. I need to setup a dev environment on here. I've not done that yet.
<rick_h> achilleasa:  heads up in case you didn't see it https://github.com/juju/juju/pull/10747#event-2720070282
<rick_h> achilleasa:  might be worth pulling and making sure that there's no issues in overlap/etc
<achilleasa> rick_h: thanks, already incorporated that in my PR
<rick_h> achilleasa:  awesome
<nammn_de> achilleasa: sorry wasnt there, still need me?
<nammn_de> achilleasa: ah i see not needed anymore
<nammn_de> manadart: reverted the change
<manadart> achilleasa: Can you approve that^ one?
<achilleasa> manadart: one sec
<manadart> Got hands full with kids here. Not at PC.
<achilleasa> nammn_de: approved
<stickupkid> achilleasa, any news?
<manadart> Need a review for https://github.com/juju/juju/pull/10751.
<achilleasa> manadart: one for tomorrow: https://github.com/juju/juju/pull/10750
<nammn_de> thanks manadart achilleasa
<mbeierl> Having some trouble getting the apt layer to install a ppa.  It seems to be ignoring whatever I put in.  Maybe a yaml level problem?  Are there any working examples that show a full layer.yaml with adding a ppa?
<mbeierl> stub: where is the best place to go for more info on the apt-layer?  I am obviously missing something.
<thumper> morning
<rick_h> morning thumper
<rick_h> thumper:  if you get time can you peek back at nam's branch for the log permissions please?
<thumper> rick_h: will do
<rick_h> <3
<stub> mbeierl: https://github.com/stub42/layer-apt has the README rendered
<babbageclunk> can I get a review for this merge? https://github.com/juju/juju/pull/10752
<hpidcock> babbageclunk: on it
<babbageclunk> merci m'sieur
#juju 2019-10-18
<wallyworld> hpidcock: once you've put up your init container PR, I'll swap you https://github.com/juju/juju/pull/10754
<hpidcock> might be a bit longer, I'll check it out when I'm back
<wallyworld> no rush
<thumper> wallyworld: did you know that all the caas workers use "juju.workers" as the base logging module, but the rest of the workers use "juju.worker"?
 * thumper is fixing as he goes
<wallyworld> sigh
<wallyworld> hard to get good help nowadays
<thumper> heh
<thumper> I'm almost done
<wallyworld> i put it there to see if you were paying attention
<thumper> have about two more I think
<thumper> wallyworld: last 7 workers: https://github.com/juju/juju/pull/10756
<kelvinliu> wallyworld: got this PR to add ResourceNames and NonResourceURLs support on RBAC rules, +1 plz thanks! https://github.com/juju/juju/pull/10755
<wallyworld> i'm popular today
<kelvinliu> ur always popular lol
<wallyworld> depends who you ask :-)
<wallyworld> thumper: kelvinliu: both lgtm
<thumper> thanks wallyworld
<kelvinliu> thanks wallyworld
<hpidcock> wallyworld: just running through the QA steps but the code looks good
<wallyworld> for your branch?
<hpidcock> your branch
<wallyworld> oh, nice ok
<stub> wallyworld: I just reopened https://bugs.launchpad.net/juju/+bug/1830252 , but I'm working on second hand reports rather than direct experience.
<mup> Bug #1830252: k8s charm sees no egress-subnets on its own relation <k8s> <juju:New for wallyworld> <https://launchpad.net/bugs/1830252>
<wallyworld> stub: ok, will look. it would be good to get some more detail, how to repro etc
<stub> davigar15 tripped over it working on his k8s prometheus stuff
<wallyworld> just reading the bug, maybe it's a charm helpers issue, not sure, will look
<stub> Best I can tell, network_get works to get the ingress-address, but pulling it from the relation like charmhelpers does doesn't. And we can fix charm-helpers if necessary I think.
<wallyworld> stub: it may well be juju, reading the charm helpers code. need to look. but it is a different issue IMO. i'd prefer to open a new bug
<wallyworld> although i guess it's all in the same area
<wallyworld> i'll comment on the bug
<wallyworld> stub: the juju code looks ok i think. as noted in a comment on the bug, ingress-address is not guaranteed to be available immediately when a relation is joined because the pod and service need time to spin up. so there will be times early on when the relation data will not have the value. the charm needs to deal with that
<stub> Yeah, I was surprised the work around worked. But apparently it does with a freshly deployed service.
<wallyworld> it may be a timing thing
<wallyworld> it would be good to know whether the value shows up if a sleep or something were added
<wallyworld> network-get and the relation data use the same code to get the ingress address
<wallyworld> it could also be that juju does not react to when the address info becomes available and updates relation settings
<wallyworld> so i will investigate that aspect to be sure
<wallyworld> there *could* be a juju issue, not sure yet
<wallyworld> there's been a fair bit of refactoring of the network code to do the spaces work this cycle
<thumper> https://github.com/juju/worker/pull/13
<wallyworld> stub: do you know if the issue is on juju 2.6 or 2.7?
<wallyworld> it's not clear from the prometheus MP, but i may have missed it
<wallyworld> thumper: lgtm
<thumper> wallyworld: cheers
<stub> I don't know sorry
<wallyworld> stub: no worries, i opened bug 1848628 and subscribed you and david and asked for more info
<mup> Bug #1848628: relation data not updated when unit address changes <k8s> <juju:Incomplete> <https://launchpad.net/bugs/1848628>
<wallyworld> i closed the other one again
<wallyworld> hpidcock: thanks for review. init container takes precedence, but after that, for monday before we cut beta1, there's a follow-up to add --watch to show-task https://github.com/juju/juju/pull/10757
<manadart> achilleasa: Reviewing 10750 now. Can you look over https://github.com/juju/juju/pull/10751 ?
<achilleasa> manadart: sure
<achilleasa> manadart: lgtm, left some minor comments about using the default space id const; running QA steps
<manadart> Ta.
<nammn_de> achilleasa: let's say i want to deploy to an aws public subnet (spaces -> using public subnet), is there something i need to take care of or should it work without additional action?
<nammn_de> is there a way to see which constraints are currently applied to a machine?
<rick_h> nammn_de:  juju show-machine X
<rick_h> nammn_de:  should show the constraints
<achilleasa> rick_h: got a few min for a quick chat (no rush, can wait till standup time)
<nammn_de> manadart: regarding the bug you have filed that version does not get parsed correctly. Did you look into it a bit deeper? That's what I have found out https://bugs.launchpad.net/juju/+bug/1848149/comments/3 . Wdyt?
<mup> Bug #1848149: Charm Deployed or Upgrade from Dir Ignores "version" <juju:Triaged> <https://launchpad.net/bugs/1848149>
<manadart> nammn_de: I *suspect* we have to add the version member/accessor to CharmDir and populate it via charm.ReadVersion. Beyond that I am not sure how it will work on the receiver side. Not looked further.
<nammn_de> manadart: just looked into that dir, would make sense.
<mbeierl> stub: thanks for your help.  I had read that already but could not get the specific version of ansible from the ppa I wanted working.  It would appear that I needed to put the packages: line in layer.yaml and the repo in config.yaml.  I could not get extra_packages to do anything.  Either way, I have the ppa version of ansible loaded now.
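mbeierl's working setup could look like the fragment below, based on the layer-apt README: the package list lives in layer.yaml, while the repository goes into charm config via install_sources. The ppa name is only an assumption for illustration.

```yaml
# layer.yaml -- include layer:apt and name the package to install
includes: ['layer:basic', 'layer:apt']
options:
  apt:
    packages:
      - ansible

# config.yaml -- the repository is charm config, not layer config;
# install_sources takes a YAML list serialized in a string default
options:
  install_sources:
    type: string
    description: List of apt sources to add before installing packages.
    default: |
      - ppa:ansible/ansible
```

(Shown as one fence for brevity; these are two separate files in the charm layer.)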
<achilleasa> hml: running state tests and deploying to lxd/guimaas to test my patch
<hml> achilleasa: rgr
<nammn_de> manadart achilleasa stickupkid any way that this can be false? https://github.com/nammn/charm/blob/a595818559134f9e9981a08a32cd9de76cb478f0/charmdir.go#L444
<nammn_de> am i overlooking something or is that just a mistake?
<achilleasa> hml: actually if you can push your latest changes for lxd I can append my changes, test and push
<stickupkid> nammn_de, yeah it seems like that should just return
<hml> achilleasa: haven't kicked the lxd issue yet
<stickupkid> nammn_de, yeah, doesn't seem right tbh https://github.com/nammn/charm/commit/33dbb063fecda733f37b4dc0e476fd3f04d28c6d
<hml> achilleasa: i'm using Map() but looking at the code, that should be okay with the other changes... just missing something. the spacenames were used to map spaceInfos... with the spaceInfos being used, not the map key. need to track it
<achilleasa> hml: I will try to deploy the defender/invader charms on machines to verify that the hash works
<hml> achilleasa: you'll EOD hours before me. if you're reasonably confident of your changes, push it to the PR... and i'll take it from there.
<achilleasa> (which means that temporarily we will be out of maas nodes)
<achilleasa> (multi-nic ones that is)
<hml> ah...
<achilleasa> just need 5-10min to test
<hml> achilleasa: i'll release the machines for you then.
<hml> achilleasa:  was about to get lunch anyways
<hml> achilleasa:  you should be good to use the dual nic machines..
<achilleasa> hml: yeap, got them. thanks
<nammn_de> manadart rick_h: this should fix the bug we were talking before https://github.com/juju/charm/pull/294 . I still need to test it by changing the gopkg toml and send a pr for the juju main
<manadart> nammn_de: Should be easy enough to verify by applying the patch straight to the dep tree and building juju from it.
<nammn_de> manadart: oh yeah, you are right xD
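Since the chat mentions "changing the gopkg toml", the verification manadart suggests would amount to pointing dep's Gopkg.toml at the fork. A hypothetical override; the package path, fork URL, and branch name are assumptions for illustration:

```toml
# Gopkg.toml override: build juju against the forked charm package
# instead of the upstream one, to test the fix end to end.
[[override]]
  name = "gopkg.in/juju/charm.v6"
  source = "https://github.com/nammn/charm"
  branch = "charm-version-fix"
```

After `dep ensure`, building juju from the tree picks up the patched dependency.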
<Fallenour> hey does anyone know the background on this video? https://www.youtube.com/watch?v=CidoBy3iqUw
<Fallenour> I want to build this and replicate it, but they didnt show how they built the HA juju controller cluster on the clustered LXD nodes
<Fallenour> @bdx, @rick_h
<nammn_de> manadart: should be working, to be sure im running the unit test on juju as well. But does the change make sense to you from your filed bug perspective?
<manadart> nammn_de: Yes; pretty much along the lines of what I thought we'd have to do.
<nammn_de> manadart: one thing I am not 100% sure about: by extracting the version generation from "archiveTo" to the initial "charmdir" creation, any updates which happen between "creation" and "archiveTo" would not be added. I am not sure whether there can be any updates anyway, or whether they should be reflected. I will add this to the PR as a "disclaimer" to think about while going through
<rick_h> Fallenour:  howdy, so it was just a normal bootstrap to the cloud once it was added, and then juju enable-ha
<rick_h> Fallenour:  hitting an issue with getting the cluster going or the bootstrap or the making ha part?
<Fallenour> @rick_h, Im looking to rebuild my current stack, and instead of having 3 controllers on 3 physical machines, I wanted to reduce it to 3 lxd containers on 3 clustered lxd machines in HA, to reduce power so I could add more machines because of current power constraints.
<rick_h> Fallenour:  definitely, juju + lxd cluster is a great dense setup for workstation/etc
<Fallenour> rick_h, will it work well in prod?
<Fallenour> rick_h, rephrase, well enough?
<rick_h> Fallenour:  it'll work in prod, but there's limitations you have to be aware of. In a lxd cluster all systems are treated as homogenous, so one flat network/etc
<rick_h> Fallenour:  and so it's hard to do things like different spaces, storage, etc
<Fallenour> rick_h, currently building the LXD nodes now. yea I expanded to a /22 to give me plenty of space, with HA included. I should have enough room for roughly 200-300 different applications, which is more than enough for me, especially given current power constraints.
<rick_h> Fallenour:  I've not tried to use it in anger in production to see what assumptions or limitations it fully bakes in
<Fallenour> rick_h, im getting pretty clever with it, adding lxd using Cluster, and then adding ceph storage, using the other 4 drives, on a raid 1 for OS, raid 0 for ceph drives, then gonna deploy ceph to the systems via juju, let juju manage ceph, and then build volumes on ceph, and store data on those via databases.
<Fallenour> rick_h, currently lxd doesnt support ceph and HA concurrently, but it does support HA and extra storage spaces, sooo...
<rick_h> Fallenour:  lol
<rick_h> Fallenour:  make sure to write up your project/hits and misses on discourse! :)
<rick_h> I can't wait to hear how it goes
<Fallenour> rick_h, hopefully its not a full on trainwreck!
<rick_h> Fallenour:  naw, trains are cool and rarely wreck :P
<Fallenour> rick_h, But! I do believe if it works, I should be able to take advantage of the best of both worlds, and if this works, it's going to do wonders for the CI/CD food chain, because people absolutely loathe service containers vs system containers for security.
#juju 2019-10-19
<Fallenour> @rick_h, hey rick, do you recommend snap or apt for lxd HA cluster deployments
<Fallenour> @bdx, manadart
<rick_h> Fallenour:  snap
<rick_h> Fallenour:  and if you use juju on bionic+ (well 2.7 will) we manage lxd as a snap on the systems
