/srv/irclogs.ubuntu.com/2015/09/18/#juju.txt

=== scuttle` is now known as scuttle|afk
=== scuttle|afk is now known as scuttle`
=== axw_ is now known as axw
<blr> Would anyone happen to know if it is possible to debug-hooks both a subordinate and its parent together? Starting debugging for the second service complains that the tmux session already exists.  04:55
=== urulama__ is now known as urulama
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
<ejat> deployed the horizon dashboard through juju ... may I know what the default login and password are?  09:00
<ejat> openstack-dashboard times out when I try to log in  12:09
<ejat> what should I do?  12:09
<ejat> marcoceppi: are you there?  12:36
=== skaro is now known as Guest17239
<jrwren> can I cancel a pending action?  13:39
<jrwren> use case is: I ran `juju action do db/0 dump`, which takes a while. I accidentally ran it twice. I'd like to cancel the pending job.  13:41
<rick_h_> jrwren: I know you can see the queue, thought there was a cancel API  13:48
<jrwren> rick_h_: I can't find it.  13:48
<rick_h_> jrwren: hmm, wonder if it made the API but not the CLI  13:49
<jrwren> rick_h_: must be. The CLI has only do, fetch, status  13:50
<aisrael> jrwren: rick_h_: we have a meeting next week about action 2.0 features. I'll add a cancel command to the list  13:50
<rick_h_> aisrael: ah ok, isn't there a method to see the queue and such?  13:51
<rick_h_> I know it was part of the spec  13:51
<aisrael> rick_h_: `juju action status` will show you everything, pending or not, but there's no queue management AFAIK  13:53
<jrwren> rick_h_: `juju action status` shows all of them.  13:53
<rick_h_> ah ok  13:54
<rick_h_> cool  13:54
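
A minimal sketch of the action workflow discussed above, assuming juju 1.x `juju action` syntax; the action ID is a placeholder printed by `action do`:

    juju action do db/0 dump        # queue the action; prints an action ID
    juju action status              # list every action, pending or completed
    juju action fetch <action-id>   # retrieve the result once it completes
    # as noted in the discussion, 1.x exposes no cancel subcommand, so a
    # duplicate run can only be left to finish
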
<thedac> jamespage: question about the ceph and cinder charms. Can we specify different pools, as in this example: http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/  14:00
<jamespage> thedac, I think so - let me check  14:01
<thedac> thanks  14:01
<jamespage> thedac, yes - the ceph pool is aligned to the service name - so  14:02
<jamespage> juju deploy cinder-ceph cinder-ceph-sata  14:02
<jamespage> juju deploy cinder-ceph cinder-ceph-ssd  14:02
<jamespage> for example  14:02
<thedac> ah, cool  14:02
<jamespage> thedac, the trick is that the ceph charm does not support special placements (yet)  14:03
<jamespage> thedac, so the backend pools should be pre-created in ceph by hand first - cholcombe has some work in flight to enhance pool management  14:03
<firl> so I could set this up by hand within ceph, then have the cinder-ceph charm take care of the relations, and possibly use what cholcombe has to link cinder volumes to the ceph pools?  14:04
<jamespage> firl, kinda  14:06
<jamespage> if I deploy: juju deploy cinder-ceph cinder-ceph-sata  14:06
<jamespage> the backend pool must == 'cinder-ceph-sata'  14:06
<jamespage> so if you pre-create or re-create the pool directly in ceph with the required characteristics, it will work OK  14:07
<jamespage> juju add-relation cinder-ceph-sata cinder  14:07
<jamespage> and juju add-relation cinder-ceph-sata ceph  14:07
<jamespage> are required, of course  14:08
<firl> so I would have 2 cinder-ceph relations  14:08
<firl> and 2 separate ceph charms for the environment  14:08
<firl> ?  14:08
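
Putting jamespage's steps together, a sketch of a two-backend setup; the pool names must match the juju service names, and the placement-group count (128) is illustrative, assuming a ceph CLI on a monitor node:

    # pre-create the backend pools by hand with the characteristics you want
    ceph osd pool create cinder-ceph-sata 128
    ceph osd pool create cinder-ceph-ssd 128
    # deploy the cinder-ceph charm twice under different service names
    juju deploy cinder-ceph cinder-ceph-sata
    juju deploy cinder-ceph cinder-ceph-ssd
    # relate each backend to both cinder and ceph
    juju add-relation cinder-ceph-sata cinder
    juju add-relation cinder-ceph-sata ceph
    juju add-relation cinder-ceph-ssd cinder
    juju add-relation cinder-ceph-ssd ceph
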
<coreycb> firl, here's a bundle to reference for deploying from source - http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/source/default.yaml  14:12
<firl> thanks!  14:13
<bdx> firl: http://paste.ubuntu.com/12449099/  14:14
<coreycb> firl, https://bugs.launchpad.net/charms/+source/nova-compute  14:17
<firl> thedac: https://bugs.launchpad.net/charms/+bug/1497308  14:22
<mup> Bug #1497308: local repository for all Openstack charms <Juju Charms Collection:New> <https://launchpad.net/bugs/1497308>  14:22
<beisner> wolverineav, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1504  14:26
<wolverineav> hey, neutron question - when I enable DVR, the DHCP and L3 agents are deployed on the compute node. I'd like to disable the L3 agent completely. Is there a way to do that in the neutron-api charm?  14:33
<wolverineav> or, what would be the way to go about it?  14:33
=== ming is now known as Guest90652
<coreycb> jamespage, gnuoy, any idea on this ^  14:34
<jamespage> wolverineav, DVR enables metadata and the l3-agent, I think  14:34
<jamespage> there is an extra toggle to enable dhcp as well  14:34
<Guest90652> does juju-core 1.24.2 support CentOS 7 on EC2?  14:34
<jamespage> there is no way to do that in the charm right now, as it's assumed from the charm choices you're making that you want the ml2/ovs driver  14:35
<jamespage> wolverineav, what's the use case?  14:35
<wolverineav> jamespage, yes, right. We're currently moving towards pulling the various agents into the Big Switch controller. The current release supports L3, and the next one will have DHCP and metadata.  14:36
<jamespage> wolverineav, ok - so in this case you don't want to use the neutron-openvswitch charm - I'd suggest a neutron-bigswitch charm that does the right thing for a Big Switch deployment  14:37
<jamespage> as you really just want OVS, right?  14:37
<jamespage> not all of the neutron agent scaffolding around it  14:37
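
For reference, the DVR behaviour jamespage mentions is driven by charm config; a sketch assuming the option names carried by the OpenStack charms of this era (verify against `juju get` on your charm revisions, since names vary between releases):

    juju set neutron-api enable-dvr=true
    # the extra DHCP/metadata toggle lives on the compute-side subordinate
    juju set neutron-openvswitch enable-local-dhcp-and-metadata=true
    juju get neutron-api    # list all toggles the deployed revision supports
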
<wolverineav> I'll be doing something like the ODL charm, which deploys its own virtual switch. I would not be deploying the vanilla OVS  14:38
<ejat> jamespage: with the openstack-dashboard charm, do I need to manually change the keystone host in local_settings.py?  14:39
<jamespage> ejat, no, you just add a relation to keystone  14:39
<jamespage> wolverineav, sounds like a neutron-bigswitch charm is the right approach then  14:39
<ejat> already did the relation  14:39
<jamespage> wolverineav, openvswitch-odl is the way forward from a frameworks perspective  14:40
<ejat> should I change:  14:40
<ejat> From:  14:40
<ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"  14:40
<ejat> To:  14:40
<ejat> OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"  14:40
<jamespage> no  14:40
<ejat> or leave it as it is?  14:40
<jamespage> you should not need to change anything  14:40
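
A minimal sketch of the relation-driven setup jamespage describes; the grep path assumes Ubuntu's horizon packaging:

    juju deploy openstack-dashboard
    juju add-relation openstack-dashboard keystone
    # the charm renders the keystone endpoint into local_settings.py itself
    juju ssh openstack-dashboard/0 \
        'sudo grep OPENSTACK_HOST /etc/openstack-dashboard/local_settings.py'
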
<wolverineav> jamespage, so it would be a neutron-api-bigswitch kind of thing. Ah, I see  14:40
<jamespage> wolverineav, kinda  14:40
<jamespage> the bit for nova-compute == like the openvswitch-odl charm  14:40
<jamespage> the bit for neutron-api == like neutron-api-odl  14:41
<wolverineav> got it  14:41
<ejat> it can't communicate  14:41
<ejat> on azure  14:41
<ejat> from the dashboard I can't ping keystone  14:41
<ejat> because it takes the public DNS  14:41
<ejat> how do I restart/reboot one of the service machines?  14:42
<ejat> jamespage: http://paste.ubuntu.com/12449394/  14:44
<jamespage> ejat, blimey, OpenStack on top of Azure?  14:50
<ejat> yups ..  14:51
<ejat> for demo purposes  14:51
<ejat> jamespage: http://picpaste.com/Screen_Shot_2015-09-18_at_10.56.08_PM-a0Td6kaK.png  14:57
<Slugs_> I've followed the Ubuntu OpenStack single installer guide located here — http://openstack.astokes.org/guides/single-install — every service starts up except for Glance - Simplestream Image Sync. This should not hinder me from logging in to horizon, but for some reason I can't authenticate with the username 'ubuntu' and the password 'openstack'. I have been able to stop the container, start the container, log in to the container and  15:06
<Slugs_> check the juju logs, but I would like some more clarification on this to make sure I'm doing this correctly.  15:06
<jingizu_> Hi all! When I try to juju remove a service (e.g. quantum-gateway, a fundamental part of openstack) and re-deploy it to a different server (e.g. --to lxc:3)... I notice that it does remove it from juju, but the actual services themselves (i.e. all the neutron python servers on the original host) are still running... it's as if it just removed it from the juju  15:16
<jingizu_> database but did not actually stop the services themselves  15:16
<jingizu_> Of note is that the service is running on a system deployed bare-metal that is also running other juju services, so juju couldn't just tear down the lxc or tell MAAS to kill the bare-metal machine altogether (which obviously would kill the services too)  15:17
<marcoceppi> ejat: what's your question?  15:22
<ejat> the openstack-dashboard charm  15:23
<ejat> add-relation is putting the public DNS for the keystone host in the dashboard  15:23
<ejat> jamespage said I should not change anything in local_settings.py  15:25
<jamespage> ejat, sorry - I'm not that familiar with how public DNS works in azure; you should not have to change anything in settings.py normally, but YMMV on anything other than MAAS (or OpenStack itself)  15:26
<ejat> I can't log in to the dashboard  15:27
<ejat> openstack.informology.my/horizon  15:27
<ejat> I log in, then it times out  15:33
<firl> thedac: http://paste.ubuntu.com/12450237/  16:00
<firl> thedac: http://paste.ubuntu.com/12450337/  16:10
<amit213> jingizu_: on your question about removing a service, you'll also have to first do remove-relation on that service (the one you're trying to remove) for all its peer services. Once the relations are removed, the remove-service should go smoothly. There is also a --force flag that can be used.  17:25
<jingizu_> amit213: Thanks for the reply. At this point I have managed to remove the service(s) in question. Like I mentioned, the service is no longer listed in juju status. However, the underlying programs that correspond to the service are still running... Any ideas why it would not delete said programs, configs, etc. when removing the service?  17:35
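
A sketch of the removal order amit213 describes, with hypothetical relation endpoints; note that actually stopping or uninstalling the workload is left to the charm's stop hooks, so on a shared bare-metal machine the processes and configs can legitimately be left behind:

    juju remove-relation quantum-gateway mysql              # once per relation
    juju remove-relation quantum-gateway rabbitmq-server
    juju remove-service quantum-gateway
    # if a unit is wedged and owns its machine outright, 1.x also offers:
    juju destroy-machine --force <machine-id>
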
=== scuttle` is now known as scuttle|afk
<ennoble> with juju deployer or with the juju client add_machine call, can you specify a specific machine?  20:38
<firl> ennoble: you can with constraints, and depending on the machine environment you can even use tags  21:02
=== natefinch is now known as natefinch-afk
<ennoble> firl: I'm using MAAS. Is it possible to specify a specific machine, my-server-1.foo? What about with the manual provider? ssh:root@my-server-that-maas-hates?  21:06
<firl> ennoble: I don't know anything about the manual provider. Here is the tags information: https://maas.ubuntu.com/docs/tags.html  21:07
<firl> depending on how many nodes and whatnot, I sometimes just "acquire" the nodes in MAAS so that I don't have things deployed to those machines  21:08
<firl> So with juju deployer you can use the tags constraint, the same as with add-machine  21:08
<firl> for service units you can just do a --to  21:08
<ennoble> firl: so you acquire the nodes in MAAS, add tags to them there, and then deploy to them with juju deployer?  21:09
<firl> "acquiring" nodes in MAAS just means that juju deployer can't pull from them (it's a hack I use)  21:09
<firl> but you can just add tags via the MAAS CLI (or in the MAAS GUI in 1.8) to the physical machines and then use the deployer  21:09
<firl> and everyone from Ubuntu is probably at a party or traveling because they just finished up a summit in DC  21:10
<nodtkn> ennoble: you can add a machine to any existing environment with juju add-machine ssh:ubuntu@<hostname>  21:27
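
A sketch of both routes mentioned above. The MAAS tag workflow assumes a 1.x MAAS CLI profile named 'admin'; the tag name, system-id, and machine number are illustrative:

    # tag the physical node in MAAS, then target the tag from juju
    maas admin tags new name=db-node
    maas admin tag update-nodes db-node add=node-abc123
    juju deploy mysql --constraints tags=db-node
    # or enlist a machine the manual way, then place a unit on it by number
    juju add-machine ssh:ubuntu@my-server-1.foo
    juju deploy mysql --to 4
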
=== zz_CyberJacob is now known as CyberJacob
<ennoble> nodtkn: thanks, I can do that; I'm wondering whether, after I do that, I can make juju deployer use it?  22:04
<mwenning> hi, looking for a quick answer - I moved a bundle from one system to another and ran juju-deployer --config=lis-test-bundle.yaml -e maas  22:04
<mwenning> it returned with something about "must specify deployment", what did I forget?  22:05
<mwenning> 'Deployment name must be specified'  22:06
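
That error usually means the config file defines one or more named deployments at its top level and no target was given on the command line; a sketch, where the deployment name 'lis-test' is hypothetical and would be whatever top-level key the bundle file uses:

    juju-deployer --config=lis-test-bundle.yaml -e maas lis-test
    # some juju-deployer versions can also list the deployments a config defines
    juju-deployer --config=lis-test-bundle.yaml -l
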
