=== scuttle` is now known as scuttle|afk
=== scuttle|afk is now known as scuttle`
=== axw_ is now known as axw
[04:55] Would anyone happen to know if it is possible to debug-hooks both a subordinate and its parent together? Starting debugging for the second service complains that the tmux session already exists.
=== urulama__ is now known as urulama
=== zz_CyberJacob is now known as CyberJacob
=== CyberJacob is now known as zz_CyberJacob
[09:00] deployed horizon dashboard through juju ... may i know what the default login and password is?
[12:09] openstack-dashboard timeout when i tried to login
[12:09] what should i do ?
[12:36] marcoceppi: are you there?
=== skaro is now known as Guest17239
[13:39] can I cancel a pending action?
[13:41] use case is: i ran `juju action do db/0 dump` which takes a while. I accidentally ran it twice. I'd like to cancel the pending job.
[13:48] jrwren: I know you can see the queue, thought there was a cancel api
[13:48] rick_h_: i can't find it.
[13:49] jrwren: hmm, wonder if it made the api but not the cli
[13:50] rick_h_: must be. cli has only do, fetch, status
[13:50] jrwren: rick_h_: we have a meeting next week about action 2.0 features. I'll add a cancel command to the list
[13:51] aisrael: ah ok, isn't there a method to see the queue and such?
[13:51] I know it was part of the spec
[13:53] rick_h_: `juju action status` will show you everything, pending or not, but there's no queue management afaik
[13:53] rick_h_: `juju action status` shows all of them.
[13:54] ah ok
[13:54] cool
[14:00] jamespage: question about ceph and cinder charms. Can we specify different pools like this example http://www.sebastien-han.fr/blog/2013/04/25/ceph-and-cinder-multi-backend/
[14:01] thedac, I think so - let me check
[14:01] thanks
[14:02] thedac, yes - the ceph pool is aligned to the service name - so
[14:02] juju deploy cinder-ceph cinder-ceph-sata
[14:02] juju deploy cinder-ceph cinder-ceph-ssd
[14:02] for example
[14:02] ah, cool
[14:03] thedac, the trick is that the ceph charm does not support special placements (yet)
[14:03] thedac, so the backend pools should be pre-created in ceph by hand first - cholcombe has some stuff inflight to enhance pool management
[14:04] so I could set this up by hand within ceph, then have the cinder-ceph relation charm take care of the relations, and possibly use what cholcombe has to link it between cinder volumes and the ceph pools?
[14:06] firl, kinda
[14:06] if I deploy: juju deploy cinder-ceph cinder-ceph-sata
[14:06] the backend pool must == 'cinder-ceph-sata'
[14:07] so if you pre-create or re-create the pool directly in ceph with the required characteristics it will work ok
[14:07] juju add-relation cinder-ceph-sata cinder
[14:07] and juju add-relation cinder-ceph-sata ceph
[14:08] are required of course
[14:08] so I would have 2 cinder-ceph relations
[14:08] and 2 separate ceph charms for the environment
[14:08] ?
[14:12] firl, here's a bundle to reference for deploying from source - http://bazaar.launchpad.net/~ost-maintainers/openstack-charm-testing/trunk/view/head:/bundles/source/default.yaml
[14:13] thanks!
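(For reference, the multi-backend recipe described above boils down to roughly the following; this is only a sketch - the pool creation settings and any SATA/SSD crush-rule handling are assumptions, not part of the discussion.)

    # Pre-create the backend pools in ceph by hand; pool names must match the
    # cinder-ceph service names. The pg count (128) is an illustrative assumption.
    ceph osd pool create cinder-ceph-sata 128
    ceph osd pool create cinder-ceph-ssd 128

    # Deploy the cinder-ceph subordinate once per backend, named after the pool.
    juju deploy cinder-ceph cinder-ceph-sata
    juju deploy cinder-ceph cinder-ceph-ssd

    # Each backend needs relations to both cinder and ceph.
    juju add-relation cinder-ceph-sata cinder
    juju add-relation cinder-ceph-sata ceph
    juju add-relation cinder-ceph-ssd cinder
    juju add-relation cinder-ceph-ssd ceph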
[14:14] firl: http://paste.ubuntu.com/12449099/
[14:17] firl, https://bugs.launchpad.net/charms/+source/nova-compute
[14:22] thedac: https://bugs.launchpad.net/charms/+bug/1497308
[14:22] Bug #1497308: local repository for all Openstack charms
[14:26] wolverineav, https://wiki.ubuntu.com/ServerTeam/OpenStackCharms/ReleaseNotes1504
[14:33] hey, neutron question - when I enable DVR, the DHCP and L3 agent are deployed on the compute node. I'd like to disable the L3 agent completely. Is there a way to do that in the neutron-api charm?
[14:33] or, what would be the way to go about it?
=== ming is now known as Guest90652
[14:34] jamespage, gnuoy, any idea on this ^
[14:34] wolverineav, DVR enables metadata and l3-agent I think
[14:34] there is an extra toggle to enable dhcp as well
[14:34] does juju-core 1.24.2 support CentOS7 on EC2?
[14:35] there is no way to do that in the charm right now, as it's assumed from the charm choices you're making that you want the ml2/ovs driver
[14:35] wolverineav, what's the use case?
[14:36] jamespage, yes right. we're currently moving towards pulling the various agents into the big switch controller. the current release supports L3 and the next one will have DHCP and Metadata.
[14:37] wolverineav, ok - so in this case, you don't want to use the neutron-openvswitch charm - I'd suggest a neutron-bigswitch charm that dtrt for a big switch deployment
[14:37] as you really just want ovs right?
[14:37] not all of the neutron agent scaffolding around it
[14:38] i'll be doing something like the ODL charm which deploys its own virtual switch. I would not be deploying the vanilla OVS
[14:39] jamespage: openstack-dashboard charm, do i need to manually change local_settings.py for the keystone host?
[14:39] ejat, no you just add a relation to keystone
[14:39] wolverineav, sounds like a neutron-bigswitch charm is the right approach then
[14:39] already did the relation
[14:40] wolverineav, openswitch-odl is the way forward from a frameworks perspective
[14:40] should i change :
[14:40] From:
[14:40] OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"
[14:40] To
[14:40] OPENSTACK_KEYSTONE_DEFAULT_ROLE = "_member_"
[14:40] no
[14:40] or leave it as it is
[14:40] you should not need to change anything
[14:40] jamespage, so it would be a neutron-api-bigswitch kinda thing. ah, i see
[14:40] wolverineav, kinda
[14:40] the bit for nova-compute == like openvswitch-odl charm
[14:41] the bit for neutron-api == like neutron-api-odl
[14:41] got it
[14:41] it can't communicate
[14:41] on azure
[14:41] from the dashboard it can't ping keystone
[14:41] because it takes the public dns
[14:42] how do I restart/reboot one of the service machines?
[14:44] jamespage: http://paste.ubuntu.com/12449394/
[14:50] ejat, blimey openstack on top of azure?
[14:51] yups ..
[14:51] demo purpose
[14:57] jamespage: http://picpaste.com/Screen_Shot_2015-09-18_at_10.56.08_PM-a0Td6kaK.png
[15:06] I've followed the Ubuntu openstack single installer guide located here - http://openstack.astokes.org/guides/single-install - Every service starts up except for Glance - Simplestream Image Sync. This should not hinder me from logging in to horizon, but for some reason I can't authenticate with my username as ubuntu and the password I have, 'openstack'. I have been able to stop the container, start the container, log in to the container and check juju logs, but I would like some more clarification on this to make sure I'm doing this correctly.
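(For reference, the dashboard/keystone wiring discussed above is handled entirely by the relation; no manual local_settings.py edits should be needed. A sketch, assuming the default charm/service names and the stock Ubuntu path for the horizon config:)

    # Relate the dashboard to keystone; the charm writes the keystone endpoint itself.
    juju add-relation openstack-dashboard keystone

    # To see which keystone address the relation actually handed over (useful when a
    # public DNS name is unreachable, as in the Azure case above):
    juju run --unit openstack-dashboard/0 "grep OPENSTACK_HOST /etc/openstack-dashboard/local_settings.py"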
[15:16] Hi all! When I try to juju remove a service (e.g. quantum-gateway, a fundamental part of openstack) and re-deploy it to a different server (e.g. --to lxc:3)... I notice that it does remove it from juju, but the actual services themselves (i.e. all the neutron python servers on the original host) are still running... it's as if it just removed it from the juju database but did not actually stop the services themselves
[15:17] Of note is that the service is running on a system deployed bare-metal that is also running other juju services, so juju couldn't just tear down the lxc or tell MAAS to kill the bare metal machine altogether (which obviously would kill the services too)
[15:22] ejat: what's your question?
[15:23] openstack-dashboard charm
[15:23] add-relation putting the public dns for keystone host in dashboard
[15:25] jamespage said i should not change anything in local_settings.py
[15:26] ejat, sorry - I'm not that familiar with how public dns works in azure; you should not have to change anything in settings.py normally but ymmv on anything other than MAAS (or OpenStack itself)
[15:27] i can't log in to the dashboard
[15:27] openstack.informology.my/horizon
[15:33] login then timeout
[16:00] thedac: http://paste.ubuntu.com/12450237/
[16:10] thedac: http://paste.ubuntu.com/12450337/
[17:25] jingizu_: on your question about removing a service, you'll also have to first do remove-relation on that service (the one you're trying to remove) for all its peer services. Once the removal of the relation is done, the remove-service should go smoothly. There is also a --force flag that can be used.
[17:35] amit213: Thanks for the reply. At this point I have managed to remove the service(s) in question. Like I mentioned, the service is no longer listed in juju status. However, the underlying programs that correspond to the service are still running... Any ideas why it would not delete said programs, configs, etc. when removing the service?
=== scuttle` is now known as scuttle|afk
[20:38] with juju deployer or with the juju client add_machine call, can you specify a specific machine?
[21:02] ennoble: you can with constraints, and depending on the environment you can even use tags
=== natefinch is now known as natefinch-afk
[21:06] firl: I'm using maas. Is it possible to specify a specific machine my-server-1.foo? What about with the manual provider? ssh:root@my-server-that-maas-hates?
[21:07] ennoble: I don't know anything about the manual provider. here is the tags information https://maas.ubuntu.com/docs/tags.html
[21:08] depending on how many nodes and what not, I sometimes just "acquire" the nodes in MaaS so that I don't have things deployed to those containers
[21:08] So with juju-deployer you can use the tags constraint, just as with add-machine
[21:08] for service units you can just do a --to
[21:09] firl: so you acquire the nodes in maas? add tags to them there, and then deploy to them with juju deployer?
[21:09] "acquire" nodes in maas just means that juju deployer can't use them to pull from ( it's a hack I use )
[21:09] but you can just add tags via the maas cli ( in the maas gui in 1.8 ) to the physical machines and then use the deployer
[21:10] and everyone from ubuntu is probably at a party or traveling because they just finished up a summit in DC
[21:27] ennoble: you can add a machine to any existing environment with juju add-machine ssh:ubuntu@
=== zz_CyberJacob is now known as CyberJacob
[22:04] nodtkn: thanks, I can do that, I'm wondering after I do that can I make juju deployer use it?
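(A sketch of the two approaches mentioned above for landing units on a specific machine; the tag name, MAAS profile name, and charm are made-up examples.)

    # MAAS route: tag the physical node with the MAAS CLI (or the GUI in 1.8),
    # then target the tag with a juju constraint.
    maas my-maas tags new name=db-server                     # create the tag (MAAS 1.x CLI)
    maas my-maas tag update-nodes db-server add=<system-id>  # attach it to the node
    juju add-machine --constraints "tags=db-server"
    juju deploy mysql --constraints "tags=db-server"

    # Manual route: register an existing host with the environment directly,
    # then place units on it with --to.
    juju add-machine ssh:ubuntu@my-server-1.foo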
[22:04] hi, looking for a quick answer - I moved a bundle from one system to another and ran juju-deployer --config=lis-test-bundle.yaml -e maas
[22:05] It returned with something about must specify deployment, what did I forget?
[22:06] 'Deployment name must be specified'
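(That error generally means the YAML defines named deployments at its top level and juju-deployer wasn't told which one to run. A sketch, assuming the bundle's top-level key is named lis-test:)

    # List the deployment names defined in the bundle (flag support may vary by version):
    juju-deployer --config=lis-test-bundle.yaml -l
    # Deploy the chosen one by passing its name as the final argument:
    juju-deployer --config=lis-test-bundle.yaml -e maas lis-test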