=== thumper-afk is now known as thumper
=== rcj` is now known as rcj
=== scuttlemonkey is now known as scuttle|afk
=== uru_ is now known as urulama
=== CyberJacob|Away is now known as CyberJacob
=== CyberJacob is now known as CyberJacob|Away
=== urulama is now known as urulama-afk
[09:48] gnuoy, https://code.launchpad.net/~james-page/charm-helpers/haproxy-https-multi-network/+merge/236674
[09:49] looking
[09:55] jamespage, just bikeshedding but if len(cluster_hosts) < 1: rather than if cluster_hosts: ?
[09:55] it might have just the local address in it - in which case we should ignore it
[10:01] sorry, I meant if not cluster_hosts:
=== urulama-afk is now known as urulama
[10:14] jamespage, approved
[10:14] gnuoy, thanks - I'll probably wait for dosaboy and xianghui's ipv6 stuff to land before landing that one
[10:15] otherwise we'll get all twisted up
[10:15] sure
[10:19] hazmat: another trivial deployer MP for ya: https://code.launchpad.net/~bloodearnest/juju-deployer/no-relation/+merge/236679
[10:20] bloodearnest, thanks
[14:14] gnuoy, could you double check me on https://code.launchpad.net/~james-page/charm-helpers/haproxy-https-multi-network/+merge/236674
[14:33] jamespage, looking
[14:36] jamespage, if/when you have a moment https://code.launchpad.net/~gnuoy/charms/trusty/openstack-dashboard/lp-1373714/+merge/236720
[14:40] jamespage, still looks good
=== uru_ is now known as urulama
[16:18] marcoceppi: JoshStrobl: - https://github.com/juju/docs/pull/190
[16:18] JoshStrobl: so i heard you hate needlessly recompiling the entire hash, what if i said you could get live-building fairly easily with this mod :)
[16:20] feedback
[16:21] marcoceppi: ack, and will do - let me hack up this watch: target first and inc. fix
[16:21] lazyPower: I literally was moments away from clicking Comment on "Also, could you add a make watch target"
[16:22] i wanted you to nuke my approach from orbit before i vested the 10 minutes getting that working ;)
[16:23] watchmedo works well enough
[16:24] make target isn't passing the variable though
[16:25] watchmedo shell-command --patterns="*.md" --recursive --command='echo "${watch_src_path}"' . is what it should look like
[16:25] the ${watch_src_path} is coming up as nil, so it's building the entire tree
[16:27] where is watch_src_path set?
[16:27] it bubbles up from watchmedo
[16:33] weird, apparently you can't escape $'s in a makefile
[16:47] lazyPower, commented on pull 190
[16:48] hmm, i like that
[16:48] * JoshStrobl edited his comment
[16:48] same approach, cleaner to read
[16:48] yep
[16:48] either way, it'll just call the build func once
[16:48] if there is no arg => same as it was before, otherwise just the one file we define
[16:53] pushed
[16:53] no luck on getting the watch target to populate correctly tho, it's documented in that PR - moving on.
[17:01] JoshStrobl: marcoceppi - cleaned up, rdy 4 eyeballs
[17:08] lazyPower, thanks for the addition of documentation!
[17:08] that's icing on the cake
[17:09] Yea :) Everything else looks great man.
[17:09] i'm cowboying this marcoceppi, unless you ninja me
[17:19] lazyPower: why cowboy?
[17:19] Give me a chance to test, it's not that urgent
[17:20] marcoceppi: i'm hacking on docs today. *points @ his cards on kanban* - i want this
[17:20] Okay, well patch your local branch
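A minimal sketch of the watch target discussed above, in plain shell; the $$-doubling is standard GNU make escaping, which resolves the "can't escape $'s in a makefile" problem, and any target names beyond what the log shows are assumptions:

    # watchmedo (from the python watchdog package) substitutes ${watch_src_path}
    # into the command for each changed file:
    watchmedo shell-command --patterns="*.md" --recursive \
        --command='echo "${watch_src_path}"' .
    # Inside a Makefile recipe the same command needs its $'s doubled, since
    # make expands single $'s itself before the shell ever sees them:
    #   watch:
    #       watchmedo shell-command --patterns="*.md" --recursive \
    #           --command='echo "$${watch_src_path}"' .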
[17:20] Has anyone run into "ERROR juju.provider.common bootstrap.go:122 bootstrap failed: cannot start bootstrap instance: cannot run instances: gomaasapi: got error back from server: 409 CONFLICT (No matching node is available.)" when trying to juju bootstrap on maas?
[17:20] ayr-ton: are you specifying a zone, or --to when you attempt to bootstrap?
[17:21] Ayr-ton: what arch are your machines?
[17:21] lazyPower: no, just a bootstrap with --upload-tools
[17:21] amd64
[17:21] Ayr-ton: are any less than 1G of ram?
[17:21] Here's the full log: http://paste.ubuntu.com/8473883/
[17:22] marcoceppi: No. 2GB, for test purposes.
[17:22] The command I tried was: juju bootstrap --constraints="mem=512M" --debug
[17:23] And I also tried: juju bootstrap --upload-tools --constraints="mem=512M" --debug
[17:23] 409 means maas doesn't have any instances matching your request. Either there are none in a ready state or none that match the constraints
[17:25] marcoceppi: boot images? Or nodes?
[17:25] Ayr-ton: not sure what you mean
[17:26] marcoceppi: Like, an instance matching my request under maas?
[17:27] hmm, anyone have some git-fu here? merged the changes from juju/docs to my fork but it decided to create a new commit with a merge message and push it to my master (so now it is unnecessarily ahead of juju/docs). tips?
[17:27] Yes, a node in maas
[17:27] marcoceppi: Or a boot image matching my request? Or a node?
[17:28] marcoceppi: So, it's my first try with maas - do I need to manually add a node before juju bootstrap? Or will juju automatically add nodes?
[17:29] Ayr-ton: okay, yeah. You need to enlist machines in maas first. Basically, if maas says there are no nodes you have to add them first
[17:30] Ayr-ton: are these physical machines?
[17:30] marcoceppi: Yep. One physical machine.
[17:30] JoshStrobl: git reset --hard hash
[17:30] OK #juju question for you.
[17:31] marcoceppi: A node can be both a virtual machine and a physical machine?
[17:31] and re-try the merge. Should be good to just checkout master, and git pull upstream master.
[17:31] Ayr-ton: yes. You can have a virtual machine
[17:31] The Juju charm cannot download the charm payload. But when I try the download on the Juju host it can download it.
[17:31] http://bb01.mariadb.net/10.0/bintar_10.0_rev4416/mariadb-10.0.14-linux-ppc64le.tar.gz
[17:32] mbruzek: when you say juju host, are you talking about the machine that is hosting the charm?
[17:32] lazyPower, https://github.com/JoshStrobl/docs
[17:32] mbruzek: this is from the actual machine, correct?
[17:32] Why can the charms not see that url?
[17:33] lazyPower, doing a git reset hard isn't helpful after the merge commit has been pushed to origin/master :/
[17:33] marcoceppi: And how do I add a physical node? Like, how do I add the machine that is running the maas master as a node? Just for tests, not for prod =x
[17:33] JoshStrobl: at that point, you have to force push, and make sure you're rewinding to a state that is pre-upstream so you can ff-only merge.
[17:33] mbruzek: you guys in /topic?
[17:35] ayr-ton: Okay, maas is going to be painful unless you have more than 10 machines, and you can't enlist the machine if it's running the maas master
[17:35] lazyPower, okay, I owe you two beers now.
[17:35] JoshStrobl: <3
[17:35] ayr-ton: maas runs on its own machine, typically it controls the DNS and DHCP for the environment
[17:36] ayr-ton: so, typically, a machine is booted on the network, gets a DHCP lease from maas, maas does an enlistment of it, records its hardware, and turns off the machine, then it shows in your dashboard so you or a tool like juju can provision it
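A hedged sketch of the fork cleanup lazyPower walks JoshStrobl through above, assuming the remote `upstream` points at juju/docs and `origin` at the fork:

    git checkout master
    git fetch upstream
    # rewind master past the accidental merge commit so it matches upstream again
    git reset --hard upstream/master
    # rewrite the fork's published master; only safe while nobody else pulls from it
    git push --force origin master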
[17:40] lazyPower, not entirely sure what the end consensus will be about https://github.com/juju/docs/pull/179 . There was a short dialog between evilnick and me, but a consensus wasn't reached in the end about removing the "revision is now deprecated" statement.
[17:40] https://github.com/juju/docs/pull/179
[17:40] oops, already added it in the main message
[17:40] I need to lay off the energy drinks
[17:40] Yeah, i read that one. I'm leaving it alone for now - as we're cleaning up revision files and don't have it in the template to be added to .gitignore/.bzrignore by default.
[17:41] so until that time that we get it cleaned up from the templates, i'm not vaporizing the doc page
[17:41] fair enough
[17:41] I FIGHT FOR THE USERS!
[17:44] marcoceppi: So, in that environment: http://marcoceppi.com/2014/06/deploying-openstack-with-just-two-machines/
[17:44] whoami > flynn uname -a -> SolarOS 4 something...
[17:44] marcoceppi: You do have a maas master in the bigger machine, right? And the three nodes are all nucs?
[17:45] ayr-ton: the bigger machine is the maas master, but it's not very powerful, it's just an old desktop PC that acts as a network bridge to the rest of my network. So it's got two nics and is the gateway for the switch all the nucs are connected to
[17:45] JoshStrobl: http://goo.gl/kplXgm
[17:46] lazyPower, yea I remember seeing that a few years ago
[17:46] Special Features DVD
[17:46] or BR, respectively
[17:46] ayr-ton: also, the maas master doesn't get enlisted in maas
[17:47] marcoceppi: Got it. So, to have something from the maas master enlisted in maas, I need to boot up a virtual machine inside it, right?
[17:48] ayr-ton: right, you can use virtual machines, I recommend virsh/kvm as it's a supported type in MAAS
[17:48] ayr-ton: https://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes
[17:48] those can live on the maas master without incident
[17:49] marcoceppi: Interesting. So, I could have a storage node with like 60GB/s
[17:49] 60GB of ram*
=== uru_ is now known as urulama
[17:49] marcoceppi: And use this for the maas master and the other nodes.
[17:49] ayr-ton: depending on how big the machine that maas master is running on is, sure
[17:50] marcoceppi: Thanks. I will try that (:
[17:52] ayr-ton: best of luck, it can be a bit finicky to set up, I recommend virt-manager, it makes creating kvms with qemu/libvirt really easy
[17:52] marcoceppi: thanks (:
[17:52] * marcoceppi will make a blog post about it
[17:57] marcoceppi: Any idea why the subordinate icons do not show up in a local deployment in the Juju GUI?
[17:58] marcoceppi: The local "regular" charms show up with an icon.
=== roadmr is now known as roadmr_afk
[18:05] marcoceppi: ^^
[18:05] mbruzek: a few reasons
[18:06] marcoceppi: Any of them things I can fix?
[18:08] mbruzek: are you in a hangout?
[18:09] marcoceppi: going there
=== CyberJacob|Away is now known as CyberJacob
[18:37] jcastro: ping
[18:37] yo
[18:37] marcoceppi, hazmat, rick_h_: can you guys give this a once over? https://code.launchpad.net/~dweaver/orange-box/orange-box-robust-sync-charmstore/+merge/236755
[18:37] jrwren, yeah
[18:38] jcastro: do you know anything about trusty/elasticsearch not opening a port after running juju expose?
[18:39] hmm, no
[18:39] which provider?
[18:39] jcastro: Do you know for sure if it has worked on ec2?
[18:39] it has for sure worked on ec2
[18:40] kirkland: in re orange-box-sync, do you not need the people.canonical mirror anymore?
[18:40] jcastro: it's not responsive for me :(
[18:40] jcastro: the precise charm works, but trusty does not.
[18:40] jrwren, how long since you deployed it?
[18:40] huh ... ssh into the unit and see what's up?
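One way to follow jcastro's "ssh into the unit" suggestion, assuming the unit is elasticsearch/0 and it listens on the default port 9200; the UFW check foreshadows the cause found later in the log:

    juju ssh elasticsearch/0
    sudo ufw status verbose          # is a host firewall filtering the port?
    sudo netstat -plnt | grep 9200   # is elasticsearch actually listening?
    curl -s http://localhost:9200/   # does the service answer locally?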
[18:40] marcoceppi: we haven't gotten around to it yet; but if that's stable, your tarball would be *much* preferred
[18:40] marcoceppi, the code for getall looks strange mr.list() -> config.sections
[18:40] jcastro: an hour or so.
[18:40] jcastro: yes. I'll do that.
[18:41] hazmat: yeah, it tried to replicate / be compatible with the original charm getall from days gone by
[18:41] marcoceppi, what i don't see is the call to getBranchTips to actually get all the charms
[18:41] hazmat: it's in mr
[18:42] marcoceppi, in here? http://bazaar.launchpad.net/~charm-toolers/charm-tools/1.4/view/head:/charmtools/mr.py#L96
[18:42] hazmat: yes
[18:42] marcoceppi, where?
[18:42] it does a full bzr branch, not a checkout
[18:43] oh, where does it get the list of charms
[18:43] marcoceppi, yup
[18:43] jcastro: i don't understand the ansible logs, which is part of my problem.
[18:43] charm getall -> just does mr.list() for retrieving charms
[18:44] jrwren, noodles is the author, I'd start with him. He's in europe so I recommend mail
[18:44] jcastro: thanks.
[18:44] when you find the problem, ask him to add it as a test
[18:46] dweaver: marcoceppi has a minimal tarball of all charms
[18:46] marcoceppi: what's the url for that?
[18:46] hazmat: charm update has it
[18:46] marcoceppi: and how often is it synced?
[18:46] hazmat: it's super freakin convoluted
[18:46] kirkland: I was just finishing it, it'll happen daily
[18:47] kirkland: http://people.canonical.com/~marco/mirror/juju/charmstore/
[18:47] marcoceppi: and the resulting tarball is like 15MB, right?
[18:47] kirkland: yes
[18:47] marcoceppi: sweet; any chance we could get this merged into juju-gui, and at jujucharms.com (eventually)?
[18:47] essentially a shallow checkout without version control of all precise and trusty promulgated charms
[18:47] jcastro: yeah, I found the issue. I'll drop him a note. I don't think I know ansible well enough to propose a fix
[18:47] dweaver: http://people.canonical.com/~marco/mirror/juju/charmstore/
[18:47] kirkland: merged in how?
[18:47] dweaver: http://people.canonical.com/~marco/mirror/juju/charmstore/latest.tar.gz
[18:48] marcoceppi: for this to just be a "service" that is at jujucharms.com/latest.tar.gz or whatever
[18:48] dweaver: so, what I'd much rather see is that sync script just grab that tarball, and untar it
[18:48] kirkland: oh, uh, I suppose. I'll talk to rick_h_ about that, but this will probably need to live directly in the charm store since that's where all the charms will be, more or less
[18:48] kirkland, I think that's more of a rick question
[18:48] marcoceppi: jcastro: cool; rick_h_ ?
[18:49] dweaver: would you mind having a crack at that?
[18:49] * rick_h_ reads backlog
[18:49] kirkland, marcoceppi thanks, that'll do the trick.
[18:49] jrwren: hey, i know what's up
[18:49] dweaver: I'd imagine a wget + untar would be about 10000x more reliable and faster than bzr branch 250 times :-)
[18:49] jcastro, jrwren: noodles now lives in australia
[18:49] jrwren: elasticsearch by default enables UFW on port 9200, which is the default elasticsearch port. You can nuke this from orbit with juju run - juju run --service elasticsearch "service ufw stop"
[18:49] kirkland, indeed, I'll use that method, should be a lot faster too.
[18:50] dweaver: also, re: "enable-os-upgrade: false", hazmat says that'll be silently ignored on juju-core < 1.21
[18:50] dweaver: so, good catch on the stupid \t tab
[18:50] dweaver: but I'm going to remove the # commenting that bit out
[18:50] lazyPower: thanks! that would do it! Any reason to not have the charm default to that?
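Two gentler alternatives to the blanket "service ufw stop" above, run across every unit of the service the same way; whether the charm would re-enable UFW on a later hook run isn't covered in the log:

    # open just the elasticsearch port instead of stopping the firewall
    juju run --service elasticsearch "ufw allow 9200/tcp"
    # or turn UFW off persistently rather than only for the current boot
    juju run --service elasticsearch "ufw disable"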
[18:50] dweaver: +1, thanks
[18:50] dweaver: cool?
[18:50] kirkland, well, I was testing it and it might not matter, but it doesn't silently ignore it.
[18:51] dweaver: ah, it warns, but it keeps going?
[18:51] jrwren: it falls under the 'secure by default' section.
[18:51] kirkland, it prints out something like unsupported option at every juju command
[18:51] dweaver: oh
[18:51] kirkland, but does keep going
[18:51] dweaver: hmm
[18:51] kirkland: hmm, so the goal is to get a tarball of the latest rev of all charm zips?
[18:51] dweaver: okay, well, in the debian/control, we could depend on juju-core >= 1.21
[18:51] rick_h_: yes, all promulgated charms
[18:51] lazyPower: I thought that was expose's job. Any reason to not have the charm do that on expose?
[18:51] kirkland: marcoceppi bundles won't work as they're version locked and you'd need the right version
[18:51] dweaver: and add to the orange-box manual that you have to go and add the juju ppa
[18:51] marcoceppi: ok, only promulgated?
[18:51] jrwren: also it's documented in the readme. Between those two reasons alone - I cannot fault the author for ensuring data security. It's worthy of filing a bug.
[18:52] lazyPower: also, secure from what exactly? :)
[18:52] rick_h_: oh....fudge
[18:52] lazyPower: Huge thanks.
[18:52] jrwren: juju expose only interfaces with the provider firewalling solution, afaik it doesn't send any signals to the charm/service itself.
[18:52] marcoceppi: kirkland I'm tempted to just say it's a few-line script to do this with the new charmstore api going live at the end of the month
[18:52] dweaver: uh oh, that's a good point
[18:52] rick_h_: right, and given that these are all deployed from local I think that no bundles working is okay?
[18:52] marcoceppi: right, for now.
[18:52] dweaver: if we do the tarball thing, we'll need to remove all of the version locking of charms
[18:52] marcoceppi: kirkland we can move the conversation and I can tell you where we're headed
[18:53] marcoceppi: kirkland and maybe get you guys in on beta testing/etc but not sure about this solving the root issue atm
[18:53] rick_h_: ok
[18:53] kirkland: well, even with charm getall bundles won't work
[18:53] simply because you're doing juju deploy --repository local:charm
[18:55] kirkland, marcoceppi I'm not worried about bundles, we've been creating local versions of bundles that exist locally on the orange box and want the charms to be available locally, so we remove the version from the bundle anyway
[18:55] dweaver: I figured
[18:56] dweaver: the mirror will give you that
[18:56] dweaver: right, but our current bundles in lp:orange-box-examples often specify specific versions of charms
[18:56] dweaver: charm: "cs:trusty/ceph-27"
[18:57] kirkland, yes, but I've been creating local versions too that have "branch: lp:charms/ceph" instead
[18:57] dweaver: okay
[18:57] dweaver: well, as long as you're aware, and handle it, I think it's great
[18:57] dweaver: we'll just need to adjust
[18:57] dweaver: in the end, I love the idea of grabbing the snapshot tarball
[18:59] kirkland, so do I, I'll work on including that.
[18:59] dweaver: thanks!
[19:11] dweaver: fyi, I've committed, pushed, and released the rest of your fixes, thanks!
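A sketch of the wget + untar sync dweaver agrees to above; the mirror URL comes from the discussion, while the target directory is an assumption:

    wget -q http://people.canonical.com/~marco/mirror/juju/charmstore/latest.tar.gz
    mkdir -p /srv/charmstore
    # one fetch and unpack instead of ~250 individual bzr branches
    tar -xzf latest.tar.gz -C /srv/charmstore
    rm -f latest.tar.gz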
[19:12] kirkland, ty
[19:18] marcoceppi, I get an error if I click on a person's name in the review queue
[19:18] jcastro: yeah, known issue, patched, but the patch isn't applying in production
[19:19] ack
=== scuttle|afk is now known as scuttlemonkey
[19:38] Hey charmers, any chance one of you could take a look at this MP? https://code.launchpad.net/~aisrael/charms/precise/cassandra/apt-gpg-key/+merge/235341
[19:41] thanks, marcoceppi!
=== roadmr_afk is now known as roadmr
=== scuttlemonkey is now known as scuttle|afk
[21:24] marcoceppi: Another question. Is it possible to deploy nova-compute on the maas master with juju, if it has enough memory? Because I can't enlist the maas master as a physical node, right?
[21:24] ayr-ton: you can't do both of those
[21:25] marcoceppi: So, in that environment, I would have only a storage node.
[21:25] ayr-ton: but I think you can put nova-compute in a KVM
[21:25] it'll be slow, but it should work
[21:25] marcoceppi: can i get a quick glance over - https://bugs.launchpad.net/charms/+source/haproxy/+bug/1373081
[21:25] this is for arosales
[21:25] marcoceppi: Would the best way be to install openstack directly on the machine?
[21:26] dpb1, niedbalski - feel free to jump in on this as well if you've got the time.
[21:26] marcoceppi: Can using MAAS make things difficult?
[21:26] ayr-ton: you can't deploy nova-compute to the bare metal of the maas master
[21:26] ayr-ton: but you can put it in a VM
[21:27] marcoceppi: Is putting nova-compute under KVM a good idea?
[21:27] ayr-ton: MAAS is well worth the investment of time if you're going to be orchestrating > 10 nodes. Otherwise, i've found it's just 'simpler' to get moving with KVM hosts under a manual environment, and VM snapshots to revert to a 'clean' state.
[21:28] ayr-ton: if you have no physical machines, then it's a great idea.
[21:28] but i think marco alluded to that earlier today, so pardon my anecdotal interruption.
[21:31] marcoceppi: If I use nova-compute inside a KVM I will be using nested virtualization, right?
[21:31] ayr-ton: yes
[21:34] marcoceppi: In your setup with NUCs, the NUCs were added as physical machines?
[21:35] ayr-ton: two were, I put virtual machines on the third. One was storage, one was nova-compute, the rest on VMs
[21:37] marcoceppi: So, juju deploy nova-compute deployed this inside a physical machine, not a vm?
[21:37] ayr-ton: in my example, yes
[21:37] Awesome (:
[21:38] marcoceppi, lazyPower: Okay. Thank you guys
=== CyberJacob is now known as CyberJacob|Away
[21:38] ayr-ton: happy to help in any way that i can
[21:45] lazyPower, ack
[21:49] lazyPower: lgtm
[21:49] ta marcoceppi
[21:49] niedbalski: if you're reviewing i'll wait a bit longer before addressing it
[21:52] lazyPower, i will review it shortly
[22:31] jose: i'm going to be at DrupalCamp Bolivia giving a session about DevOps with Drupal and Ubuntu Juju :) http://cocha2014.drupalbolivia.org/session/desarrollando-con-drupal-en-equipo
[22:31] if you can, please join us!
[22:31] lazyPower, do you have an MP for the haproxy branch?
[22:32] niedbalski: https://launchpad.net/~lazypower/charms/trusty/haproxy/trunk - it was a new branch, as it didn't exist in the store
[22:44] lazyPower, i put some comments on https://bugs.launchpad.net/charms/+source/haproxy/+bug/1373081
[23:57] Is it possible to have more than one bootstrap machine in the same amazon region?
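The closing question goes unanswered in the log; for juju 1.x, each environment defined in ~/.juju/environments.yaml bootstraps its own state server, so two environments can share an amazon region (the environment names here are made up):

    # each environment gets its own bootstrap instance, even in the same region
    juju bootstrap -e ec2-east-one
    juju bootstrap -e ec2-east-two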