[00:46] how to add a machine to juju?
[00:56] b1tbkt: What do you mean?
[00:59] i have maas & juju. In maas, I've commissioned an additional (bare metal) machine but that is not reflected, for instance, in 'juju status'
[01:00] b1tbkt: You need to deploy a service. Juju will only add a machine in its status once it has a reason to. Otherwise it leaves the machine out there in case something else uses it
[01:00] ahh okay. makes perfect sense. tks!
[01:00] So try deploying something and Juju should then requisition that node for use and it'll thus show up in the status
[01:01] is juju communicating with maas then to identify available resources?
[01:02] or, i should say, is it relying on maas to know about additional (machine) resources?
[01:04] b1tbkt: Yes, it relies on the provider to give it a machine
[01:05] b1tbkt: It's been forever and a day since I used MAAS and Juju so I don't know the answer, but I assume Juju gives an error when MAAS has no more machines available
[01:08] hrmm. okay. i appreciate the insight. any particular technical reason that you haven't used both together lately or just circumstance?
[01:09] b1tbkt: Purely circumstance, I don't have enough hardware to throw at MAAS at the moment. I hope to use it more in the coming months when I get more physical hardware to play with
[01:10] ack. tks. just test driving it now for the first time. fortunately I've got a spare dozen boxes at my disposal. definitely a high barrier to entry w/ maas.
[01:11] b1tbkt: cheers! Good luck
[01:13] tks. only other problem so far is that ipmi startup doesn't seem to be working quite right. credentials get populated inside the server ipmi interface. not sure what's going on there but it's likely a problem for another day ;)
=== defunctzombie is now known as defunctzombie_zz
[03:48] is this the correct way to deploy local changes to a charm? juju deploy --repository=trunk local:precise/tracks   I ask because it's not seeing my changes
[04:14] mfisch: is trunk a directory? Could you run a tree inside of the trunk directory?
[04:15] my charm is in here, trunk/precise/tracks/
[04:15] it looks right to me based on the docs
[04:15] I destroyed everything and will try one more time
[04:15] mfisch: cool, just checking. You can drop the precise/ from the local:precise/tracks
[04:16] just local:tracks should be sufficient since the series is determined by the environments.yaml file
[04:16] ok
[04:16] I should see pass or fail in about 2 more mins
[04:17] When you run the deploy, it should confirm that it's using the local repository over the charm store (cs)
[04:17] by confirm, I mean it should just echo back that it's doing so, not actually prompt you
[04:17] marcoceppi: I'm fixing your review comment, it was a dumb mistake
[04:18] INFO Searching for charm local:precise/tracks in local charm repository: /home/mfisch/experiments/tracks/trunk
[04:19] So, /home/mfisch/experiments/tracks/trunk has the following structure: /home/mfisch/experiments/tracks/trunk/precise/tracks; where tracks is the charm?
[04:19] yes
[04:19] cool
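For reference, a minimal sketch of the local charm repository layout and deploy command being discussed, assuming the paths from this conversation (trunk/ as the repository root, precise as the series):

    mkdir -p ~/experiments/tracks/trunk/precise/tracks   # layout is <repository>/<series>/<charm>
    cd ~/experiments/tracks
    # The series prefix can be dropped; juju takes it from environments.yaml:
    juju deploy --repository=trunk local:tracks
    # The output should confirm the local repository is being searched, e.g.:
    #   INFO Searching for charm local:precise/tracks in local charm repository: .../trunk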
[04:20] see this is still wrong: cd tracks-2.2.1
[04:20] it should be a mv blah and then cd tracks
[04:20] 18 unzip tracks.zip
[04:20] 19 mv tracks-${VERSION} tracks
[04:20] 20 cd tracks
[04:21] So, the charm you're deploying doesn't reflect what was actually deployed?
[04:21] it does not appear that way
[04:21] I'm sshed in and rechecking
[04:21] Are you deploying to local or another cloud provider?
[04:21] marcoceppi: cloud provider
[04:22] marcoceppi: wait
[04:22] hum, and you said you're destroying the environment between tests?
[04:22] * marcoceppi waits
[04:22] yeah
[04:22] so the install file here looks right
[04:22] right
[04:22] unzip tracks.zip
[04:22] mv tracks-${VERSION} tracks
[04:22] cd tracks
[04:22] so the plot thickens
[04:22] but the logs
[04:22] here's the unzip finishing
[04:22] extracting: tracks-2.2.1/vendor/query_trace.tar.gz
[04:23] and then ERROR: + cd tracks-2.2.1
[04:23] that's the old code
[04:23] pastebin the whole charm.log file
[04:23] It's late (early) here, but another pair of eyes might help
[04:24] yeah it's getting late here even (Colorado)
[04:27] marcoceppi: http://paste.ubuntu.com/5717798/
[04:28] marcoceppi: the old code did cd tracks-${VERSION} which is what I see in the log
[04:30] huh. The output indicates that it's using your local version
[04:30] If you're around tomorrow I can try to deploy your latest changes from here and try to replicate the issue.
[04:30] It's about time for me to retire to the bedroom
[04:31] marcoceppi: sure, I'll be here thanks
[04:32] marcoceppi: I'm going to sign off too, I have baby duty
[04:32] mfisch: cheers, good luck with that
[04:47] Extending Apache charm to include Apache Modules | http://askubuntu.com/q/282660
=== wedgwood_away is now known as wedgwood
=== salgado is now known as salgado-lunch
[15:22] Has anyone successfully bootstrapped juju with devstack?
[15:36] bcmfh: I tried about 5-8 months ago and wasn't able to get it to work quite right
[15:47] marcoceppi, yeah, I'm a bit hampered by my lack of Python understanding, I'll try the ask-ubuntu site
=== salgado-lunch is now known as salgado
=== rogpeppe3 is now known as rogpeppe
=== defunctzombie_zz is now known as defunctzombie
[17:44] hey guys, are there no quantal i386 builds of juju-core (golang)? Why?
[17:44] https://launchpad.net/~juju/+archive/devel/+packages only amd64
[20:27] hey, where is the agent source code for the golang version of juju? I'm looking around in the bzr branch lp:juju-core, but not finding anything that's obviously the agent.
[20:27] s/agent source code/source code of the agent/
[20:27] orospakr: I think the agent code is part of the whole juju-core package
[20:28] alright, I figured it might be that :)
[20:28] orospakr: I could be wrong but that's kind of how the py-juju version works. A lot of the go-juju guys hang out in #juju-dev - you might be able to get more detailed answers there
[20:33] another nub question: it seems like the canonical (heh, see what I did there) representation of your deployment structure lives inside the first instance. can it be dumped and recreated on a different cloud provider (or on a different account on the same provider)?
[20:34] orospakr: It can. It's not currently a part of the core juju project, but there's a side project called juju-jitsu which adds commands like jitsu export and jitsu import, which take a snapshot of your running environment and allow you to move that structure to another provider. The downside is: any data in the previous cloud doesn't move with your deployment.
[20:35] fair enough. :)
[20:35] Depending on your deployment, a lot of the time it's pretty easy to wrap the jitsu export/import to also sync your data
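A hedged sketch of the jitsu export/import flow described just above; the command names come from the discussion, but the exact invocation and flags are assumptions and may differ between juju-jitsu releases:

    jitsu export > deployment.json   # snapshot the structure of the running environment
    # point juju at the target provider or account (e.g. a second entry in environments.yaml),
    # then recreate the same service topology there:
    jitsu import deployment.json
    # note: application data in the old cloud does not move with this; sync it separately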
[20:36] what's the general strategy for handling backups of juju managed services that store persistent data? is there a common pattern charm writers tend to use for enabling data export?
[20:38] Well
[20:44] orospakr: There are charms like ceph, nfs, etc. that can handle pooled storage across multiple nodes. But as it stands now we don't quite have a backup charm or anything in core that handles this. I know there's a lot of talk about making a "backup charm" that would
[20:44] okay, fair enough! :)
[20:44] be configured to know what data to pull and where to store it, say S3 or something else. Then know when to pull it down
[20:45] it would have to cooperate with the charms that are responsible for things that store data, like mongodb, postgresql, etc.
[20:46] orospakr: Right, so it'd probably be a subordinate charm that would reside on these units and can then be configured by either the parent charm (mongodb, psql, mysql, etc) or via a config option of what paths/files to back up
[20:46] that sounds like a great design to me!
[20:46] So, immediately it's a pretty large/robust charm. Which is why it hasn't really been tackled yet, I assume
[20:47] I hope this upcoming cycle we can spend some time figuring out how to handle things like backups and the like
[20:50] so, if the deployment structure is basically sticky (excepting any clever use of jitsu import/export) to the single cloud account, what is the usual method for having a separate staging instance of all of your stuff discrete from production? or is the notion that most of the logic is in charms themselves, so having a duplicate deployment isn't such a big deal?
[20:51] orospakr: it's the latter, almost all the logic is in the charms. So you can juju deploy -e staging your_service, which would be identical to juju deploy -e production your_service, where the staging environment might be a private MAAS, OpenStack, or even your local machine and production is any other cloud
[20:54] hmm, nice.
[20:55] alright, here's a doozy: what is the AGPL understood to consider as a derivative work? particularly, are charms considered derivative work of juju?
[20:56] orospakr: IANAL and I think SpamapS knows better than I do, but no, charms are not considered derivative works of Juju
[20:56] ok! IANAL is understood ;)
[20:57] Considering charms have their own licenses, I'm going to go with not derivative works
[20:58] marcoceppi: IANAL too btw :)
[20:58] charms have their own licenses
[20:58] SpamapS: true, but I feel you have a far better grasp on the licensing aspect
[20:59] * marcoceppi bows
[20:59] they are like python scripts.. python scripts are not derivative of python
[20:59] Cool, that's what I figured
=== defunctzombie is now known as defunctzombie_zz
[21:18] I figured I'd best ask; some interpreters of GPL license versions consider API consumption to be a derivative work
[21:20] hm, is it possible to use more than one cloud provider in a single juju environment?
[21:21] orospakr: Not at the moment. There's talk of cloud federation as being a core feature for juju but I don't think any work has started on it
[21:22] hm, cool.
[21:23] does juju-jitsu work with the golang version of juju?
[21:25] orospakr: not sure, but I'm going to venture a guess of no
[21:32] SpamapS: ^
[21:35] orospakr: some bits of jitsu might work fine w/ juju-core (the go port) but most will not, as they directly call the Python APIs from juju.
=== defunctzombie_zz is now known as defunctzombie
=== wedgwood is now known as wedgwood_away
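To illustrate the staging/production pattern from the exchange above: a minimal sketch assuming two environments named staging and production are already defined in ~/.juju/environments.yaml (staging on MAAS, OpenStack, or the local provider; production on a public cloud), with identical charm logic deployed to both:

    juju bootstrap -e staging
    juju deploy -e staging your_service      # exercise the charm against staging first
    # once it checks out, the same commands target production unchanged:
    juju bootstrap -e production
    juju deploy -e production your_service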