[00:03] <SpamapS> hazmat: https://code.launchpad.net/~clint-fewbar/txaws/drop-epsilon/+merge/87700 ... nice juicy txaws change for you to review. :)
[00:04] <hazmat> tasty
[00:04] <hazmat> SpamapS, nice
[00:04] <SpamapS> next on the hit list is pytz
[00:05] <hazmat> that ones harder
[00:05] <SpamapS> not really
[00:06] <SpamapS> only used for UTC
[00:06] <hazmat> not sure what the usage is, but as far as replacements..
[00:06] <SpamapS> which dateutil also has
[00:06] <hazmat> cool
[00:06] <SpamapS> if we're going to pull txaws into main, might as well get txaws using the existing main stuff rather than expand main even further
[00:08] <hazmat> SpamapS, sounds good to me
[00:11] <hazmat> hmm.. utc support should be builtin
[00:12] <hazmat> oh.. its there for constant
[00:12] <hazmat> sad
[00:14] <hazmat> ah.. there are tzinfo object instances
[00:14] <SpamapS> yeah, just need datetime.datetime.utcnow()
[00:15] <nijaba> I seem to run into cases where hooks are skipped when running juju on lxc.  anyone seeing this?
[00:16] <SpamapS> hm, its not quite the same thing.. datetime.utcnow() will return a naive datetime.. I need it to be a UTC datetime
[00:16] <hazmat> SpamapS, as you said dateutil includes the necessary utc tzinfo impl
[00:16] <hazmat> nijaba, skipped ? or not observed ;-)
[00:16] <hazmat> nijaba, which hooks in particular (start/install)?
[00:17] <nijaba> hazmat: skipped as in state: running but obviously the install hook was never called
[00:17] <SpamapS> yeah exactly, just had to wrap my head around it
[00:17] <hazmat> nijaba, is the hook executable?
[00:18] <hazmat> nijaba, i'd check the unit agent log on unit's disk to verify, it will skip if not found
[00:18] <nijaba> hazmat: it is.  and what's funny is that it happens when I start two units of the same service in a short time period.  One does it fine, the other one does not
[00:19]  * nijaba looks at the log
[00:19] <hazmat> nijaba, can you pastebin the unit agent log from one thats being skipped?
[00:21] <hazmat> nijaba, there shouldn't be any way for that to happen, to get to started it passes through install/start transitions which execute relevant hooks. i'm happy to take a look though
[00:22] <hazmat> SpamapS, it's a drop-in replacement for the UTC/tzutc imports, the only delta of note is the unit test
[00:23] <hazmat> for the FixedOffset test
[00:23] <SpamapS> hazmat: yeah, I don't even think I need dateutil .. as datetime.utcnow().replace(tzinfo=tzinfo('UTC')) seems to work fine
[00:23] <hazmat> SpamapS, cool.. i could have sworn it was builtin
[00:24] <nijaba> hazmat: ok, I think I know where it comes from.  I started debug-hook before the unit was ready, went to have dinner, came back, the debug-hook timed out on the "accept key" prompt and install was never actually run
[00:24] <SpamapS> hazmat: you get the UTC time, but to be identical, you need tzinfo to be set just like pytz does it
[00:25] <hazmat> "An object d of type time or datetime may be naive or aware. d is aware if d.tzinfo is not None and d.tzinfo.utcoffset(d) does not return None. If d.tzinfo is None, or if d.tzinfo is not None but d.tzinfo.utcoffset(d) returns None, d is naive."
[00:26] <nijaba> hazmat: http://pastebin.ubuntu.com/794426/
[00:27] <hazmat> nijaba, ah.. so debug-hooks was running and exited, so the hooks were run and there was no start hook
[00:27] <hazmat> thats still a little odd
[00:27] <hazmat> ah the cli client connection to zk timed out, removing debug mode
[00:28] <hazmat> nijaba, did your laptop sleep or network get interrupted?
[00:28] <nijaba> hazmat: well, it is quite logical. for the unit, I was in debug hook mode, so nothing was run (was waiting for me to launch it).  Since the session timed out  before I got back, it continued without ever running it
[00:29] <nijaba> hazmat: nope, just left alone for 3h
[00:29] <hazmat> there isn't an explicit timeout on debug hook sessions, but if the client loses connectivity for any reason it is ended
[00:30] <hazmat> time for dinner bbiab
[00:30] <nijaba> hazmat: the time out is on the client side for me to accept connecting to a host with a new key
[00:33] <nijaba> is there a way to tell juju (lxc) to create the guest in a tmpfs ?  Would speed up tests quite a bit I think
[00:35] <nijaba> gah, I'll just mount /var/lib/lxc/ in a tmpfs...
[00:36] <hazmat> nijaba, that should work
[00:36] <hazmat> nijaba, there's support in lxc-clone for snapshots but its not something we've explored
[00:37] <hazmat> on an ssd after the first unit (which also does the master template creation) takes just a few secs
[00:37] <hazmat> at least for me
[00:37] <hazmat> er.. template lxc container creation
[00:37] <nijaba> hazmat: sorry, only a good old spinner 7200rpm here :)
[00:37] <SpamapS> hazmat: indeed, dateutil.tz's tzutc == pytz.UTC and tzoffset ~= pytz.FixedOffset
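The thread above boils down to a few lines; here is a sketch using the modern stdlib `datetime.timezone` (at the time of this log, dateutil's `tzutc()`/`tzoffset()` filled the same role, as SpamapS notes):

```python
from datetime import datetime, timezone, timedelta

# datetime.utcnow() gives the UTC wall-clock time, but as a *naive*
# datetime: its tzinfo is None, so it carries no timezone information.
naive = datetime.utcnow()
assert naive.tzinfo is None

# Attaching a UTC tzinfo makes it "aware" per the rule hazmat quotes:
# d.tzinfo is not None and d.tzinfo.utcoffset(d) does not return None.
aware = naive.replace(tzinfo=timezone.utc)
assert aware.tzinfo.utcoffset(aware) == timedelta(0)

# A fixed-offset zone, the role played by pytz.FixedOffset / dateutil's
# tzoffset in the swap being discussed.
plus_two = timezone(timedelta(hours=2))
assert plus_two.utcoffset(None) == timedelta(hours=2)
```

The second assertion is exactly the naive/aware test from the datetime docs that hazmat pastes at [00:25].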
[00:48] <nijaba> whaaa.  That was fast!
[00:50] <nijaba> I have to be REALLY quick to be able to start juju debug-hook before the install hook is called
[00:54]  * nijaba just added "tmpfs /var/lib/lxc tmpfs defaults,size=3G 0 0" to his /etc/fstab
[01:01] <hazmat> nijaba, yeah.. deploy really needs an option for running with debug-hooks to help against the install/start case
[01:01] <hazmat> ie.. deploy --debug
[01:01] <nijaba> hazmat: would be nice indeed
[01:10]  * nijaba juju deploy local:bed && juju deploy local:nick && juju add-relation nick bed
[01:37] <SpamapS> hazmat: woot.. managed to replace pytz, zope.datetime, and epsilon with dateutil
[01:37] <hazmat> SpamapS, score
[01:37] <hazmat> nijaba, just don't destroy the env ;-)
[01:37]  * hazmat hibernates
[10:07] <koolhead17> nijaba: the limesurvey charm has been very helpful to me in understanding charm writing :)
[10:07] <nijaba> koolhead17: I am glad :)
[10:09] <koolhead17> nijaba: will you have time to review my charm later in day? i am almost done with all the blockers you mentioned in the review. I need some more time to get mysql part talking!! :P
[10:09] <nijaba> koolhead17: If not today, I'll try this we.  Ping me when you think you are good
[10:10] <koolhead17> nijaba: perfect. :)
[10:45] <koolhead17> i had accidently shutdown my machine while juju was still running and i kept getting this http://paste.ubuntu.com/794719/
[10:46] <koolhead17> the only solution i found for this was to remove the files inside my data-dir
[10:46] <koolhead17> is it right way?
[10:48] <fwereade> koolhead17, I don't know for sure; but what do you get if you destroy-environment?
[10:50] <koolhead17> fwereade: well i got no solution even after trying that, so i applied the hard way :P
[10:50] <fwereade> koolhead17, fair enough -- so destroy-environment just didn't do anything?
[10:51] <koolhead17> fwereade: destroy-environment has worked for me in normal condition
[10:51] <fwereade> koolhead17, indeed, just not with a restart in between
[10:51] <koolhead17> this time my poor system hanged while juju was spawning some containers
[10:52] <koolhead17> fwereade: i would say it's my system's issue, and for that, the solution was what i did. :P
[10:53] <fwereade> koolhead17, well, glad it's fixed anyway ;)
[10:55] <koolhead17> fwereade: indeed!! :D
[11:37] <rbasak> Has anyone yet tried running juju on arm?
[11:38] <rbasak> I get http://paste.ubuntu.com/794755/ but not sure if it's an arm issue or it's me doing something wrong.
[11:40] <koolhead17> fwereade: i was going through my unit's log and saw http://paste.ubuntu.com/794757/  is it okey? my charm is running well
[11:52] <nijaba> argh, looks like my lxc is stuck on something.  Getting a weird error on juju destroy-environment http://pastebin.ubuntu.com/794769/
[12:11] <koolhead17> nijaba: last time we were discussing the possibility of having a mail-server kind of charm so we can simply use it for CMS
[12:12] <nijaba> koolhead17: yep.  And I hinted that this may not be the only option we want to offer.  it is often a better choice to point the CMS to an external SMTP server rather than forcing the deployment of one in the juju environment
[12:13] <nijaba> koolhead17: which is what I did for roundcube and limesurvey
[12:16] <fwereade> koolhead17, I'm afraid I don't know about that at all :(
[12:17] <koolhead17> nijaba: yeah am going through the charm!! :P
[12:18] <koolhead17> nijaba: fwereade in my charm case, the mail server simply needed to send password reset msg.
[12:19] <nijaba> koolhead17: so either use a local smtp server or point to an external one
[12:22] <koolhead17> nijaba: okey.
[13:02] <koolhead17> nijaba: http://bazaar.launchpad.net/~charmers/charm/oneiric/limesurvey/trunk/view/head:/config.yaml  i see information about the mailserver settings, i can use something equivalent
[13:03] <nijaba> koolhead17: yes, that's what I would suggest
[13:04] <koolhead17> nijaba: but i don't see any of the mail server related packages getting installed via the Install file of the charm, am i missing something?
[13:06]  * koolhead17 is bit confused :P
[13:12] <koolhead17> shit, we can use remote host for that as you suggested nijaba , stupid me
[13:20] <nijaba> koolhead17: right, but let's keep the s**t out :)
[13:40] <koolhead17> nijaba: :)
[14:36] <rbasak> running juju locally (lxc), juju is calling lxc is calling debootstrap without a proxy. Any idea how I can set one? debootstrap will accept an http_proxy environment variable AFAIK, but I don't know how to set it in the middle
[14:38] <koolhead17> nijaba: so the limesurvey mysql db, when imported into mysql, contains authentication credentials like user: admin password: password? i am reading the readme.txt
[14:39] <nijaba> koolhead17: yes
[14:40] <koolhead17> nijaba: aah nice, which means we are importing the db and accordingly adding the credentials in its config file :)  This makes me feel goooooooood
[14:46] <nijaba> looks like I hit a juju bug when I have an unbound var in a peer-relation-joined.  once I have this condition, relation events won't be processed anymore until I restart my environment http://pastebin.ubuntu.com/794903/
[14:47] <nijaba> hazmat: want me to open a bug for this ^?
[15:36] <m_3> reminder to update your lxc caches (P _and_ O) and whatever you need in apt-cacher-ng caches before travel
[15:46] <nijaba> m_3: may sound silly, but what are P and O lxc caches
[15:47] <m_3> nijaba: sorry, local juju deployment caches for precise and oneiric
[15:48] <nijaba> m_3: thanks, much clearer!
[15:48] <m_3> i.e., bootstrap and deploy something with each series so the base install gets built out
[15:48] <nijaba> rght
[15:51] <m_3> the harder one to handle is apt-cacher-ng... gotta spin up whatever charms you might wanna work with offline
[15:53] <_mup_> Bug #912812 was filed: Error condition on relation hooks locks events processing <juju:New> < https://launchpad.net/bugs/912812 >
[16:56] <hazmat> hmm
[17:02] <nijaba> hmhmm?
[17:36] <hazmat> nijaba, do you have the complete log for that unit agent
[17:37] <nijaba> hazmat: killed it, but will reproduce
[17:41] <hazmat> nijaba, cool, i think i've worked it out, that's a rather serious bug imo
[17:42] <nijaba> hazmat: looked like it to me.  I opened a bug, if you did not see
[17:42] <hazmat> nijaba, got it just commenting on it now
[17:48] <nijaba> hmmm, looks like this bug is even worse than I thought.  After destroying my env and bootstrapping a new one, a new deploy seems to block
[17:49] <nijaba> short of rebooting my system, what do I need to wipe to restart from a clean state?
[17:49] <koolhead17> nijaba: i have fixed most of the blockers you mentioned in 1st review, if you have time please review it  :)
[17:49] <koolhead17> https://code.launchpad.net/~koolhead17/charm/oneiric/owncloud2/trunk
[17:53] <nijaba> koolhead17: looking
[17:58] <nijaba> koolhead17: just out of curiosity, why in start are you using 'service apache2 start' and in stop '/etc/init.d/apache2 stop'?  just curious, as both should work, but I would tend to use the same form in both.
[17:58] <hazmat> nijaba, huh? simply destroying the env should resolve it
[17:58] <hazmat> actually just removing the unit would suffice, its a local unit problem
[17:59] <koolhead17> nijaba: :P lemme check it.
[18:00] <nijaba> hazmat: well, it might be something else, but my first unit in the new env has been pending for 15min now.  it's a bit long for lxc in a tmpfs :(
[18:01] <koolhead17> nijaba: because i wrote that part long time ago :P just modified it
[18:02] <nijaba> koolhead17: your changes look good to me :)
[18:02] <nijaba> koolhead17: can't wait for the mysql part!
[18:04] <nijaba> koolhead17: do you want to wait for that until we promulgate? if not, please change the bug status to "fix-committed" and comment on the changes you made
[18:05] <nijaba> hazmat: you interested in my current lock state or should I go ahead an reboot?
[18:06] <hazmat> nijaba, yes i'd like a pastebin of the unit agent log in complete
[18:06] <hazmat> nijaba, oh the locked state, sure
[18:06] <koolhead17> nijaba: it currently works without any issue and uses sqlite; i will change it to fix-committed with a comment that the mysql integration part is being worked on.
[18:06] <hazmat> nijaba, can you pastebin a ps aux process listing
[18:06] <nijaba> hazmat: sure
[18:09] <nijaba> hazmat: http://paste.ubuntu.com/795106/
[18:10] <hazmat> nijaba, yeah.. i've seen that occasionally lxc-wait basically hangs..
[18:10] <hazmat> lxc-wait -n nbarcet-lxc-roundcube-0 -s RUNNING
[18:10] <hazmat> nijaba, what's the output of lxc-ls ?
[18:11] <hazmat> that should show if the container is running
[18:11] <nijaba> hazmat: http://paste.ubuntu.com/795107/
[18:11]  * nijaba loves pastebinit
[18:12] <hazmat> nijaba, me too.. last one.. could you pastebin the machine agent log
[18:13] <nijaba> hazmat: the one in the unit?  Can't ssh to it at this point
[18:14] <hazmat> nijaba, /home/nbarcet/tmp/juju/nbarcet-lxc/machine-agent.log
[18:15] <hazmat> nijaba, the unit log files are on local disk as well and symlinked in to the data-dir, although not sure that works across disk partitions like you've setup
[18:16] <SpamapS> nijaba: 11.10 or precise?
[18:16] <SpamapS> I believe there have been quite a few fixes to lxc in precise
[18:16] <nijaba> hazmat: http://paste.ubuntu.com/795113/
[18:16] <hazmat> i think this is an lxc issue, but not one reported upstream yet
[18:16] <nijaba> SpamapS: 11.10
[18:17] <hazmat> hmm.. so the container start fails, and then the lxc-wait hangs because the container isn't started
[18:18] <hazmat> i guess the question is why the container start failed, which should be in the container console log under /home/nbarcet/tmp/juju/nbarcet-lxc/units/roundcube-0/
[18:18] <hazmat> nijaba, okay.. last one ;-) could you pastebin the files in that dir?
[18:18] <SpamapS> hazmat: and why lxc-wait doesn't return an error when start has failed.
[18:20] <nijaba> hazmat: http://paste.ubuntu.com/795117/
[18:20] <hazmat> SpamapS, we should probably lxc-wait on more than just start to detect the failure and avoid the machine agent hang, the lxc processes have completed, but understanding why the container fails would be helpful as well
[18:21] <hazmat> hmm.. that looks normal
[18:21] <koolhead17> Is it a good idea to have charm for splunk?
[18:21] <nijaba> hazmat: arg...  truncated by pastebinit
[18:21] <nijaba> hazmat: the end is not normal, wait a sec
[18:23] <nijaba> hazmat: http://paste.ubuntu.com/795123/
[18:23] <SpamapS> hazmat: indeed.. RUNNING|STOPPED maybe?
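A sketch of the fix the two converge on here (hazmat confirms it at [18:29], and the hang itself is filed as bug #912879 below): wait on an ORed state list so a container that dies during startup still unblocks the machine agent. The container name is the one from nijaba's paste; this is untested, a sketch only.

```shell
# lxc-wait's -s flag accepts ORed states; waiting on both RUNNING and
# STOPPED means a failed start returns instead of hanging forever.
lxc-wait -n nbarcet-lxc-roundcube-0 -s 'RUNNING|STOPPED'
# afterwards, lxc-ls / lxc-info can distinguish which state was reached
```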
[18:23] <SpamapS> koolhead17: the splunk server, yes. The splunk agent? not until subordinate services land (soon I think)
[18:24] <nijaba> SpamapS: splunk was my first use case to justify "virtual" charms, which we now call subordinates ;)
[18:25] <jcastro> koolhead17: dude nice work there with owncloud
[18:25] <koolhead17> SpamapS: i have two questions, 1) it has separate pkgs for 32 and 64 bit.  2) it asks you to fill in company details and your name before it actually lets you download them.  How to overcome this part :D
[18:25] <jcastro> we just need the mysql parts right?
[18:25] <koolhead17> jcastro: thanks sir!!
[18:26] <koolhead17> jcastro: yes i will get it hopefully by monday/tuesday.
[18:26] <SpamapS> nijaba: I actually don't think they're at all similar, virtual and subordinate. We just happen to be able to mimic the end-result of virtual charms using subordinate ones.. but I'm still not really "happy" with that sort of hack. :p
[18:27] <nijaba> jcastro: actually, I think we can promulgate as is, without the mysql part, as the limit is clearly stated in the readme.  do you agree?
[18:27] <SpamapS> nijaba: a virtual service would be a charm that doesn't need a machine... so the hooks would run once, in one place. using a subordinate charm, we have to run the hooks on every machine that gets related to it..and, IMO, they will end up being more complicated.
[18:27] <jcastro> I agree, I mean, if it's for personal use, I don't need mysql
[18:27] <jcastro> the person who wants to deploy it for tons of  users will want mysql and will find the bug report and/or fix it.
[18:27] <koolhead17> jcastro:  i can write a separate charm for mysql if it's a good idea :P
[18:28] <jcastro> but we do have a bug on the charm for mysql
[18:28] <nijaba> SpamapS: well, not really, as long as we do not have to start a container for it, I think it is the right way to go after quite a bit of brainstorming with jimbaker
[18:28] <jcastro> koolhead17: config option?
[18:28] <koolhead17> jcastro: currently it just works with sqlite
[18:28] <nijaba> jcastro: what bug for for mysql?
[18:28] <SpamapS> nijaba: this means that for an ELB charm, you have to copy the ELB credentials to every single machine.
[18:29] <hazmat> SpamapS, yup re RUNNING|STOPPED
[18:29] <nijaba> koolhead17: yes, we want to avoid duplicating charms, use config options instead
[18:29] <jcastro> nijaba: oh hey, check this out: https://code.launchpad.net/~marcoceppi/charm/oneiric/owncloud2/mysql
[18:29] <nijaba> SpamapS: why not provide them in config?
[18:30] <jcastro> here we go, nijaba, let me find marco and see if that's review worthy
[18:30] <SpamapS> nijaba: they'll be in config.. and copied, to *every* machine.
[18:30] <jcastro> and we'll just fix it
[18:31] <SpamapS> nijaba: a virtual charm would have those credentials isolated to a single machine, most likely a provisioning machine.
[18:31] <SpamapS> I realize this is moot while we have no ACL isolation for ZK
[18:31] <hazmat> nijaba, thanks thats helpful
[18:32] <SpamapS> nijaba: its a workaround, not a solution, thats all.
[18:32] <nijaba> SpamapS: agreed
[18:32] <nijaba> hazmat: np
[18:32]  * nijaba reboots
[18:33] <SpamapS> Most places currently have similar problems.. root on any puppet box can read the entire corpus of puppet manifests and files in the puppet master.
[18:33] <hazmat> SpamapS, really?
[18:34] <hazmat> so they get transport level security with certs but no resource access granularity ?
[18:34] <SpamapS> That may be different now..
[18:35] <SpamapS> but in the past, it was true, the cert process was just to verify that you were allowed to read the puppet manifests and files
[18:35] <SpamapS> Its then entirely up to your puppet run to do whatever you will with that information.
[18:35] <_mup_> Bug #912879 was filed: Machine agent hangs if lxc container start fails <juju:New> < https://launchpad.net/bugs/912879 >
[18:35] <SpamapS> hazmat: the key difference between that and our ZK problem is that its global *read*
[18:38] <hazmat> ic
[18:40] <koolhead17> SpamapS: splunk asks you to register an account to download their pkg, do u think it's a good idea to work on its charm?
[18:41] <nijaba> koolhead17: <Canonical hat on>I have add some discussion with them, so I would wait for subordinate to land and the discussion to conclude<canonical hat off>
[18:41] <SpamapS> koolhead17: IMO, no, I'd focus on 100% open source software ..
[18:41] <nijaba> s/add/had
[18:42]  * hazmat lunches
[18:42] <nijaba> bon appetit
[18:42] <koolhead17> SpamapS: nijaba thanks. :)
[19:00] <nijaba> koolhead17: just promulgated your owncloud charm!
[19:00] <nijaba> jcastro: blogging material? ^^
[19:01] <jcastro> yep
[19:01] <jcastro> already working on it
[19:02] <jcastro> Friday, close to EOD? That's when all the new charms land.
[19:02] <jcastro> :)
[19:02] <SpamapS> wow.. ZK's internal test suite is pretty comprehensive
[19:02] <SpamapS> jcastro: friday is hack day. :)
[19:05]  * nijaba hopes secretly that it will also be a bit of a review/merge day for SpamapS :]
[19:06]  * koolhead17 kicks himself, why on earth he was not aware about pastebinit
[19:26] <nijaba> SpamapS: it looks like we may have an issue with versioning in charm-helpers-daily: the last daily build failed because of that...
[19:43] <jcastro> SpamapS: hey, can you do a "charm get owncloud" and tell me what happens?
[19:43] <jcastro> mine comes up with an empty bzr branch?!
[19:45] <SpamapS> jcastro: you did something wrong.. or charm tools is broke
[19:45] <SpamapS> jcastro: works fine for me
[19:45] <SpamapS> $ charm get owncloud
[19:45] <SpamapS> Branched 8 revisions.
[19:45] <jcastro> nm, I am not able to reproduce
[19:45] <jcastro> weird
[19:47] <jcastro> ok so this threw me off
[19:47] <jcastro> koolhead17: the charm is "owncloud"
[19:48] <jcastro> but the service is owncloud2
[19:48] <koolhead17> jcastro: it should be owncloud2 because it's the latest version; owncloud by default is the older version 1 in our repo
[19:49] <koolhead17> jcastro: it'd be good if it is owncloud2
[19:49] <jcastro> oh, so we have owncloud1 in the archive I see as "owncloud"
[19:50] <koolhead17> jcastro: no, owncloud as in pkg on ubuntu repository, this charm is working with owncloud version 2 so i have used owncloud2 everwhere
[19:50] <koolhead17> *everywhere
[19:51] <jcastro> ah, except "charm get owncloud2" doesn't work
[19:51] <koolhead17> jcastro: i think that should be its proper path :)
[19:52] <koolhead17> but  i see nijaba has used https://code.launchpad.net/~charmers/charm/oneiric/owncloud/trunk
[19:52] <jcastro> yeah I am just saying it should be consistent one way or the other
[19:52] <jcastro> whether it explicitly says 2 or not doesn't matter to me
[19:55] <koolhead17> jcastro: what should i do now sir :)
[19:55] <jcastro> nijaba: or SpamapS: can we make it so "charm get owncloud2" works, I'd like my video to be consistent when I record it
[19:55] <jcastro> koolhead17: I think they fix this in the store, not sure, asking now. :)
[19:55] <koolhead17> jcastro: :P
[19:56] <SpamapS> $ charm get owncloud2
[19:56] <SpamapS> owncloud2 does not exist in official charm store.
[19:57] <SpamapS> why would we have 'owncloud' and 'owncloud2' ?
[19:59] <jcastro> making it so it's just all "owncloud" is fine by me
[20:03] <SpamapS> nijaba: about the charm-tools versioning thing.. that happens sometimes, I don't think its a real problem tho
[20:06] <jcastro> SpamapS: ok so what should we do?
[20:06] <jcastro> (sorry to be annoying but I'd love to videocast this right now. :)
[20:10] <adam_g> whats the environments.yaml option to install from a specific juju PPA?
[20:10] <hazmat> juju-origin: ppa
[20:11] <adam_g> hazmat: danke
[20:11]  * hazmat double checks
[20:11] <hazmat> yup
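For context, a hypothetical minimal environments.yaml stanza showing where that key goes; everything here except `juju-origin: ppa` is a placeholder, not from the chat:

```yaml
environments:
  local:
    type: local          # placeholder provider
    juju-origin: ppa     # install juju on new machines from the PPA
```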
[20:15] <SpamapS> jcastro: it looks like the 'owncloud' charm downloads and installs 2.0.1 .. whats there to change?
[20:15] <SpamapS> jcastro: if its not packaged.. sobeit. :-P
[20:16] <jcastro> SpamapS: I mean the namespace
[20:16] <jcastro> charm get is "owncloud"
[20:16] <jcastro> but the service and stuff for the charm is "owncloud2"
[20:16] <SpamapS> AH
[20:16] <SpamapS> yes fix that, I'd suggest by changing the charm itself
[20:17] <SpamapS> easier to fix the contents of a branch than rename it
[20:17] <jcastro> ok
[20:17] <jcastro> koolhead17: got time to rename it now?
[20:17] <koolhead17> SpamapS: but when i say owncloud, as per our repo <owncloud pkg availability> it means the older version, sorry if i am wrong
[20:18] <koolhead17> ubuntu package
[20:18] <jcastro> well, at some point the package "owncloud" in the repo is just 2.x
[20:18] <SpamapS> packages and charms are in a different namespace
[20:18] <koolhead17> apt-get install owncloud will as of now give 1.1
[20:18] <SpamapS> in this case, its just a version difference
[20:19] <SpamapS> so I say, change name: owncloud2 to name: owncloud , and be done with it.
[20:19] <koolhead17> SpamapS: okey, so am doing it as you suggested :P
[20:19] <koolhead17> jcastro: 2 mins
[20:19] <jcastro> \o/
[20:20] <koolhead17> jcastro: what changes am i supposed to do other than replacing owncloud2 to owncloud inside charm?
[20:20] <jcastro> that should be it?
[20:20] <jcastro> SpamapS: right?
[20:22] <SpamapS> we need to fix readme.txt too
[20:22]  * SpamapS ponders making 'charm promulgate' ensure that this doesn't happen again
[20:25] <koolhead17> SpamapS:  readme needs to be modified to charm get owncloud, instead my bzr repo url ?
[20:26] <nijaba> looks like I messed up a bit my first promulgate...  maybe before making a tool, we could have a checklist?
[20:27] <nijaba> koolhead17: yep, that would be better
[20:27] <koolhead17> nijaba: i will fix it right away, looking at the roundcube file for help :)
[20:27] <jcastro> I have a spot for a checklist here: https://juju.ubuntu.com/CharmGuide
[20:28] <SpamapS> koolhead17: the readme doesn't need to even mention how to "get" the charm
[20:28] <SpamapS> just how to use it
[20:28] <koolhead17> SpamapS: okey, sorry for missing on that :(
[20:28] <SpamapS> if you have the readme, you already got the charm.. or you're on the charm browser site, which should maybe have something copy/pastable on it.. like 'to get this charm,   charm get owncloud'
[20:29] <nijaba> jcastro: yup, but can we edit it?
[20:29] <SpamapS> koolhead17: smile, you're about to get blogged about. :)
[20:29] <jcastro> nijaba: you should be able to, try it and lmk
[20:29] <nijaba> jcastro: immutable page
[20:32] <jcastro> grr, ok, when we figure out a list I will just add it
[20:35] <koolhead17> jcastro: i have added the modification in my 9th revision just now :)
[20:35] <koolhead17> SpamapS: nijaba modified the files as suggested
[20:37] <adam_g> hey guys, it looks like the precise  juju ppa is stuck at r434 while oneiric is up to r440. any chance of syncing up? there happens to be  a critical fix for orchestra missing :)
[20:39] <SpamapS> adam_g: I believe the daily builds are failing because of java problems. :-P
[20:39] <SpamapS> adam_g: I suppose we can do a manual upload
[20:40] <adam_g> SpamapS: ah, makes sense
[20:40] <adam_g> SpamapS: if you wouldn't mind, it'd be helpful (especially in preparation for next week)
[20:40] <SpamapS> adam_g: it's quite unfortunate, but it has something to do with java and the way it overcommits memory; you can't even install it on the virtual buildds
[20:40] <SpamapS> sometimes
[20:40] <SpamapS> which is the frustarting part
[20:40] <SpamapS> frustrating even
[20:49] <koolhead17> SpamapS: please put my latest revision in the charm repository, i have added few more details in readme.txt :)
[20:49] <SpamapS> koolhead17: branch?
[20:50] <koolhead17> SpamapS: bazaar.launchpad.net/~koolhead17/charm/oneiric/owncloud2/trunk
[20:52] <koolhead17> how will i explain juju to someone who thinks juju is like puppet/chef? i tried explaining that juju is a layer above them. i would love to get some info on the same
[20:54] <SpamapS> koolhead17: show them how your charm can work with the haproxy charm without either of them ever sharing code
[20:54] <SpamapS> koolhead17: and without ever learning ruby, or a new language like puppet's DSL
[20:55]  * koolhead17 notes down
[20:57] <koolhead17> SpamapS: also another question: in order to run a charm --> application, does juju have to keep running? If we upgrade the juju version, will it affect the working/running of existing instances which were deployed via the charms
[20:58] <SpamapS> koolhead17: upgrades are still up in the air. Eventually we'll need to go to each box and upgrade its agent.. stop/start..etc.
[20:59] <SpamapS> bzr: ERROR: Not a branch: "bzr+ssh://bazaar.launchpad.net/~coolhead17/charm/oneiric/owncloud2/trunk/".
[20:59] <SpamapS> doh
[20:59] <SpamapS> k
[20:59] <koolhead17> SpamapS: :P
[21:00] <SpamapS> +> juju get owncloud
[21:01] <SpamapS> thats incorrect
[21:01] <koolhead17> +> juju get owncloud ?
[21:02] <SpamapS> koolhead17: thats from your readme
[21:02] <koolhead17> SpamapS: i used > juju get owncloud
[21:02] <koolhead17> what should i modify it to ?
[21:02] <SpamapS> koolhead17: the + is not what I mean
[21:02] <SpamapS> it shouldn't be there
[21:02] <SpamapS> if they have the readme, they already have the charm
[21:02] <koolhead17> ooh ok
[21:02] <koolhead17> so removing that whole part
[21:02] <SpamapS> also thats not even valid advice.. it would be 'charm get'
[21:03] <SpamapS> and really, you mean 'juju deploy
[21:03] <koolhead17> ooh ok
[21:03] <SpamapS> Step 1 is 'juju deploy --repository=charms local:owncloud' ..
[21:03] <SpamapS> step 2 would be to expose it
[21:03] <koolhead17> k
[21:03] <SpamapS> 'juju expose owncloud'
[21:04] <koolhead17> k
[21:04] <SpamapS> then step 3 would be Access.. 4 user account.. etc. etc.
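Collected, the README steps SpamapS outlines would read something like this (the `charms` repository path is his example; the rest is a sketch, not from the chat):

```shell
juju deploy --repository=charms local:owncloud   # 1. deploy the charm
juju expose owncloud                             # 2. open its ports
juju status    # then browse to the unit's public address and create
               # the first user account (steps 3/4 above)
```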
[21:04] <koolhead17> SpamapS: 1 minute modifying it
[21:04] <SpamapS> koolhead17: you have tested this on your eucalyptus, right?
[21:04] <koolhead17> SpamapS: no on LXC
[21:04] <SpamapS> ahh ok, it doesn't have a firewall ;)
[21:05] <koolhead17> SpamapS: yeah :P
[21:05] <koolhead17> i was pissed that i failed using juju on my openstack because of the internal network
[21:06] <koolhead17> a friend of mine will be here, he has setup of eucalyptus running
[21:06] <koolhead17> on mon/tue
[21:09] <koolhead17> SpamapS: revision 13 should have all changes you suggested
[21:25] <SpamapS> koolhead17: merged, pushed. Thanks!
[21:25] <SpamapS> jcastro: ^^
[21:26]  * koolhead17 can finally go sleep :)
[21:49]  * SpamapS curses that launchpad still doesn't support jira external bug trackers. :-P
[22:36] <_mup_> juju/ssh-known_hosts r474 committed by jim.baker@canonical.com
[22:36] <_mup_> Test key layout for ssh-keygen