=== cpg|away is now known as cpg
=== Aaton is now known as Aaton_off
[01:06] New bug: #1021530 in openvswitch (universe) "update to include stable fixes for OVS 1.4" [Undecided,New] https://launchpad.net/bugs/1021530
[01:17] so, my server's still broken, and it looks pretty unsalvageable, so I was going to reinstall
[01:18] my current plan is to back up the drive and get a dpkg --get-selections, and use it to restore /etc and the installed packages
[01:18] anyone with experience have other suggestions?
[01:19] obviously I wouldn't just blanket restore /etc, I'd go app by app
=== metasansana is now known as hadron
=== cpg is now known as cpg|away
[02:11] New bug: #1021548 in nova (main) "nova-network does not contain a dependency on iptables" [Undecided,New] https://launchpad.net/bugs/1021548
[02:56] New bug: #1021559 in bind9 (main) "bind9 upgrade failed." [Undecided,New] https://launchpad.net/bugs/1021559
=== cpg|away is now known as cpg
=== cpg is now known as cpg|away
[05:29] Would anyone have a moment to help me with a new MAAS installation and Openstack config? I've been using the guide here: https://help.ubuntu.com/community/UbuntuCloudInfrastructure but have run into an issue.
=== matsubara-afk is now known as matsubara
=== cpg|away is now known as cpg
[07:19] Daviey, all thrift-y bits and pieces for floodlight now in the NEW queue
[07:19] Fuginator, what issue are you hitting?
[07:30] * jamespage goes to fix openvswitch
[08:17] hi, is there any way I can see with do-release-upgrade why it wants to install new packages like openal, webkit, gtk2-engines, etc. I seriously don't need that cruft on my server
[08:29] jamespage: ok, will review shortly..
thanks :)
=== cpg is now known as cpg|away
[10:37] New bug: #1007273 in autofs5 (main) "autofs does not start automatically after reboot" [High,Incomplete] https://launchpad.net/bugs/1007273
=== al-maisan is now known as almaisan-away
=== matsubara is now known as matsubara-afk
[12:01] New bug: #1006293 in exim4 (main) "exiqgrep fails to parse output of exim4 -bp if the mail message is less than 1k" [Undecided,Incomplete] https://launchpad.net/bugs/1006293
[12:08] New bug: #1021630 in samba (main) "package smbclient 2:3.6.3-2ubuntu2 failed to install/upgrade: le sous-processus dpkg-deb --fsys-tarfile a retourné une erreur de sortie d'état 2" [Undecided,New] https://launchpad.net/bugs/1021630
=== n0ts is now known as n0ts_off
[12:51] New bug: #1021708 in keystone (main) "no CLI interface to find all of the tenants which a given user belongs to" [Undecided,New] https://launchpad.net/bugs/1021708
[13:47] * n3rdo hi all
=== ninjak_ is now known as ninjak
[14:01] New bug: #1021730 in bind9 (main) "package bind9 1:9.8.1.dfsg.P1-4ubuntu0.1 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/1021730
=== n0ts_off is now known as n0ts
[14:21] New bug: #1021528 in python-setuptools-git (universe) "[MIR] python-setuptools-git" [Critical,Fix released] https://launchpad.net/bugs/1021528
[15:03] FYI -- the Ubuntu Cloud Images are now being served by S3 for archive mirrors.
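The reinstall plan described earlier in the log (capture the package selections before wiping, then replay them on the fresh install) is usually done with `dpkg --get-selections > pkgs` now and `dpkg --set-selections < pkgs && apt-get dselect-upgrade` later. A minimal sketch of working with that output; the parsing helper is hypothetical, but the selections format it parses is the real one:

```python
# Parse dpkg --get-selections output: one "package<TAB>state" pair per line,
# where state is install / deinstall / hold / purge. We keep only the
# packages we'd want reinstalled on the new system.

def parse_selections(text):
    """Return the package names marked 'install' in selections output."""
    pkgs = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1] == "install":
            pkgs.append(fields[0])
    return pkgs
```

On the new machine the captured file would be fed back with `dpkg --set-selections` rather than parsed; the helper is only useful if you want to prune the list (e.g. drop the cruft mentioned above) before restoring.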
[15:16] New bug: #1021768 in bacula (main) "debconf integration is broken" [Undecided,New] https://launchpad.net/bugs/1021768
=== n0ts is now known as n0ts_off
[15:31] New bug: #1021781 in mysql-5.5 (main) "package mysql-server-5.5 5.5.24-0ubuntu0.12.04.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1 (dup-of: 1012058)" [Undecided,New] https://launchpad.net/bugs/1021781
=== n0ts_off is now known as n0ts
=== matsubara-afk is now known as matsubara
[16:07] New bug: #1021698 in samba (main) "package smbclient 2:3.6.3-2ubuntu2 failed to install/upgrade: le sous-processus dpkg-deb --fsys-tarfile a retourné une erreur de sortie d'état 2" [Undecided,New] https://launchpad.net/bugs/1021698
=== benjiedmund is now known as roboto
[16:20] xnox, s3 mirrors giving snapshot.debian.org-like behavior is a great idea.
[16:20] utlemming, ^ it'd be great if you could at least sniff that. xnox, if you had specific implementation thoughts, that'd be nice.
[16:22] smoser: no implementation details. But I have one more feature request: can you please do $ juju deploy S3mirrors on all three regions in the HPCloud
[16:22] I have contacted them, they are 'escalating it'
[16:23] xnox, i believe there is some work being done on that, but i'll poke a bit
[16:23] but you can also contact jorge
[16:23] smoser: well currently we have set up "unofficial" apt-mirror based mirrors in the first region
[16:24] smoser: and those will probably go away after the free 3-month period is over.
[16:24] smoser: cause it's set up by one of the devs who got sponsored to use HPCloud for free for three months
[16:25] xnox, yeah, i saw that.
[16:25] xnox, when it was free i was running a public squid proxy for the same thing
[16:27] smoser: ideally we should partner with HPCloud people to have ~official mirrors there same as in EC2
[16:27] is your S3 stuff using juju-deploy?
=)
[16:27] cause I am sure there are plenty of clouds that need local mirrors ;-)
[16:28] xnox: nope, it's not using JuJu
[16:28] xnox: it's using Auto Scaling
[16:28] utlemming: is it OpenStack friendly?
=== Lcawte|Away is now known as Lcawte
[16:29] xnox: the code is done with boto. I've seen some requests for me to use a more generic cloud api so that it would be OpenStack, et al, friendly, but no, it's not
[16:29] xnox: programming around S3 is, er, finicky
[16:30] you have to assume that it's A) going to fail, B) going to fail even when it says it succeeded and C) going to succeed even when it says it failed
[16:30] =))))
[16:31] I have done some initial looking into switching the code over to libcloud, but I haven't had the time per se.
[16:31] https://www.hpcloud.com/products seems to have CDN, object, block storage, et al
[16:32] xnox, thinking about snapshots, i don't think we could easily do this without a magic (i believe) new feature from amazon
[16:32] if we had a magical feature from amazon that allowed you to do something like:
[16:33] TIMESTAMP.bucketname.s3.amazonaws.com/ubuntu/
[16:33] and give you the content as of that TIMESTAMP for the versioned bucket
[16:33] (or put TIMESTAMP somewhere else in the url, but the dns portion is the only part that is clearly not part of the path to an object)
=== zyga is now known as zyga-afk
[16:33] xnox, smoser: the feature is there, just that APT wouldn't understand it. The request looks like http://.../ubuntu/precise-main/Packages.gz?Version=4 or something like that
[16:34] utlemming, right. but you'd have to collect the Version= for each given path
[16:34] (don't take that as the gospel truth, I have to look at it to be sure)
[16:34] and that would have to be in Release
[16:34] and Release would then have to be re-signed.
[16:35] which is why apt wouldn't understand it. But having a timestamp feature would be nice
[16:35] so, not really possible.
you need a way to magically move all of /ubuntu to /TIMESTAMP/ubuntu
[16:35] or potentially, to specify TIMESTAMP in a header.
[16:35] well - on snapshot.debian.net they generate static versioned release files, but access generic buckets with debs
[16:35] My server is suddenly not letting me log in via ssh. Just getting a permission denied error each time I attempt the password, which is correct. Using my host's ajax console I can log in just fine with the credentials I'm providing.
[16:36] but utlemming: can't we just teach apt to support magic URLs and cloud version mirrors?
[16:36] =)
[16:36] sounds like xnox has volunteered :)
[16:36] 2 ways to do this, i think
[16:37] utlemming: sounds like utlemming volunteered to write the spec of what/how apt should do requests.... since /me has no clue about clouds
[16:37] a.) ask amazon for some header that you can set, and have the version returned give the content as of that timestamp
[16:37] and APIs to clouds
[16:37] then we'd have to make apt able to specify arbitrary headers (and you'd just specify 'Timestamp: ' in your apt config to get as of one timestamp... that seems a bit less than ideal)
[16:38] b.) have a re-director service that tracks versions of metadata and allows /TIMESTAMP/ and does the translation.
[16:39] utlemming, can you get the revision history of a bucket including timestamps of changes?
[16:40] if you could, then we wouldn't need to require additional code from the populating code, but could just read what was there
[16:40] smoser: yes, if versioning is on. You can request the versioning information
[16:40] smoser: but that's not public
[16:40] right.
[16:40] but that's ok.
[16:41] that service could have an acl, but it would mean your populating code didn't have to export that data.
[16:41] for your redirector service then, it would also have to feed the files to the client too
[16:41] unless it 302's all requests for anything not meta-data
[16:42] 302 would be right.
[16:42] as we'd never delete the data
[16:42] rbasak,
[16:43] the above is actually one benefit of the proposed "change the format of the release file to contain a hash in the path"
[16:44] (versus the /by-hash scheme we're proposing at https://wiki.ubuntu.com/AptByHash)
[16:45] the redirector service wouldn't be that hard to do i don't think.
[16:46] xnox, thanks for making us think about this.
[16:46] smoser, xnox: why not just use by-hash and keep multiple InRelease files around?
[16:46] smoser: so reading between the lines...you could have a by-hash and a by-date?
[16:46] Start off with a historical InRelease file and then hit the mirror exactly as usual with the by-hash scheme
[16:46] rbasak, because keeping multiple InRelease files around doesn't work.
[16:47] because the path is in them
[16:47] Path to what?
[16:47] ok. so http://us.archive.ubuntu.com/ubuntu/dists/precise/Release
[16:47] says "get main/binary-amd64/Packages" based on the path to Release.
[16:47] right?
[16:48] Yes, but with the by-hash scheme "main/binary-amd64/Packages" is just a key, not a path. The path is taken from the hash.
[16:49] wait. yeah, you're right.
[16:50] then if you add '?version=4' to "main/binary-amd64/Packages" and traverse that for all the relevant hashes, they're still present
=== kraut is now known as savenicks
[16:50] and you essentially pin the date.
[16:50] yeah.
[16:50] that works too
=== savenicks is now known as kraut
=== kraut is now known as help
=== help is now known as kraut
[16:50] You never need to add ?version=4 to anything except InRelease
[16:50] right.
=== n0ts is now known as n0ts_off
[16:51] so the redirector service would then only have to redirect that one file
[16:51] It'd be hard to do in client code, but I could write a http frontend that translated the entire S3 mirror dynamically to any date fairly easily
[16:51] Yes exactly
[16:51] is there a good article for how to connect an ubuntu server to an AP via wireless?
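The by-hash idea rbasak describes above ("main/binary-amd64/Packages" is just a key; the path is taken from the hash) can be sketched as a path-construction helper. This is a hypothetical illustration, not apt's implementation; the `by-hash/SHA256/<hex>` layout matches what the AptByHash proposal describes, but treat the exact directory shape as an assumption:

```python
import hashlib

def by_hash_path(dist, component, arch, data):
    """Content-addressed location of an index file: the client reads the
    SHA256 of main/binary-<arch>/Packages from the signed (In)Release file,
    then fetches the bytes at a path derived from that hash rather than
    from the index's plain name."""
    digest = hashlib.sha256(data).hexdigest()
    return "dists/%s/%s/binary-%s/by-hash/SHA256/%s" % (
        dist, component, arch, digest)
```

Because the path is derived from the content, old index versions never collide with new ones, which is exactly why a historical InRelease file keeps working against a mirror that retains old objects.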
[16:51] Then we just need to keep old index files and old pool debs around
[16:52] I've got wired working fine but I wouldn't mind dropping the cable
[16:52] I want to do a PoC on this now
[16:52] storrgie: /usr/share/doc/wireless-tools/README.Debian
[16:52] It's a relatively trivial addition to our existing scheme, and would have some immediately useful benefits to developers
=== kraut is now known as ircnet
=== ircnet is now known as kraut
[16:52] (and sys admins/engineers)
[16:53] The only real extra cost is a redirector service so one EC2 instance basically
[16:53] storrgie: Basically make a wpa-supplicant.conf file and then wpa_supplicant -i eth# -c /wherever/conf-file-is & and then maybe dhclient eth#
[16:53] rbasak, right.
[16:54] and it doesn't even need storage
[16:54] Yep
[16:54] ( substitute wlan0 or whatever accordingly )
[16:54] I wonder if apt supports following a 301 http redirect?
[16:54] rbasak, i'm pretty sure it does.
[16:54] rbasak: I was wondering that myself
[16:54] smoser: for every index file and pool deb?
[16:55] genii-around, I'm not sure how to specify a wpa key then
[16:55] the document doesn't describe it
[16:55] Anyway we can implement a redirector without that, by actually serving every file from S3 through the instance. And support for that could always be added to apt
[16:55] stgraber: around?
[16:56] rbasak, right. but that doesn't really scale
[16:56] https://gist.github.com/3061307
[16:56] well, at least not as well as redirect
[16:56] Yes, but it could scale later by adding 301 support to apt if it isn't already there
[16:57] How much scale would a historical archive service need?
[16:57] I mean that we can implement now and scale later without having to change the architecture later
[16:58] https://gist.github.com/3061307#comments
[16:58] rbasak: one way to do this might be to have an "apt-historical" package
[16:58] not sure how to get the interface to work properly
[16:58] it doesn't appear to be connecting
[16:58] storrgie: The key is in the wpa_supplicant.conf ... man wpa_supplicant.conf has the syntax for that file
[16:58] rbasak: have it local to the box in question which hits "http://localhost:1337" or something like that
[16:59] smoser: then we don't have to have a service per se
[16:59] utlemming: that's a good idea
[16:59] Then we wouldn't need an ec2 instance for it either
[16:59] storrgie: Apologies on lag, work required me for a bit
[16:59] rbasak: then you don't have to use a 302, because it could just be passed through
[16:59] genii-around, not sure if I have wpa supplicant installed
[16:59] Actually that's not a good idea at all. It's a great idea ;)
[17:00] rbasak: the other advantage is that it means that someone could make it a service if they wanted to
[17:01] Yes
[17:01] genii-around, is it required to have wpa supplicant
[17:01] So do we have a plan then? :)
[17:01] storrgie: Yes
[17:01] Also, did anyone have any feedback for me on https://wiki.ubuntu.com/AptByHash?
[17:01] storrgie: The package name is just wpasupplicant if you do not already have it installed
[17:02] New bug: #1021382 in maas "The COMMISSIONING_SCRIPT setting uses a relative path." [Critical,Confirmed] https://launchpad.net/bugs/1021382
[17:03] turns out it is installed
[17:03] genii-around, so wait, the doc you pointed me to didn't show me anything about wpasupplicant.... it just said to edit the /etc/network/interfaces file
[17:03] storrgie: I did not point you to any documents.
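The redirector service discussed above only really needs to rewrite one file per request date: the pinned InRelease. Everything else is content-addressed (by-hash indexes) or immutable (pool debs), so it can be fetched from the plain mirror. A minimal sketch of that routing decision; the host name, snapshot map, and `versionId` query parameter are illustrative (S3's versioning API does use a `versionId` parameter, but the URL shape here is an assumption):

```python
# Given /TIMESTAMP/<archive-path>, return a redirect target for the pinned
# InRelease, or None for any path that should just pass through (302) to
# the current mirror content.

MIRROR = "https://mirror.example.s3.amazonaws.com"  # hypothetical bucket

def redirect_target(path, snapshots):
    """snapshots maps timestamp -> {archive_path: s3_version_id}."""
    parts = path.lstrip("/").split("/", 1)
    if len(parts) != 2:
        return None
    stamp, rest = parts
    version_id = snapshots.get(stamp, {}).get(rest)
    if version_id is None:
        return None  # not a pinned index file: serve the live object
    return "%s/%s?versionId=%s" % (MIRROR, rest, version_id)
```

A real service would wrap this in an HTTP frontend issuing 302 responses, which is why, as noted above, it needs essentially no storage of its own beyond the snapshot map.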
[17:04] ivoks: yep
[17:04] rbasak: another thing you can do is to use the "Last-Modified" HTTP header (http://paste.ubuntu.com/1078335/) to determine the dates
[17:04] rbasak: which makes it really, really fast
[17:04] smoser, did, sorry
[17:04] genii-around, where do you suggest I place this wpa_supplicant.conf file? What is good nature?
[17:04] stgraber: you remember, back in 2011, you added a workaround for open-iscsi in its init script
[17:05] storrgie: I said more or less: make a wpa-supplicant.conf file and then wpa_supplicant -i eth# -c /wherever/conf-file-is & and then maybe dhclient eth# and then: substitute wlan0 or so accordingly
[17:05] stgraber: idea was not to start open-iscsi if iscsi was started in initramfs
[17:05] rbasak: you hit the by-hashes and look at the last modified so that if someone has YYYYMMDD-HHmmSS you can choose the by-hash that is applicable.
[17:05] utlemming: so I do an HTTP HEAD on each InRelease?version=x and note the Last-Modified dates, right? And I'll eventually get a 404 or something which means I have all the versions. Then cache those.
[17:05] stgraber: bug https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/850960
[17:05] Launchpad bug 850960 in open-iscsi "iscsid tries to reconnect existing session at startup, failing to do so and hanging the system (dup-of: 677333)" [Undecided,Confirmed]
[17:05] Launchpad bug 677333 in open-iscsi "open-iscsi: reconnecting to targets fails with kernel >2.6.32 due to sysfs changes (open-iscsi pkg version out of date with kernel)" [Undecided,Confirmed]
[17:05] storrgie: Someplace like /etc/my-wpa-supplicant.conf maybe. Doesn't really matter so long as the command-line has the path to it
[17:05] storrgie, i'm sorry, i've shared all knowledge i have of this.
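The version-enumeration loop rbasak describes just above (HEAD each `InRelease?version=x`, record Last-Modified, stop at the first miss, then cache) can be sketched like this. The `head` callable is injected so the walking logic can be shown without a real mirror; a real implementation might use `urllib.request.Request(url, method="HEAD")` and read the `Last-Modified` response header:

```python
# Walk InRelease?version=1, ?version=2, ... until the mirror reports no such
# version, collecting each version's Last-Modified date. `head(url)` returns
# the Last-Modified string, or None when the request 404s.

def enumerate_versions(head, url):
    """Return {version_number: last_modified} for all known versions."""
    versions = {}
    n = 1
    while True:
        last_modified = head("%s?version=%d" % (url, n))
        if last_modified is None:  # first miss: we have them all
            return versions
        versions[n] = last_modified
        n += 1
```

Since HEAD transfers no body, this stays cheap even for a long version history, which is the "really, really fast" property mentioned above.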
[17:05] rbasak: yup
[17:05] :)
[17:06] rbasak, smoser: the other thing is that now that these mirrors are public and production, I need to work with IS to get versioning turned on
[17:06] ivoks: right, I believe this bug was fixed upstream/Debian so in theory the hack is no longer required, but someone should test before dropping it
[17:06] (haven't looked at the bug again but that's what I remember from the last time I read it)
[17:06] stgraber: right, but we don't have the fix in our open-iscsi
[17:06] utlemming, well, even before IS was involved, you should have thought about it
[17:07] storrgie, genii-around: whoa. No need to create wpa_supplicant.conf. You can put all needed keys directly in /etc/network/interfaces.
[17:07] stgraber: i've tested the patch from upstream and it works
[17:07] as not deleting means a massive increase in storage
[17:07] ivoks: yeah, it needs merging from Debian I believe...
[17:07] stgraber: or that...
[17:07] rbasak: Ah, nice. I've never used it in this way before.
[17:07] smoser: I delete now
[17:07] and bill
[17:07] even if everything else in s3 versioning was magic and perfect
[17:07] smoser: to save storage
[17:07] storrgie, genii-around: see /usr/share/doc/wpasupplicant/README.Debian.gz for examples
[17:07] smoser: the change here is that we need to _not_ delete and we need to enable versioning.
[17:08] ivoks: I think we definitely want to merge from Debian, not sure how many other changes we have on that one though so don't know how painful it'll be
[17:08] * genii-around bookmarks
[17:08] utlemming, right
[17:08] smoser: which is a big, big change
[17:08] which increases costs significantly
[17:08] well, even if the versioning was magic
[17:08] at least it's good to know that the fix works and we can get rid of my ugly hack :)
[17:08] stgraber: i could take a look, in precise we are at 0ubuntu9
[17:08] rbasak, genii-around: how do i reference this wpa_supplicant.conf file from my /etc/network/interfaces ?
[17:08] smoser, rbasak: looks like the makings of a blueprint for UDS-R
[17:09] stgraber: yeah, just wanted to let you know if it's still making problems for you somewhere :)
[17:09] utlemming: I'm very tempted to JFDI. Assuming by-hash is accepted, this feels relatively simple
[17:09] ivoks: right, we'd probably want to merge from Debian in quantal and if there are enough people asking for it and we know it's 100% safe, then drop my hack from oneiric and precise and replace it by a cherry-pick of the fix
[17:09] storrgie: For that you would probably need to read the documentation rbasak referred to. I have myself not used it in this way and so am of limited help there.
[17:09] stgraber: yeah, fix is a one-liner
[17:09] rbasak: except that I can't implement this on the S3 side that easily
[17:10] ivoks: I'm not actually using iscsi, I just happened to be the lucky one to get that bug escalated to initially :)
[17:10] ivoks: good, should be easy to cherry-pick then :)
[17:10] storrgie: you don't need a wpa_supplicant.conf. See the example in the docs. Just a few lines in /etc/network/interfaces is all you need.
[17:10] rbasak: the _best_ case is that I can throw up a parallel mirror that has versioning, but we're talking a cost of $600/month
[17:10] utlemming: oh yeah, there's the money :)
[17:10] ivoks: I'll try and have a look at it this afternoon once I'm done fighting with my automated ipv6 testing
[17:10] rbasak: ubuntu-server does not own the S3 mirrors as of ~20 hours ago
[17:10] stgraber: https://github.com/mikechristie/open-iscsi/commit/f0b670c016c85c882bb0583eaef8ea2f7147d4af :)
[17:11] pretty silly :)
[17:12] utlemming: I have a test quantal mirror right now :)
=== dendrobates is now known as dendro-afk
[17:12] (btw, any idea how much that's costing us? I can't see the size of the bucket)
[17:12] rbasak: ~$30-40USD/month
[17:13] utlemming: OK. And I have an instance running too.
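The "few lines in /etc/network/interfaces" rbasak mentions use the wpa-* options that the wpasupplicant package hooks into ifupdown with; no separate wpa_supplicant.conf is needed. A minimal stanza of that kind, where the interface name, SSID, and passphrase are placeholders you would substitute:

```
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid mynetwork
    wpa-psk mypassphrase
```

With this in place, `ifup wlan0` starts wpa_supplicant and dhclient itself; see /usr/share/doc/wpasupplicant/README.Debian.gz (referenced above) for the full option list, including hex PSKs and WPA-Enterprise setups.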
[17:13] utlemming: it'll need to be syncing the mirror every 30 minutes, so not worth turning off
[17:13] rbasak: ~$250/month + 0.10/GB inbound
[17:14] I should kill the instance actually. It's only sitting idle right now, pending feedback.
[17:14] * rbasak does so
[17:16] the reason I used AutoScaling is because it is cheaper...we don't have to pay bandwidth in to update the mirrors because the first 1GB is free.
[17:17] That's different for a long-lived instance?
=== dendro-afk is now known as dendrobates
=== cpg|away is now known as cpg
[17:47] I wonder if utlemming can see stats on the total of S3 PUTs in the S3 mirror?
[17:47] That should give us an approximate monthly storage growth figure, right?
=== zyga-afk is now known as zyga
[19:05] I'm getting kernel crashes in Lucid when running NFS + iscsi inside KVM. Unfortunately, it crashes hard and fast enough that nothing is logged, even remotely.
[19:05] Lucid.
[19:05] They seem to happen every five days or so.
[19:05] Anyone seen anything similar?
=== Aaton_off is now known as Aaton
[19:14] Daviey: /win 13
[19:14] err
[19:16] hallyn: just spent half an hour trying to figure out why my ipv6 testing scripts are failing quite badly: http://paste.ubuntu.com/1078528/
[19:20] stgraber: hm, yeah, hooks and api came through separate branches
[19:21] hallyn: yeah but I rebased the API branch on quantal, so we now have the hooks in there but they somehow get stripped every time I save the config
[19:22] stgraber: yes, because i had to implement save_config - there was none. so the lxc.hooks need to be implemented in save_config
[19:23] hallyn: ah, right, I guess I'll just skip using the hooks for now then
[19:24] stgraber: which bzr branch are you using atm?
[19:24] i'll whip up a merge proposal
[19:25] hallyn: lp:~ubuntu-lxc/ubuntu/quantal/lxc/lxc-api-and-python
[19:57] stgraber: feh, of all the files in that $(*&% patch, conf.h wasn't one.
gotta rebuild my patch :)
=== dendrobates is now known as dendro-afk
[20:11] stgraber: why did you remove the README in your last commit?
[20:11] it makes it fail to build
[20:12] New bug: #997269 in dovecot (main) "dovecot imap broken by apparmor policy" [High,Invalid] https://launchpad.net/bugs/997269
[20:12] hallyn: oops, I'll fix the FTBFS. The reason is that the directory is no longer empty as we have lxc-init in there
[20:13] stgraber: ok, lemme first check in my change for hooks. it compiles, hopefully it runs right too :)
[20:14] hallyn: oops, saw your comment right after I pushed the fix for README, so you may need to uncommit/pull/commit, sorry
=== dendro-afk is now known as dendrobates
[20:34] stgraber: done - but your fix isn't enough. the *makefile* wants there to be a README
[20:35] hallyn: ... ok, will fix the Makefile too then :)
[20:41] is it possible to ensure grub-pc is installed instead of grub-efi during server install?
=== fenris_ is now known as Guest70358
[20:53] stgraber: thanks :) at any rate save_config() is working
[20:53] for hooks that is
[20:54] hallyn: cool, and I just updated the branch to make it buildable again. I'll push to the PPA soonish
=== isitme is now known as guntbert
[21:01] refetching. let's see if i can think up a good plan for get_config()
[21:02] probably makes the most sense then to re-implement save_config in terms of get_config
[21:02] stgraber: for c->get_config_item(c, "lxc.mount"), would you want just the value part, or the whole 'lxc.mount = value' string?
[21:03] just the value
[21:03] ok.
i think unfortunately in the c call i'll have to make it
=== cpg is now known as cpg|away
[21:03] int get_config_item(struct container, char *key, char *return)
[21:04] I can live with that
[21:04] returning the length, so you can do 'len = get_config_item(c, key, NULL); v = malloc(len+1); len = get_config_item(c, key, v)'
[21:07] still seriously ugly because of multiple-line things like lxc.mount.entry
=== cpg|away is now known as cpg
[21:15] stgraber: do you need to be able to do 'get_config_item(c, "lxc.cgroup")' to get all cgroup items, or can we just support only specific cgroup items?
[21:15] like lxc.cgroup.devices.allow
[21:16] would be best to enforce an exact match of the key
[21:17] so lxc.cgroup.devices.allow would work but lxc.cgroup would return an error (no match)
[21:17] not sure what to return for these that can exist multiple times (network, devices.allow, ...)
[21:20] i can just return a string with newlines,
[21:20] or if you prefer I can return one line at a time with indication of whether there are more lines
[21:23] hallyn: newlines is best I think. Easier to handle for me at least
[21:24] hallyn: btw, how does that work in set_config_item? how would I go about defining multiple network interfaces using it?
[21:27] stgraber: IIRC just do same way (order) as in a config file
[21:27] does that not work?
[21:28] so, you can't change a setting for an already-defined nic, i guess
[21:28] I can see that being a bit annoying for a few of my setups ;)
[21:29] would it suffice to have a clear_config(c, "network") and clear_config(c, "cap.drop") ?
[21:29] clear_config_item(key) wiping all occurrences of a key would be good yeah
[21:29] anything more structured doesn't really fit with the set_config
[21:30] for scripting, I don't really care so much about the current values, I just want to see them gone and replaced ;)
[21:30] if you can think of a better way to do something like container->nic[1].name = "eth3" with the api, i'm all ears
[21:31] clear all network entries, or only #0 or #1 ?
[21:31] all nics i should say
[21:31] ideally I'd like to see the config format change to not allow duplicate keys unless they are just multiple values for a key
[21:31] and see network be moved to lxc.network.ethX.type/flags/link/hwaddr
[21:32] I'd expect clear_config_item(key) to be a generic function removing any occurrence of "key" from the config, so in the case of .network, all nics
[21:32] ok. will see what i can come up with
[21:42] stgraber: network multiple entries are of course a bit of a pain since subkeys might exist for only some of the nics
[21:48] stgraber: maybe the right answer will hit me over the weekend. have a good weekend
[21:49] hallyn: well, with multiple calls to set_config_item (one per key and per interface) + get_config_item returning multiple lines and clear_config_item to wipe the entries, I should be able to make a wrapper in python that lets you change the network config in a scripter-friendly way
[21:50] hallyn: have a good weekend!
[21:57] stgraber: ok, sounds good.
=== Lcawte is now known as Lcawte|Away
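The get/set/clear semantics hallyn and stgraber settle on above can be modeled in a few lines of Python. This is a hypothetical mock of the API shape being discussed (exact key match, repeated keys joined with newlines, clear wiping every occurrence), not the real liblxc binding or its C signatures:

```python
# Toy model of the config-item API discussed above: keys may repeat (e.g.
# lxc.cgroup.devices.allow, lxc.network.*), so values are kept as an ordered
# list of (key, value) pairs rather than a dict.

class ContainerConfig:
    def __init__(self):
        self.items = []  # ordered (key, value) pairs, duplicates allowed

    def set_config_item(self, key, value):
        """Append a value; repeated calls define repeated entries,
        in the same order as lines in a config file."""
        self.items.append((key, value))

    def get_config_item(self, key):
        """Exact key match only; multiple values come back newline-joined,
        as agreed above. A bare prefix like 'lxc.cgroup' is no match."""
        values = [v for k, v in self.items if k == key]
        if not values:
            raise KeyError(key)
        return "\n".join(values)

    def clear_config_item(self, key):
        """Wipe every occurrence of the key, e.g. all devices.allow lines."""
        self.items = [(k, v) for k, v in self.items if k != key]
```

stgraber's planned Python wrapper would sit on top of exactly these three calls: clear the old entries, then set the replacements one per key and per interface.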