/srv/irclogs.ubuntu.com/2012/07/06/#ubuntu-server.txt

=== cpg|away is now known as cpg
=== Aaton is now known as Aaton_off
uvirtbotNew bug: #1021530 in openvswitch (universe) "update to include stable fixes for OVS 1.4" [Undecided,New] https://launchpad.net/bugs/102153001:06
qman__so, my server's still broken, and it looks pretty unsalvageable, so I was going to reinstall01:17
qman__my current plan is to back up the drive and get a dpkg --get-selections dump, and use it to restore /etc and the installed packages01:18
qman__anyone with experience have other suggestions?01:18
qman__obviously I wouldn't just blanket restore /etc, I'd go app by app01:19
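qman__'s backup-and-reinstall plan can be sketched roughly as follows (paths are placeholders, and as noted above /etc should be restored selectively, app by app, rather than wholesale):

```shell
# On the broken system: save the package selections and a copy of /etc.
dpkg --get-selections > /mnt/backup/selections.txt
tar czf /mnt/backup/etc-backup.tar.gz /etc

# On the freshly reinstalled system: replay the package selections.
dpkg --set-selections < /mnt/backup/selections.txt
apt-get dselect-upgrade

# Then restore /etc one application at a time, diffing against the
# fresh files first, e.g.:
diff -u /etc/ssh/sshd_config /mnt/backup/etc/ssh/sshd_config
```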
=== metasansana is now known as hadron
=== cpg is now known as cpg|away
uvirtbotNew bug: #1021548 in nova (main) "nova-network does not contain a dependency on iptables" [Undecided,New] https://launchpad.net/bugs/102154802:11
uvirtbotNew bug: #1021559 in bind9 (main) "bind9 upgrade failed." [Undecided,New] https://launchpad.net/bugs/102155902:56
=== cpg|away is now known as cpg
=== cpg is now known as cpg|away
FuginatorWould anyone have a moment to help me with a new MAAS installation and Openstack config?  I've been using the guide here: https://help.ubuntu.com/community/UbuntuCloudInfrastructure but have run into an issue.05:29
=== matsubara-afk is now known as matsubara
=== cpg|away is now known as cpg
jamespageDaviey, all thrift-y bits a pieces for floodlight now in the NEW queue07:19
jamespageFuginator, what issue are you hitting?07:19
* jamespage goes to fix openvswitch07:30
confusiushi, is there any way I can see with do-release-upgrade why it wants to install new packages like openal, webkit, gtk2-engines, etc. I seriously don't need that cruft on my server08:17
Davieyjamespage: ok, will review shortly.. thanks :)08:29
=== cpg is now known as cpg|away
uvirtbotNew bug: #1007273 in autofs5 (main) "autofs does not start automatically after reboot" [High,Incomplete] https://launchpad.net/bugs/100727310:37
=== al-maisan is now known as almaisan-away
=== matsubara is now known as matsubara-afk
uvirtbotNew bug: #1006293 in exim4 (main) "exiqgrep fails to parse output of exim4 -bp if the mail message is less than 1k" [Undecided,Incomplete] https://launchpad.net/bugs/100629312:01
uvirtbotNew bug: #1021630 in samba (main) "package smbclient 2:3.6.3-2ubuntu2 failed to install/upgrade: subprocess dpkg-deb --fsys-tarfile returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/102163012:08
=== n0ts is now known as n0ts_off
uvirtbotNew bug: #1021708 in keystone (main) "no CLI interface to find all of the tenants which a given user belongs to" [Undecided,New] https://launchpad.net/bugs/102170812:51
* n3rdo hi all13:47
=== ninjak_ is now known as ninjak
uvirtbotNew bug: #1021730 in bind9 (main) "package bind9 1:9.8.1.dfsg.P1-4ubuntu0.1 failed to install/upgrade: ErrorMessage: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/102173014:01
=== n0ts_off is now known as n0ts
uvirtbotNew bug: #1021528 in python-setuptools-git (universe) "[MIR] python-setuptools-git" [Critical,Fix released] https://launchpad.net/bugs/102152814:21
utlemmingFYI -- the Ubuntu Cloud Images are now being served by S3 for archive mirrors.15:03
uvirtbotNew bug: #1021768 in bacula (main) "debconf integration is broken" [Undecided,New] https://launchpad.net/bugs/102176815:16
=== n0ts is now known as n0ts_off
uvirtbotNew bug: #1021781 in mysql-5.5 (main) "package mysql-server-5.5 5.5.24-0ubuntu0.12.04.1 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1 (dup-of: 1012058)" [Undecided,New] https://launchpad.net/bugs/102178115:31
=== n0ts_off is now known as n0ts
=== matsubara-afk is now known as matsubara
uvirtbotNew bug: #1021698 in samba (main) "package smbclient 2:3.6.3-2ubuntu2 failed to install/upgrade: subprocess dpkg-deb --fsys-tarfile returned error exit status 2" [Undecided,New] https://launchpad.net/bugs/102169816:07
=== benjiedmund is now known as roboto
smoserxnox, s3 mirrors giving snapshot.debian.org like behavior is a great idea.16:20
smoserutlemming, ^ it'd be great if you could at least sniff that. xnox if you had specific implementation thoughts, that'd be nice.16:20
xnoxsmoser: no implementation details. But I have one more feature request: can you please do $ juju deploy S3mirrors on all three regions in the HPCloud16:22
xnoxI have contacted them, they are 'escalating it'16:22
smoserxnox, i believe there is some work being done on that, but i'll poke a bit16:23
xnoxbut you can also contact jorge16:23
xnoxsmoser: well currently we have set up "unofficial" apt-mirror based mirrors in the first region16:23
xnoxsmoser: and those will probably go away after the free 3-month period is over.16:24
xnoxsmoser: cause it's set up by one of the devs who got sponsored to use HPCloud for free for three months16:24
smoserxnox, yeah, i saw that.16:25
smoserxnox, when it was free i was running a public squid proxy for the same thing16:25
xnoxsmoser: ideally we should partner with HPCloud people to have ~ official mirrors there same as in EC216:27
xnoxis your S3 stuff using juju-deploy? =)16:27
xnoxcause I am sure there are plenty of clouds that need local mirrors ;-)16:27
utlemmingxnox: nope, its not using JuJu16:28
utlemmingxnox: its using Auto Scaling16:28
xnoxutlemming: is it OpenStack friendly?16:28
=== Lcawte|Away is now known as Lcawte
utlemmingxnox: the code is done with boto. I've seen some requests for me to use a more generic cloud api so that it would be OpenStack, et al, friendly, but no, its not16:29
utlemmingxnox: programming around S3 is, er, finicky16:29
utlemmingyou have to assume that it's A) going to fail, B) going to fail even when it says it succeeded and C) going to succeed even when it says it failed16:30
xnox=))))16:30
utlemmingI have done some initial looking in switching the code over to libcloud, but I haven't had the time per se.16:31
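utlemming's list of S3 failure modes above suggests a write-then-verify retry pattern. A minimal sketch, using a plain dict-like object to stand in for the bucket (this is illustrative, not utlemming's actual boto code):

```python
import hashlib

def put_with_verify(store, key, data, retries=3):
    """Upload and then read back to confirm, since a PUT can report
    failure yet succeed, or report success yet fail. `store` is any
    dict-like object standing in for an S3 bucket."""
    want = hashlib.sha256(data).hexdigest()
    for _ in range(retries):
        try:
            store[key] = data          # the PUT; may lie either way
        except Exception:
            pass                       # a "failure" might still have stored it
        got = store.get(key)           # verify by reading the object back
        if got is not None and hashlib.sha256(got).hexdigest() == want:
            return True
    return False
```

The key design point is that success is determined only by the read-back hash, never by the PUT's own return status.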
xnoxhttps://www.hpcloud.com/products seems to have CDN, object, block storage, et al16:31
smoserxnox, thinking about snapshots, i don't think we could easily do this without a magic (i believe) new feature from amazon16:32
smoserif we had a magical feature from amazon that allowed you to do something like:16:32
smoser TIMESTAMP.bucketname.s3.amazonaws.com/ubuntu/16:33
smoserand give you the content as of that TIMESTAMP for the versioned bucket16:33
smoser(or put TIMESTAMP somewhere else in the url, but the dns portion is the only part that is clearly not part of the path to an object)16:33
=== zyga is now known as zyga-afk
utlemmingxnox, smoser: the feature is there, just that APT wouldn't understand it. The request looks like http://.../ubuntu/precise-main/Packages.gz?Version=4 or something like that16:33
smoserutlemming, right. but you'd have to collect the Version= for each given path16:34
utlemming(don't take that as the gospel truth, I have to look at it to be sure)16:34
smoserand that would have to be in Release16:34
smoserand Release would then have to be re-signed.16:34
utlemmingwhich is why apt wouldn't understand it. But having a timestamp feature would be nice16:35
smoserso, not really possible. you need a way to magically move all of /ubuntu to /TIMESTAMP/ubuntu16:35
smoseror potentially, to specify TIMESTAMP in a header.16:35
xnoxwell - on snapshot.debian.net they generate static versioned release files, but access generic buckets with debs16:35
wrapidsMy server is suddenly not letting me log in via ssh. Just getting a permission denied error each time I attempt the password, which is correct. Using my host's ajax console I can login just fine with the credentials I'm providing.16:35
xnoxbut utlemming: can't we just teach apt to support magic URLs and cloud version mirrors?16:36
xnox=)16:36
utlemmingsounds like xnox has volunteered :)16:36
smoser2 ways to do this, i think16:36
xnoxutlemming: sounds like utlemming volunteered to write the spec of what/how apt should do requests.... since /me has no clue about clouds16:37
smosera.) ask amazon for some header that you can set, and have the version returned give the content as of that timestamp16:37
xnoxand APIs to clouds16:37
smoser then we'd have to make apt able to specify arbitrary headers (and you'd just specify 'Timestamp: ' in your apt config to get as of one timestamp... that seems a bit less than ideal)16:37
smoserb.) have a re-director service that tracks versions of metadata and allows /TIMESTAMP/ and does the translation.16:38
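smoser's option b) could look something like this toy translation layer: a /TIMESTAMP/ prefix is stripped off the request path and the client is 302'd to the matching versioned S3 object (the bucket URL and `version_for` lookup are hypothetical placeholders, not a real service):

```python
def redirect_for(path, version_for):
    """Translate '/TIMESTAMP/dists/precise/InRelease' into a 302 target
    pointing at the matching S3 object version. `version_for` is a
    callable mapping (key, timestamp) -> an S3 versionId."""
    _, ts, key = path.split("/", 2)    # drop leading '', take TIMESTAMP, rest
    vid = version_for(key, ts)
    return 302, f"https://bucket.s3.amazonaws.com/{key}?versionId={vid}"
```

As discussed below, since nothing is ever deleted, redirecting (rather than proxying the bytes) is enough.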
smoserutlemming, can you get the revision history of a bucket including timestamps of changes?16:39
smoserif you could, then we wouldn't need to require additional code from the populating code, but could just read what was there16:40
utlemmingsmoser: yes, if versioning is on. You can request the versioning information16:40
utlemmingsmoser: but that's not public16:40
smoserright.16:40
smoserbut thats ok.16:40
smoserthat service could have an acl, but would mean your populating code didn't have to export that data.16:41
utlemmingfor your redirector service then, it would also have to feed the files to the client too16:41
utlemmingunless it 302's all requests for anything not meta-data16:41
smoser302 would be right.16:42
smoseras we'd never delete the data16:42
smoserrbasak,16:42
smoserthe above is actually one benefit of the proposed "change the format of the release file to contain a hash in the path"16:43
smoser(versus the /by-hash scheme we're proposing at https://wiki.ubuntu.com/AptByHash)16:44
smoserthe redirector service wouldn't be that hard to do, i don't think.16:45
smoserxnox, thanks for making us think about this.16:46
rbasaksmoser, xnox: why not just use by-hash and keep multiple InRelease files around?16:46
utlemmingsmoser: so reading between the lines...you could have a by-hash and a by-date?16:46
rbasakStart off with a historical InRelease file and then hit the mirror exactly as usual with the by-hash scheme16:46
smoserrbasak, because keeping multiple in-release files around doesn't work.16:46
smoserbecause the path is in them16:47
rbasakPath to what?16:47
smoserok. so http://us.archive.ubuntu.com/ubuntu/dists/precise/Release16:47
smosersays "get main/binary-amd64/Packages" based on the path to Release.16:47
smoserright?16:47
rbasakYes, but with the by hash scheme "main/binary-amd64/Packages" is just a key, not a path. The path is taken from the hash.16:48
smoserwait. yeah, you're right.16:49
smoserthen if you add '?version=4' to "main/binary-amd64/Packages" and traverse that for all the relevant hashes, they're still present16:50
=== kraut is now known as savenicks
smoserand you essentially pin the date.16:50
smoseryeah.16:50
smoserthat works too16:50
=== savenicks is now known as kraut
=== kraut is now known as help
=== help is now known as kraut
rbasakYou never need to add ?version=4 to anything except InRelease16:50
smoserright.16:50
=== n0ts is now known as n0ts_off
smoserso the redirector service would then only have to redirect that one file16:51
rbasakIt'd be hard to do in client code, but I could write a http frontend that translated the entire S3 mirror dynamically to any date fairly easily16:51
rbasakYes exactly16:51
storrgieis there a good article for how to connect an ubuntu server to an AP via wireless?16:51
rbasakThen we just need to keep old index files and old pool debs around16:51
storrgieI've got wired working fine but I wouldn't mind dropping the cable16:52
rbasakI want to do a PoC on this now16:52
smoser storrgie /usr/share/doc/wireless-tools/README.Debian16:52
rbasakIt's a relatively trivial addition to our existing scheme, and would have some immediately useful benefits to developers16:52
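The by-hash idea rbasak describes, where "main/binary-amd64/Packages" in the Release file is just a key and the object is fetched from a hash-derived path, can be sketched like this (the exact directory layout here is an assumption modelled on the AptByHash proposal, not a finalized spec):

```python
import hashlib
import posixpath

def by_hash_path(index_key, content):
    """Given an index key from the Release file, e.g.
    'main/binary-amd64/Packages', return the hash-addressed path the
    content would actually be fetched from: a by-hash directory next to
    the index, keyed by the content's SHA256."""
    digest = hashlib.sha256(content).hexdigest()
    directory = posixpath.dirname(index_key)
    return posixpath.join(directory, "by-hash", "SHA256", digest)
```

Because old hash-addressed objects are never overwritten, any historical InRelease file continues to resolve to valid paths, which is exactly why only InRelease needs a version pin.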
=== kraut is now known as ircnet
=== ircnet is now known as kraut
utlemming(and sys admins/engineers)16:52
rbasakThe only real extra cost is a redirector service so one EC2 instance basically16:53
genii-aroundstorrgie: Basically make a wpa-supplicant.conf   file and then wpa_supplicant -i eth# -c /wherever/conf-file-is &      and then maybe dhclient eth#16:53
smoserrbasak, right.16:53
smoserand it doesn't even need storage16:54
rbasakYep16:54
genii-around( substitute wlan0 or whatever accordingly )16:54
rbasakI wonder if apt supports following a 301 http redirect?16:54
smoserrbasak, i'm pretty sure it does.16:54
utlemmingrbasak: I was wondering that myself16:54
rbasaksmoser: for every index file and pool deb?16:54
storrgiegenii-around, I'm not sure how to specify a wpa key then16:55
storrgiethe document doesn't describe it16:55
rbasakAnyway we can implement a redirector without that, by actually serving every file from S3 through the instance. And support for that could always be added to apt16:55
ivoksstgraber: around?16:55
smoserrbasak, right. but that doesn't really scale16:56
storrgiehttps://gist.github.com/306130716:56
smoserwell, at least not as well as redirect16:56
rbasakYes, but it could scale later by adding 301 support to apt if it isn't already there16:56
rbasakHow much scale would a historical archive service need?16:57
rbasakI mean that we can implement now and scale later without having to change the architecture later16:57
storrgiehttps://gist.github.com/3061307#comments16:58
utlemmingrbasak: one way to do this might be have "apt-historical" package16:58
storrgienot sure how to get the interface to work properly16:58
storrgieit doesn't appear to be connecting16:58
genii-aroundstorrgie: The key is in the wpa_supplicant.conf ... man wpa_supplicant.conf   has the syntax for that file16:58
utlemmingrbasak: have it local to the box in question which hits "http://localhost:1337" or something like that16:58
utlemmingsmoser: then we don't have to have a service per se16:59
rbasakutlemming: that's a good idea16:59
rbasakThen we wouldn't need an ec2 instance for it either16:59
genii-aroundstorrgie: Apologies on lag, work required me for bit16:59
utlemmingrbasak: then you don't have to use a 302, because it could just be passed through16:59
storrgiegenii-around, not sure if I have wpa supplicant installed16:59
rbasakActually that's not a good idea at all. It's a great idea ;)16:59
utlemmingrbasak: the other advantage is that it means that someone could make it a service if they wanted to17:00
rbasakYes17:01
storrgiegenii-around, is it required to have wpa supplicant17:01
rbasakSo do we have a plan then? :)17:01
genii-aroundstorrgie: Yes17:01
rbasakAlso, did anyone have any feedback for me on https://wiki.ubuntu.com/AptByHash?17:01
genii-aroundstorrgie: The packagename is just wpasupplicant  if you do not already have it installed17:01
uvirtbotNew bug: #1021382 in maas "The COMMISSIONING_SCRIPT setting uses a relative path." [Critical,Confirmed] https://launchpad.net/bugs/102138217:02
storrgieturns out it is installed17:03
storrgiegenii-around, so wait, the doc you pointed me to didn't show me anything about wpasupplicant.... it just said to edit the /etc/network/interfaces file17:03
genii-aroundstorrgie: I did not point you to any documents.17:03
stgraberivoks: yep17:04
utlemmingrbasak: another thing you can do is to use the "Last-Modified" HTTP header (http://paste.ubuntu.com/1078335/) to determine the dates17:04
utlemmingrbasak: which makes it really, really fast17:04
storrgiesmoser, did, sorry17:04
storrgiegenii-around, where do you suggest I place this wpa_supplicant.conf file? What's good practice?17:04
ivoksstgraber: you remember, back in 2011, you've added a workaround for open-iscsi in its init script17:04
genii-aroundstorrgie: I said more-or less: make a wpa-supplicant.conf file and then wpa_supplicant -i eth# -c /wherever/conf-file-is & and then maybe dhclient eth#   and then: substitute wlan0 or so accordingly17:05
ivoksstgraber: idea was not to start open-iscsi if iscsi was started in initramfs17:05
utlemmingrbasak: you hit the by-hashes and look at the last modified so that if someone asks for YYYYMMDD-HHMMSS you can choose the by-hash that is applicable.17:05
rbasakutlemming: so I do an HTTP HEAD on each InRelease?version=x and note the Last-Modified dates, right? And I'll eventually get a 404 or something which means I have all the versions. Then cache those.17:05
ivoksstgraber: bug https://bugs.launchpad.net/ubuntu/+source/open-iscsi/+bug/85096017:05
uvirtbotLaunchpad bug 850960 in open-iscsi "iscsid tries to reconnect existing session at startup, failing to do so and hanging the system (dup-of: 677333)" [Undecided,Confirmed]17:05
uvirtbotLaunchpad bug 677333 in open-iscsi "open-iscsi: reconnecting to targets fails with kernel >2.6.32 due to sysfs changes (open-iscsi pkg version out of date with kernel)" [Undecided,Confirmed]17:05
genii-aroundstorrgie: Someplace like /etc/my-wpa-supplicant.conf    maybe. Doesn't really matter so long as the command-line has the path to it17:05
smoserstorrgie, i'm sorry, i shared all knowledge i have of this.17:05
utlemmingrbasak: yup17:05
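The HEAD-on-`?version=x` enumeration rbasak describes yields a list of (version, Last-Modified) pairs; picking the snapshot for a requested date is then a simple maximum. A toy version, using plain sortable `YYYYMMDD-HHMMSS` strings instead of real HTTP dates for simplicity:

```python
def pick_version(versions, wanted):
    """Given [(version_id, last_modified)] pairs for InRelease, return
    the newest version at or before the requested timestamp. Raises
    LookupError if the archive has no snapshot that old."""
    eligible = [(lm, vid) for vid, lm in versions if lm <= wanted]
    if not eligible:
        raise LookupError("no snapshot at or before %s" % wanted)
    return max(eligible)[1]            # newest eligible last_modified wins
```

Caching this (version, date) list is cheap since, as noted above, only InRelease ever needs version resolution.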
smoser:)17:05
utlemmingrbasak, smoser: the other thing is that now that these mirrors are public and production, I need to work with IS to get versioning turned on17:06
stgraberivoks: right, I believe this bug was fixed upstream/Debian so in theory the hack is no longer required, but someone should test before dropping it17:06
stgraber(haven't looked at the bug again but that's what I remember from the last time I read it)17:06
ivoksstgraber: right, but we don't have the fix in our open-iscsi17:06
smoserutlemming, well, even before IS was involved, you should have thought about it17:06
rbasakstorrgie, genii-around: whoa. No need to create wpa_supplicant.conf. You can put all needed keys directly in /etc/network/interfaces.17:07
ivoksstgraber: i've tested the patch from upstream and it works17:07
smoseras not deleting means a massive increase in storage17:07
stgraberivoks: yeah, it needs merging from Debian I believe...17:07
ivoksstgraber: or that...17:07
genii-aroundrbasak: Ah, nice. I've never used it in this way before.17:07
utlemmingsmoser: I delete now17:07
smoserand bill17:07
smosereven if everything else in s3 versioning was magic and perfect17:07
utlemmingsmoser: to save storage17:07
rbasakstorrgie, genii-around: see /usr/share/doc/wpasupplicant/README.Debian.gz for examples17:07
utlemmingsmoser: the change here is that we need to _not_ delete and we need to enable versioning.17:07
stgraberivoks: I think we definitely want to merge from Debian, not sure how many other changes we have on that one though so don't know how painful it'll be17:08
* genii-around bookmarks17:08
smoserutlemming, right17:08
utlemmingsmoser: which is a big, big change17:08
smoserwhich increases costs significantly17:08
smoserwell, even if the versioning was magic17:08
stgraberat least it's good to know that the fix works and we can get rid of my ugly hack :)17:08
ivoksstgraber: i could take a look, in precise we are at 0ubuntu917:08
storrgierbasak, genii-around how do i reference this wpa_supplicant.conf file from my /etc/network/interfaces ?17:08
utlemmingsmoser, rbasak: looks like the makings of blueprint for UDS-R17:08
ivoksstgraber: yeah, just wanted to let you know if it's still making problems for you somewhere :)17:09
rbasakutlemming: I'm very tempted to JFDI. Assuming by-hash is accepted, this feels relatively simple17:09
stgraberivoks: right, we'd probably want to merge from Debian in quantal and if there are enough people asking for it and we know it's 100% safe, then drop my hack from oneiric and precise and replace it by a cherry-pick of the fix17:09
genii-aroundstorrgie: For that you would probably need to read the documentation rbasak referred to. I have myself not used it in this way and so of limited help there.17:09
ivoksstgraber: yeah, the fix is a one-liner17:09
utlemmingrbasak: except that I can't implement this on the S3 side that easily17:09
stgraberivoks: I'm not actually using iscsi, I just happened to be the lucky one to get that bug escalated to me initially :)17:10
stgraberivoks: good, should be easy to cherry-pick then :)17:10
rbasakstorrgie: you don't need a wpa_supplicant.conf. See the example in the docs. Just a few lines in /etc/network/interfaces is all you need.17:10
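rbasak's suggestion looks like the example below: the wpasupplicant package's ifupdown hooks read `wpa-*` options straight from /etc/network/interfaces, so no separate wpa_supplicant.conf is needed for a simple WPA-PSK setup (interface name, SSID and passphrase here are placeholders):

```
# /etc/network/interfaces — minimal WPA-PSK example
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyNetwork
    wpa-psk MySecretPassphrase
```

Then `ifup wlan0` brings the interface up; see /usr/share/doc/wpasupplicant/README.Debian.gz for more variants.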
utlemmingrbasak: the _best_ case is that I can throw up a parallel mirror that has versioning, but we're talking a cost of $600/month17:10
rbasakutlemming: oh yeah, there's the money :)17:10
stgraberivoks: I'll try and have a look at it this afternoon once I'm done fighting with my automated ipv6 testing17:10
utlemmingrbasak: ubuntu-server does not own the S3 mirrors as of ~20 hours ago17:10
ivoksstgraber: https://github.com/mikechristie/open-iscsi/commit/f0b670c016c85c882bb0583eaef8ea2f7147d4af :)17:10
ivokspretty silly :)17:11
rbasakutlemming: I have a test quantal mirror right now :)17:12
=== dendrobates is now known as dendro-afk
rbasak(btw, any idea how much that's costing us? I can't see the size of the bucket)17:12
utlemmingrbasak: ~$30-40USD/month17:12
rbasakutlemming: OK. And I have an instance running too.17:13
rbasakutlemming: it'll need to be syncing the mirror every 30 minutes, so not worth turning off17:13
utlemmingrbasak: ~$250/month + 0.10/GB inbound17:13
rbasakI should kill the instance actually. It's only sitting idle right now, pending feedback.17:14
* rbasak does so17:14
utlemmingthe reason I used AutoScaling is because it is cheaper...we don't have to pay bandwidth in to update the mirrors because the first 1GB is free.17:16
rbasakThat's different for a long lived instance?17:17
=== dendro-afk is now known as dendrobates
=== cpg|away is now known as cpg
rbasak<rbasak> I wonder if utlemming can see stats on the total of S3 PUTs in the S3 mirror?17:47
rbasak<rbasak> That should give us an approximate monthly storage growth figure, right?17:47
=== zyga-afk is now known as zyga
InsyteI'm getting kernel crashes in Lucid when running NFS + iscsi inside KVM.  Unfortunately, it crashes hard and fast enough that nothing is logged, even remotely.19:05
InsyteLucid.19:05
InsyteThey seem to happen every five days or so.19:05
InsyteAnyone seen anything similar?19:05
=== Aaton_off is now known as Aaton
roaksoaxDaviey: /win 1319:14
roaksoaxerr19:14
stgraberhallyn: just spent half an hour trying to figure out why my ipv6 testing scripts are failing quite badly: http://paste.ubuntu.com/1078528/19:16
hallynstgraber: hm, yeah, hooks and api came through separate  branches19:20
stgraberhallyn: yeah but I rebased the API branch on quantal, so we now have the hooks in there but they somehow get stripped every time I save the config19:21
hallynstgraber: yes, because i had to implement save_config - there was none.  so the lxc.hooks need to be implemented in save_config19:22
stgraberhallyn: ah, right, I guess I'll just skip using the hooks for now then19:23
hallynstgraber: which bzr branch are you using atm?19:24
hallyni'll whip up a merge proposal19:24
stgraberhallyn: lp:~ubuntu-lxc/ubuntu/quantal/lxc/lxc-api-and-python19:25
hallynstgraber: feh, of all the files in that $(*&% patch, conf.h wasn't one.  gotta rebuild my patch :)19:57
=== dendrobates is now known as dendro-afk
hallynstgraber: why did you remove the README in your last commit?20:11
hallynit makes it fail to build20:11
uvirtbotNew bug: #997269 in dovecot (main) "dovecot imap broken by apparmor policy" [High,Invalid] https://launchpad.net/bugs/99726920:12
stgraberhallyn: oops, I'll fix the FTBFS. The reason is that the directory is no longer empty as we have lxc-init in there20:12
hallynstgraber: ok, lemme first check in my change for hooks.  it compiles, hopefully it runs right too :)20:13
stgraberhallyn: oops, saw your comment right after I pushed the fix for README, so you may need to uncommit/pull/commit, sorry20:14
=== dendro-afk is now known as dendrobates
hallynstgraber: done - but your fix isn't enough.  the *makefile* wants there to be a README20:34
stgraberhallyn: ... ok, will fix the Makefile too then :)20:35
mu3enis it possible to ensure grub-pc is installed instead of grub-efi during server install?20:41
=== fenris_ is now known as Guest70358
hallynstgraber: thanks :)  at any rate save_config() is working20:53
hallynfor hooks that is20:53
stgraberhallyn: cool, and I just updated the branch to make it buildable again. I'll push to the PPA soonish20:54
=== isitme is now known as guntbert
hallynrefetching.  let's see if i can think up a good plan for get_config()21:01
hallynprobably makes the most sense then to re-implement save_config in terms of get_config21:02
hallynstgraber: for c->get_config_item(c, "lxc.mount"), would you want just the value part, or the whole 'lxc.mount = value' string?21:02
stgraberjust the value21:03
hallynok.  i think unfortunately in the c call i'll have to make it21:03
=== cpg is now known as cpg|away
hallynint get_config_item(struct container, char *key, char *return)21:03
stgraberI can live with that21:04
hallynreturning the length, so you can do 'len = get_config_item(c, key, NULL); v = malloc(len+1); len = get_config_item(c, key, v)'21:04
hallynstill seriously ugly because of multiple-line things like lxc.mount.entry21:07
=== cpg|away is now known as cpg
hallynstgraber: do you need to be able to do 'get_config_item(c, "lxc.cgroup")' to get all cgroup items, or can we just support only specific cgroup items?21:15
hallynlike lxc.cgroup.devices.allow21:15
stgraberwould be best to enforce an exact match of the key21:16
stgraberso lxc.cgroup.devices.allow would work but lxc.cgroup would return an error (no match)21:17
stgrabernot sure what to return for these that can exist multiple times (network, devices.allow, ...)21:17
hallyni can just return a string with newlines,21:20
hallynor if you prefer I can return one line at a time with indication of whether there are more lines21:20
stgraberhallyn: newlines is best I think. Easier to handle for me at least21:23
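The calling convention hallyn sketches (call with NULL to learn the length, allocate, call again to fill; exact key match enforced; multi-value keys returned newline-joined) can be modelled in a few lines of Python. This is a toy model of the convention under discussion, not the real liblxc API; `container` is just a dict:

```python
def get_config_item(container, key, buf):
    """Toy model of: len = get_config_item(c, key, NULL);
    v = malloc(len+1); len = get_config_item(c, key, v).
    Multi-value keys (lxc.cgroup.devices.allow, ...) come back as
    newline-joined text; a non-exact key returns -1."""
    values = container.get(key)        # exact match only, per the discussion
    if values is None:
        return -1
    text = "\n".join(values)
    if buf is None:
        return len(text)               # first call: report length only
    buf.extend(text.encode())          # second call: fill caller's buffer
    return len(text)
```

Usage mirrors the two-call C pattern: probe with `buf=None`, then pass a `bytearray` to receive the value.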
stgraberhallyn: btw, how does that work in set_config_item? how would I go about defining multiple network interfaces using it?21:24
hallynstgraber: IIRC just do same way (order) as in a config file21:27
hallyndoes that not work?21:27
hallynso, you can't change a setting for an already-defined nic, i uess21:28
hallyni guess21:28
stgraberI can see that being a bit annoying for a few of my setups ;)21:28
hallynwould it suffice to have a clear_config(c, "network") and clear_config(c, "cap.drop") ?21:29
stgraberclear_config_item(key) wiping all occurrences of a key would be good yeah21:29
hallynanything more structured doesn't really fit with the set_config21:29
stgraberfor scripting, I don't really care so much about the current values, I just want to see them gone and replaced ;)21:30
hallynif you can think of a better way to do something like container->nic[1].name = "eth3" with the api, i'm all ears21:30
hallynclear all network entries, or only #0 or #1 ?21:31
hallynall nics i should say21:31
stgraberideally I'd like to see the config format change to not allow duplicate keys unless they are just multiple values for a key21:31
stgraberand see network be moved to lxc.network.ethX.type/flags/link/hwaddr21:31
stgraberI'd expect clear_config_item(key) to be a generic function removing any occurrence of "key" from the config, so in the case of .network, all nics21:32
hallynok.  will see what i can come up with21:32
hallynstgraber: network multiple entries are of course a bit of a pain since subkeys might exist for only some of the nics21:42
hallynstgraber: maybe the right answer will hit me over the weekend.  have a good weekend21:48
stgraberhallyn: well, with multiple calls to set_config_item (one per key and per interface) + get_config_item returning multiple lines and clear_config_item to wipe the entries, I should be able to make a wrapper in python that lets you change the network config in a scripter-friendly way21:49
stgraberhallyn: have a good weekend!21:50
hallynstgraber: ok, sounds good.21:57
=== Lcawte is now known as Lcawte|Away

Generated by irclog2html.py 2.7 by Marius Gedminas - find it at mg.pov.lt!