=== markthomas is now known as markthomas|away
[03:01] rbasak, jamespage: I pushed some more changes to bcache-tools, and I think it's ready for a release. It should be all prepped, but it's not tagged yet (waiting for feedback from you guys first)
[03:15] Can somebody give me a link to a tutorial on how to make an Ubuntu guest ready for a virtio network adapter? Googling confuses me at the moment, since the tutorials I find refer to the configuration on the KVM host itself
[03:17] Am I done by installing libvirt-bin and then enabling virtio for the adapter on the KVM host system?
=== Wiched is now known as Guest15186
[08:30] squisher: OK, I'll take a look.
[08:35] Good morning.
=== shauno_ is now known as shauno
[10:18] hello guys, I read that a backport for apache may be released; is there any ETA for the packages?
[10:19] I am using 12.04
[10:25] pvlos: Which CVE is it?
[10:25] bekks: http://people.canonical.com/~ubuntu-security/cve/2015/CVE-2015-4000.html
=== Wiched is now known as Guest17537
[10:27] Well, 2.2 is currently in needs-triage. So no need to backport 2.4
[10:27] The link can be found in your link :)
[10:29] bekks: I know, but the 'needs-triage' may stay forever
[10:29] As for every bug. There is no guarantee that a bug will be fixed.
[10:30] bekks: the bug is critical though
[10:31] is there any way to fix the issue without the backports?
[10:31] Which makes it very likely that it will be fixed, but still no guarantee
[10:32] i see
[10:32] And since 12.04 (and 2.2 with it) are supported until 2017, it is not likely that a backport is needed. Instead, 2.2 will be fixed.
=== genpaku_ is now known as genpaku
=== alai` is now known as alai
[13:57] Hey all: does the cloud archive repository lag behind the cloud-archive staging PPA?
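An aside on the virtio question above: it is answered on the KVM host side. A libvirt guest gets a virtio NIC when the interface definition in the domain XML asks for the virtio model; on the guest side an Ubuntu image needs no extra packages, since the virtio_net driver ships with the stock kernel. A minimal sketch of the relevant stanza; the `default` network name is a placeholder, not from this log:

```xml
<!-- excerpt from `virsh edit <guest>`; 'default' is an assumed network name -->
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
</interface>
```

After the guest reboots with the new model, `lsmod | grep virtio` inside it should show virtio_net in use.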
[13:57] Because the cloud archive Kilo staging PPA contains a fix for libvirt that does not appear to have made it to the package source that is added by apt-add-repository cloud-archive:kilo
[14:20] lukasa_, it will get there; the staging PPA is a holding area for entry into the stable update process
[14:22] lukasa_, those updates are in proposed - http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/kilo_versions.html
[14:22] just pending some testing
=== Wiched is now known as Guest16452
[14:32] jamespage: ah great, thanks
[14:32] Out of interest, do you know what the timeline usually is?
=== Lcawte|Away is now known as Lcawte
[14:49] lukasa_, normally a minimum of a week
[14:49] lukasa_, those ones collided with last week's summit - apologies
[14:50] jamespage: No worries, it happens. =) Just wanted to make sure I understood what was happening. For testing we can temporarily use the kilo-staging PPA. =)
[14:50] lukasa_, use the proposed pocket
[14:50] add-apt-repository cloud-archive:kilo-proposed
[14:51] Ah, much better idea, thanks!
[14:51] lukasa_, that way you will use exactly the same binary as we release to updates
[14:58] lukasa_, I've just kicked off the testing - if that succeeds, I'll push through today
[14:58] meant to be taking two days off :-)
[14:58] That's extremely kind of you jamespage, thanks so much. =)
[14:58] lukasa_, no problemo
[14:58] In return, I promise to ask you no further questions for a week or so
[14:58] (If only because I'm on holiday myself. ;) )
[14:58] lukasa_, well, until Monday anyway
=== PaulePan1er is now known as PaulePanter
[15:09] hi, any experts on pxeboot here? :)
[15:10] i want to know if it is possible to provide a kickstart file on the server that provides the netboot infrastructure via its local filesystem instead of a URL
[15:18] How would a booting machine get to the local filesystem on the boot server?
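For reference on the `add-apt-repository cloud-archive:kilo-proposed` suggestion earlier in the log: on 14.04 that command ends up writing an apt source roughly like the one below. The file name and pocket spelling are my best guess from the cloud-archive naming scheme, not copied from the log:

```
# /etc/apt/sources.list.d/cloudarchive-kilo.list (approximate)
deb http://ubuntu-cloud.archive.canonical.com/ubuntu trusty-proposed/kilo main
```

After a `sudo apt-get update`, `apt-cache policy libvirt-bin` shows whether the proposed-pocket build has become the install candidate.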
[15:20] the question is not clear
[15:44] is there any good reason why the ubuntu cloud image comes with grub-legacy instead of grub2?
[15:45] is there any good reason why the ubuntu cloud image comes with grub-legacy instead of grub2?
[15:46] aryklein: Ask whoever provided the images, like Amazon or whoever it is
[15:47] genii: I got it from https://cloud-images.ubuntu.com/
[15:47] genii: so this is the reason why I'm asking it here
=== markthomas|away is now known as markthomas
=== lifeless_ is now known as lifeless
[16:02] aryklein: From what I can find, it's due to an issue between the Xen hypervisor and Grub2
[16:02] genii: ah ok. Thanks for the info
[16:29] hey all, I'm working collaboratively on a server backend with someone and I've been wondering; is it possible to share a process, or control thereof, with a group of people like you can with files?
[17:31] gartral: shared tmux or screen sessions
[17:31] gartral: it's perhaps not as easy/transparent as sharing files, but it works well enough
=== manjo` is now known as manjo
[19:03] When did linux-generic-lts-vivid drop?
[19:03] Am I seeing it right, that it dropped on the 21st?
[19:04] for 14.04...
[19:04] looks like the 20th to me https://launchpad.net/ubuntu/+source/linux-meta-lts-vivid
[19:05] sarnold: thanks, wasn't sure if it was the proposed date or the updates date on the 21st
[19:05] the "full publishing log" shows 2015-05-20 17:13:35 PDT -- presumably it'll localize to your timezone if you visit https://launchpad.net/ubuntu/+source/linux-meta-lts-vivid/+publishinghistory
[19:10] is it possible to access a copy of the archive in s3? we were hoping to point some of our servers at one of the new in-vpc s3 endpoints
[19:10] (i see $region.ec2.archive.ubuntu.com, but it looks like that points at ec2 instances)
[19:12] broder: I suggest checking in #ubuntu-mirrors -- I know some of the aws mirrors were on s3, but there were reliability issues; I don't know if they have been addressed or replaced with ec2 instances..
[19:12] ah, cool. will do - thanks!
[19:12] it seems like it's been a while since I've heard reliability complaints there.. hehe
[19:13] i've been trying to piece together what happened from email archives. it looks like it was originally s3, but i suspect they moved to ec2 because something something snapshot consistency
[19:33] o/ has anyone here who's used fai configured their grub? i'm running into some snags trying to fix it
[19:58] sarnold: the issue with shared tmux/screen is that it's a huuuge security hole... I don't want the other members to run programs as the dedicated user..
[19:59] sarnold: if I wanted to do what you were suggesting I would have just given them an SSH key for the user
[20:03] also I have an edge-case issue that I'm unsure how to resolve.. I have a code base that I need to import from a *VERY* old backup, we're talking 5-1/2 inch floppies, and I happen to HAVE a 5-1/2 inch floppy drive so that's not a problem... the problem is there are hundreds of files and they're all upper-case; how can I squash-case them without doing it file by file, by hand?
[20:06] * gartral makes a mental note that he should have used better punctuation there...
[20:07] gartral: try the mount(8) option shortname=lower
[20:08] sarnold: already tried, i got no data transferred
[20:08] oh :(
[20:08] sarnold: that is an interesting idea. I was going to suggest rename(1p)
[20:08] gartral: you can probably also configure sudo to let your pals run a few specific commands as a specific user..
[20:08] sarnold: I have them safely on the HDD, I just want a move command that'll squash-case
[20:09] jdstrand: heh, that was going to be my suggestion if the mount option didn't work :)
[20:09] :)
[20:10] oh dear god this is going to be fun compiling later, it's partially written in f***ing LISP
[20:11] can GCC even handle in-line lisp? >.<
[20:12] you may have better success getting emacs to do C :)
[20:12] * sarnold runs
[20:12] LOL
[20:14] o.o ok, never mind...
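Since the shortname=lower mount option above moved no data, the rename(1p) idea can also be done with a plain POSIX loop. A hedged sketch; the throwaway directory and file names are made up for the demo, the loop itself is the technique:

```shell
# Demo: lowercase every filename in a directory (names here are placeholders).
dir=$(mktemp -d)
touch "$dir/FOO.LSP" "$dir/BAR.C" "$dir/readme"

for f in "$dir"/*; do
  base=$(basename "$f")
  lc=$(printf '%s' "$base" | tr '[:upper:]' '[:lower:]')
  [ "$base" = "$lc" ] && continue          # already lower-case, nothing to do
  if [ -e "$dir/$lc" ]; then
    echo "skipping $base: $lc already exists" >&2   # avoid clobbering on collision
  else
    mv -- "$f" "$dir/$lc"
  fi
done
ls "$dir"
```

With the perl rename tool the one-liner equivalent is `rename 'y/A-Z/a-z/' *`; either way, the collision case (two names differing only in case) deserves the explicit check.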
whoever wrote this was a bloody genius... there's a LISP parser, written in C, in the code to handle the Lisp in-line... I now feel wholly under-qualified to even touch these disks
[20:19] anyway, how would I go about squash-casing these?
[20:26] is it gnu common lisp?
[21:11] gartral: What was wrong with the rename suggestion?
[21:24] * genii ponders rename 'y/A-Z/a-z/'
[21:33] hello, i maintain an ubuntu vps for my blog, which i manage from a mac via ssh
[21:34] i'd like to back up and move my production site to my local mac, make changes, and then upload the changes back to production
[21:34] i'm trying to set up an rsync deal, but was wondering if that's the way to go, or should i use scp
[21:34] or... ?
[21:34] i'm kinda new to all of this
[21:34] rsync is awesome, definitely beats scp'ing individual files or re-copying everything needlessly
[21:35] no need to run an rsync daemon though, just rsync -e ssh ... is sufficient
[21:35] (and probably the -e ssh isn't even needed these days)
[21:35] consider also using git; a local and remote repository, so you can just pull changes when you make them; this has proven popular on e.g. heroku
[21:36] yeah, i'm looking at git as well for this and currently have the site dir as a git repo
[21:37] i'm using ghost for the blog and it requires turning off the ghost service before doing anything with the database, so i'm not sure yet how to handle that
[21:37] with the git solution
[21:37] with the rsync one, i wrote a script i run on my mac that ssh's into the vps, stops the service, rsyncs everything, and then starts the service again
[21:40] i modified the sudo config so that a password isn't required over ssh when starting and stopping the service for my user
[21:40] is this the right approach?
[21:40] and should I set a delay after the call to stop the service to make sure it stops before running the rsync command (the next line in the script)?
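On the delay question just above: rather than sleeping a fixed time after stopping the service, a small poll loop can wait until a status command stops reporting success. A sketch in portable sh; the host and service names in the usage comment are made up, only the helper is the actual technique:

```shell
# wait_for_stop CHECK_CMD [TRIES]: poll CHECK_CMD once a second until it
# fails (meaning "no longer running") or TRIES attempts are exhausted.
wait_for_stop() {
  check=$1
  tries=${2:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if ! sh -c "$check" >/dev/null 2>&1; then
      return 0                  # check failed: service has stopped
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1                      # still running after $tries seconds
}

# Intended use from the mac-side script (hypothetical host/service names):
#   ssh blog.example.com 'sudo service ghost stop'
#   wait_for_stop "ssh blog.example.com 'service ghost status | grep -q running'" 15
#   rsync -az -e ssh blog.example.com:/var/www/ghost/ ./ghost-backup/
#   ssh blog.example.com 'sudo service ghost start'
```

Polling like this fails fast when the stop is quick and gives up with a nonzero exit when it is not, so the script can abort instead of rsyncing a live database.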
=== _thumper_ is now known as thumper
[21:40] it's probably fine if it's a single-purpose system
[21:40] please promise me you're using ssh keys rather than passwords though :)
[21:41] there may be a way to use e.g. status to find out if the service is still running or not
[21:42] haha, i am using ssh keys to connect, although it used to ask me for passwords for sudo commands until i made that change
[21:42] is there something else i need to do?
[21:42] ah, it would be awesome to check for the service status and then run the rsync
=== heidi_vanator is now known as jrcconstela
=== Lcawte is now known as Lcawte|Away
[22:48] Can I install maas on a VM?
[22:48] I've been doing that the last couple of times
[22:49] hey harushimo :) I thought of you when this got pasted around the other day: http://www.ubuntu.com/download/cloud/install-ubuntu-openstack
[22:50] I did
[22:50] I couldn't get past step 3
[22:51] I needed to redo my VMs again
[22:51] hehe
[22:52] it's an experiment
[22:52] d'oh :)
[22:52] hi
[22:55] sarnold: any tips?
[22:56] harushimo: you should be able to do maas in a vm, though that does mean your VM needs to be configured properly to allow the vm guest to do all the raw networking it wants to
[22:56] so it can't do NAT and that kind of stuff around the vm
[22:57] sarnold: I've been following http://marcoceppi.com/2012/05/juju-maas-virtualbox/
[22:58] harushimo: that guide is pretty out of date now
[22:58] marcoceppi: any tips?
[22:58] harushimo: it's much better to use libvirt and qemu, since MAAS can actually use that as a power type
[22:58] marcoceppi: do you have some instructions on that?
[22:59] harushimo: not really, there are some on the MAAS website but they are kind of incomplete.
I can do a blog post about it tonight if you'd like
[22:59] marcoceppi: that would be great
[22:59] marcoceppi: I can't get past step 3 of that documentation
[23:00] marcoceppi: I've been trying to install openstack so I can install cloud foundry
[23:01] harushimo: well, you're going to want something more than VMs
[23:01] marcoceppi: oh really?
[23:01] harushimo: yeah, I mean eventually
[23:02] marcoceppi: I agree.. sorry, this is for my dev purposes
[23:02] marcoceppi: I would need more than VMs
[23:12] harushimo: how are you deploying cloud foundry?
[23:15] marcoceppi: It will be done through openstack
[23:16] marcoceppi: I need to install a hypervisor, which is openstack, and I install over that
[23:17] Right right, no worries. Okay, I need to fix my MAAS machine, but once that's sorted I'll start on the blog post/video
[23:18] marcoceppi: thank you so much
[23:19] marcoceppi: companies are going this route. I want to learn the technology and sell myself too
[23:19] marcoceppi: I'll continue to experiment