[01:46] <acmehendel> can someone suggest how to deploy node on a prod machine?
[01:51] <sarnold> probably it's best to get it from upstream and subscribe to their security list if they've got one; no one tends to the node packages in the archive so they may be a bit stale
[01:52] <sarnold> .. unless you want to be the one to tend to the archive packages ;) hehe
[02:52] <acmehendel> say I installed node.js under a user rather than root...how would I have node js launch automatically under services?
[02:57] <sarnold> you could use upstart with proper user and group configuration options http://upstart.ubuntu.com/cookbook/  or you could use vixie cron's @reboot specifier (look in the crontab(5) manpage)
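Both options sarnold mentions can be sketched roughly like this; the user name, paths, and app entry point are all invented placeholders, not anything from the channel:

```text
# /etc/init/myapp.conf -- minimal upstart job running node as an unprivileged user
description "node app"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
setuid deploy
setgid deploy
exec /usr/local/bin/node /home/deploy/app/server.js

# or, in the deploy user's own crontab (crontab -e), cron's @reboot specifier:
# @reboot /usr/local/bin/node /home/deploy/app/server.js >> /home/deploy/app.log 2>&1
```

The `setuid`/`setgid` stanzas are the "user and group configuration options" from the upstart cookbook; the cron route needs no root access at all.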
[08:58] <mwhudson> rbasak: do you have opinions about maintaining packages in git vs just using git for merges?
[08:58] <mwhudson> rbasak: and are they written down anywhere?
[09:02] <rbasak> mwhudson: in the end I'd like to do both. That makes future merges easier, since every logical change will be a separate commit that could be rebased.
[09:02] <rbasak> mwhudson: for now, don't worry about it too much though. The git merge process can adapt ordinary uploads.
[09:02] <mwhudson> i'm not doing a merge here :-)
[09:02] <mwhudson> just some fairly hairy surgery
[09:03] <mwhudson> rbasak: do you use gbp much?
[09:03] <rbasak> mwhudson: not much. I used to. I think git-dch may still make sense.
[09:04] <mwhudson> is that sort of the opposite of debcommit ?
[09:04] <rbasak> gbp's import function works well when maintaining an upstream directly
[09:04] <rbasak> I never really understood debcommit
[09:04] <rbasak> It just updates debian/changelog with the commit messages from previous commits.
[09:04] <mwhudson> i've used it as a sort of one-off thing, gbp import-dsc current version, hack hack hack, get result uploaded, throw repo away again
[09:05] <rbasak> Rather than updating debian/changelog with each commit, which is another common pattern.
[09:05] <rbasak> Ah. For that, I use git-dsc-commit from my merge tooling.
[09:05] <mwhudson> um, i thought it committed to vcs with commit messages taken from d/changelog
[09:05] <rbasak> No, it's the other way round.
[09:05] <mwhudson> but hey, i have commit rights in debian golang now so ...
[09:06] <mwhudson> it would be nice to maintain the ubuntu stuff in git and push it to lp
[09:08] <mwhudson> extra confusion from us being ahead on upstream version too
[09:08] <mwhudson> eh i guess that doesn't really matter
[09:09] <mwhudson> rbasak: do you have a replacement for gbp import-orig too?
[09:27] <mwhudson> argh now i remember why i dislike gbp: cleaning up after a mistake is such a pain
[09:30] <rbasak> mwhudson: no. I use gbp import-orig for example in maintaining MySQL in Debian. But for Ubuntu dev, I just treat upstream as part of the same tree.
[09:30] <rbasak> (and just rely on Launchpad to keep the orig)
[09:31] <rbasak> mwhudson: also I understand that pristine-tar is considered fundamentally broken and deprecated now.
[09:34] <mwhudson> oh ok
[09:37] <mwhudson> agh well i've messed up my repo enough for one night i think
[09:49] <rbasak> Use tags and reset back. Can't mess up a repo then since you can undo everything.
[09:49] <rbasak> (and you can use the reflog if you didn't leave tags)
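rbasak's tag-and-reset safety net, as a throwaway sketch (the repo, tag, and commit messages are invented for the demo):

```shell
set -e
# Disposable repo to demonstrate bookmarking before risky history surgery.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'good state'
git tag pre-surgery                 # bookmark the known-good state
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m 'mistake'
git reset -q --hard pre-surgery     # undo: back to the bookmark
git log --format=%s                 # only 'good state' remains on the branch
# Without a tag, `git reflog` lists every previous HEAD you can reset back to.
```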
[09:54] <mwhudson> rbasak: well you have to reset back on two branches and delete 1-2 tags
[09:54] <mwhudson> it's not impossible of course, just a bit annoying
[10:01] <rbasak> mwhudson: I wonder if there should be some tooling around this. Essentially back up .git/refs and restore it, but in a safer way.
[11:23] <rbasak> kickinz1: I'm reviewing ntp now. Tag "logical/new-ubuntu" is what I should be reviewing for upload, right?
[11:24] <rbasak> Commit d4cc365?
[11:25] <rbasak> Oh, you're on holiday.
[11:25] <rbasak> Never mind!
[11:26] <caribou> rbasak: I did the CVE review on another tag, let me fetch it for you
[11:27] <caribou> rbasak: It was review/robie-1st-stage
[11:27] <rbasak> caribou: OK, thanks.
[11:28] <rbasak> That makes sense.
[11:30] <caribou> rbasak: btw, I'm mostly done with the clamav merge but there are a few things I'd like some expert's eyes on
[11:44] <rbasak> caribou: no problem. I can look after this ntp review. Do you want to leave me some notes?
[11:44] <caribou> rbasak: yes, I'll get that ready for you in a minute
[11:44] <caribou> where should I push the GIT tree ?
[11:46] <rbasak> caribou: do you want to try the full MP review process we've been developing? I can tell you what to push where for that. It's good for peer review as well as sponsor/sponsoree review.
[11:47] <caribou> rbasak: that's why I took the learning curve to use your git method so sure
[11:47] <caribou> rbasak: I did my best to follow the server team Wiki article
[11:48] <rbasak> OK so first in ~/.gitconfig:
[11:48] <rbasak> [url "git+ssh://racb@git.launchpad.net/~racb/ubuntu/+source/"] insteadof = lpmep:
[11:48] <caribou> rbasak: there might be some rough edges & missing bits but I think it is not so bad
[11:49] <rbasak> insteadof is on a second line there, paste error. And replace both occurrences of racb with your own lpid
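For reference, the un-mangled ~/.gitconfig stanza rbasak is describing (with both `racb` occurrences replaced by your own Launchpad id) would read:

```ini
[url "git+ssh://racb@git.launchpad.net/~racb/ubuntu/+source/"]
	insteadof = lpmep:
```

With that in place, `git push lpmep:clamav` expands to the full `git+ssh://` URL.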
[11:49] <rbasak> caribou: definitely rough edges and missing bits in the docs. And the exact process still. Feedback and wiki edits appreciated :)
[11:49] <rbasak> Then, push to lpmep:clamav
[11:50] <rbasak> You can push everything you like, since that helps with any review around the process.
[11:50] <rbasak> For sponsorees, I would specifically like the logical/<old ubuntu> tag
[11:50] <rbasak> And a "merge" branch for the actual proposed upload.
[11:51] <rbasak> Then propose a merge for that merge branch against the "ubuntu/dev" branch in ~ubuntu-server-dev.
[11:51] <rbasak> I see that doesn't exist for clamav yet.
[11:51] <rbasak> We will have an importer soon.
[11:52] <rbasak> Until then sponsorees are requesting the branch be created (by me right now) first, so that the work can be based on it.
[11:52] <rbasak> If done afterwards, it is still possible to rebase upon it, but that is a little painful.
[11:53] <rbasak> So not sure what you want to do there. The totally accurate (process-wise) way to do it would be for me to import and then for you to rebase, but I appreciate that's painful and maybe not worth the effort.
[12:01] <caribou> rbasak: maybe the best for now is just for me to push my git repo so you can look at it with my upcoming comments
[12:01] <caribou> rbasak: then once everything is ok, I can upload it the normal way for now
[12:02] <caribou> rbasak: then we can arrange the proper git repo for the next merge
[12:02] <caribou> how does that sound ?
[12:07] <rbasak> caribou: sure, that's fine.
[12:41] <caribou> rbasak: I'm finding a few more things as I'm writing the notes so I'll fix those along so it'll be a bit longer 'til I send it your way
[12:44] <rbasak> OK, no problem. I'm still working on NTP.
[12:59] <dannymichel> pressing up to go to past commands gives me weird character like '^[[A’ any ideas why?
[13:02] <hateball> dannymichel: how are you connected to the console
[13:02] <dannymichel> Just normal ssh via Mac terminal hateball
[13:04] <hateball> hmmm, usually get such issues if I connect with weird encoding, but OS X should be using utf8 as well
[13:04] <hateball> dannymichel: and you're not holding ctrl or any modifier key down? :p
[13:04] <dannymichel> not holding any keys down, no
[13:04] <hateball> I've no experience with OS X really so I can't say. Can you see if you get the same issue locally or using a linux ssh client?
[13:05] <dannymichel> it doesn’t happen when I’m logged in as root
[13:06] <hateball> root on OS X or ubuntu?
[13:07] <dannymichel> ubuntu
[13:07] <hateball> heh
[13:07] <hateball> check what locale the regular user has then
[13:07] <hateball> compared to root
[13:07] <dannymichel> not sure if i get your meaning
[13:12] <dannymichel> one thing that's different about this user is that bash starts at just $ rather than a username like dmichel@s:~$
[13:13] <hateball> is their shell even bash
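The raw `^[[A` escapes and the bare `$` prompt usually mean the account's login shell is something without readline line editing (e.g. /bin/sh or /bin/dash). A quick way to check, with the account name as a placeholder:

```shell
# Print the login-shell field from the passwd database (field 7).
getent passwd root | cut -d: -f7
# For the affected account (placeholder name):
#   getent passwd dmichel | cut -d: -f7
# If it reports /bin/sh or /bin/dash, switching to bash restores
# arrow-key history editing:
#   chsh -s /bin/bash dmichel
```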
[13:27] <mikky> hi, how am I supposed to turn around network interface on a 14.04-based server remotely and still be able to connect to it afterwards?
[13:29] <patdk-wk> what does, turn around, mean?
[13:29] <mikky> reload settings
[13:29] <maswan> you mean restart networking or ifdown+ifup? the second you can do in screen(1) on one command line with "ifdown eth0; ifup eth0". Of course, it is better to do it over an OoB console login
[13:29] <patdk-wk> I have never found any reason to do that ever
[13:30] <maswan> I have, fairly frequently actually
[13:30] <patdk-wk> the only thing I can think of, is to switch from/to dhcp/static
[13:30] <maswan> like switching from dhcp to static, adding ipv6, moving to a bridge
[13:31] <patdk-wk> anything else, you can adjust without taking the interface down
[13:31] <patdk-wk> and even that, you can, just harder
[13:31] <maswan> yes, but that doesn't test that your new interface is correct
[13:32] <mikky> sorry, forgot to mention it's a bond interface, set up as static. Somehow it works on boot but at runtime, it seems to fail. "restarting" the interface after changing network/interfaces is important if you want to be sure it will set up correctly at boot
[13:33] <maswan> My primary suggestion is to login over the console to do this
[13:33] <maswan> Otherwise, one command line in screen works if you make no mistakes
[13:33] <maswan> If you make mistakes, you need the OoB console login anyway
[13:34] <mikky> accessing the console is possible but it's a complicated company process, security-wise
[13:35] <maswan> then I suggest you don't make mistakes. :)
[13:35] <maswan> having a second interface would also help
[13:57] <mikky> ok, seems the problem is with bonding rather than network setup. I keep getting "waiting for slave to join bond0" for 60 seconds and then failing. But if I then ifup all the bonded slaves, it comes up automagically. At boot, on the other hand, the bond gets configured correctly.
[13:58] <patdk-wk> what is in your interfaces file?
[13:59] <mikky> it's rather long, the machine's got quite a few other interfaces. let me pull the relevant parts
[14:04] <mikky> interfaces: http://pastebin.com/rJCKEgss
[14:05] <mikky> i've tried both bond-slaves p3p2 p3p1 and bond-slaves none. No obvious difference.
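The pastebin itself is not reproduced here; for context, a typical ifenslave bonding stanza in /etc/network/interfaces on 14.04 looks roughly like this (interface names and addresses are illustrative, not mikky's actual config):

```text
auto p3p1
iface p3p1 inet manual
    bond-master bond0

auto p3p2
iface p3p2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    bond-mode active-backup
    bond-miimon 100
    bond-slaves none
```

The `bond-slaves p3p2 p3p1` vs `bond-slaves none` choice mikky mentions is which of the two enslaving styles is used: listing slaves on the master, or `bond-master` lines on each slave with `bond-slaves none` on the master.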
[14:12] <patdk-wk> hmm, dunno
[14:33] <beisner> coreycb, ok, to confirm:  *-icehouse x next + stable ... and trusty-liberty x next + stable  ...  all @ proposed?
[14:33] <beisner> for deploy/tempest sru checks
[14:34] <beisner> coreycb, or, feel free to trigger at will  :-)
[14:34] <coreycb> beisner, sure I'll go ahead
[14:34] <coreycb> thanks
[14:34] <beisner> coreycb, ok cool.  yw & thanks too
[14:36] <beisner> jamespage, re-confirmed 14:17:44 ceph/0 does not have pool: cinder on the ceph erasure pool test
[14:46] <beisner> coreycb, can you confirm - will the icehouse uca sru pull all of these?   http://pastebin.ubuntu.com/15016392/
[14:46] <coreycb> beisner, yeah basically we just need to flush everything from http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/icehouse_versions.html to updates
[14:48] <beisner> coreycb, ack
[15:12] <rbasak> kickinz1: argh. I missed that pps-tools is in universe, and ntp is in main.
[16:57] <coreycb> beisner, can you promote horizon and neutron from trusty-liberty proposed to updates?  testing is complete.
[16:58] <coreycb> beisner, might as well also promote qemu as it was included in the testing.
[17:21] <AapjesKijken> hi good evening all
[17:21] <tarpman> good morning
[17:21] <AapjesKijken> how are you?
[17:22] <tarpman> not totally awake just yet :)
[17:22] <AapjesKijken> hehe maybe coffee?
[17:23] <tarpman> in progress
[17:23] <AapjesKijken> nice, take it easy
[17:23] <AapjesKijken> do you know ubuntu well?
[17:23] <teward> AapjesKijken: asking a real question will help
[17:24] <tarpman> teward: so hasty :)
[17:24] <teward> tarpman: you try dealing with people saying "hi who knows ubuntu" in #ubuntu every day
[17:24] <teward> :p
[17:24] <tarpman> ^^
[17:24] <teward> having said that, real questions DO actually get you better replies
[17:24] <teward> so just asking if someone knows Ubuntu gets you nothing worthwhile, as that's not a real question
[17:24] <AapjesKijken> my english is not so good but i will try
[17:24] <AapjesKijken> haha sorry for that, this all is new for me
[17:25] <AapjesKijken> if i have downloaded ubuntu and i want to install ubuntu
[17:25] <ogra_> teward, and it is kind of redundant inside an #ubuntu-* channel ;)
[17:26] <AapjesKijken> but if i wanna install it there is a problem and i can't install it
[17:26] <AapjesKijken> sorry for that, but i'm new and don't know how i can start that question :$
[17:27] <AapjesKijken> somebody speak dutch?
[17:29] <teward> ogra_: indeed
[17:30] <teward> !dutch | AapjesKijken
[17:30] <teward> i think
[17:30] <AapjesKijken> thank you, i go try it again
[17:39] <nacc> smoser: for the eventual removal of php5 (or say tomcat7) ... how do we sync the removal from the archive with the seed update? or would that be the last thing the admins would help out with?
[17:40] <smoser> nacc, well i think you'd do the seed update first
[17:40] <smoser> which would drop it from main
[17:40] <smoser> and then at leisure archive admin can remove it from archive.
[17:41] <smoser> to clear it up for me, we're going to not have php5 packages when debian would, right ?
[17:44] <AapjesKijken> does somebody know where i have to go for penetration testing/hacking? (wanna learn, now i have to sit home for a few months) ..
[17:45] <nacc> smoser: right, same for tomcat7 possibly
[17:45] <nacc> smoser: so i suppose we'd also need to avoid sync'ing automatically?
[17:46] <smoser> yeah, i don't actually know how that happens.
[17:46] <smoser> i've never dealt with a package that ubuntu did not want to have that debian did have.
[17:47] <AapjesKijken> very irritating ..
[17:53] <rbasak> nacc, smoser: autosync ignores anything with "ubuntu" in the version string in Ubuntu.
[17:54] <rbasak> It might be worth changing the seeds first, because though that'll throw up a ton of component mismatches, then we know that things destined for main are built correctly with only main enabled. I'll defer to slangasek or infinity or some other archive admin though.
[17:54] <ogra_> arent we in DIF already anyway ?
[17:55] <rbasak> ogra_: DIF is synced with FF nowadays, so no.
[17:55] <ogra_> ah, but only a few days away :)
[17:55] <rbasak> Yes, it is tight :-/
[17:58] <nacc> rbasak: ok, i'll ask them that separately
[18:16] <beisner> coreycb, delayed response - do we have a bug and a card for the T-L promotions?
[18:23] <coreycb> beisner, no it's just the results of auto-backports from trusty SRUs that I wasn't a part of AFAIK
[18:24] <coreycb> beisner, sorry yes theres a card, no bug
[18:25] <beisner> coreycb, got it, thx
[18:56] <wk-work> is there any way to configure a network interface with kickstart with additional routes? just specifying the ip, netmask and gateway is not enough to get network access.
[18:57] <beisner> coreycb, promoted from proposed @ liberty cloud archive:  horizon 2:8.0.1-0ubuntu1~cloud0, neutron 2:7.0.1-0ubuntu1~cloud0, qemu 1:2.3+dfsg-5ubuntu9.2~cloud0
[19:08] <coreycb> beisner, thanks
[19:19] <haidar_> hello, I would like to create a cisco router on ubuntu server. After I downloaded dynamips and dynagen, during the procedure I need to create a dynagen configuration file. I already have the configuration, but how can I create that file, and where should I put the file to run Dynagen? any idea please??
[19:36] <genii> haidar_: /etc/dynagen.ini
[19:36] <haidar_> thanks sir
[19:36] <genii> ..is where you want to put the file, and what its name should be
[19:36] <genii> :)
[19:38] <genii> haidar_: Might also want to point your web browser at: file:///usr/share/doc/dynagen/docs/tutorial.htm
[19:39] <haidar_> ok sir, do I create a folder or just type it like that?
[19:56] <wk-work> is there any way to configure a network interface with kickstart with additional routes? just specifying the ip, netmask and gateway is not enough to get network access.
[19:57] <sarnold> do you need it during the kickstart or after the install is over?
[19:57] <sarnold> can you specify 'up' lines in /etc/network/interfaces in the kickstart?
[19:58] <wk-work> i need it during install yeah
[19:58] <wk-work> sarnold: thats what i'm asking if i can, i need it both during and after
[20:00] <sarnold> if you need it during install then perhaps the /etc/network/interfaces direction won't help much..
[20:01] <sarnold> can you run arbitrary scripts during kickstart? or is it entirely declarative?
[20:02] <wk-work> basically, we're using a kickstart file for automating VM installations. all vms are issued global ip addresses (not natted or on a local network) but require additional routing to get internet connectivity.
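sarnold's `up`-lines idea, sketched as an /etc/network/interfaces fragment for the off-subnet-gateway case wk-work describes (device name and addresses are placeholders, not wk-work's real ones):

```text
auto eth0
iface eth0 inet static
    address 203.0.113.50
    netmask 255.255.255.255
    # the gateway is not on the local subnet, so add a host route to it first,
    # then the default route through it
    up ip route add 203.0.113.1 dev eth0
    up ip route add default via 203.0.113.1 dev eth0
```

The same lines written into the target's interfaces file from a kickstart %post section would cover the "after the install" half; getting them active during the install itself is the part the channel never fully resolved.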
[20:04] <rbasak> wk-work: OOI, why aren't you just using Ubuntu cloud images instead of messing with "installations"?
[20:05] <wk-work> we're using KVM
[20:06] <sarnold> wk-work: it might be a touch more work but this sounds likes omething that ought to be done via ubuntu's cloud images, which have cloud-init support built in.. not that I know how to do the multiple routes with that off the top of my head either, but i know cloud-init makes it easy to supply scripts, files, etc..
[20:06] <rbasak> That's fine. Ubuntu cloud images work with KVM.
[20:06] <wk-work> never even heard about that
[20:06] <wk-work> my google-fu has failed me
[20:07] <rbasak> Google for "cloud-init". It's pre-installed on Ubuntu cloud images.
[20:07] <rbasak> You boot a pristine, official image. cloud-init runs inside and sets it up sensibly on first boot. That's it - done.
[20:07] <rbasak> You do need to tell cloud-init what you want (eg. ssh key or something else to make the system usable)
[20:07] <rbasak> Look up cloud-init docs on how to do that.
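A minimal cloud-init user-data file along the lines rbasak describes (hostname, user, and key are placeholders; field names as per the cloud-init cloud-config docs):

```yaml
#cloud-config
hostname: demo-vm
users:
  - name: ubuntu
    ssh_authorized_keys:
      - ssh-rsa AAAA...placeholder... user@host
    sudo: ALL=(ALL) NOPASSWD:ALL
    shell: /bin/bash
```

Without at least an ssh key (or password) in the user-data, the booted image has no way to log in, which is the point rbasak is making.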
[20:08] <wk-work> ah i see, the thing is we're using a web interface for creating VMs, much like solusvm
[20:08] <wk-work> let me take a look
[20:14] <sarnold> rbasak: does uvtool serve the cloud-init data to the cloud images? I always get a bit confused about how you actually feed cloud-init data :)
[20:54] <rharper> sarnold: via config drive (cloud-init)
[20:55] <rharper> sarnold: uvtool creates a second disk (iso format) and use cloud-localds to write out the userdata and metadata
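What rharper describes, done by hand (cloud-localds ships in the cloud-image-utils package; file names here are arbitrary):

```text
# pack user-data (and optional meta-data) into a small seed disk image
cloud-localds seed.img user-data
# then attach seed.img as a second disk when booting the cloud image, e.g.:
#   kvm -drive file=trusty-server-cloudimg-amd64-disk1.img,if=virtio \
#       -drive file=seed.img,if=virtio
```

cloud-init inside the guest finds the seed disk (the "config drive" / NoCloud datasource) on first boot and applies the user-data.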
[20:56] <sarnold> rharper: aha ;) thanks!
[20:56] <rharper> sarnold: sure
[22:59] <VelusUniverseSys> hello all, i'm not too sure if this is the best place to ask, but where can i get a bit of software to stream playlists to an icecast server? does anyone know any good bits of software?
[22:59] <sarnold> apt-cache search icecast playlist   :)
[23:04] <VelusUniverseSys> hmmm give me a few to look at, i'm needing something that would do video lol
[23:04] <VelusUniverseSys> and easy to set up and have the list updated daily?
[23:05] <VelusUniverseSys> hmmm ezstream seems good but i cant find docs
[23:05] <VelusUniverseSys> lol
[23:06] <VelusUniverseSys> got them lol
[23:21] <nacc> rbasak: smoser: for demoting php5/promoting php7... should i go ahead and send the seed update for php5 demotion now? it might lead to some component mismatches, but if i can get swig going (slogging through it on the side), they will be resolved by the end of the php7 update. Should it be two merge requests? one to demote and one to promote? Or is it better to do it in one commit and be sure there is some php available in the seeds?
[23:26] <VelusUniverseSys> sarnold, how would i do an apt-get install ezstream to include suggested packages
[23:27] <sarnold> VelusUniverseSys: you can use apt-get install --install-suggests ezstream   if you want to include the Suggested: packages too
[23:27] <sarnold> VelusUniverseSys: note that that is recursive, which might mean that it installs a lot more packages than you really need
[23:27] <VelusUniverseSys> thanks
[23:28] <VelusUniverseSys> jeez, 2682 to install lol that's a lot lol
[23:29] <VelusUniverseSys> thank god my hosting company gave me 100tb of space for free lol
[23:29] <sarnold> packages? o_O or kilobytes? or..
[23:31] <VelusUniverseSys> packages
[23:32] <sarnold> that seems .. wrong :)
[23:32] <VelusUniverseSys> thats what it said lol its a new system
[23:32] <sarnold> I've got 2847 packages on my system now, it's been through 3.5 years of upgrades, installing packages on a "geewhiz that looks neat" basis, etc :)
[23:33] <VelusUniverseSys> hmmmm ok
[23:36] <VelusUniverseSys> i now just need to know how a playlist is set out, like what the format looks like for m3u, so then i can create a php script to do it in the background every day lol
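For the format question: an extended m3u playlist is just a plain text file, one `#EXTINF:<seconds>,<title>` line per entry followed by the file path (file names and titles below are invented examples):

```text
#EXTM3U
#EXTINF:213,Some Artist - First Track
/srv/media/first-track.ogg
#EXTINF:187,Some Artist - Second Track
/srv/media/second-track.ogg
```

A simple (non-extended) m3u needs no headers at all: a bare list of paths, one per line, is enough, which makes a nightly regeneration script trivial.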
[23:38] <nestor_> ubuntu/nginx/php5 I set memory_limit=512M but limit shows as memory_limit=256M - Any ideas?
[23:39] <nestor_> I set it in the /etc/php5/fpm/pnp.ini
[23:41] <VelusUniverseSys> try a phpinfo() and check there which php.ini file its reading from? it may be reading from somewhere else
[23:43] <sarnold> pnp.ini?
[23:44] <VelusUniverseSys> i think he meant php.ini
[23:44] <nestor_> I created a phpinfo.php file and that is the one I am looking at.  I also copied it somewhere else and still the same memory_limit
[23:44] <VelusUniverseSys> hmmm
[23:45] <nestor_> I can see that the loaded config file is /etc/php5/fpm/php.ini
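Two usual suspects worth ruling out in this situation, sketched as commands (paths are the stock 14.04/php5 locations; this is general troubleshooting, not a confirmed diagnosis of nestor_'s box):

```text
# 1. php5-fpm only rereads php.ini on restart/reload:
sudo service php5-fpm restart
# 2. a later setting can override php.ini -- look for other memory_limit lines:
grep -R "memory_limit" /etc/php5/fpm/
# pool configs in /etc/php5/fpm/pool.d/ can pin it with
#   php_admin_value[memory_limit] = 256M
# which silently wins over the global php.ini value
```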
[23:59] <VelusUniverseSys> sarnold, can you think of a php script that can create a m3u playlist
[23:59] <sarnold> VelusUniverseSys: sorry, never looked for one