[02:14] <axisys> which file defines the default PATH ? do not see it in /etc/profile or /etc/bash.bashrc
[02:16] <axisys> is it /etc/environment ?
[02:18] <axisys> looks like /etc/login.defs
[02:21] <axisys> /etc/bash.bashrc will do..
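For reference, the default PATH can come from several of the files named above depending on context: pam_env reads /etc/environment, login(1) falls back to ENV_PATH/ENV_SUPATH in /etc/login.defs, and the shell startup files can override both. A quick way to see which ones set it on a given box (a sketch; file locations per a stock Ubuntu install):

```shell
# Show every PATH definition across the usual suspects; `|| true` keeps the
# sequence going when a file is absent or has no match:
grep -H '^PATH' /etc/environment 2>/dev/null || true                  # read by pam_env at login
grep -H '^ENV_SUPATH\|^ENV_PATH' /etc/login.defs 2>/dev/null || true  # login(1) defaults
grep -Hn 'PATH=' /etc/profile /etc/bash.bashrc 2>/dev/null || true    # shell startup overrides
```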
[08:57] <mjau^> morning peoples!
[08:58] <mjau^> would anyone happen to know where I can find the source-rpms for apache on the latest ubuntu-release?
[09:03] <ikonia> mjau^: ubuntu doesn't use rpm's
[09:03] <mjau^> ikonia: lols, I meant the source-debs of course :)
[09:03] <ikonia> just use apt-get source
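Spelled out, ikonia's suggestion looks like this (a sketch; assumes the release is precise and that a matching deb-src line is enabled in /etc/apt/sources.list):

```shell
# /etc/apt/sources.list needs a deb-src entry for the release, e.g.:
#   deb-src http://archive.ubuntu.com/ubuntu precise main
sudo apt-get update
apt-get source apache2           # fetches and unpacks the source package (no root needed)
sudo apt-get build-dep apache2   # optional: install its build dependencies
```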
[09:12] <eagles0513875> hey guys, I have a piece of software which i just purchased that is encrypted with ioncube. does apache on 12.04 support ioncube?
[09:14] <ikonia> there doesn't appear to be a module referencing ioncube
[09:14] <eagles0513875> :-/ ok
[09:14] <eagles0513875> thanks
[09:17] <mjau^> apt-get source eh? great, thx, I'll do that :)
[09:25] <stiv2k> so i have a peculiar problem with my server
[09:25] <stiv2k> ever since i installed 12.10 it kernel panics every so often
[09:25] <stiv2k> and i noticed after a few panics, it happens on the 14th day
[09:25] <stiv2k> of uptime
[09:25] <stiv2k> each time
[09:26] <stiv2k> any ideas why?
[09:26] <stiv2k> could it be the clock?
[09:33] <ikonia> what does the actual panic message suggest
[09:34] <RoyK> stiv2k: pastebin logs
[09:35] <RoyK> if it panics so badly it can't write logs, enable netconsole or use an old-fashioned serial console to get the logs
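Setting up netconsole as RoyK suggests might look like this (a sketch; the IP addresses, interface name, and MAC are placeholders to replace with your own):

```shell
# On the panicking server — stream console output over UDP to a log host:
sudo modprobe netconsole \
    netconsole=6665@192.0.2.10/eth0,6666@192.0.2.20/00:11:22:33:44:55
# On the log host — capture whatever arrives on UDP port 6666:
nc -ul 6666 | tee panic.log
```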
[13:42] <koolhead17> zul: around?
[13:42] <zul> koolhead17: kind of...whats up?
[13:48] <koolhead17> zul: coolbhavi is my guide/mentor
[13:49] <coolbhavi> hey zul koolhead17 said something needs to be repatched and gave me a buildlog
[13:49] <zul> yep
[13:50] <hallyn> stgraber: are you going to send another version of your lxc-create template naming patch?
[13:50] <coolbhavi> zul, it was a build failure and what exactly is the background?
[13:51] <hallyn> stgraber: on a separate note, I fear that for 13.10 I am going to have to either spend a lot of time writing apparmor integration for libvirt-lxc, or we have to get the lxc2 driver working.  for the sake of openstack
[13:51] <zul> coolbhavi: basically patch failed to apply
[13:51] <coolbhavi> zul, yes I could see that
[13:52] <zul> coolbhavi: what do you mean background?
[13:52] <coolbhavi> zul, I meant was it applied to some source package?
[13:53] <zul> nova source package for precise
[13:54] <coolbhavi> ah never mind got it from the complete buildlog. thanks!
[14:05] <stgraber> hallyn: hopefully the second option will be easier, then we can just use that as a reason to drop libvirt-lxc ;)
[14:07] <stgraber> hallyn: I sent a v2 of the lxc-create patch on Friday adding the sha1 sum. I'm not planning on fixing the bash issues at this point as that's out of the scope for that patch (I just moved code around so the bashisms were already there)
[14:11] <hallyn> stgraber: the '-n ""' is a serious issue though, worth a v3
[14:11] <hallyn> we've had bugs due to such before - it's not just a posix issue
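The quoting bug hallyn is flagging is easy to demonstrate: with an empty variable, the unquoted form collapses to `[ -n ]`, which tests whether the literal string "-n" is non-empty and is therefore always true (a minimal sketch, runnable in dash or bash):

```shell
opt=""
# Unquoted: $opt word-splits away entirely, leaving `[ -n ]` -> true (the bug)
if [ -n $opt ]; then r1=wrong; else r1=ok; fi
# Quoted: `[ -n "" ]` is correctly false
if [ -n "$opt" ]; then r2=wrong; else r2=ok; fi
echo "$r1 $r2"   # → wrong ok
```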
[14:12] <hallyn> jdstrand: so i've spent way too many hours on this before, only to finally realize i don't know how to best pass the hugepages mount path to virt-aa-helper.  Options are:
[14:12] <hallyn> 1. add it to the xml so it can be passed
[14:13] <hallyn> 2. add a new virSecurityAddSimplePath call
[14:13] <hallyn> 3. hardcode /run/hugepages/kvm in the apparmor policy :)
[14:13] <hallyn> I don't see that (1) would really be acceptable upstream
[14:14] <jdstrand> hallyn: doesn't this fail with selinux?
[14:15] <jdstrand> I would think it would-- so they would be interested in whatever fix is used too
[14:16] <jdstrand> also, how would virSecurityAddSimplePath work?
[14:16] <stgraber> hallyn: there were already two of those in the current lxc-create. I didn't add that code, just moved it around :)
[14:17] <jdstrand> (and just so I understand, the path to /run/hugepages/kvm is a qemu compile time option so libvirt doesn't inherently know what that is-- correct?)
[14:21] <hallyn> stgraber: those need to be fixed too then :)  Worth a script to find all the instances
[14:21] <patdk-wk> you can specify the hugepages path in libvirt xml config
[14:21] <hallyn> jdstrand: no, /run/hugepages/kvm is not a compile time option...
[14:21] <hallyn> patdk-wk: oh??
[14:21] <hallyn> you can specify it in qemu.conf, and otherwise libvirt finds it automatically, but all i've found for xml is

[14:22] <patdk-wk> hmm, damned been a few months since I last did it
[14:22] <hallyn> patdk-wk: ok - i'll look for it thanks
[14:22] <hallyn> if it's supported then that's the way to go.
[14:23] <patdk-wk> ya, I tested it, found it really didn't help much for me, and just wrote it off as not worth messing with currently
[14:23] <hallyn> jdstrand: the virSecurityAddSimplePath would just call virt-aa-helper with a new path and ask it to append that to the current policy
[14:23] <hallyn> we could then also use that for monitor and other stuff
[14:24] <hallyn> but i'll follow up on patdk-wk's suggestion and get back to you later - thanks
[14:24] <jdstrand> ok
[14:28] <patdk-wk> hmm, maybe I used the qemu automatic mount detection :(
[14:28] <patdk-wk> heh, fuzzy memory :(
[14:28] <patdk-wk> was back in sept when I was doing lots of hugepages work
[14:29] <hallyn> no qemu takes it as command line option, doesn't detect automatically,
[14:29] <hallyn> but libvirt will detect it automatically if not specified
[14:29] <hallyn> sadly i don't think it's specifiable in the xml
[14:30] <hallyn> and the problem with adding it there is that then we have to decide what to do if it's in the xml at define time
[14:32] <stgraber> hallyn: sent the lxc-create cleanup patch to the mailing-list
[14:32] <zul> yolanda: can you have a a look please?
[14:32] <hallyn> jdstrand: ok so yeah, virSecurityAddSimplePath would basically work like AppArmorSetFDLabel but without resolving /proc/self/fd/N
[14:32] <hallyn> stgraber: thanks!
[14:33] <yolanda> zul, about the lxc-create cleanup patch?
[14:34] <zul> yolanda: oops https://code.launchpad.net/~zulcss/quantum/grizzly-fix/+merge/137576
[14:35] <hallyn> stgraber: sigh, i personally feel tabs would be better than spaces, but i'm sure i'm alone on that :)
[14:37] <jamespage> zul, hmm - can I express an opinion?
[14:38] <jamespage> or maybe ask a question at least
[14:38] <stgraber> hallyn: well, I usually prefer spaces, don't necessarily mind tabs but really hate mixed tabs and spaces, which is what we had :)
[14:38] <stgraber> hallyn: as 90% of the script was indented with spaces, I just replaced the remaining tabs by spaces
[14:39] <hallyn> stgraber: yup, i'm going to ack it of course.
[14:39] <hallyn> stgraber: you didn't make any other changes on any lines where you changed indent?
[14:40] <hallyn> hm, i wonder why $opt doesn't need to be "$opt" in optarg_check
[14:40] <hallyn> oh, that's why.  nm
[14:41] <stgraber> hallyn: nope, those were just reindents
[14:42] <hallyn> stgraber: one more q - is 'if [ $a -eq 1 -a $b -eq 2 ]; versus 'if [ $a -eq 1 ] && [ $b -eq 2 ] really a bashishm?
[14:44] <stgraber> hallyn: no, it's not, that's the 'Use shell syntax for and/or in if statements instead of the "test" syntax.' part of my commit
[14:45] <hallyn> oh. oops.  i just replied with the q (and ack).  oh well
[14:46] <hallyn> hm,
[14:46] <hallyn> does that mean that your new version results in more forks?
[14:46] <hallyn> oh well
[14:48] <stgraber> hallyn: nope, it doesn't because those aren't spawned in sub-shells and test is a shell builtin
[14:49] <hallyn> even in dash?
[14:50] <chris_> Can iMacros be run with Lynx?
[14:50] <stgraber> hallyn: yep
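The two forms hallyn and stgraber are comparing, side by side (a sketch; both run in dash and bash, and `[` is a builtin in both, so neither form forks a subprocess):

```shell
a=1 b=2
# test(1)-style conjunction — works, but POSIX marks -a/-o obsolescent:
[ "$a" -eq 1 -a "$b" -eq 2 ] && echo old-style-true
# Preferred shell-syntax form — two calls to the same builtin, no extra process:
[ "$a" -eq 1 ] && [ "$b" -eq 2 ] && echo shell-style-true
type [   # reports that `[` is a shell builtin
```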
[14:51] <jamespage> yolanda, zul: comment on that merge proposal re quantum metadata proxy stuff
[14:51] <chris_> serious?
[14:51] <chris_> I want to be able to browser automate on a headless box...is that possible?
[14:51] <jdstrand> hallyn: you said that hugepages is specifiable in qemu.conf?
[14:51] <zul> jamespage: son of a bitch
[14:51] <jacobw2> hi, i have a problem with virt-install on ubuntu server
[14:52] <jamespage> zul, I'm happy to spend some time on it in the next couple of days
[14:52] <jamespage> (that specific stuff works around a really ugly bit in folsom quantum)
[14:53] <zul> jamespage: i just merged it in the master branch but i can do it this afternoon. should the package be like quantum-metadata or something?
[14:53] <jacobw2> using --location=<precise>, the kernel and initrd are downloaded to /var/lib/libvirt/boot but disappear when virt-install finishes, seabios hangs on 'booting from rom' because the files aren't there to boot from
[14:53] <jdstrand> hallyn: also, while you can't detect the path to hugepages in the xml, can virt-aa-helper see if hugepages is specified at all in the xml?
[14:53] <jamespage> zul, lemme take a look
[14:53] <zul> k
[14:54] <jamespage> zul, quantum-metadata-agent I think - there is an /etc file for it as well
[14:55] <zul> awesome..im just fixing up the jenkins build but ill have a look this afternoon
[14:56] <jamespage> zul, something ugly happening in python-keystoneclient I think
[14:56] <jamespage> I've been trying to get something else finished today otherwise I would have dived in....
[14:56] <zul> jamespage: oh?
[14:56] <jamespage> zul, forget that - upstream already fixed it
[14:57] <jamespage> they added a pip-requires which was part of python core
[14:57] <jamespage> which made the package un-installable
[14:57] <zul> jamespage: awesome
[14:57] <jamespage> https://github.com/openstack/python-keystoneclient/commit/0f83602b6251c2547a9f3211037f65f6dd1105f1
[14:58] <hallyn> jdstrand: yes, specifiable through qemu.conf, otherwise it automatically tries to find a hugepages mount
[14:58] <hallyn> jdstrand: yes, it can find that hugepages are in use
[14:58] <hallyn> jdstrand: so virt-aa-helper *could* reproduce the qemu logic for detecting the mount point
[14:59] <hallyn> but that involves in part parsing /etc/libvirt/qemu.conf, so prefer not to
[14:59] <jdstrand> hallyn: what I was thinking was that we could make it easier-- I think it might make an acceptable compromise:
[14:59] <jamespage> zul, trying to figure out the differences between the two
[14:59] <jamespage> ns and no ns
[14:59] <jdstrand> if virt-aa-helper detects that hugepages are in use, it uses the hard-coded path
[15:00] <zul> jamespage: glance-precise-grizzly is still failing for some reason
[15:00] <hallyn> jdstrand: I guess on the bright side that won't break any current users...
[15:00] <jdstrand> *perhaps* we could hardcode that path in qemu.conf with a note saying that changing it means you would want to also update the apparmor profile
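On disk, that compromise could look something like this (a sketch; hugetlbfs_mount is an existing libvirt qemu.conf setting, but the apparmor rule path and file are assumptions):

```shell
# /etc/libvirt/qemu.conf — pin the hugepages mount instead of autodetecting,
# with the note jdstrand proposes:
#   hugetlbfs_mount = "/run/hugepages/kvm"
#   # NB: if you change this path, update the libvirt-qemu apparmor profile too
#
# /etc/apparmor.d/abstractions/libvirt-qemu — the matching hard-coded rule
# (assumed rule; adjust the path to match the mount above):
#   owner "/run/hugepages/kvm/**" rw,
```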
[15:00] <jdstrand> hallyn: right-- the idea here is that hugepages are only granted to those VMs that are configured to use it
[15:00] <jamespage> zul, I've seen that test fail before - I think it may be a little flakey
[15:01] <zul> ack...say it aint so :)
[15:01] <jdstrand> hallyn: as soon as an admin toggles them on or off, then the profile will be updated
[15:01] <hallyn> jdstrand: sadly that doesn't seem upstreamable either though.  I'm afraid I need to go ask this upstream
[15:01] <hallyn> jdstrand: heh, there is one other possibility -
[15:02] <jdstrand> hmm, I think that could be upstreamable personally, but really, this needs to be fixed in all svirt drivers
[15:02] <hallyn> have qemu_driver.c open the hugepages_mount dir, and call the AppArmorSetFDLabel on that fd :)
[15:02] <jdstrand> so they may have an idea on how to fix it to give you, or may just fix it themselves once they realize it is busted in selinux
[15:02] <hallyn> right
[15:03] <hallyn> you know i think in the meantime i might go the fd route
[15:03] <hallyn> jdstrand: it's possible i misunderstand though - is that the purpose of AppArmorSetFDLabel ?
[15:04] <hallyn> must be - lemme go try that, then email the list
[15:04] <hallyn> after breakfast :)
[15:06] <jdstrand> hallyn: so, AppArmorSetFDLabel is very much apparmor specific
[15:06] <jdstrand> hallyn: you don't want to call that from qemu_driver.c
[15:06] <hallyn> jdstrand: right, i'd use the virSecurityWhatever hook
[15:07] <jdstrand> AppArmorSetFDLabel is code refactoring for SetSecurityImageFDLabel and SetSecurityTapFDLabel
[15:07] <jdstrand> those are pretty specific
[15:07] <hallyn> oh.  drat
[15:08] <jdstrand> I'm guessing upstream would want a new SetSecurityHugepagesFDLabel
[15:08] <hallyn> i see.  not what i thought
[15:08] <hallyn> ok then i'll just email them.
[15:08] <jdstrand> then we would do something like:
[15:08] <jdstrand>     .domainSetSecurityHugepagesFDLabel      = AppArmorSetFDLabel,
[15:08] <jdstrand> but I'm just guessing at what they would want there
[15:08] <hallyn> jdstrand: but actually that wouldn't do for selinux
[15:09] <hallyn> well, maybe.
[15:09] <jdstrand> selinux would implement SELinuxSetSecurityHugepagesFDLabel
[15:09] <jdstrand> or whatever
[15:09] <jdstrand> but yeah, get upstream involved :)
[15:09] <hallyn> right, it's just that they wouldn't change the fd label :)  but that's ok
[15:09] <hallyn> yup
[15:09] <hallyn> thanks jdstrand !
[15:09] <jdstrand> np
[15:14] <jazzkutya> hi, what packages should i install on 12.04 to run 32bit apps?
[15:15] <patdk-wk> ia32-libs-multiarch:i386
[15:16] <jazzkutya> thanks
[15:16] <jazzkutya> i have this problem with it: http://pastebin.com/NVM6eHxX
[15:17] <jazzkutya> what causes this, can I solve it somehow?
[15:17] <patdk-wk> it says you have issues
[17:17] <patdk-wk> you did run apt-get update right before attempting to install, right?
[15:18] <jazzkutya> yes, even dist-upgrade because i had held back packages
[15:18] <jazzkutya> and even rebooted
[15:18] <jazzkutya> right now i have no issues reported by apt-get install (no arguments)
[15:43] <jazzkutya> how can i install ia32-libs without those 2 libs having problems? you know gphoto and sane are totally useless on a server :)
[15:45] <hallyn> stgraber: do you think all templates should use -H in the rsync to install?
[15:45] <hallyn> well i'll start with just lxc-clone
[15:46] <alex88> hi guys, is generally a bad practice to set tap devices 777?
[15:46] <stgraber> hallyn: that'd make sense
[15:47] <patdk-wk> jazzkutya, not sure about you, but for me, they are only *suggested* packages, and therefore not installed by default
[15:48] <patdk-wk> not even installed on my system, but ia32-libs-multiarch is
[15:50] <jazzkutya> apt-get install --no-install-recommends ia32-libs-multiarch gives same error and man page shows no similar option for suggested packages
[15:51] <patdk-wk> suggested are not installed by default, recommends are
[15:52] <jazzkutya> libsane is on a Depends: line of apt-cache show
[15:53] <patdk-wk> libsane != sane, and libsane doesn't depend on sane
[15:55] <jazzkutya> but it depends on libsane which it can't install, and i really don't need that on a server anyway
[15:58] <jazzkutya> solved my problem temporarily by installing libc6:i386 instead of ia32-libs-multiarch
[15:58] <jazzkutya> i hope the fucked up repo (i think the problem is this) will be fixed sometime
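The narrower route jazzkutya landed on can be taken further (a sketch; 12.04 on amd64 ships with i386 multiarch already enabled, so :i386 packages install directly, and the binary name is a placeholder):

```shell
sudo apt-get update
sudo apt-get install libc6:i386 libstdc++6:i386   # the usual base pair for 32-bit apps
# Then add further :i386 libs one by one as the loader reports them missing:
ldd ./some-32bit-app | grep 'not found'
```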
[16:13] <jamespage> yolanda, quantum-ns-metadata-proxy must be included in the quantum-l3-agent package
[16:14] <jamespage> yolanda, I think it also makes sense to include the quantum-metadata-agent in that package as well (along with the configuration file)
[16:14] <jamespage> I can't see a use-case where you could deploy them separately
[16:14] <jamespage> yolanda, we also need an upstart configuration for quantum-metadata-agent
[16:15] <jamespage> the one for quantum-server is probably a good template to follow
[16:16] <yolanda> ok, i'm taking a look at these packages, i need to browse them a bit first to understand better
[16:17] <jamespage> yolanda, okay-dokey - zul - do you have an opinion on the above re the quantum-metadata-agent
[16:17] <zul> jamespage: sounds good to me
[16:18] <zul> yolanda:  youll have to patch the metadata agent conf file for the right state path directory and the right rootpath as well
[16:18] <jamespage> zul, is that something we should try to upstream?
[16:19] <zul> jamespage: yeah i was thinking of doing the rootwrapper at least
[16:26] <skrite> hey all
[16:33] <zul> yolanda/jamespage: i would suggest holding off on making that change for a couple of hours so this can get in: https://review.openstack.org/#/c/17362/
[16:38] <yolanda> zul, ok, i'm studying the code now
[16:38] <zul> ack
[16:54] <jamespage> adam_g, when you have time; I've put all of the changes for initial quantum support into the openstack charms up for review
[16:54] <jamespage> adam_g, bug 1079782
[16:54] <roaksoax> jamespage: i'll propose a MP tomorrow for the cluster stuff
[16:55] <roaksoax> jamespage: and integrate it with your deployer
[16:55] <jamespage> roaksoax, the quantum charm has now gone; I've renamed it 'quantum-gateway'
[16:55] <jamespage> quantum is now a core part of nova-compute and nova-cloud-controller
[16:56] <roaksoax> jamespage: ok cool, good to know
[16:57] <jamespage> roaksoax, the metadata service stuff sucks for quantum on folsom; so I would recommend testing with a quantal image + --config-drive True
[16:57] <jamespage> that way the network is not required for initialization by cloud init
[16:57] <roaksoax> ack
[16:57] <roaksoax> jamespage: i was testing this in canonistack and things seemed to work just fine though
[16:58] <jamespage> roaksoax, yeah - it does
[16:58] <jamespage> the only bit you can't do is connect up the external port for floating ip access; but you can access stuff from the gateway if need be
[16:59] <roaksoax> right, ack!
[17:10] <med_> jamespage, so just deploy nova-compute/nova-cc and it uses Quantum. Does it also use cinder?
[17:10] <jamespage> med_, it can do yes
[17:10] <med_> thanks.
[17:22] <sliddjur> I have setup a iptables table. I put all info in /etc/iptables.rules . How do I properly apply the settings?
[17:24] <RoyK> sliddjur: I just use ufw - it's simpler to work with and does most things
[17:24] <RoyK> !ufw
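For sliddjur's original question — applying /etc/iptables.rules directly, without ufw — a sketch, assuming the file is in iptables-save format:

```shell
sudo iptables-restore < /etc/iptables.rules
sudo iptables -vnL                      # verify the loaded ruleset
# To reload the rules automatically whenever an interface comes up:
printf '#!/bin/sh\niptables-restore < /etc/iptables.rules\n' | \
    sudo tee /etc/network/if-pre-up.d/iptables >/dev/null
sudo chmod +x /etc/network/if-pre-up.d/iptables
```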
[17:24] <stiv2k> RoyK, hey
[17:25] <stiv2k> RoyK, i've already looked at the logs and cant seem to find anything useful
[17:25] <stiv2k> but i might be overlooking things
[17:25] <RoyK> logs? what logs?
[17:25] <stiv2k> RoyK, sorry, im just now replying to you from my question 8 hours ago
[17:25] <RoyK> oh, repeat it, please. it's been a long day
[17:25] <stiv2k> about my server panicking every 14 days
[17:25] <stiv2k> every 14 days it kernel panics
[17:25] <RoyK> every 14 days??
[17:26] <stiv2k> yes
[17:26] <stiv2k> thats what ive noticed so far
[17:26] <RoyK> is there a cron job scheduled to run at that time?
[17:26] <stiv2k> maybe its just coincidence , but it seems like on the 14th day it panics
[17:26] <stiv2k> um
[17:26] <stiv2k> i have a couple cron jobs that run several times a day
[17:26] <RoyK> do you have the panic message?
[17:27] <stiv2k> no
[17:27] <RoyK> then little can be done to help...
[17:27] <stiv2k> anything for me to keep in mind for the next time?
[17:27] <RoyK> what i'd do first if it was my server, was to start a thorough memory test
[17:27] <RoyK> yes, setup network console
[17:28] <stiv2k> network console?
[17:28] <RoyK> that way, the panic message will (probably) be loggable
[17:28] <qman__> yeah, gotta get that kernel panic message
[17:28] <RoyK> !netconsole
[17:28] <RoyK> google it
[17:28] <stiv2k> ok
[17:28] <RoyK> !netcon
[17:29] <stiv2k> https://help.ubuntu.com/community/Installation/NetworkConsole
[17:29] <stiv2k> this one?
[17:29] <RoyK> afaics, that's for installing with a network console
[17:29] <RoyK> you probably don't need that
[17:29] <stiv2k> oh
[17:29] <stiv2k> whoops
[17:30] <RoyK> https://wiki.ubuntu.com/Kernel/Netconsole
[17:30] <qman__> I have a question, I'm trying to restore a hardy system from file backup, and I've been fighting my hardware for close to a month now
[17:30] <stiv2k> thanks
[17:30] <stiv2k> yes this looks like it will be helpful
[17:30] <qman__> I finally got something that will boot in the system but it's quitting during the boot, saying it can't find the filesystem by UUID
[17:30] <stiv2k> if it will allow me to get the panic message
[17:31] <qman__> I think I may have accidentally created the filesystem as ext4, but my question is, would a hardy kernel be able to boot it as ext3 or not?
[17:31] <RoyK> stiv2k: still - I'd recommend running memtest86+ on that box. bad memory can make a system panic very easily
[17:31] <stiv2k> RoyK, pretty sure ive done that before
[17:32] <stiv2k> and it runs solid for 14 days straight
[17:32] <stiv2k> but on the 14th day it just goes kaput
[17:32] <stiv2k> im pretty sure that's the third time in a row it crashed on the 14th day
[17:32] <RoyK> are you sure it's 14 days?
[17:32] <stiv2k> i installed it the day 12.10 came out
[17:32] <stiv2k> and its been doing it ever since
[17:33]  * RoyK only uses LTS for servers...
[17:33] <qman__> same
[17:33] <stiv2k> whereas on 11.04 i had a >1y uptime
[17:33] <qman__> hence the above problem trying to restore a hardy server
[17:33] <TheLordOfTime> same here, servers get LTS for stability! :P
[17:33] <RoyK> stiv2k: you need the panic dump, then
[17:33] <stiv2k> ok
[17:33] <stiv2k> thanks for info
[17:34] <qman__> think it'd be possible/advisable to try a do-release-upgrade from within a chroot via systemrescuecd?
[17:34] <qman__> that's how I got in to get grub working
[17:36] <stiv2k> RoyK, qman__, here is my server: http://stats.stiv2k.info
[17:36] <jamespage> zul, yolanda, adam_g: I really do need to get the auto-lander working for MP's for the lab don't I
[17:37] <zul> uh?
[17:37] <zul> yeah
[17:38] <RoyK> stiv2k: I'd install munin on that as well to get nice graphs showing performance numbers over time - something might be eating memory or similar. with only 512MB, a memory leak can kill the system within rather short time
[17:40] <stiv2k> RoyK, cool, ill check it out... been waiting until i stumble upon some old DDR333 modules to upgrade the ram
[17:40] <stiv2k> server was built from random parts i acquired for free
[17:40] <RoyK> stiv2k: http://munin.karlsbakk.net/munin/ <-- that's my servers ;)
[17:41] <stiv2k> whoa
[17:41] <stiv2k> munin is cool
[17:41] <RoyK> you get pretty detailed graphs from munin
[17:42] <stiv2k> RoyK, why do you have so many servers
[17:42] <sliddjur> RoyK, I am using ufw now. when doing ufw status i get port 53 allowed. But nmap myhostname doesnt show port 53 open...
[17:42] <sliddjur> i restarted aswell
[17:43] <RoyK> stiv2k: only two physical, lamia and smilla, the others are VMs for different purposes
[17:43] <stiv2k> oh
[17:43] <stiv2k> what language is your blog
[17:43] <RoyK> sliddjur: try 'ufw disable' and then 'iptables -vnL'
[17:43] <qman__> I've got six physical
[17:43] <RoyK> iptables rules aren't removed by ufw
[17:44] <qman__> while realistically I could get away with three physical if I virtualized the old junk, I can't afford to replace them right now
[17:44] <sliddjur> RoyK, what does iptables vnL do
[17:44] <sliddjur> then just start ufw again?
[17:44] <RoyK> sliddjur: it just prints whatever tables are present in iptables
[17:45] <RoyK> btw, how do you run the nmap scan?
[17:45] <sliddjur> nmap myhostname
[17:45] <sliddjur> not fqdn
[17:45] <RoyK> a better way would be to test for the service - 'host google.com ip.of.dns.server'
[17:45] <RoyK> unless you're running something else than dns on port 53 :P
[17:46] <RoyK> also, that nmap scan only scans for tcp, and dns is *usually* udp
[17:46] <sliddjur> I am setting up a dns server for my class. But I must first get past the first problem of opening the port :)
[17:46] <RoyK> (except zone transfers aren't, and tcp can be used otherwise)
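RoyK's advice as commands (a sketch; substitute your server's address for 192.0.2.53 — the point is to query the service rather than port-scan it, since plain nmap only probes TCP and DNS is mostly UDP):

```shell
host google.com 192.0.2.53          # simple lookup through the server
dig @192.0.2.53 google.com +short   # same check with dig
nmap -sU -p 53 192.0.2.53           # UDP scan of port 53
nmap -sT -p 53 192.0.2.53           # TCP 53: zone transfers / large responses
```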
[17:46] <sliddjur> nmap localhost gives me port 53 open
[17:47] <qman__> the port is open unless blocked
[17:47] <RoyK> does bind listen to 0.0.0.0:53?
[17:47] <qman__> just because it's not blocked, doesn't mean anything is listening, either
[17:49] <sliddjur> RoyK, wouldnt it be listening by default on port 53? im a bit lost...
[17:49] <qman__> only if it's configured to
[17:50] <sliddjur> qman__, where is that setting in bind?
[17:50] <RoyK> sliddjur: netstat -ln --tcp | pastebinit
[17:50] <RoyK> sliddjur: netstat -ln --inet | pastebinit
[17:50] <RoyK> i mean
[17:52] <sliddjur> http://pastebin.com/r1DRaAbv
[17:53] <RoyK> sliddjur: http://paste.ubuntu.com/1408341/
[17:53] <sliddjur> hmm
[17:54] <sliddjur> why isnt it showing up when i do it locally on my hostname??
[17:54] <RoyK> what?
[17:55] <qman__> this is why: 127.0.0.1:53
[17:55] <qman__> you're only listening on localhost
[17:55] <qman__> you need to configure it to listen on other addresses
[17:55] <RoyK> qman__: no, bind listens to all addresses
[17:56] <RoyK> qman__: it just doesn't listen to 0.0.0.0, it uses a socket per address
[17:56] <qman__> oh, I see
[17:56] <RoyK> typical bindishness
[17:56] <qman__> yeah, that's strange
[17:58] <patdk-wk> na, that is a udp thing
[17:58] <RoyK> oh, it is?
[17:58] <patdk-wk> to make sure the source udp packet comes from the same location
[17:58] <RoyK> ok
[17:58] <RoyK> makes sense...
[17:59] <qman__> but then why do it on tcp too?
[17:59] <patdk-wk> no idea :)
[17:59] <patdk-wk> probably cause they already have the *function* setup to do it, and just reused code
[17:59] <RoyK> probably just uses the same socket setup code ;)
[18:00] <samba35> what is best practice to configure dns on 12.04.1 when i have domain /static ip with isp and i want to host mail and web server for personal use
[18:00] <RoyK> hrmf! -19.2°C and falling - I don't like winter!
[18:01] <RoyK> samba35: just install bind and point your domain to the server's IP - and make sure you have a secondary somewhere
[18:01] <qman__> samba35, the best practice is to leave your DNS on the hosting provider unless you have a good reason to run it yourself
[18:01] <RoyK> heh - yeah
[18:02] <qman__> registrars do it for free, no sense putting up the effort or risk in doing it
[18:02] <RoyK> [slightly offtopic] Any idea what might cause this (on a RHEL server)? http://paste.ubuntu.com/1404641/
[18:02] <samba35> sorry i dont know much about dns setting ,it was complex for me
[18:03] <RoyK> bind configuration is a PITA before you get used to it. after that, it's just a slightly less PITA
[18:03] <patdk-wk> qman, well, registrars also get ddos'd a lot too
[18:03] <samba35> pita ?
[18:03] <jacobw2> samba35: put it in /etc/resolvconf/resolv.head
[18:04] <RoyK> samba35: Pain In The Almighty
[18:04] <samba35> not in /etc/hosts
[18:04] <patdk-wk> jacobw2, what does that have to do with it?
[18:04] <jacobw2> samba35: /etc/resolvconf/resolv.conf.d/head even
[18:04] <qman__> oh, that
[18:04] <qman__> I was thinking DNS server, not DNS client
[18:05] <qman__> I still do it the old way, I just remove the link and make a file
[18:05]  * patdk-wk just puts it in interfaces file
[18:05]  * jacobw2 is a hipster :p
[18:06]  * RoyK uses the interfaces file as well - works stably...
[18:06] <qman__> I'll have to agree with that path though
[18:06] <qman__> using the interfaces file makes more sense logically and will work on more systems
[18:07] <RoyK> # Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
[18:07] <RoyK> #     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
[18:07] <RoyK> meaning - don't edit /etc/resolvconf/resolv.conf.d/head manually ;)
[18:07] <qman__> right
[18:14] <halvors> Hi!
[18:17] <samba35> more confused
[18:17] <halvors> May someone help me generate the certificates for this tutorial? https://help.ubuntu.com/12.04/serverguide/postfix.html (Mail-stack-delivery)
[18:22] <TheLordOfTime> halvors, you mean step 2 of SMTP auth?
[18:22] <TheLordOfTime> refer to https://help.ubuntu.com/12.04/serverguide/certificates-and-security.html
[18:22] <TheLordOfTime> since that's what it links
[18:22] <adam_g> jamespage: FYI ive been working on packaging the new kombu + pyamqp in ppa:gandelman-a/ppa
[18:22] <RoyK> halvors: selfsigned?
[18:23] <halvors> RoyK: I don't know what i need to enable SMTPS?
[18:23] <RoyK> usually you would want an official certificate
[18:24] <RoyK> I'd guess some servers will deny talking to something with a self-signed certificate
[18:24] <RoyK> some, or most
[18:24] <RoyK> http://www.openssl.org/docs/HOWTO/certificates.txt
[18:25] <halvors> I know.
[18:25] <halvors> But self signed is ok.
[18:25] <halvors> What i need help for is to generate these:
[18:25] <TheLordOfTime> halvors, only for testing, not public deployment
[18:26] <RoyK> halvors: just google 'create self signed openssl'
[18:26] <halvors> /etc/ssl/certs/ssl-mail.pem
[18:26] <RoyK> should work well
[18:26] <halvors> /etc/ssl/default/ssl-mail.key
[18:27] <halvors> I simply want to create a certificate for my mail server. I'm not gonna pay someone to do it...
[18:27] <halvors> I just wanna create it on my own...
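Generating the pair halvors is after in one shot (a sketch; the CN and the final install paths are assumptions — match them to your postfix/dovecot config):

```shell
# Self-signed cert + key, valid one year, no passphrase on the key:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=mail.example.com" \
    -keyout ssl-mail.key -out ssl-mail.pem
openssl x509 -in ssl-mail.pem -noout -subject -dates   # sanity-check the result
# then install them where the serverguide expects (assumed paths):
#   sudo install -m 644 ssl-mail.pem /etc/ssl/certs/
#   sudo install -m 640 ssl-mail.key /etc/ssl/private/
```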
[18:27] <yolanda> hi adam_g, about your question in the email, this is something that we've been discussing in the channel, seems that quantum-metadata-agent will be normally used with l3-agent
[18:28] <RoyK> halvors: that may mean other SMTP servers will deny talking to you over SSL - but then - your choice ;)
[18:29] <RoyK> SSL certs don't have to cost a fortune http://webdesign.about.com/od/ssl/tp/cheapest-ssl-certificates.htm
[18:30] <halvors> RoyK: If i don't enable SMTPS anyway, other smtp servers aren't going to talk to me either :P
[18:31] <halvors> I only want my users to be able to...
[18:31] <TheLordOfTime> where're you getting that from...?
[18:31] <halvors> SMTPS is not enabled in postfix by default...
[18:31] <ScottK> Don't get confused
[18:31] <ScottK> SMTPS is not SMTP over TLS/SSL
[18:32] <halvors> What then?
[18:32] <ScottK> SMTPS is a specific encrypted submission protocol used only by Microsoft on port 465
[18:32] <TheLordOfTime> mhm
[18:32] <RoyK> ScottK: eh...? http://en.wikipedia.org/wiki/SMTPS
[18:32] <halvors> So i don't wanna use SMTPS?
[18:32] <halvors> Go for submission?
[18:33] <EntropyWorks> so whats the deal with 12.10 and the new naming of NICs?
[18:33] <ScottK> Also, virtually all certs used in SMTP are self-signed, so there's virtually never a need to buy one for SMTP.
[18:33] <TheLordOfTime> RoyK, you're aware Wikipedia is untrustworthy right?
[18:33] <ScottK> RoyK: "Originally, in early 1997, the Internet Assigned Numbers Authority registered 465 for SMTPS."
[18:33] <ScottK> TheLordOfTime: It's correct, just not well worded.
[18:33] <EntropyWorks> I reboot a machine and sometimes I get em3 other times I get rename4 instead. this is really annoying
[18:34] <RoyK> TheLordOfTime: wikipedia is *usually* trustworthy, and a set of people on IRC aren't necessarily trustworthy either
[18:34] <TheLordOfTime> RoyK, true.
[18:34] <halvors> So i shouldn't enable SMTPS?
[18:36] <samba35> i need some help with dovecot ,i am getting ok message with telnet for user and passwd even but now what i should do
[18:36] <zul> adam_g: https://code.launchpad.net/~zulcss/nova/nova-testsuite-fix/+merge/137652
[18:36] <halvors> I installed the mail-stack-delivery package which installs /etc/ssl/certs/ssl-mail.pem and /etc/ssl/private/ssl-mail.key from the ssl-cert package, but shouldn't i generate them on my own?
[18:44] <RoyK> halvors: http://bit.ly/TBVsxY
[18:56] <ze_king> Someone know a program so i can archive rar in ubuntu server?
[18:56] <RoyK> doesn't 7zip support that?
[18:56] <RoyK> p7zip, that is
[18:56] <ze_king> i only get .7z files with that
[18:57] <RoyK> apt-get install rar \o/
[18:57] <ze_king> that doesnt work either :P
[18:57] <Pici> !info unrar
[18:58] <ze_king> Reading package lists... Done
[18:58] <ze_king> Building dependency tree
[18:58] <ze_king> Reading state information... Done
[18:58] <ze_king> Package rar is not available, but is referred to by another package.
[18:58] <ze_king> This may mean that the package is missing, has been obsoleted, or
[18:58] <ze_king> is only available from another source
[18:59] <RoyK> ze_king: works for me (on lucid)
[18:59] <ze_king> im on ubuntu server ;<
[18:59] <RoyK> and precise
[18:59] <RoyK> so am i
[18:59] <ze_king> hm, okey
[19:00] <RoyK> sudo apt-get install -y rar unrar
[19:00] <ze_king> same as before
[19:01] <ze_king> Package rar is not available, but is referred to by another package.
[19:01] <ze_king> This may mean that the package is missing, has been obsoleted, or
[19:01] <ze_king> is only available from another source
[19:01] <shauno> rar's in multiverse, which I don't believe is a default repo
[19:05] <ze_king> on what source list is rar then?
[19:05] <Pici> !info rar
[19:05] <Pici> also multiverse
[19:06] <ze_king> but how can i get it? =/
[19:06] <RoyK> have you enabled multiverse?
[19:07] <ze_king> na, i dont :p
[19:07] <Pici> Then that's a good place to start
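As shauno and Pici point out, rar and unrar live in multiverse, which isn't enabled by default on a server install. Enabling it means adding (or uncommenting) the multiverse lines in sources.list; a sketch for precise, which is the release both RoyK and ze_king are on:

```
# /etc/apt/sources.list -- add (or uncomment) the multiverse component
deb http://archive.ubuntu.com/ubuntu precise multiverse
deb http://archive.ubuntu.com/ubuntu precise-updates multiverse

# then:
#   sudo apt-get update
#   sudo apt-get install unrar
```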
[19:07] <ze_king> sry, i should rename myself from ze_king to ze_noob ;<
[19:08] <RoyK> just /nick ze_noob ;)
[19:08] <ze_noob> :D
[19:08] <RoyK> :)
[19:09] <ze_noob> irssi is the shit ;D
[19:12] <stiv2k> ze_noob yeah it is
[19:29] <yolanda> leaving for today, bye!
[20:33] <keithzg> Trying to send a message to all logged in terminal sessions of a specific group, but apparently -g isn't a valid option for the Linux version of "wall" (I swear it is on at least some form of BSD)
[20:33] <keithzg> is there any alternative, or fix to that?
[20:35] <sarnold> keithzg: some scripting around write(1)
[20:35] <sarnold> ?
[20:36] <RoyK> shouldn't be too hard to parse /etc/group and extract the members ;)
[20:36] <halvors> RoyK: By default when i'm trying to connect to my mail server using SMTP, i get the error "Relay access denied". But i provide the client with needed login information... How can i fix that?
[20:37] <keithzg> sarnold, RoyK: good thoughts! Sad that the -g flag is missing, nonetheless. One of the few (only?) times I've longed for something that *BSD has, heh
[20:38] <RoyK> halvors: you need to allow authenticated users to relay - google should know, I haven't setup such a thing myself, sorry
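The "Relay access denied" halvors hits usually means Postfix isn't counting the client as authenticated. The fix RoyK alludes to is permitting SASL-authenticated clients to relay in main.cf; a sketch (assuming SASL is already wired up via Dovecot or Cyrus):

```
# /etc/postfix/main.cf -- allow authenticated clients to relay
smtpd_sasl_auth_enable = yes
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination
```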
[20:38] <sarnold> keithzg: yeah, I've had that kind of feeling before myself.. I can't recall which specific feature, but it seemed like something was way easier in bsdland..
[20:40] <sarnold> hrm, and I don't see an easy getgrent()-based program in man -k getgr that you'd easily use in shell scripting. pity.
[20:42] <keithzg> alas
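The scripting around write(1) that sarnold and RoyK suggest is short: pull the group's member list out of the group database, then write to every tty each member is logged in on. A sketch (the "backup" group and the message are placeholders; users whose only membership is via their primary GID won't appear in field 4 of the group entry):

```shell
#!/bin/sh
# wall-group: send a message to logged-in members of a group,
# approximating the BSD `wall -g` keithzg is missing
group=${1:-backup}
msg=${2:-"maintenance in 5 minutes"}

# members are the comma-separated 4th field of the group entry
members=$(getent group "$group" | cut -d: -f4 | tr ',' ' ')

for user in $members; do
    # write(1) to every tty the user currently holds
    who | awk -v u="$user" '$1 == u {print $2}' | while read -r tty; do
        printf '%s\n' "$msg" | write "$user" "$tty"
    done
done
```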
[21:30] <jdstrand> adam_g: fyi, bug #1065187 was fix in http://www.ubuntu.com/usn/usn-1626-1 (I updated the bug)
[21:43] <adam_g> jdstrand: ah thanks. looks like i need to adjust this script to check for security updates like that.
[21:43] <adam_g> you might see a few more like that, sorry in advance
[21:44] <jdstrand> ok, no worries
[21:49] <jdstrand> adam_g: if you are adjusting a script, you might want to consult https://usn.ubuntu.com/usn-db/database-all.json.bz2
[21:50] <jdstrand> adam_g: there is also database.json.bz2 which contains only active releases of Ubuntu
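The USN database jdstrand points at is bzip2-compressed JSON keyed by USN number. A sketch of pulling it and listing advisories that reference a given CVE (the field names, like "cves" and "summary", are assumptions about the published format and worth verifying against a fresh download):

```shell
# fetch the active-releases USN database and search it for a CVE
wget -q https://usn.ubuntu.com/usn-db/database.json.bz2
bunzip2 -f database.json.bz2
python3 - <<'EOF'
import json
db = json.load(open('database.json'))
for usn, data in sorted(db.items()):
    if 'CVE-2012-5563' in data.get('cves', []):
        print('USN-%s: %s' % (usn, data.get('summary', '')))
EOF
```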
[21:55] <adam_g> jdstrand: oh cool. i'll definitely take a look. when you send a out a security update, does a corresponding bug task get filed against the stable release thats being updated?
[21:58] <jdstrand> adam_g: no. we don't track CVEs in LP for a number of reasons. if a task already exists, we'll reference the bug in the changelog
[21:58] <jdstrand> assuming we know about it
[21:59] <jdstrand> adam_g: fyi, bug #1064914 and bug #1079216 were also already fixed (I adjusted the bugs)
[22:07] <qman__> so I have a drive which I want to automatically mount if it's present, but I don't want a missing drive to stop the system from booting, which it currently does
[22:08] <qman__> it currently has this in fstab: UUID=[blahblah] /media/backup ext4 auto,relatime 0 0
[22:14] <smw_> qman__, does using the nofail option work?
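With smw_'s nofail suggestion, qman__'s fstab line would become (the UUID stays a placeholder, as in his paste):

```
# /etc/fstab -- nofail lets the boot continue if the drive is absent
UUID=[blahblah] /media/backup ext4 auto,relatime,nofail 0 0
```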
[22:20] <tgm4883> Are there instructions anywhere for adding iscsi storage for libvirt?
[22:21] <tgm4883> I've been attempting to do it though virt-manager, but it keeps throwing errors.
[22:22] <Daviey> tgm4883: what error are you seeing?
[22:23] <tgm4883> Daviey, so in the hostname field, I add the IP address of the NAS, I'm assuming that "Source Path" should be attempting to see what iscsi shares are at that IP
[22:23] <tgm4883> since there is a browse, but that is all greyed out
[22:23] <tgm4883> so I put the IQN in that field
[22:24] <tgm4883> Daviey, basically, I'm at this point http://imagebin.org/238067
[22:24] <tgm4883> Clicking finish throws "Error creating pool: Could not start storage pool: internal error Child process (/sbin/iscsiadm --mode discovery --type sendtargets --portal 10.87.6.6:3260,1) status unexpected: exit status 1"
[22:25] <tgm4883> I'm assuming that is because I don't have access to the discovery DB
[22:25] <tgm4883> if I run that command in the terminal, I get permission denied
[22:26] <tgm4883> running with sudo works fine though
[22:26] <tgm4883> so the question then is, if that is the issue, what do I need to add myself access to, and does that need to be done on the server or my local workstation?
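One way around tgm4883's permission problem is to let libvirtd, which runs iscsiadm as root, do the discovery: define the pool against the system connection (qemu:///system) instead of a user session. A sketch of the pool XML (the pool name and IQN are placeholders; the host IP is the one from the log):

```xml
<!-- iscsi-pool.xml: an iSCSI storage pool definition for libvirt -->
<pool type='iscsi'>
  <name>nas-iscsi</name>
  <source>
    <host name='10.87.6.6'/>
    <!-- device path is the target IQN; replace with the NAS's real IQN -->
    <device path='iqn.2012-12.example:target'/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
```

Then `virsh -c qemu:///system pool-define iscsi-pool.xml` followed by `virsh -c qemu:///system pool-start nas-iscsi`.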
[22:38] <shauno> Okay, no more sugar for uvirtbot.
[22:41] <tgm4883> well this seems broke
[22:53] <halvors> I'm unable to connect to my mailserver (Postfix) using SMTP port 25, but Submission port 587 works just fine, is client connections on port 25 somehow disabled by default in Ubuntu?
[22:55] <JanC> halvors: are you sure it's not your ISP blocking outgoing port 25 (except for their own mail relay)?
[22:56] <fission6> i am in need of serious help
[22:56] <fission6> i think one of my servers has been hacked and i have no idea where to start
[22:57] <JanC> why do you think that?
[22:58] <fission6> JanC: i have a ticket opened in linode for TOS violation SSH brute force and a mysterious folder and a HoneyPot kippo logging thing, all of which i am trying to make sense of
[23:00] <JanC> fission6: sounds like you probably want to re-install the server then  ☺
[23:01] <sarnold> fission6: best is to take the server offline, re-deploy the services from backups, and investigate the hacked machine's hard drive offline...
[23:01] <JanC> (and keep it more secure next time)
[23:01] <fission6> i'd like to understand what happened
[23:01] <JanC> sarnold: linode = VPS
[23:01] <fission6> i am also in a rut where i did not back it up
[23:01] <sarnold> fission6: you wouldn't want to inspect that drive in any way from a machine you care about, since the contents of the system may be able to further crack your inspection tools
[23:01] <fission6> i want to understand what happened
[23:01] <sarnold> JanC: oh, I missed that, I never saw him say linode...
[23:03] <fission6> is there a security channel or something i can review?
[23:03] <JanC> fission6: what do you need backups of?
[23:03] <sarnold> fission6: there's a few on #oss-security; I don't know for sure that it is on-topic, but it won't hurt to ask :)
[23:04] <fission6> mongo and mysql, i feel safe with dumps from them
[23:04] <fission6> and images
[23:04] <fission6> damn this is gonna be a nightmare
[23:04] <fission6> its funny because for the last 2 weeks i have been debating using lingoes backup
[23:04] <fission6> linodes
[23:05] <JanC> you have no backup at all?
[23:06] <fission6> not really
[23:07] <JanC> I think that making a database dump should be fairly safe, especially if you check that there is nothing weird in it
[23:07] <fission6> yea i think so too
[23:07] <JanC> although, you can never be 100% sure...
[23:08] <JanC> certainly check all the database users & their permissions
[23:08] <JanC> (maybe don't dump those at all, or separately)
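JanC's advice to dump data but handle users separately maps onto dumping only the application databases and leaving the mysql grants tables to be reviewed by hand. A sketch (credentials and database names are placeholders):

```shell
# dump only the application database, not the mysql system db with
# its user/grant tables, from the compromised host
mysqldump -u root -p --single-transaction --databases appdb > appdb.sql

# mongo equivalent: dump one database to a local directory
mongodump --db appdb --out ./mongo-backup
```

Both dumps should then be inspected for anything unexpected before being loaded onto a freshly installed host.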
[23:10] <JanC> checking images might be more complicated
[23:11] <fission6> man i can't believe this
[23:12] <fission6> i just want to understand what happened exactly
[23:12] <fission6> i really like don't understand...
[23:13] <JanC> fission6: what applications did you run on it that can be accessed from the outside (web, sshd, ...?)
[23:13] <ScottK> first priority should be to salvage what you can.  Since it's a VPS, you'll probably never have enough information to know for sure.
[23:14] <sarnold> .. though if that mysql was remotely accessible, it'd be a good bet.
[23:15] <fission6> mysql wasn't remotely accessible, i think it was via ssh. i mean i don't know, i would think i would have a log or something
[23:16] <sarnold> oh right, the ssh brute forcing. yeah, if you used password authentication, that can also be a source of trouble.
[23:16] <JanC> it does (but if an attacker gets root he/she can remove/change the logs of course)
[23:16] <hallyn> stgraber: around?
[23:17] <JanC> using password auth for ssh is usually not such a good idea...
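Closing the hole JanC and sarnold describe means switching sshd to keys only. After installing a public key for the admin account, the relevant sshd_config lines are:

```
# /etc/ssh/sshd_config -- key authentication only
PasswordAuthentication no
PermitRootLogin no
```

Reload sshd afterwards, and keep an existing session open while testing key login so a mistake doesn't lock you out.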
[23:17] <hallyn> preferences question...  clearly we want command line specified logfile/loglevel to trump what is in lxc.conf.  But,
[23:17] <hallyn> if logfile is present in both, do we want lxc_conf to store the command-line specified (active) log file, or the one in lxc_conf?
[23:18] <hallyn> I guess it has to be lxc_conf
[23:18] <hallyn> so what is in lxc_conf may not reflect what's going on
[23:18] <JanC> fission6: were you using any webapps?
[23:18] <hallyn> all right, that's settled, will do that :)
[23:18] <stgraber> hallyn: :)
[23:18] <fission6> JanC: what do you mean specifically?
[23:18] <sarnold> hallyn :)
[23:19] <JanC> fission6: some webapps are known for their security issues  ☺
[23:19] <stgraber> hallyn: the command line should be an override of the container's config and we shouldn't try to change the config file unless the user explicitly wants us to, so yeah, it's possible that there will be running containers saving log entries somewhere else than what's defined in their config, but in such case, the lxc-start command line will let you find out where anyway
[23:21] <hallyn> stgraber: +1 :)  bbl
[23:47] <webfox> Hello folks!
[23:47] <webfox> Could someone help me figure how to verify which keyboard layout is my machine using right now please ?
[23:59] <webfox> Could someone help me figure how to verify which keyboard layout is my machine using right now please ?
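The layout webfox is asking about can usually be read back in two places, depending on whether the machine runs X (a sketch; paths and tools are the Ubuntu defaults of the era):

```
# console/default layout is recorded here on Ubuntu
grep XKB /etc/default/keyboard

# under X, query the live layout
setxkbmap -query
```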