[00:32] <gregcoit> hi all - i think I found a bug.  If you have cacti installed on either a karmic or jaunty server, type php /usr/share/cacti/cli/add_device.php --help and a bunch of errors will scroll by - looks like incorrect paths for includes.  Am I crazy or should I report this? (not reported yet for either jaunty or karmic - hard to believe)
[00:34] <jiboumans> gregcoit: a pastebin with the errors would be useful ofc.. and make sure your php.ini doesn't contain custom entries
[00:34] <gregcoit> jiboumans: of course - sorry
[00:35] <jiboumans> gregcoit: no need to apologize ;) but if everything is vanilla and errors are popping up, it'd warrant a bug report
[00:35] <gregcoit> jiboumans: vanilla php:
[00:36] <gregcoit> http://pastebin.com/d4d9c3ad3
[00:37] <gregcoit> all those files exist in /usr/share/cacti/site/lib/
[00:38] <jiboumans> hmm, that does look suspicious.. can you pastebin add_graphs.php as well please?
[00:38] <jiboumans> (dont have the source handy)
[00:39] <gregcoit> jiboumans: np - the relevant section: http://pastebin.com/d5a1595e3
[00:40] <gregcoit> jiboumans: that's also the top of the script minus the copyright - nothing is processed before those lines
[00:42] <jiboumans> gregcoit: hmm, this obviously isn't set: include_once($config["base_path"]."/lib/api_automation_tools.php");
[00:42] <jiboumans> since your previous paste shows /lib/...php
[00:42] <gregcoit> agreed
[00:42] <DDwi> this is with apache2?
[00:42] <gregcoit> yes
[00:42] <DDwi> how are you accessing it ?
[00:42] <gregcoit> but these scripts are for cli only
[00:42] <DDwi> virtualhost ?
[00:42] <gregcoit> now via apache
[00:42] <gregcoit> er, not
[00:43] <jiboumans> gregcoit: what does a 'find /usr/share/cacti -type f' look like?
[00:43] <gregcoit> spits out 272 files
[00:44] <jiboumans> mind pastebin'ing those too? last one, i promise :)
[00:44] <gregcoit> jiboumans: np
[00:45] <gregcoit> http://pastebin.com/d30b37dcd
[00:46] <jiboumans> gregcoit: ok, i'm happy to say 'bug' at this point
[00:47] <jiboumans> gregcoit: those 3 pastes + a dpkg -l for the relevant packages should make a good report
[00:47] <gregcoit> :(  i was hoping you were going to say I'm crazy...  Ok, I'll file.  Thanks for the support!
[00:47] <gregcoit> jiboumans: you got it
[00:47] <jiboumans> gregcoit: the workaround is pretty straightforward (but i guess you saw that already); it's not ../include/global.php it's ../site/include/global.php
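That workaround can be scripted; a hedged sketch follows. The exact string being rewritten is an assumption based on the description above (the cli scripts point at ../include/global.php where they need ../site/include/global.php), so verify it against the actual scripts first, and run it only once:

```shell
# Hypothetical fix-up for the cacti cli include paths discussed above.
# The sed pattern is an assumption from the channel discussion -- check
# the scripts before running, and don't run it twice (it would produce
# site/site/...). .bak backups are kept alongside the originals.
fix_cacti_includes() {
    sed -i.bak 's|/include/global\.php|/site/include/global.php|' "$@"
}
# e.g. (as root):  fix_cacti_includes /usr/share/cacti/cli/*.php
```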
[00:49] <gregcoit> funny thing.  I searched for cacti bugs on launchpad - found none.  as soon as i typed the problem in "file a bug" - up popped the exact issue.  So, already filed.  And yeah, thanks for the answer!
[00:50] <gregcoit> jiboumans: sorry to take your time
[00:51] <jiboumans> gregcoit: no worries. don't forget to hit the 'this affects me' button :)
[00:51] <jiboumans> and with that, it's time for sleep...nn
[00:51] <gregcoit> jiboumans: si.  and subscribed!
[02:27] <maxagaz> I have a server whose hard drive is too small (80GB), I want to swap it for a 160GB one, but without having to reinstall the filesystem
[02:27] <maxagaz> can I just move the content to another disk ?
[02:28] <maxagaz> what else do I need to do to make this work ?
[02:38] <qman__> maxagaz, you need to install grub to the new disk, which is fairly simple to do
[02:39] <qman__> and then modify /etc/fstab to update the UUIDs
[02:44] <maxagaz> qman__, what command should i use to have the same content on the new disk with permissions... ?
[02:45] <twb> maxagaz: you can just move content from one disk to another.
[02:46] <twb> maxagaz: simply boot some third medium (e.g. a live CD), then dd the entire 80GB from the first disk to the second.
[02:46] <twb> maxagaz: then, increase the partition and filesystem size (or simply allocate another partition).
[02:47] <maxagaz> twb, will dd also take the swap ?
[02:47] <twb> maxagaz: dd is copying the contents of the disk bit-for-bit.
[02:48] <maxagaz> twb, dd isn't convenient as I need free space somewhere to put the generated image
[02:48] <twb> maxagaz: just put both disks in the system at once
[02:50] <qman__> and dd one whole disk to the other whole disk, like /dev/sda to /dev/sdb
[02:50] <qman__> then resize the partitions or create a new one
[02:50] <twb> Yup
[02:50] <qman__> if you do that, grub copies too, and you only need to edit /etc/fstab
[02:51] <twb> qman__: I was assuming this was a disk REPLACEMENT -- in which case, /dev/sda is still /dev/sda and the UUID and LABEL are unchanged
[02:56] <maxagaz> twb, ok so, during the dd, I have /dev/sda and /dev/sdb, and after removing /dev/sda, /dev/sdb becomes /dev/sda, right?
[02:57] <maxagaz> so no need to change /etc/fstab
[02:58] <qman__> I was under the impression that the UUID would change anyway, but I haven't tested it myself
[02:59] <qman__> I thought the whole point of the UUID was that it is unique to the disk, and wouldn't change if you plugged it into a different channel
[03:00] <twb> maxagaz: just so.
[03:00] <twb> qman__: the UUID would be DD'd, too.  It's a property of the filesystem, not the disk.
[03:00] <qman__> ah, that's true
[03:00] <twb> At least, the UUIDs that fstab cares about
[03:00] <qman__> yeah
[03:01] <twb> Disks have serial numbers
[03:02] <maxagaz> what's the dd command syntax to use to make the copy ?
[03:03] <qman__> dd if=/dev/sda of=/dev/sdb
[03:03] <twb> dd if=/dev/sda of=/dev/sdb, where sda is the source and sdb is the target
[03:03] <qman__> you could add tweaks like bs=1M if you want, too
[03:03] <twb> make sure they're the right way around before you start.
[03:03] <qman__> though I'm pretty sure it defaults to a sensible block size anyway
[03:03] <maxagaz> qman__, what does bs=1M mean?
[03:03] <qman__> sets the block size to one megabyte
[03:04] <qman__> it may or may not make the transfer faster
[03:04] <qman__> it all depends on the hardware, and it's not really needed
[03:09] <maxagaz> how to change the partition size ?
[03:09] <maxagaz> with parted, by just resetting the last block ?
[03:10] <qman__> no, you need to resize
[03:10] <qman__> I usually do it with gparted
[03:10] <qman__> from a live disc
[03:10] <twb> You need to write a new partition table, and then to run resize2fs (or equivalent).
[03:11] <twb> parted can do both operations at once for ext2 filesystems, but I don't really trust it.
[03:11] <twb> qman__: both operations can be done online, as long as you restart after editing the partition table.
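The whole replacement procedure above, sketched as shell under the stated assumptions (booted from a live CD, both disks attached; device names are examples only):

```shell
# Sketch of the whole-disk clone discussed above. DEVICE NAMES ARE
# EXAMPLES -- verify which disk is source and which is target before
# running, because dd overwrites the target without asking.
clone_disk() {
    # bs=1M is the optional throughput tweak mentioned; dd's default
    # block size also works. conv=fsync flushes writes at the end.
    dd if="$1" of="$2" bs=1M conv=fsync
}
# clone_disk /dev/sda /dev/sdb
#
# Afterwards, grow the last partition (parted/gparted), then grow the
# filesystem to fill it, e.g.:
#   resize2fs /dev/sdb1
# The filesystem UUID is copied along with everything else, so for a
# straight disk replacement /etc/fstab normally needs no changes;
# `blkid` will confirm what the new disk carries.
```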
[03:35] <TVrotsurbrain> !ops
[03:37] <maxagaz> qman__, twb, thanks a lot
[05:01] <uvirtbot`> New bug: #511020 in postfix (main) "package postfix None [modified: /var/lib/dpkg/info/postfix.list] failed to install/upgrade: subprocess pre-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/511020
[05:36] <blaenk> hey guys, I have an sqlite database that I need to access as its owner in order to modify it
[05:36] <blaenk> how can I do this?
[05:36] <blaenk> I just did sudo chmod ug+s thefile.db but that didn't seem to work
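For what it's worth, ug+s on a data file doesn't grant anyone access - setuid/setgid bits only matter on executables and directories. A hedged sketch of the usual fix (the group name is a placeholder; note that sqlite also needs write access to the *directory*, since it creates a journal file next to the database):

```shell
# Sketch of granting a group write access to an sqlite database.
# "thegroup" is a placeholder. sqlite creates a journal next to the
# db file, so the containing directory must be group-writable too.
share_db() {
    db="$1"; grp="$2"
    chgrp "$grp" "$db" "$(dirname "$db")"
    chmod g+w "$db" "$(dirname "$db")"
}
# share_db /path/to/thefile.db thegroup
# then make sure your user is in thegroup (and log in again):
#   sudo usermod -aG thegroup youruser
```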
[07:41] <uvirtbot`> New bug: #511058 in vsftpd (main) "/etc/init/vsftpd.conf contains an error." [Undecided,New] https://launchpad.net/bugs/511058
[07:42] <uvirtbot`> New bug: #511057 in vsftpd (main) "/etc/init/vsftpd.conf contains an error." [Undecided,New] https://launchpad.net/bugs/511057
[08:41] <NublaII> good morning. I am running a server with apache2-mpm-prefork, and every day it goes down a couple of times, and I can't quite figure out why... I know it runs out of memory when too many children are left open. How can I troubleshoot it?
[08:41] <NublaII> the machine is fairly big, and it all happens in less than a minute... everything is fine one minute and the next minute it's gone
[08:48] <acalvo> did you check apache's logs?
[08:48] <NublaII> yeah, but I couldn't find anything out of the ordinary...
[08:48] <NublaII> not even on the error.log
[08:48] <acalvo> did you increase log verbosity?
[08:49] <NublaII> LogLevel debug
[08:50] <NublaII> I don't think I can go much further on that end...
[08:50] <acalvo> how does it break - with a segmentation fault?
[08:50] <NublaII> nope... it just hangs with too many children
[08:50] <NublaII> swapping out...
[08:51] <NublaII> 99% of the day it's fine
[08:51] <NublaII> but then a couple of times a day it just goes berserk
[08:51] <NublaII> it hovers around 70 servers all day long
[08:52] <NublaII> and it goes all the way up to 140 (the limit) and dies...
[08:54] <NublaII> I've done a little math and tried to make it so the max number of servers never uses all the available ram...
[08:54] <NublaII> but it fluctuates a little, so from time to time it starts swapping like crazy and I have to kill it all
[09:04] <acalvo> is it always at the same time of day?
[09:04] <acalvo> (just trying to see if you have some background process)
[09:05] <NublaII> mmm... not always the same
[09:05] <NublaII> but kind of similar...
[09:05] <acalvo> maybe you've some cron job or something
[09:05] <NublaII> between 11.30pm and 12.30am
[09:05] <acalvo> that eats some RAM
[09:06] <NublaII> checked that, and I have nothing running at that time...
[09:06] <NublaII> I am tempted to just set a cronjob to restart apache every 6 hours...
[09:06] <NublaII> :P
[09:07] <acalvo> I did that one time to solve a problem
[09:08] <acalvo> is it related to a peak hour?
[09:09] <NublaII> not really... peak time for us is before that... it's sleep time in theory ;)
[09:10] <NublaII> yesterday I was monitoring it and it was running fine, 70 processes chugging along... and in 20 seconds it just went through the roof
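The "little math" mentioned above usually boils down to: divide the RAM you can spare for Apache by the resident size of one prefork child, and cap MaxClients there so a burst queues instead of swapping. The numbers below are examples, not measurements from this server:

```shell
# Back-of-envelope MaxClients calculation for apache2 prefork.
# Measure a real child's resident size first, e.g.:
#   ps -o rss= -C apache2 | sort -n | tail -1     (value in KB)
apache_ram_mb=1500   # example: RAM left over for apache after everything else
child_rss_mb=20      # example: typical resident size of one child
max_clients=$((apache_ram_mb / child_rss_mb))
echo "MaxClients $max_clients"
```

With these example numbers the cap lands at 75, close to the 70-child steady state described above; the point is that the 140 limit mentioned is roughly double what the RAM supports.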
[10:14] <acalvo> well, you could really use some kind of report
[10:14] <acalvo> of the system
[10:14] <acalvo> and check that
[10:15] <acalvo> I've had some problems with openLDAP
[10:15] <acalvo> eventually I've found out that was something related to another program
[10:25] <acalvo> I've got domain.com set up with BIND. However, I want loading domain.com in a browser to redirect to www.domain.com. Right now, if I ping domain.com on any computer it resolves to 127.0.0.1. How can I add an entry in the main BIND file to link domain.com to a computer?
[10:42] <_ruben> acalvo: the actual redirection will need to be done by your webserver, concerning bind you'll probably want to specify the same ip address for @ as for www
[10:42] <qman__> acalvo, you can't redirect from bind, you have to do that on the website, but the DNS entry you want to modify is the root
[10:42] <qman__> to refer to the root, use an @
[10:43] <acalvo> well, I figured that if I can't ping domain.com and resolve it, I can't browse http://domain.com either
[10:43] <acalvo> now it's fixed, I guess I just have to find where to put the .htaccess file
[10:44] <acalvo> I've thought that I could put the redirection in the definition of the site (in /etc/apache/sites-available)
[10:48] <_ruben> acalvo: wouldn't surprise me if you could (never tried it myself)
[10:48] <_ruben> i'd probably do the redirection using php/perl/whatever I'm using for the site
[10:51] <acalvo> good option, btw
[10:51] <acalvo> I'll give it a try if I can't do it using apache's config files
[10:52] <qman__> you can do it either way, even in plain HTML if you want
[10:52] <qman__> each option has its own advantages and disadvantages
[10:52] <qman__> but it's done with the website/web server, not in DNS
[10:58] <acalvo> I know, but if the DNS wasn't resolving correctly, it could not work
[10:58] <acalvo> however
[10:58] <acalvo> I've tried setting up this site in apache
[10:58] <acalvo> <VirtualHost *:80>
[10:58] <acalvo>     ServerAlias example.com
[10:58] <acalvo>     RedirectMatch permanent ^/(.*) http://www.example.com/$1
[10:59] <acalvo> it kills the actual www.example.com
[11:02] <uvirtbot`> New bug: #502071 in spamassassin "FH_DATE_PAST_20XX scores on all mails dated 2010 or later" [High,Fix released] https://launchpad.net/bugs/502071
[11:02] <_ruben> you dont have a servername for that vhost
[11:03] <acalvo> well, I do
[11:03] <acalvo> I have a site domain.com and a www.domain.com
[11:04] <acalvo> I'm trying to use the ServerAlias directive
[11:05] <acalvo> but it screws things up even more
[11:05] <acalvo> this is the www.domain.com file: http://paste.ubuntu.com/360589/
[11:06] <qman__> you can't have two sites that listen on *:80
[11:06] <qman__> each site must listen on a separate IP or domain name
[11:07] <qman__> so, domain.com:80 and www.domain.com:80
[11:07] <acalvo> well, I've a lot of sites, and all of them are listening on *:80 (and are working great...)
[11:08] <acalvo> if I need to have more than one domain name, should I specify it?
[11:08] <au> it never worked for me with *:80
[11:08] <au> only worked with ip:80
[11:08] <acalvo> http://paste.ubuntu.com/360591/
[11:09] <acalvo> this is another working on the same server
[11:09] <acalvo> I've a bunch more
[11:09] <acalvo> should I fix that?
[11:13] <NublaII> I have mine working with <VirtualHost *>
[11:13] <NublaII> do you have anything running with ssl?
[11:13] <NublaII> if you wanna use that syntax I believe you need to have the line
[11:13] <NublaII> NameVirtualHost *
[11:13] <NublaII> on the default vhost file
[11:13] <acalvo> I do have some sites under SSL
[11:15] <acalvo> where should I put the namevirtualhost?
[11:16] <acalvo> in the default site (/etc/apache/sites-available/default)?
[11:16] <NublaII> mmm... I have it on the first line of the default one
[11:16] <neriberto> hi everybody!!
[11:16] <NublaII> /etc/apache2/sites-available/default
[11:17] <acalvo> well, I've tried and now:
[11:17] <acalvo>  * Reloading web server config apache2                                                                                                                       [Fri Jan 22 12:16:41 2010] [error] VirtualHost *:80 -- mixing * ports and non-* ports with a NameVirtualHost address is not supported, proceeding with undefined results
[11:17] <neriberto> someone rebuild the ubuntu server from source and recompile to a new ISO?
[11:17] <NublaII> I guess you have to use * or *:80 on all of them...
[11:17] <NublaII> and since you have ssl on there you should use *:80... I guess :)
[11:17] <acalvo> let's see
[11:17] <acalvo> *:80 on all of them?
[11:18] <acalvo> or in the namevirtualhost directive?
[11:19] <acalvo> now:  * Reloading web server config apache2                                                                                                                       [Fri Jan 22 12:18:31 2010] [warn] NameVirtualHost *:80 has no VirtualHosts
[11:19] <NublaII> how many virtualhosts do you have?
[11:20] <acalvo> 7
[11:20] <acalvo> and 2 ssl
[11:20] <NublaII> I *believe* if you have "NameVirtualHost *:80" on the default vhost
[11:21] <NublaII> I think you need to use the *:80 on all the <VirtualHost *:80>
[11:21] <acalvo> ok, it is working like this
[11:21] <NublaII> I was asking how many to see how much trouble would it be for you to try it
[11:21] <acalvo> however I can't redirect from domain.com to www.domain.com
[11:21] <NublaII> since you can't mix
[11:21] <acalvo> if I try to open domain.com it tries to download the typical "it works" file
[11:23] <NublaII> and I'm guessing your domain.com config goes to a vhost that is different from 000-default, right?
[11:24] <acalvo> should go, yes
[11:24] <NublaII> can you resend the config file for domain.com? I lost the scrollback
[11:26] <acalvo> this is www.domain.com: http://paste.ubuntu.com/360589/
[11:26] <acalvo> now I'm using some redirection matches in the default site of apache
[11:26] <acalvo> there is no domain.com now
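For the record, the shape this thread converges on looks something like the sketch below (domain names and DocumentRoot are placeholders; the ServerName line on the bare-domain vhost is what the original paste was missing):

```apache
NameVirtualHost *:80

# bare domain: nothing but a redirect
<VirtualHost *:80>
    ServerName domain.com
    RedirectMatch permanent ^/(.*) http://www.domain.com/$1
</VirtualHost>

# the real site
<VirtualHost *:80>
    ServerName www.domain.com
    DocumentRoot /var/www/www.domain.com
</VirtualHost>
```

With NameVirtualHost *:80 declared once (e.g. at the top of sites-available/default), every name-based vhost must use the matching <VirtualHost *:80> form, which is exactly the "mixing * ports and non-* ports" error seen above.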
[12:02] <ycy___> hi there
[12:02] <ycy___> on my system there's always active x11vnc
[12:02] <ycy___> and I don't know where, on startup, it is launched
[12:02] <ycy___> how do I know where x11vnc is launched?
[12:02] <ycy___> I mean, in which file...
[12:03] <soren> ycy___: Try asking in #ubuntu.
[12:11] <neriberto> i've downloaded an ISO of the source... how can I rebuild this?
[12:29] <joegardner> Hello guys! I have installed ubuntu-server on my server computer but I have a problem: I have set up an NFS server and while transferring files to and from the server the speed is really poor
[12:29] <joegardner> and i have a 100Mb card
[12:35] <mealstrom> joegardner: check with iperf real speed.
[12:45] <joegardner> mealstrom: sry what do you mean?
[12:48] <mealstrom> joegardner: if it is a problem with the data cable you'll see it with the iperf utility (client/server)
[12:49] <mealstrom> if not -- you've got some wrong option in the conf
[12:49] <joegardner> mealstrom: well i've tried both with cable and wifi
[12:49] <joegardner> mealstrom: but you know it's like it's getting stuck
[12:50] <joegardner> mealstrom: and streaming movies from the server works fine
[12:52] <mealstrom> joegardner: I had a similar problem with vsftpd when I wasn't using passive ports or the passive port range was small
[12:53] <joegardner> mealstrom: okay... I've also got vsftpd
[12:54] <mealstrom> you can check how many ports / connections it opens
[13:51] <pmatulis> with quotas, if i set a user quota for /home/user and a group quota (where user is member of group) for /home/share/user which quota will be enforced when touching either of these directories?
[14:36] <Disconnect> smoser: got a sec? not quite understanding how get_data_source is expected to work.
[14:37] <smoser> sure
[14:38] <smoser> (it probably should be part of the constructor)
[14:39] <smoser> but the general idea is to search through a list of "cloud data providers" and find one.
[14:39] <smoser> right now that list is only ec2.
[14:39] <Disconnect> firing everything at S20 (so after network, etc etc etc) but it's bailing immediately with Could not find data source / Failed to get instance data.
[14:39] <Disconnect> although as I was about to explain the path I followed I realized it was really wrong. so maybe i'm ok :)
[14:40]  * Disconnect missed the datasource-map entirely somehow :(
[14:41] <smoser> i will admit that I haven't made a concerted effort at thinking about anything other than lucid.
[14:41] <smoser> i know there are some lucid specific things.
[14:42] <smoser> the idea is that /etc/cloud/cloud.cfg contains 'cloud_type', which is a comma delimited list (maybe it should be a proper YAML list)
[14:42] <smoser> if that type is 'auto', then search through the available "cloud types" to find one.
[14:43] <smoser> if it is "ec2" (or possibly other in the future) , use that.
[14:43] <Disconnect> yah the jaunty part is mostly ok I think. upgraded a couple of minor python dependencies and created an old-style init script to fire cloud-config-ready, which then replaces the existing network/mounts test. (old upstart doesn't have the network-is-up tests or anything good like that)
[14:44] <Disconnect> I think where I went wrong tracking it landed me in the cache directories, which don't exist yet :)
[14:44] <smoser> wow. you've made a lot of effort.
[14:44] <smoser> yeah, so 2 things there.
[14:44] <smoser> a.) the goal is to cache the ec2 crawl after the first time and store off the objects after we've processed everything, so the later scripts don't have to do that.
[14:45] <smoser> b.) you may have noticed in 'get_data' in ec2, it will read from ec2init.cachedir/ec2//user-data.raw and /meta-data.pkl
[14:45] <smoser> which are not written anywhere.
[14:46] <smoser> i'm using those to supply a mock ec2 datasource
[14:46] <smoser> i put those files into an image and boot.
[14:47] <smoser> Disconnect, i took your ec2-get-data patch also
[14:48] <Disconnect> yah saw that :)
[15:01] <Disconnect> looks like a conflict between boto_utils and boto.utils. wheee
[15:02]  * Disconnect doesn't see any good way to tie the two branches together (my jaunty patches and your upstream) .. maybe through creative use of quilt. 
[15:21] <Disconnect> somewhere along the way i'm not getting into DataSourceEC2
[15:22] <Disconnect> yah dslist is empty. hmm.
[15:23] <Disconnect> oh.
[15:23] <Disconnect>         if not conf.has_key("cloud_type"):
[15:23] <Disconnect>             conf["cloud_type"]=None <------ shouldn't that be auto?
[15:28] <Disconnect> smoser: in boto_utils retry_url what's with the sleep? if i'm reading it correctly, it tries, continues-on-error and then reports an error and delays 2*n seconds even on success..?
[15:28] <smoser> hm... that is copied verbatim from boto
[15:29] <smoser> on success it 'return resp.read()' no?
[15:31] <Disconnect> i gotcha. (fyi 'import time')
[15:31] <Disconnect> hmm. so it logs the error, waits retries*2 seconds and tries again. that makes more sense.
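The control flow being worked out here - try, and only on failure log, sleep 2*n seconds, try again; on success return immediately - can be sketched generically. This is an illustration of the pattern, not the boto code itself:

```shell
# Retry a command up to $1 times; after the n-th failure, sleep 2*n
# seconds before the next attempt. On success, return at once -- the
# sleep only happens on the failure path, as clarified above.
retry() {
    tries="$1"; shift
    n=0
    while [ "$n" -lt "$tries" ]; do
        "$@" && return 0
        n=$((n + 1))
        echo "attempt $n failed, sleeping $((n * 2))s" >&2
        sleep $((n * 2))
    done
    return 1
}
# retry 5 wget -q -O - http://169.254.169.254/2009-04-04/user-data
```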
[15:33] <Disconnect> hmmm. except empty user-data returns 200-OK with len 0
[15:40] <jiboumans> mathiaz: ping?
[15:40]  * jiboumans blinks
[15:40] <orudie> how can I check which version of java i have installed ?
[15:41] <uvirtbot`> New bug: #511205 in ntp (main) "Computer reboots when enabling/disabling ntp" [Medium,Confirmed] https://launchpad.net/bugs/511205
[15:42] <genii> That sounds like a really nasty bug
[15:53] <screen-x> orudie: java -version
[16:00] <smoser> Disconnect, "except empty user-data returns 200-OK with len 0" ?
[16:00] <smoser> you're saying that is the response from Eucalyptus?
[16:00] <Disconnect> smoser: it was a bug on my end, urllib2 doesn't take proxies arg. (the lack of error output was leading me astray)
[16:01] <smoser> ok
[16:04] <Disconnect> but yah, if there is no userdata euca returns 200 with length 0 (http://pastebin.ca/1761659) this is, i suspect, entirely correct :)
[16:17] <jjohansen> smoser: re test kernels failing, so it is succeeding in direct kvm boot but failing euca cloud? right
[16:18] <smoser> sorry. bad english
[16:18] <smoser> "In each of the above cases, the included kernel fails."
[16:18] <smoser> s/included/not-your-testing-kernel/
[16:18] <smoser> included in the image/archive, jjohansen
[16:18] <smoser> yours pass my tests.
[16:19] <jjohansen> ah, I was taking from the email that it was failing and trying to figure it out
[16:19] <jjohansen> smoser: in that case if you are happy, I will issue a pull request
[16:20] <smoser> note, limited testing, i just booted, saw that it booted to successful login prompt and then killed it.
[16:20] <smoser> it could have been on fire at the time
[16:20] <smoser> :)
[16:20] <jjohansen> :)
[16:21] <smoser> but from a "did we turn the right options on" perspective, the answer is yes, it looks good.
[16:21] <uvirtbot`> New bug: #511245 in autofs (main) "portmap is not started during boot process before autofs and hence autofs does not work properly" [Undecided,New] https://launchpad.net/bugs/511245
[16:57] <grapple> have a prob with permissions... have ubuntu server with instructor and 20 students. inst wants to cp files from his home dir to theirs, but the users cannot get write access even tho the files are set for 777
[16:59] <grapple> anyone have a clue as to why?
[17:00] <Pici> grapple: Are the destination files set with those permissions? or just the source file.  If just the latter then you need to make sure you are using cp -a
[17:06] <grapple> anyone help with permissions?
[17:06] <ScottK> grapple: Did you see Pici's reply to you?
[17:07] <grapple> oh, ok... newbie here
[17:07] <grapple> so then i would do this: sudo cp-a file /home/username
[17:08] <grapple> er, cp -a file /home/username
[17:08] <grapple> works thanks...
[17:08] <grapple> woot!
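A quick illustration of what cp -a buys over plain cp in the exchange above - it preserves mode and timestamps (and ownership, when run as root), so the 777 bits survive the copy:

```shell
# cp -a is shorthand for cp -dR --preserve=all: the copy keeps the
# source's permission bits, so a file the instructor marked 777 stays
# 777 for the students. (Ownership is only preserved when copying as
# root, which is why the sudo in the thread above matters.)
touch /tmp/handout
chmod 777 /tmp/handout
cp -a /tmp/handout /tmp/handout.copy
stat -c %a /tmp/handout.copy    # -> 777
```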
[17:23] <mathiaz> jiboumans: do you have access to the ubuntuserver blog?
[17:23] <jiboumans> mathiaz: still not (as per last email)
[17:24] <mathiaz> jiboumans: hmmm... weird - I need to investigate that then
[17:24] <mathiaz> jiboumans: I've already invited you three times - but it seems to work correctly :(
[17:25] <jiboumans> i get the invite, i accept it.. but then... nothing shows up on the dashboard / etc
[17:59] <madcat1990> I'm in need of assistance, can someone help me?
[18:00] <madcat1990> hmm just ask.... ok
[18:00] <madcat1990> Anyways, I am in need of help with a network bridge on ubuntu server 9.10
[18:00] <madcat1990> namely, bridging the internet from a wireless connection to a wired connection
[18:01] <madcat1990> but giving the wired connection an IP through a DHCP server on said server
[18:01] <madcat1990> in other words, making the server work as a router x)
[18:06] <mealstrom> I don't understand what you need
[18:07] <mealstrom> dhcp server -  wifi - network - client  ?
[18:08] <TeTeT> aubre: hey there :)
[18:21] <uvirtbot`> New bug: #511295 in dovecot (main) "dovecot -n silently reports wrong configuration when using dovecot-postfix" [Undecided,New] https://launchpad.net/bugs/511295
[18:24] <mathiaz> smoser: I'm thinking about existing images that rely on 169.254.169.254 being reachable
[18:24] <mathiaz> smoser: on a related note, is the InstanceId available from the meta-data service?
[18:25] <smoser> yes.
[18:25] <mathiaz> smoser: awesome - that's gonna help in my big puppet master plan :)
[18:25] <mathiaz> smoser: so you plan to enable or disable by default access to 169.254.169.254 on a lucid image?
[18:26] <smoser> enable by default.
[18:26] <smoser> mathiaz, "instance id" is also available to you as a part of the "DataSource" object in ec2init.
[18:28] <Disconnect> smoser: any idea what would cause self.cfg to be None?
[18:28] <mathiaz> smoser: how do you plan to make the meta-data information available in the filesystem?
[18:28]  * Disconnect has got everything running on time and in order, or so it seems, except for the fact that its not getting a config.
[18:28] <smoser> Disconnect, i just am fixing that :)
[18:28] <smoser> if there is no "cloud-config" user data.
[18:28] <Disconnect> oh. well good lemme know, been arguing with it all day ;)
[18:32] <smoser> mathiaz, right now, the metadata information is available in filesystem as pickle format python object
[18:32] <mathiaz> smoser: yeah - I'd suggest going for a more interoperable format
[18:32] <mathiaz> smoser: otherwise only python scripts will be able to load the configuration
[18:32] <mathiaz> smoser: I'd suggest yaml
[18:33] <smoser> i think that would be in keeping with yaml usage elsewhere. i can dump it alongside the pickle file.
[18:33] <mathiaz> smoser: so that we don't restrict which langage should be used
[18:34] <mathiaz> smoser: why would you keep the pickle file?
[18:34] <mathiaz> smoser: you can reload the yaml data from other python script
[18:34] <smoser> i dont know.
[18:34] <smoser> only if it were speed
[18:34] <mathiaz> smoser: or is there more information in the objects that could not be represented in a yaml file?
[18:34] <smoser> which may or may not be a.) true b.) a worry
[18:35] <smoser> no more info than can be represented in a hierarchical key/value set
[18:35] <smoser> remember, it all comes from a web "filesystem"
[18:35] <mathiaz> smoser: well - is the metadata service providing such a huge amount of data?
[18:35] <mealstrom> how do I mount a samba guest (+rw) share using fstab to the local system (/var/shares or /media/shares) with rw?
[18:35] <mealstrom> I have tried but didn't solve it :(
[18:35] <mathiaz> smoser: right - another option would be to actually use directories and files
[18:36] <mathiaz> smoser: I prefer yaml though - as it translates into native objects in most languages
[18:37] <mathiaz> smoser: if you'd use directories and files you couldn't easily use map, filters on the data structure
[18:37] <mathiaz> smoser: whereas if you already have everything as a hash table, it may be easier
[18:37] <Disconnect> at the end of the day though, python is only a small part of what might be using this information
[18:38] <mathiaz> smoser: the up side of directories+files is that you can easily write shell scripts to leverage that information
[18:38] <Disconnect> random binary files that can only be read by the originating app or derivatives is hardly the unix way :)
[18:38] <mathiaz> smoser: and upstart jobs are the first users of that information
[18:39] <mathiaz> smoser: so you could write upstart jobs that do things like: [ -e /etc/cloud-config/puppet ] && apt-get install puppet
[18:40] <mathiaz> smoser: the problem with yaml is that using it from shell scripts is hard
[18:40] <mathiaz> smoser: and upstart jobs are mainly shell scripts
[18:40] <smoser> i'm not disagreeing
[18:40] <smoser> but i will disagree that yaml is easily usable by shell
[18:41] <mealstrom> .//192.168.1.1/incoming	/media/shares/incoming	cifs	guest,rw		0	0 -- only READ works :(. But when connecting via gnome commander smb -- RW works.
[18:41] <mathiaz> smoser: right - yaml and shell don't play well together
[18:42] <mathiaz> smoser: so maybe as a first iteration, provide a directory/file layout for the meta-data service
[18:43] <smoser> hm... i think we're miscommunicating here
[18:43] <smoser> there are 2 things. or possibly 3 things
[18:43] <smoser> a.) metadata service
[18:44] <Disconnect> smoser: how do i get it to detect text/cloud-config userdata?
[18:45] <smoser> (Disconnect, hold on)
[18:45] <smoser> meta data service has info like: http://paste.ubuntu.com/360818/
[18:46] <smoser> b.) user data
[18:46] <smoser> user data is essentially a binary blob; whatever the user wants to put there can go there.
[18:46] <smoser> c.) cloud config
[18:46] <smoser> cloud config is transported to ec2 inside of user data.
[18:46] <Disconnect> ..you changed the ssh key on that paste right? :)
[18:46] <smoser> ec2-init rips out the yaml configuration and writes that yaml config to a file on the filesystem that can then be read by anything that reads yaml
[18:47] <mathiaz> smoser: isn't user-data part of the meta-data info?
[18:47] <smoser> the metadata service will be cached on disk, now that is in python pickle, but i agree yaml would be more useful.
[18:47] <smoser> mathiaz, not really. you get at them from the same "service", but they're different.
[18:47] <smoser> Disconnect, funny, no
[18:47] <smoser> :)
[18:48] <Disconnect> metadata needs to be updated periodically though - i could attach and detach storage, for example, without warning.
[18:48] <smoser> but thats just my public key
[18:48] <smoser> you can put that wherever you want!
[18:48] <Disconnect> heh
[18:48] <smoser> Disconnect, do you know that metadata service is updated?
[18:49] <smoser> i didn't think that that changed previously.
[18:49] <smoser> but now with ebs volumes that can be turned off, it (and user data) can change on restart.
[18:49] <mathiaz> smoser: right - the whole reason to remove access to the meta-data service after boot is based on the assumption that it's static information
[18:50] <mathiaz> smoser: user data can change on reboots?
[18:50] <mathiaz> smoser: I thought it would stay the same during the whole instance life
[18:50] <smoser> mathiaz, on re-starts
[18:50] <smoser> not reboots
[18:50] <smoser> you can stop/start an ebs root instance
[18:50] <mathiaz> smoser: re-starts == new instance?
[18:50] <smoser> yeah, and you do get a new instance id.
[18:50] <mathiaz> smoser: ah right - ebs root instance
[18:50] <smoser> but the filesystem is "kept"
[18:51] <smoser> so that is something that has to be addressed.
[18:51] <smoser> but i dont know if metadata service changes when you attach a volume. should check that.
[18:52] <mathiaz> smoser: is http://paste.ubuntu.com/360818/ the actual data received when doing a wget on the metadata service?
[18:52] <mathiaz> smoser: or is it delivered in a different format at the http level?
[18:53] <smoser> no. it's delivered in an annoying format
[18:53] <smoser> you do a get, either get data or a list
[18:53] <smoser> and then you do a get for each item in the list
[18:53] <smoser> and repeat
[18:53] <Disconnect> smoser: doesn't look like it changes.
[18:54] <mathiaz> smoser: ok - the meta data crawler is responsible for creating a dictionary like you've pasted
[18:55] <smoser> yes
[18:55] <smoser> so, if you like, we can put that data in a yaml format
[18:55] <mathiaz> smoser: it seems that providing a directory/file structure representation would be trivial then
[18:56] <mathiaz> smoser: I'm trying to address the issue that shell and yaml don't play well together
[18:56] <Disconnect> btw if you want a quick commandline look at the metadata 'M_URL=http://169.254.169.254/2009-04-04/meta-data/ ; wget -O - -q $M_URL | while read a; do wget -O "$a" "$M_URL$a";done' works. doesn't keep following trees (so public keys won't work) but its a start.
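Disconnect's one-liner can be extended to follow sub-trees (entries that end in `/`), so things like public-keys/ come down too. This is a hypothetical sketch, not anything from ec2-init — `crawl_metadata` and the mirror layout are illustrative:

```shell
# Recursively mirror the EC2 metadata tree to a local directory.
# A GET on a path returns either a value (leaf) or a newline-separated
# listing; entries ending in "/" are sub-trees, per smoser's description.
crawl_metadata() {
    local base="$1" out="$2" entry
    mkdir -p "$out"
    wget -q -O - "$base" | while read -r entry; do
        [ -n "$entry" ] || continue                                # skip blank lines
        case "$entry" in
            */) crawl_metadata "$base$entry" "$out/${entry%/}" ;;  # sub-tree: recurse
            *)  wget -q -O "$out/$entry" "$base$entry" ;;          # leaf: fetch the value
        esac
    done
}

# Usage (commented out; only works from inside an instance):
# crawl_metadata http://169.254.169.254/2009-04-04/meta-data/ /tmp/meta-data
```

This also gives the on-disk format Disconnect suggests: present it locally exactly as it is laid out on the http server.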
[18:56] <smoser> well, your example of 'puppet-config' is not going to exist.  puppet-config will come from cloud-config, not metadata
[18:56] <Disconnect> that also suggests a format that might work for shell - present it locally exactly as its found on the http server.
[18:56] <smoser> cloud-config, by your suggestion, is yaml
[18:57] <smoser> we can dump it to disk too in some directory format, but i don't know that it is necessary
[18:57] <Disconnect> smoser: real quick tho, getting ec2-init to detect cloud-config data..? hoping to demo in a few mins :)
[18:57] <smoser> user data
[18:58] <smoser> https://wiki.ubuntu.com/ServerLucidCloudConfig
[18:58] <smoser> take that example, and add to the top "#cloud-config"
[18:58] <Disconnect> ah ok. thats the part i was missing :)
[18:58] <smoser> then pass that as your user data (you can compress it too with gzip)
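Putting smoser's two steps together as a sketch: the first line of the user data must be `#cloud-config`, and the payload may be gzip-compressed. The YAML keys below are illustrative guesses, not copied from the wiki page:

```shell
# Build a cloud-config user-data payload (plain and gzip-compressed).
dir=$(mktemp -d)
cat > "$dir/user-data" <<'EOF'
#cloud-config
packages:
 - puppet
EOF
# per smoser, the compressed form is accepted too
gzip -c "$dir/user-data" > "$dir/user-data.gz"
```

Either file would then be passed as the instance's user data (e.g. via `ec2-run-instances --user-data-file`).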
[18:59] <Disconnect> cool. i just need to feed it the user and tell it not to apt-get and all should be well.
[18:59] <Disconnect> oh. a thought on that actually.
[19:00] <Disconnect> the user config belongs in the image, not in the instance, unless you are going to create the user at firstboot. the name is fixed when the image is made.
[19:00] <smoser> Disconnect, yeah. i know that.
[19:00] <Disconnect> ok :)
[19:01] <smoser> so that doesn't fit all that well, but in general i liked that we just merged /etc/cloud/cloud.cfg and whatever came from the user
[19:01] <smoser> so that you can create an image with the /etc/cloud/cloud.cfg that you always send in user data.
[19:02] <mathiaz> smoser: you're right wrt puppet
[19:05] <Disconnect> sweeet i'm set for an actual demo now :)
[19:21] <racquad> hi guys, I have just installed 9.10 server, but it keeps changing the screen resolution. I want a plain text console. How can I do it?
[19:28] <racquad> please, any idea?
[19:28] <smoser> blacklist vga16fb maybe
[19:29] <racquad> smoser, where?
[19:30] <smoser> /etc/modprobe.d/bad-vga.conf
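smoser's suggestion as a file would look like this (a sketch; note that if the framebuffer module is loaded from the initramfs, you may also need `sudo update-initramfs -u` for the blacklist to take effect at boot):

```
# /etc/modprobe.d/bad-vga.conf
blacklist vga16fb
```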
[19:31] <racquad> smoser, vga16fb is not listed on lsmod
[19:31] <smoser> hm...
[19:31] <uvirtbot`> New bug: #511314 in bind9 (main) "package bind9 1:9.6.1.dfsg.P1-3ubuntu0.3 failed to install/upgrade: subprocess installed post-installation script returned error exit status 1" [Undecided,New] https://launchpad.net/bugs/511314
[19:31] <racquad> smoser, I have tried also vga=771 to force a lower resolution, didn't work
[19:33] <smoser> racquad, try 'nomodeset' on the kernel command line?
[19:33] <racquad> not yet
[19:33] <smoser> sorry for not knowing off the top of my head
[19:39] <racquad> smoser, it worked. thanks a lot
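For reference, the fix that worked: on 9.10 (which uses GRUB 2) the kernel parameter goes in /etc/default/grub, followed by `sudo update-grub`. A sketch:

```
# /etc/default/grub -- then run: sudo update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"
```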
[19:47] <smoser> Disconnect, fyi, the exceptions for 'None' cloud config should be fixed in my branch now.
[19:48] <Disconnect> cool
[19:48] <Disconnect> i'll prolly have another patch to send up before the weekend
[19:54] <smoser> i am still hoping to get a package together and sponsored and into the lucid images tonight.
[20:03] <Disconnect> smoser: i'll post the patch now. some whitespace fixes, better error messages and cloud_type should default to auto, not None
[20:03] <smoser> Disconnect, no, it should be default 'None' :-(
[20:03] <smoser> as if you default it to auto, then people 'apt-get install ec2-init' and it hangs their system for minutes on boot looking for ec2 data service
[20:04] <Disconnect> least-surprise (and sane defaults) both say "try to determine which of the one cloud types we're in"
[20:04] <smoser> it used to behave that way, people complained, so heres where we are.
[20:04] <Disconnect> thats a need for sane timeouts
[20:04] <smoser> sane timeouts are i think hard to come by
[20:04] <smoser> its better now, i think i wait like 20 seconds or something
[20:04] <smoser> it did wait > 1/2 hour
[20:05] <smoser> (and tried multiple times :)
[20:05] <smoser> the problem is that you can come up and check for the metadata service before it is up
[20:05] <Disconnect> not including the urllib2 timeout its 2+4+8+16+..seconds. bad :(
[20:05] <smoser> so you cant rely on it.
[20:06] <smoser> but for now lets leave it at None. the images will have it configured to 'auto'
[20:07] <Disconnect> actually looks like that is changed. so its 10s plus urllib.
[20:08] <Disconnect> but in any case, 'the images' could have all this stuff configured to begin with. the fact that this is a package says they might not :)
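The backoff being debated (per-attempt timeouts plus 2+4+8+16… second waits, against a capped total) can be sketched as a shell loop. `fetch_with_backoff` and the 20-second cap are illustrative assumptions, not ec2-init's actual code:

```shell
# Retry a URL with exponential backoff, giving up once the total wait
# exceeds max_wait seconds (default 20, roughly what smoser describes).
fetch_with_backoff() {
    local url="$1" max_wait="${2:-20}" delay=2 waited=0
    while [ "$waited" -lt "$max_wait" ]; do
        # -T bounds each individual request, like the urllib2 timeout
        wget -q -T 5 -O - "$url" && return 0
        sleep "$delay"
        waited=$((waited + delay))
        delay=$((delay * 2))    # exponential backoff: 2, 4, 8, ...
    done
    return 1    # give up: the metadata service never answered
}
```

The cap is the important part: without it, the 2+4+8+16… series is what produced the half-hour boot hangs complained about earlier.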
[20:12] <mealstrom> can you help me figure out the problem with fstab when mounting a samba share (guest) with RW rights?
[20:12] <mealstrom> after mounting, only ROOT can write/delete files or directories there. And a user can only CHANGE files (RW) but not create or delete them
[20:13] <mealstrom> fstab //192.168.1.1/incoming	/media/shares/incoming	cifs	rw,guest	0 0
[20:13] <mealstrom> mtab //192.168.1.1/incoming /media/shares/incoming cifs rw,mand 0 0
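The symptom (root can write, a plain user cannot create/delete) usually means the cifs mount is owned by root. A sketch of an fstab line mapping ownership to a local user — the uid/gid of 1000 and the modes are assumptions, substitute the actual user:

```
//192.168.1.1/incoming  /media/shares/incoming  cifs  rw,guest,uid=1000,gid=1000,file_mode=0664,dir_mode=0775  0  0
```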
[20:23] <dthacker> In ntp.conf, what is the name of the setting that limits correction if the time is too far off from the sync server?
[20:24] <unit3> man page doesn't say?
[20:26] <dthacker> unit3: only if you look at the correct man page.  Found it! :)
[20:26] <unit3> haha what was it?
[20:27] <dthacker> sanity limit, but it's set with a cl parameter when you invoke ntpd, not in the .conf
[20:33] <unit3> ahhh
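For the record, the "sanity limit" found above is ntpd's panic threshold (1000 seconds by default): past it, ntpd exits rather than step the clock. It can be handled either in ntp.conf or on the command line — a sketch:

```
# /etc/ntp.conf -- disable the 1000 s panic threshold entirely (use with care)
tinker panic 0

# or, on the command line, allow one large initial correction:
#   ntpd -g
```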
[20:36] <erichammond> smoser, mathiaz: The EC2 instance id stays the same through EBS boot instance stop/start cycles.
[20:36] <smoser> oh really.
[20:36] <smoser> yeah, i guess i knew that.
[20:43] <Disconnect> smoser: couldn't it wait in the background if it can't get the metadata? until it issues the cluster-config event nothing will happen, and it can either background for a few mins and exit or wait until it finds the controller..
[20:43] <Disconnect> (sorry, was afk)
[20:44] <smoser> well, for now that wouldn't be so bad, and i like the idea. but the general goal is for ec2-init to block all things on boot, such that you could modify anything you wanted in the system prior to those things coming up
[20:46] <Disconnect> submitted the patch #511348
[20:46] <Disconnect> think i got all the jaunty-specific bits out
[20:50] <erichammond> smoser: As you know, I am skeptical of the proposal to block access to meta-data and user-data because there are other EC2 software applications written out there that Ubuntu developers do not control and which access these resources.
[20:50] <erichammond> FYI, at least public-hostname and public-ipv4 can change while an instance is running.
[20:51] <smoser> and they change in the metadata ?
[20:51] <erichammond> Now that Amazon has shown they are open to meta-data and user-data being changed, I would not assume that it will always take an instance stop/start to do this.
[20:51] <erichammond> smoser: running a quick test
[20:52] <smoser> erichammond, i agree that it might be problematic to turn it off. it is not disabled by default. so there's really nothing to worry about.
[20:52] <erichammond> smoser: Ok, thanks
[20:52] <smoser> there absolutely is an issue with the metadata service, though.
[20:52] <smoser> it possibly contains sensitive data and there is no method of controlling access to it.
[20:53] <smoser> such that a compromise of any user that could do an http request could get at it.
[20:54] <erichammond> This is an EC2-wide issue that has been under a lot of discussion.  Shlomo did a great study on the various ways you can pass sensitive info to an instance and the tradeoffs.
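One common mitigation for the access-control gap smoser describes is an egress firewall rule so that only root can reach the metadata address. A sketch as an iptables-restore fragment — this is a policy choice on the instance, not anything ec2-init does:

```
*filter
# let root query the metadata service, reject every other local user
-A OUTPUT -d 169.254.169.254 -m owner --uid-owner 0 -j ACCEPT
-A OUTPUT -d 169.254.169.254 -j REJECT
COMMIT
```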
[20:54] <resno> is it a security risk to run a router and data backup on the same machine?
[20:54] <resno> router and actually data. for a home network
[20:56] <erichammond> smoser: Yes, I just verified that public-hostname changes when an elastic IP address is associated or disassociated with an instance.
[20:58] <smoser> thank you for verifying that erichammond
[20:59] <smoser> it seems weird to speak a full name in irc.  like i'm very formal with "mr erichammond"
[20:59] <erichammond> smoser: I used to have "esh" but somebody else took it after I left IRC for a while.
[21:00] <erichammond> I figured this way people would know who I was.
[21:03] <erichammond> mr_scott_moser_sir: Heading off to the office on my long commute through rain (always makes traffic more fun in LA)
[21:33] <unit3> kickban vtf plz.
[21:33] <guntbert> !ops
[21:34] <th0mz_> ./ignore vtf
[21:35] <guntbert> th0mz_: thanks - forgot ignore :)
[21:36] <th0mz_> stupid spammer
[21:37] <guntbert> th0mz_: and remember and tell: don't ever click on such a link :)
[21:38] <guntbert> !ops | ctcp flood - please set +R
[21:38] <niko> guntbert: already done
[21:39] <guntbert> niko: see it , thank you
[21:48] <smoser> mathiaz, ping
[21:48] <mathiaz> smoser: hi
[21:48] <smoser> would you be willing to sponsor an ec2-init upload for me?
[21:49] <smoser> just uploaded build to ppa, i want to give it a quick final test from there and then be good.
[21:50] <mathiaz> smoser: sure - np
[21:50] <mathiaz> smoser: if you could post the bzr branch, or the debdiff
[21:51] <smoser> branch coming
[21:54] <smoser> mathiaz, lp:~smoser/ec2-init/ec2-init.devel.pkg
[21:55] <smoser> mathiaz, its "start in 9 hours" https://launchpad.net/~smoser/+archive/ppa/+builds?build_state=pending
[21:57] <gcleric> exit
[21:58] <smoser> i just checked it builds in a sbuild here. so that shouldn't be a problem
[21:59] <smoser> mathiaz, i've got to step out, and will check back later. let me know if you need anything else.  i know that it's annoying that my branch has no common ancestor with lp:ubuntu/ec2init. i have to fix that.
[22:24] <pting> is there a designated script to reset the mysql debian-sys-maint user?
[22:47] <unit3> do you mean reset its password?
[22:47] <unit3> I don't think so, I think you've just gotta edit the conf file and the mysql database entry.
[22:47] <unit3> but I could be wrong.
[22:54] <pting> unit3, ya, the password. i just wanted to sync up all the db user/passwords in my farm
[22:55] <unit3> gotcha. well, that's not that hard.
[22:55] <unit3> if you sync the "mysql" table between them, then the mysql auth info is synced.
[22:55] <unit3> and then you just need to sync the less /etc/mysql/debian.cnf file.
[22:55] <pting> unit3, true. it would be nice if it was in the preseed process
[22:55] <pting> ya, i'll do that. thx
[22:55] <unit3> erm -less. ;)
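unit3's two steps (update the grant inside mysql, then keep /etc/mysql/debian.cnf in sync) could be scripted roughly like this. `set_debian_sys_maint_pw` is a hypothetical helper, and the `SET PASSWORD ... = PASSWORD(...)` syntax is the form these MySQL 5.0/5.1 releases used:

```shell
# Reset the debian-sys-maint password and keep debian.cnf in sync.
set_debian_sys_maint_pw() {
    local newpw="$1" cnf="${2:-/etc/mysql/debian.cnf}"
    # step 1: change the password mysqld knows about
    mysql -u root -e "SET PASSWORD FOR 'debian-sys-maint'@'localhost' = PASSWORD('$newpw'); FLUSH PRIVILEGES;"
    # step 2: rewrite both password lines ([client] and [mysql_upgrade]) in debian.cnf
    sed -i "s/^password *=.*/password = $newpw/" "$cnf"
}
```

Run on each host in the farm (as root, since debian.cnf is root-only) to converge them on one password.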
[22:59] <Hypnoz> I have a few nfs mounts in /etc/fstab that aren't mounting on bootup, but mount -a works
[22:59] <Hypnoz> anyone know a better way to put the line in fstab so it doesn't timeout
[23:00] <unit3> do you have the _netdev option on them so it knows to mount them only after the network is up?
[23:01] <Hypnoz> nah i heard about that
[23:02] <Hypnoz> so it would be NFSpath localpath nfs _netdev 0 0
[23:02] <Hypnoz> ?
[23:13] <unit3> Hypnoz: yeah, or nfs4 for the filesystem type.
[23:13] <unit3> depending on your server.
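Putting unit3's suggestion together, a sketch of the resulting fstab line (server and paths are placeholders):

```
# _netdev defers the mount until the network is up; use nfs4 if the server speaks NFSv4
server:/export  /mnt/export  nfs  _netdev,rw  0  0
```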