[00:09] <kirkland> hggdh: hi, still around?
[00:14] <hggdh> kirkland: still. Running tests now. I added a euca-get-console-output to the script
[00:16] <kirkland> hggdh: cool, i'm still around
[00:33] <hggdh> kirkland: OK. on a distributed env it does not seem that I can ssh into an instance from the CLC
[00:34] <hggdh> kirkland: I have to get my wife, will be back in 40
[00:36] <hggdh> kirkland: I think there is a bad route somewhere. I hope... the instance is not accessible from the CLC
[00:36] <hggdh> brb
[00:38] <RoAkSoAx> kirkland, i just had an idea... Desktop app for manpages. I don't think that exists, does it?
[02:02] <storrgie> hello anyone avail?
[02:05] <storrgie> the versions of qemu-kvm, libvirt and virt-manager are REALLY far behind... is there a good ppa?
[02:23] <hggdh> kirkland: ping
[04:09] <kirkland> hggdh: just passing through
[04:09] <kirkland> hggdh: what's up?
[04:27] <PC_Nerd101> Hi, when I'm using apt-cacher do I have to change the lists on all machines or just the caching server when I'm changing the mirror I update from ?
[04:28] <twb> What's the format of an apt-cacher client's sources.list?
[04:28] <twb> Just one line will do
[04:29] <RoAkSoAx> kirkland, still around?
[04:32] <PC_Nerd101> um - just let me check
[04:39] <PC_Nerd101> http://paste.ubuntu.com/411430/ - this is my current sources.list for a client to the apt-cacher
[04:41] <twb> I see no reference to apt-cacher there
[04:42] <PC_Nerd101> oh hang on....
[04:45] <PC_Nerd101> yes - but I have a line in /etc/apt/apt.conf.d/01proxy that redirects traffic to my cache....:  Acquire::http::Proxy "http://192.168.1.2:3142";     - sorry - I haven't worked with these machines in a while ...  just remembering how I originally configured them
[04:46] <twb> OK, if that's how apt-cacher works, then you shouldn't need to change the client.
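The 01proxy mechanism PC_Nerd101 describes is plain apt configuration — a one-line fragment like this (proxy address taken from the discussion above) sends every apt/aptitude fetch, package lists included, through the cacher:

```
# /etc/apt/apt.conf.d/01proxy
Acquire::http::Proxy "http://192.168.1.2:3142";
```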
[04:46] <PC_Nerd101> so then how would I specify the new mirror in apt-cacher?
[04:47] <twb> One of them works by prepending the cacher URL to the sources.list entries; and THAT needs the clients to be updated to use a different upstream mirror.
[04:47] <twb> I thought that was apt-cacher.
[04:48] <PC_Nerd101> I don't know - but there's a lot of results on google about setting up with apt-cacher and apt-mirror, so that might be related to what you're thinking.
[04:48] <PC_Nerd101> I need to change these mirrors to my ISP who hosts an unmetered mirror... and seeing as I have about 6 ubuntu machines in the house it's nice to cut down 5/6ths of my updates.
[04:48] <twb> There are like twenty different partial mirroring systems
[04:49] <PC_Nerd101> I just run apt-cacher because it seemed the most common across installations similar to mine.
[04:49] <twb> If you have unmetered access, I suggest you just run debmirror and create a local copy of your (release, arch) tuple.
[04:49] <twb> debmirror has Just Worked for me, whereas apt-cacher and apt-proxy were nothing but flaky
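twb's debmirror suggestion would look roughly like this. The mirror host, release, arch and target path are all placeholders, so the sketch assembles and prints the command rather than executing it:

```shell
# Illustrative only — host, release, arch and target path are placeholders,
# so the command is assembled and printed rather than executed.
cmd="debmirror --host=archive.ubuntu.com --root=ubuntu --method=http \
--dist=hardy --section=main --arch=i386 --nosource --progress \
/srv/mirror/ubuntu"
echo "$cmd"
```

Clients would then point their sources.list at the local mirror instead of the upstream one.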
[04:50] <PC_Nerd101> I don't particularly want to host my own mirror, it seems a bit of overkill... and I don't particularly want to download the entire mirror for my two distros..  I just want to cache the 20 or so regular packages I use on the servers.
[04:51] <twb> It's not as big a deal as you seem to think.
[04:51] <PC_Nerd101> hmm - well for now I just want to make a config change rather than an installation change regarding how I run the servers
[04:51] <twb> IIRC (hardy, i386, main, no-source) is only like 4GiB
[04:52] <twb> PC_Nerd101: I've told you what I know about apt-cacher.  I can't help you with it any more than  that.
[04:53] <PC_Nerd101> that's kewl - but I might stick with searching around for a solution rather than installing a mirror...  but thanks for your advice (I will probably follow up when I start using netboot and an image).
[05:10] <darkk^> Can anyone comment on KVM stability in 8.04 release? I have one hardy-based server that I'm going to use as host and I'm choosing between up-to-date virtualbox backported to hardy and kvm/hardy.
[05:11] <lifeless> grab lucid
[05:11] <lifeless> nearly released, and like hardy it is an LTS.
[05:12] <darkk^> I don't think it's the best idea to deploy a beta in production. I'm going to migrate VMs to a lucid host in a couple of months.
[05:13] <ScottK> Depends on how soon you need to be in production.
[05:13] <ScottK> Sensible, IMO.
[05:13] <lifeless> darkk^: it won't be beta in a couple of months ;)
[05:14] <darkk^> moreover, I see no reason to upgrade hardy to lucid as that box is quite trashed and will be temporary host anyway :-)
[05:15] <twb> Pfft.
[05:15] <twb> I started targeting Lucid for a one-month project, back in November.
[05:15] <twb> Big surprise: it's unlikely to ship before Lucid does
[05:16] <darkk^> so in fact I consider two ways 1) use kvm/hardy and migrate to kvm/lucid later (easier migration) 2) use virtualbox and migrate to kvm/lucid (MAYBE, better stability, I don't know if hardy/kvm is stable enough - it was long long ago)
[05:17] <twb> I tried (1) a couple of months ago, using the sanctioned upgrade mechanism.  It was laughably failuriffic.  So I went back to using aptitude for dist upgrades, like the Goddess intended.
[05:19] <darkk^> (1) saying "migrate to kvm/lucid" I mean "migrate to another physical box, running lucid"
[05:20] <twb> Oh, you mean the *host* node
[05:25] <PC_Nerd101> regarding apt-cacher (used as a caching proxy), does the client download its package lists via the proxy or direct from the mirror, in which case how can I make sure that the mirror my proxy path_map's to is synced with the mirror the package lists are from ?
[05:29] <twb> Proxies are intended to work without direct access to the target, so I'd be VERY surprised if anything using apt-proxy/apt-cacher/whatever went directly to the upstream host.
[05:32] <PC_Nerd101> ok - so when the client is configured using the /etc/apt/apt.conf.d/01proxy  (Acquire::http::Proxy "server";) config - the package lists will be redirected to whatever the proxy does?  fantastic :) ty :)
[05:33] <PC_Nerd101> if it does do that, why does the output from sudo aptitude update still list the repos in my sources.list file?  is it just passively changing the location without "notifying" the current machine that the proxy changed its request?
[05:46] <twb> PC_Nerd101: just test it by firewalling off direct connections
[05:46] <PC_Nerd101> where would I firewall it from - the proxy machine or the client machine?
[05:47] <twb> That would depend on your privileges and on the network layout
[05:47] <twb> It doesn't really matter where
[05:52] <PC_Nerd101> internet connected router is at 192.168.0.2 - servers are all on 192.168.1.x, cacher on 192.168.1.2, all machines proxy through this machine ( whether in or out of the subnet via port forwarding)... I want to ensure that connections are indeed to the mirror specified in the cacher and not their own sources.list definitions
[05:53] <PC_Nerd101> so I would firewall on the caching machine?
[05:57] <twb> Whatever.
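A concrete version of twb's firewall test, as an iptables-restore fragment for a client machine (the cacher address 192.168.1.2 comes from the discussion above). Load it with `sudo iptables-restore -n < file` (`-n` appends without flushing existing rules); if `sudo aptitude update` still succeeds afterwards, the client really is fetching through the proxy:

```
*filter
# Reject direct HTTP to everything except the apt-cacher box
-A OUTPUT -p tcp --dport 80 ! -d 192.168.1.2 -j REJECT
COMMIT
```

Delete the rule again with `sudo iptables -D OUTPUT -p tcp --dport 80 ! -d 192.168.1.2 -j REJECT` once the test is done.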
[05:58] <PC_Nerd101> ok.
[05:58] <PC_Nerd101> thanks
[06:01] <darkk^> you can also use wireshark/tshark or get netflow from the router to check your hypothesis without firewall rules modification.
[06:04] <twb> Nod
[06:52] <rcsheets> is there a more up-to-date version of https://help.ubuntu.com/community/UEC that's written for Lucid?
[07:24] <KurtKraut> rcsheets, AFAIK, help.ubuntu.com content is updated when the final version is released.
[07:35] <rcsheets> hmm. it seems like someone somewhere would be writing the new documentation before it goes to help.ubuntu.com
[07:37] <twb> Surely it's backed by a svn docbook repo or something
[07:37] <rcsheets> well i dunno, it's a wiki
[07:37] <rcsheets> are you suggesting the svn docbook repo is what the wiki stores the edits in?
[07:43] <twb> If it was a *good* wiki, it'd be backed by a VCS :-/
[07:43] <lifeless> it is
[07:44] <lifeless> a horrible one, but one.
[07:44] <lifeless> by horrible I mean, CVS-like.
[07:44] <twb> lifeless: is help.u.c running some kind of sucky Canonical-internal wiki?
[07:44] <lifeless> no
[07:45] <lifeless> just moin
[07:45] <twb> I didn't think moin's VCS backends were production-ready
[07:45] <lifeless> its flat file store on disk is effectively a VCS
[07:45] <lifeless> it would eat your brain to use it
[07:45] <lifeless> but nevertheless, its a vcs
[07:45] <twb> I've migrated from moin's plain text backend
[07:46] <twb> Calling it a VCS is a bit of a stretch
[07:46] <ttx> kirkland, hggdh: I have no status on the B2 validation work items from https://blueprints.launchpad.net/ubuntu/+spec/server-lucid-uec-testing -- please update
[07:46] <twb> It'd be like calling LVM snapshotting a VCS
[07:47] <twb> lifeless: but yeah, FYI the moin devs are/were working on VCS backends, so maybe you can migrate to git or (bleh) bzr in a couple of years
[07:47] <rcsheets> right, in a "you could use it for that" sense
[07:48] <rcsheets> well i'll have to continue my quest for lucid docs later. thanks for the info :)
[07:48] <twb> When I migrated wiki.darcs.net from moin to gitit, I even imported the old commit history (except the spam).  That was fun!
[08:38] <ttx> soren: around ?
[09:01] <indigoparrot> hi there, anyone a Bacula user here?
[09:05] <twb> !anyone
[09:07] <indigoparrot> I'm running a bacula server on Ubuntu and I can't get it to connect to any of my Windows clients. I've checked the IPs, passwords and users, all of which are correct. Any ideas?
[09:32] <bigon> is'nt that bug https://bugs.edge.launchpad.net/ubuntu/+source/rng-tools/+bug/544545 a bug for ubuntu-server team?
[10:06] <indigoparrot> bumping my question from an hour ago - I'm running a bacula server on Ubuntu and I can't get it to connect to any of my Windows clients. I've checked the IPs, passwords and users, all of which are correct. Any ideas?
[10:09] <darkk^> strace and/or wireshark it to check if you have proper connectivity (e.g. maybe firewall is blocking connects)
[10:09] <indigoparrot> I've telnet'd from the ubuntu box (director) to the bacula-fd servers (windows box) with no problem, it's accepting incoming connections on the right port
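Once TCP connectivity is confirmed (as the telnet test above shows), a frequent cause of director-to-FD failures is a Name/Password mismatch between the two config files. The names, address and password below are placeholders, not taken from indigoparrot's setup:

```
# bacula-dir.conf on the Ubuntu director
Client {
  Name = winbox-fd
  Address = 192.168.1.50
  FDPort = 9102
  Password = "fd-secret"    # must equal the Password in the FD's Director resource
  Catalog = MyCatalog
}

# bacula-fd.conf on the Windows client
Director {
  Name = bacula-dir         # must equal the director's own Name
  Password = "fd-secret"
}
```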
[10:12] <Schmidt> Am I right in assuming that it is futile to NAT traffic between two different private IP-networks? (range1 is 10.10.1.0/24 and range2 192.168.30.0/24)
[10:16] <darkk^> Schmidt, what do you mean saying "futile" ? It's possible. Private IPs do not differ from public ones from NAT's point of view. :-)
[10:25] <PC_Nerd101> does apt-cacher use the local sources.list to determine which mirror to request from?  or does it simply make the exact request ( to the requested mirror) on behalf of the client?
[10:36] <sherr> PC_Nerd101: it uses the proxy defined in the apt/preferences file.
[10:37] <sherr> PC_Nerd101: AFAIK, clients use the proxy, not connecting to the repos directly (tail the apt-cacher logs?).
[10:37] <sherr> PC_Nerd101: Although I use apt-cacher-ng not apt-cacher.
[10:38] <Schmidt> darkk^: I meant "not possible", I thought computers dropped traffic going between two private ip-networks because of the spoof problem...
[10:38] <Schmidt> My actual problem was solved though :)
[10:40] <darkk^> Schmidt, openvpn is your friend if you're going to pass traffic between two private networks via public internet :-)
[10:43] <Schmidt> darkk^: We will implement a VPN solution, it's in the pipeline, but this was a top prio thing
[10:43] <Schmidt> We could actually solve it with ssh-tunnels
[10:43] <Schmidt> (something I am quite new to)
[10:44] <darkk^> right, openssh supports both dumb port forwarding and something VPN-like via TunnelDevice.
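The two ssh modes darkk^ mentions can be sketched as an ~/.ssh/config fragment. Host names and addresses here are made up; the tun-based tunnel additionally needs root on both ends and `PermitTunnel yes` in the server's sshd_config:

```
# Plain port forwarding: local port 8080 reaches 192.168.30.5:80
# on the far side of the gateway
Host officegw
    HostName gw.example.com
    LocalForward 8080 192.168.30.5:80

# VPN-like layer-3 tunnel over tun devices
Host officevpn
    HostName gw.example.com
    Tunnel point-to-point
    TunnelDevice 0:0
```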
[11:53] <binBASH> moin
[12:02] <johe|work> even moin
[12:03] <binBASH> anyone knows how to enable vnc for vms running in cloud?
[13:21] <ttx> smoser, kirkland: ping me when you get up
[13:22] <smoser> ttx, here.
[13:23] <ttx> smoser: o/
[13:23] <ttx> smoser: see pm
[13:35] <kirkland> ttx: here
[13:35] <ttx> kirkland: I finish with smoser and I'm all yours ;)
[13:35] <ttx> kirkland: see my message a few hours ago about B2 tests signoff
[13:41] <zul> ttx: upgrading from intrepid to lucid is supported?
[13:41] <ttx> zul: no
[13:41] <kirkland> ttx: i have no pm from you
[13:41] <ttx> kirkland: it was a public message:
 kirkland, hggdh: I have no status on the B2 validation work items from https://blueprints.launchpad.net/ubuntu/+spec/server-lucid-uec-testing -- please update
[13:42] <kirkland> ttx: right, there were issues, hggdh was still running them when i went to bed last night
[13:42] <kirkland> ttx: i'm looking to hear back from him this morning
[13:42] <ttx> kirkland: ok
[13:43] <ttx> kirkland: makes sense, let's wait a little
[13:56] <hggdh> kirkland: ttx no luck last night, same ssh timeouts
[13:56] <hggdh> kirkland: ttx I will mark it as postponed
[13:58] <ttx> hggdh: it's not really postponed, it's not done... since now you'll move to test the B2 release rather than the B2 candidate (and that's another work item)
[13:59] <kirkland> ttx: hggdh: okay, i'll sign off on your tests, but we need to get a bug opened about the ssh timeout issue
[13:59] <kirkland> hggdh: open a bug, and attach all the logs you can
[13:59] <kirkland> hggdh: i'll look into the eucalyptus side
[14:00] <ttx> hggdh: well, it's "done with some failures to investigate"
[14:00] <kirkland> ttx: and I'll need mathiaz to look into the test suite side (in case it's the test that's broken)
[14:00] <ttx> hggdh: ideally we'd have a report saying what worked and what didn't, to use as a data point when we'll compare with future tests
[14:02] <PC_Nerd101> I'm attempting to upgrade ( server) from 9.10 to 10.04 beta2 however sudo do-release-upgrade --devel-release says there was no new release found.  I took the command from the LucidUpgrades page on community help.  Any suggestions on what I should check?
[14:02] <kirkland> hggdh: poke me when you have that bug filed, and i'll sign off on the b2-candidate tests
[14:02] <kirkland> hggdh: also, create a junk bzr branch, and check in logs of your results
[14:03] <kirkland> hggdh: we need to come up with a better way of tracking "proof" that stuff worked at the milestones, but bzr will work for now
[14:03] <smoser> soren, ping
[14:03] <hggdh> ack
[14:09] <PC_Nerd101> bump* re. do-release-upgrade --devel-release reporting no new release.   Is this correct or is the documentation incorrect?
[14:09] <smoser> soren, ping regarding bug 524020. i attached a patch to trunk, if you'd like me to put a branch for sponsor for lucid i can.
[14:22] <sherr> PC_Nerd101: On 9.10, I just did "sudo do-release-upgrade -d" and it works.
[14:22] <sherr> PC_Nerd101: I aborted it at the (very) end of course :-)
[14:22] <sherr> This is 9.10 32bit desktop (laptop) though
[14:23] <PC_Nerd101> ok - well I just disabled the /etc/apt/apt.conf.d/01proxy file to disable its proxy connection to apt-cacher... and got it started and (seemed to be) working.. so I suspect that somewhere along the line apt-cacher might not be getting the updates....  I'm looking at bug reports now to see if I can spot anything...
[14:23] <PC_Nerd101> * I'm on 32 laptop as well, partitioned with ntfs on sda1 and 9.10 on sda3
[14:25] <aurigus> Does hdparm -t test write speeds? Or read speeds only?
[14:25] <sherr> aurigus: what does the manual page say?
[14:26] <aurigus> read
[14:27] <aurigus> does anyone have a handy command to test drive write speeds
[14:29] <aurigus> just discovered zcav
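For the write side of aurigus's question, a rough sequential test can be done with plain dd (hdparm -t only measures reads). The scratch path is an assumption — put it on the filesystem you want to measure. `conv=fdatasync` forces the data to disk before dd reports, so the page cache doesn't inflate the figure:

```shell
# Write 64 MiB of zeroes and flush to disk before timing stops;
# the last line of dd's output is the throughput summary.
result=$(dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "$result"
rm -f /tmp/ddtest
```

For per-zone throughput across the whole disk, zcav (from the bonnie++ package, as discovered above) is the more thorough tool.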
[14:46] <PC_Nerd101> sherr: I've checked bug reports and upgraded both client and cacher... I cannot find the cause of this issue.  I've checked and I can upgrade the client when it is not set through the cache proxy..  but it won't work when I have it through the proxy...
[14:48] <PC_Nerd101> sherr: further - When I update the "signature" without the proxy, cancel, reenable the proxy and attempt to upgrade again, I get "Failed Upgrade tool signature. ....  There may be a network problem." ..  any ideas?
[14:49] <sr1n1> Hi, how do I output something to the console-output from an init script on an EC2 machine?
[14:49] <sherr> PC_Nerd101: Sorry, no. Some of this might be the design of the program (do-release-upgrade). I don't know.
[14:50] <PC_Nerd101> sherr: no problem - so would you say the only current alternative ( that you know of) would be to simply download the updates to all machines I want to update to? - ie, not cache it?
[14:51] <sherr> PC_Nerd101: sorry, no idea. I'd probably do some research on the do-release... program and operation. Or wait for release.
[14:51] <hggdh> kirkland: bug 559230 opened, I am saving the logs
[15:06] <smoser> ttx, why did you assign bug 523148 to kirkland
[15:06] <smoser> it can't be fixed.
[15:06] <smoser> at least not without new code in libvirt
[15:11]  * ttx looks
[15:12] <kirkland> smoser: thanks, i'm going to mark wont-fix for lucid
[15:12] <ttx> someone targeted to lucid, probably before the investigation
[15:13] <ttx> smoser: in fact, kirkland did nominate it to Lucid :)
[15:13] <kirkland> ttx: that was when I thought it was fixed in 0.7.7
[15:13] <kirkland> ttx: and jdstrand and I were trying to build a case for or against 0.7.7
[15:14] <ttx> kirkland: ack, makes sense to wontfix it then
[15:14] <kirkland> done
[15:14] <smoser> and this is not fixed in 0.7.7 or libvirt trunk.
[15:14] <binBASH> Anyone knows how to enable vnc for the vms in uec please?
[15:18] <smoser> binBASH, you'll need to install a vnc server inside them.
[15:18] <binBASH> smoser: it's not possible to use the kvm inbuilt vnc?
[15:18] <smoser> graphical console access is not something that uec/ec2 offer, and the serial console offered is read-only.
[15:19] <binBASH> smoser: I think I started kvm manually and was able to use vnc.
[15:19] <smoser> you could fairly easily hack it, and enable 'vnc' as console.. but without some trickery, you'd then have to connect to the node controller to get at it.
[15:20] <sommer> \
[15:20] <smoser> binBASH, yes, libvirt/kvm do offer this. eucalyptus/ec2 do not expose it (and the libvirt xml that they write does not contain 'console: vnc' or whatever the syntax is)
[15:20] <binBASH> smoser: well I have to find a way how to get networking working, with this provider dilemma ;)
[15:21] <smoser> i haven't been following, so i don't know exactly the dilemma
[15:21] <binBASH> smoser: The provider gives 4 ips per server. they cannot be moved to another one.
[15:22] <binBASH> and I don't wanna do NAT because there's a 2 tb traffic limit per server
[15:22] <binBASH> I configured a br0 bridge, and when I launch kvm manually I can configure the networking inside the vm.
[15:22] <binBASH> via the vnc
[15:23] <smoser> binBASH, well, whatever you do manually inside there, you can do via script in --user-data-file=
[15:23] <smoser> when launched
[15:23] <smoser> ie:
[15:23] <binBASH> I think via dhcp it's not possible to configure an ip range per mac address as well
[15:23] <smoser> euca-run-instances --user-data-file=my-setup-networking-script.txt emi-xxxxxxx
[15:24] <smoser> that's probably going to fail though for the lucid images....
[15:24] <smoser> as nothing will happen until eth0 comes up
[15:24] <smoser> hmm..
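smoser's user-data idea would amount to a boot script that writes a static network config — something like this /etc/network/interfaces fragment, with purely illustrative addresses standing in for one of the provider's four static IPs:

```
# /etc/network/interfaces as written by the user-data script
# (addresses are illustrative placeholders)
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
```

The caveat smoser raises still applies: on images that wait for networking before running user-data, the script never gets a chance to run.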
[15:28] <binBASH> smoser: But see this is a real dilemma :P
[15:28] <smoser> binBASH, you might be able to set up each node as an "availability zone"
[15:28] <smoser> if IP addresses could be limited to an availability zone then you'd be set
[15:28] <smoser> but i dont know if they can
[15:29] <smoser> in ec2 they are not
[15:29] <binBASH> smoser: It would be nice if it's possible ;)
[15:29] <smoser> i can't think of a clean way that isn't going to require you modifying eucalyptus
[15:30] <amine_> hello, looking for a good doc in Ubuntu bridging.. any suggestions !
[15:30] <binBASH> smoser: I don't think you can modify eucalyptus
[15:30] <binBASH> class files.....
[15:31] <smoser> well you can rebuild
[15:31] <smoser> ppa and such
[15:31] <binBASH> java files available?
[15:31] <smoser> you *can* modify class files too... you just have to be uber smart :)
[15:31] <smoser> its all built from source
[15:31] <binBASH> :p
[15:31] <smoser> bzr branch lp:ubuntu/eucalyptus
[15:32] <binBASH> smoser: The eucalyptus web iface is kinda limited.
[15:33] <binBASH> smoser: I'll write to eucalyptus forum first, before doing such big changes ;)
[15:33] <smoser> probably a good idea
[15:34] <binBASH> maybe it's better to write a custom cloud iface anyways for what I really need ;)
[15:34] <binBASH> because I want to define data centers virtually and have people routed via geoip to vms
[15:35] <binBASH> dunno if this is all possible with eucalyptus
[15:36] <binBASH> I dunno as well if it's possible to move running vms.
[15:36] <binBASH> like I have a node in netherlands and one in usa and want to move nl to usa
[15:38] <smoser> it is not possible to move running vms on eucalyptus
[15:38] <binBASH> do you know if it's possible with kvm at all?
[15:53] <ttx> mathiaz: ping
[15:53] <mathiaz> ttx: o^1098
[15:54] <ttx> mathiaz: see pm
[16:01] <smoser> binBASH, it is possible with kvm, yes.
[16:01] <smoser> but it requires shared storage between the nodes
[16:01] <smoser> as, i believe, does even vmware
[16:02] <smoser> which means the migrate-across-an-ocean thing is not too terribly reasonable.
[16:04] <binBASH> smoser: I have shared storage
[16:04] <binBASH> glusterfs......
[16:05] <smoser> well, it can be done. kvm does support it, and libvirt exposes it. eucalyptus does neither.
[16:05] <binBASH> ok
[16:23] <binBASH> smoser: I wonder what happens if I start a dhcpd on each node.
[16:23] <binBASH> if the dhclient in the vm can use it. :)
[16:23] <smoser> it would probably depend on the type of setup you have.
[16:23] <smoser> i wondered what would happen there, though.
[16:24] <binBASH> because then I could configure the ranges there.
[16:24] <smoser> will a dhcp request from a node controller even get to your cloud controller ?
[16:24] <binBASH> Dunno
[16:24] <smoser> the fallout will be that euca-describe-images won't know the IP.
[16:24] <smoser> there is a mode in eucalyptus that allows for this.
[16:24] <smoser> it hackily tries to get the IP of the node via arp.
[16:25] <binBASH> I think it can't get the dhcp from cloud controller, because nodes are not on same switch
[18:41] <Rafael_> I posted this question a few days ago, but have not solved it yet: I use rsync and cron to copy a Windows client folder onto the Ubuntu server. For example, folder "test" on Windows is mounted on Ubuntu, and from there rsync copies it into another folder. At the moment, for this to work, I have to share the folder "test" on the Windows computer with everybody... I would
[18:41] <Rafael_>  like to know if there is a way to avoid sharing my Windows folder with everybody?
[18:59] <rcsheets`osu> while setting up grub-pc 1.98-1ubuntu4, i got this message:
[18:59] <rcsheets`osu> File descriptor 3 (pipe:[8183]) leaked on lvs invocation. Parent PID 2877: /bin/sh
[19:02] <rcsheets`osu> should i be concerned?
[19:39] <soren> ScottK: Care to take a look at bug 559462? I'm about to disappear for a week so it would be nice to get this handled before then.
[19:40] <ScottK> Looking
[19:40] <soren> ta
[19:41] <ScottK> soren: As long as you can find and archive admin with time for the New review, approved.
[19:41] <soren> ScottK: Awesome, thanks.
[19:46] <trappist> I've created a new user, and for some reason his processes are listed in ps by his uid, not his username.  this is giving me some permissions issues.  /etc/passwd looks right, where else should I look to resolve this?
[19:49] <guntbert> trappist: 1) getent passwd <user>   -- compare with 2) getent passwd <hisuid>
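guntbert's two lookups can be wrapped in a small sketch — if the by-name and by-uid entries differ, two NSS sources (e.g. /etc/passwd vs LDAP) disagree. "root" stands in here for the user being debugged:

```shell
# Compare the passwd entry found by name with the one found by uid.
# "root" stands in for the user being debugged — substitute yours.
user=root
uid=$(getent passwd "$user" | cut -d: -f3)
byname=$(getent passwd "$user")
byuid=$(getent passwd "$uid")
if [ "$byname" = "$byuid" ]; then
  echo "consistent"
else
  echo "mismatch: $byname vs $byuid"
fi
```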
[19:53] <trappist> ah getent, that's what I was trying to conjure up
[19:54] <trappist> they match
[19:55] <guntbert> trappist: and what does ls -ld /home/user show, the uid or the name?
[19:57] <trappist> the name... I think ps is doing this because the username is a tad long, 'telluride'
[19:57] <trappist> just saw the same thing on another machine (that's behaving) with the same user
[19:59] <birmaan> hoi
[20:02] <guntbert> trappist: you are right I just tested it by varying a username
[20:05] <trappist> so I guess I've narrowed it down to a bug in the god rubygem
[20:21] <trappist> guntbert: thanks
[20:31] <KristianDK> Hello! I just installed an Ubuntu Enterprise Cloud Cluster/Controller - where do i find the username/password for the web interface?
[20:31] <KristianDK> i tried admin/admin as described on the wiki, but it claims there is no such username
[20:38] <kirkland> mathiaz: ping
[20:38] <mathiaz> kirkland: o/
[20:38] <kirkland> mathiaz: could you have a look at hggdh's results for config_multi?
[20:38] <kirkland> mathiaz: those failed pretty badly
[20:38] <mathiaz> kirkland: where?
[20:38] <kirkland> mathiaz: i'm hoping for a bug in the testing script, or perhaps the configuration
[20:39] <kirkland> hggdh: hey, where's your results posted?
[20:39] <kirkland> mathiaz: i asked hggdh to commit his test results to a bzr branch
[20:39] <kirkland> mathiaz: for tracking across milestones
[20:39] <kirkland> hggdh: and what's the bug # you opened?
[20:42] <hggdh> kirkland, mathiaz -- /home/cerdea/uec-testing.tar
[20:42] <hggdh> kirkland: I have not yet commited to a public bzr
[20:43] <gzmask> guys, after I install UEC I can't use my account to login in ecalyptus web portal
[20:44] <hggdh> kirkland, mathiaz no, this is not the last runs, I will upload them there
[20:44] <gzmask> do I need to adduser first?
[20:44] <kirkland> hggdh: okay
[20:45] <kirkland> gzmask: admin/admin
[20:45] <kirkland> gzmask: is the default username/password
[20:45] <gzmask> kirkland: Error: Username 'admin' not found
[20:46] <hggdh> kirkland, mathiaz /home/cerdea/uec-test.tar.gz on tamarinf
[20:47] <hggdh> kirkland, mathiaz bug 559230
[20:47] <ScottK> lamont: Now that you're EOW, would you mind putting your postfix maintainer's hat on for a moment?
[20:48] <kirkland> gzmask: in the web frontend?
[20:48] <kirkland> gzmask: https://wherever:8443/ ?
[20:48] <gzmask> ya, at port 8443
[20:49] <kirkland> gzmask: hmm, sounds like your install is incorrect?
[20:49] <gzmask> can be, first time trying to
[20:50] <gzmask> but I can login in the bash shell using the account I created
[20:50] <RoAkSoAx> kirkland, howdy!! I meant to ask you if for the modularization we should drop the default setting of variables in the code, and just use the config file for defaults. Or, should we keep it?
[20:52] <KristianDK> gzmask, i have the same problem - i just installed like an hour ago
[20:53] <gzmask> KristianDK: have you figured something out yet? I am googling but nothing catches my eye yet
[20:53] <KristianDK> gzmask, nothing at all - everywhere it says "just type admin/admin" and it should work
[20:53] <KristianDK> ive been googling like everything
[20:54] <gzmask> hmmm.... maybe I should switch to Xen on ubuntu then
[20:54] <KristianDK> kirkland, you never heard about this issue before?
[20:55] <kirkland> KristianDK: gzmask: gimme a minute ... i just installed fresh
[20:55] <kirkland> KristianDK: gzmask: can you confirm that this is 10.04 Beta2 ?
[20:55] <KristianDK> kirkland, Im using 9.10
[20:56] <KristianDK> but i could try with the 10.04 beta too
[20:56] <gzmask> 9.10 x64 version ubuntu server iso
[20:56] <kirkland> KristianDK: gzmask: okay, i just tested 10.04 Beta2, and admin/admin works perfectly on first login
[20:56] <kirkland> KristianDK: gzmask: i don't have a 9.10 setup right now
[20:56] <kirkland> https://help.ubuntu.com/community/UEC/CDInstall
[20:56] <KristianDK> kirkland, i can give you SSH to my brand new setup if you want :P
[20:56] <kirkland> that should be instructions
[20:57] <kirkland> KristianDK: sorry, i'm slammed trying to fix 10.04 issues
[20:57] <kirkland> KristianDK: no time for 9.10, dr. jones
[20:57] <KristianDK> hehe, ok - np :D
[20:57] <kirkland> :-)
[20:57] <KristianDK> i guess i have to try installing the 10.04 then
[20:57] <KristianDK> :D
[20:57] <gzmask> gonna check my installation steps. thanks kirkland
[20:57] <kirkland> KristianDK: it's way better :-D
[20:57] <kirkland> gzmask: k
[20:57] <kirkland> gzmask: open a bug, if you can reproduce this again
[20:58] <kirkland> oh, also ...
[20:58] <kirkland> gzmask: KristianDK: are you sudo apt-get dist-upgraded to the latest 9.10 ?
[20:58] <kirkland> gzmask: KristianDK: there are a few really important euca bug fixes in there
[20:58] <KristianDK> no, its just a fresh install, no commands used at all
[20:58] <kirkland> gzmask: KristianDK: one that could solve your issue (database problems)
[20:58] <gzmask> not yet, gonna do it now
[20:58] <kirkland> KristianDK: oh, i'm sure that's it
[20:58] <kirkland> gzmask: ^
[20:58] <kirkland> sudo apt-get update && sudo apt-get dist-upgrade
[20:59] <kirkland> give it a few minutes to restart all of your services, etc.
[20:59] <KristianDK> I'll just give it a go, it wont take more than a few minutes it seems :)
[21:00] <KristianDK> kirkland, do you btw know which date they will launch 10.04?
[21:00] <genii> April 29
[21:01] <KristianDK> ty :D
[21:03] <KristianDK> gzmask, kirkland, after the update it seems to work :)
[21:03] <KristianDK> just for the record
[21:04] <gzmask> cool, my internet sucks, still updating
[21:12] <gzmask> worked, apt-get won again
[21:24] <xgpt> hey everyone, what SMTP server should I use for my home server? I don't need anything too fancy...simple is better.
[21:26] <guntbert> xgpt: why do you need an smtp server at all?
[21:27] <xgpt> because I want to start spamming viagra ads...kidding i just want to play around with one
[21:28] <cloakable> xgpt: dovecot-postfix :)
[21:29] <guntbert> xgpt: if you keep it strictly private it doesn't really matter - but I like dovecot
[21:29] <funkyHat> dovecot isn't an smtp server
[21:34] <guntbert> funkyHat: thx - I don't know what happened to my brain  :-/
[21:34]  * guntbert blushes
[21:40] <funkyHat> guntbert: hehe
[21:40] <kirkland> KristianDK: ;-)
[21:46] <lullabud> is there a command that will indicate if your hardware does hardware virtualization?  i have a collection of misc boxes, trying to find some spares to use for a test environment for UEC
[21:47] <nekro_> lullabud: try kvm-ok
[21:47] <nekro_> lullabud: also "modprobe kvm ; lsmod | grep kvm" will show you kvm_intel or kvm_amd if you have hardware support
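A third check, independent of the kvm tooling, is to look for the CPU flags directly (vmx = Intel VT-x, svm = AMD-V). Note the flags can appear in /proc/cpuinfo even when virtualization is disabled in the BIOS, so kvm-ok remains the more thorough test:

```shell
# Count logical CPUs advertising hardware virtualization flags:
# vmx = Intel VT-x, svm = AMD-V.
count=$(grep -c -E 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
  echo "hardware virtualization flags on $count logical CPU(s)"
else
  echo "no vmx/svm flags found"
fi
```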
[21:48] <lullabud> thanks
[21:49] <lullabud> w00t, that is exactly what i need, thanks!
[21:57] <eaglecoth> Hey, I followed the InternetConnectionSharing guide on ubuntu.org, it works flawlessly
[21:58] <eaglecoth> however, the setup is not kept after reboot; where is the proper place to configure internet sharing at bootup?
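One common way to make a working connection-sharing setup survive reboots (interface names and the rules file path here are assumptions): save the working rules once with `sudo sh -c 'iptables-save > /etc/iptables.sav'`, then re-apply them and the forwarding sysctl from /etc/network/interfaces:

```
# /etc/network/interfaces — LAN-facing interface of the sharing box
auto eth1
iface eth1 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    post-up sysctl -w net.ipv4.ip_forward=1
    post-up iptables-restore < /etc/iptables.sav
```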
[22:16] <deliverance> hey guys
[22:24] <pwnguin> am i crazy or does NTP not work correctly out of the box?
[22:25] <pwnguin> it looks like ntpdate-debian uses /etc/ntp.conf by default, which requires ntp to be installed
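On a box without the ntp package, /etc/default/ntpdate is the knob pwnguin is after — it can tell ntpdate-debian to use its own server list instead of /etc/ntp.conf. The server choice below is the Ubuntu default:

```
# /etc/default/ntpdate
NTPDATE_USE_NTP_CONF=no
NTPSERVERS="ntp.ubuntu.com"
```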
[22:56] <GhostFreeman> Has anyone managed to get Tokyo Cabinet and Tokyo Tyrant running on 9.10
[23:01] <soren> What on Earth is that?
[23:04] <GhostFreeman> its a clone of dbm
[23:04] <GhostFreeman> its pretty popular with all the nosql kids