[00:17] <lukehasnoname> Can I config sshd to listen on multiple ports?
[00:17] <Kamping_Kaiser> yes
[00:17] <lukehasnoname> Like, in /etc/ssh/sshd_config, put in more than one "Port" listing?
[00:17] <Kamping_Kaiser> have multiple Port lines
[00:17] <lukehasnoname> k
[00:18] <lukehasnoname> Because not all programs that remote in take a custom port switch, but when I'm off my dorm LAN I can't get on port 22
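A minimal sketch of the sshd_config being discussed (port 443 is an assumed second port, picked only because restrictive networks rarely block it):

```
# /etc/ssh/sshd_config -- sshd listens on every Port line listed
Port 22
Port 443   # hypothetical extra port for networks that block 22
```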
[00:19] <Kamping_Kaiser> i suspect yourdoingitallwrong (tm)
[00:20] <lukehasnoname> ;_;
[00:22] <lukehasnoname> Kamping_Kaiser: how then, do I get virt-manager to tunnel ssh on a custom port?
[00:22] <Kamping_Kaiser> lukehasnoname, no idea what that is (and i cant stay around to help, because i'm off to work)
[00:22] <Kamping_Kaiser> gl with it
[00:23] <lukehasnoname> later.
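For clients that offer no port switch (virt-manager tunnelling over ssh, for instance), the usual workaround is a per-host entry in the ssh client config; a sketch with assumed host names and port:

```
# ~/.ssh/config -- hypothetical host alias; names and port are assumptions.
# Anything that runs plain `ssh dorm` (including tools with no port
# option) picks up the Port directive automatically.
Host dorm
    HostName myserver.example.com
    Port 443
```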
[02:04] <leonel> packages.ubuntu.org says that hardy has postgresql 8.3.1 and really has 8.3.3; same for gutsy: says postgresql 8.2.6 and really has 8.2.9
[02:34] <nxvl> mathiaz: it looks like lucas is mad with you
[02:34] <nxvl> :S
[02:48] <ScottK> nxvl: lucas isn't the only one.
[02:52] <ScottK> mathiaz: That gems change got no support when it was discussed.  Why in the world did you upload it?
[02:53] <nxvl> ScottK: being mad doesn't help to fix things; it's better to discuss it calmed down
[02:54] <ScottK> nxvl: I'm not upset now.  I think it showed very poor judgement and disrespect for the community.
[02:54] <mathiaz> ScottK: I reread the thread and I didn't find there was strong disagreement about not doing it.
[02:54] <ScottK> mathiaz: Nobody was in favor.
[02:55] <ScottK> mathiaz: It is wrong all the way around.
[02:57] <ScottK> OK.  I take that back.   Darren Hinderer liked it.
[02:57] <ScottK> But given that he's a RoR developer he would.
[02:58] <ScottK> nxvl: Propose a fix then.
[02:58] <nxvl> what did i break now?
[02:58] <mathiaz> ScottK: IIUC, your main concern is that binaries installed by the gem commands are located in /usr/local/bin and that would take precedence over binaries in /usr/bin ?
[02:58] <ScottK> The RoR thing.
[02:59] <ScottK> From a packaging/technical perspective yes.
[02:59] <ScottK> Not having reviewed the code, I didn't understand the degree to which you were forking the package.
[02:59] <ScottK> That's a serious concern too.
[02:59] <mathiaz> ScottK: so you have two serious concerns ?
[03:00] <ScottK> Yes.
[03:00] <mathiaz> ScottK: or only the fact that there is a serious fork ?
[03:00] <ScottK> The fork and random versions taking precedence over system installed packages.
[03:00] <ScottK> Two.
[03:01] <mathiaz> ScottK: on the packaging bits, we're using hooks that are in upstream code repository - so when 1.3.0 is out, the debian maintainer will be able to use it.
[03:01] <mathiaz> ScottK: I agree that changing the patch system is the best move.
[03:01] <ScottK> I think that was VERY bad.
[03:01] <mathiaz> ScottK: OTOH all patches in 1.2.0 are included in upstream code.
[03:01] <ScottK> That's not relevant.
[03:02] <mathiaz> ScottK: so once 1.3.0 is packaged there isn't a patch system needed anymore
[03:02] <ScottK> Say lucas thought your change was wonderful and wanted to incorporate it?
[03:02] <ScottK> You've built your change set in a way that's incompatible with his package and made it much harder than it needs to be.
[03:02] <mathiaz> ScottK: incorporate it in the current version of debian (lenny) ?
[03:02] <ScottK> It shows you've no intent of working with Debian.
[03:03] <ScottK> Possibly.  The point is you did your change in a way that makes it hard to feed back to Debian.
[03:03] <ScottK> Whether he tries to get a freeze exception or not is up to him.
[03:03] <ScottK> We shouldn't presume.
[03:04] <ScottK> Ubuntu tries to show it works hard to push things back to Debian and then incidents like this put us in a very bad light.
[03:04] <ScottK> I think as far as that goes lucas' reaction to the change speaks for itself.
[03:04] <mathiaz> ScottK: if Lucas wants to incorporate our work, he can just grab the debian/operating_system.rb
[03:05] <ScottK> Keep in mind that as Debian developers go, he's very pro-Ubuntu.
[03:05] <ScottK> What possible benefit was there to change the patch system?
[03:05] <mathiaz> ScottK: that's where all the update-alternatives plumbing is done.
[03:05] <mathiaz> ScottK: none - it was not a good move, as I stated before.
[03:05] <ScottK> OK, but he still has to redo the patches or redo his package.
[03:06] <ScottK> ?? [22:01] <mathiaz> ScottK: I agree that changing the patch system is the best move.
[03:06] <mathiaz> ScottK: there isn't any patches to do to implement the update-alternatives system.
[03:06] <ScottK> But there are patches and you did change the patch system.
[03:06] <mathiaz> ScottK: debian/operating_system.rb relies on hooks that are already in upstream.
[03:07] <mathiaz> ScottK: correct - there is 1 patch.
[03:07] <ScottK> Right. So lucas could, if he wanted, either wait and get the new upstream or redo all the patches; what he cannot do is take advantage of the Ubuntu patch without rework.
[03:08] <mathiaz> ScottK: the current 1 patch in Ubuntu relies on upstream hooks IIRC - so lucas would have to update to the new upstream version first.
[03:08] <ScottK> OK.
[03:08] <ScottK> He still can't use the patch without rework.
[03:08] <soren> What's the name of the offensive package again?
[03:09] <mathiaz> ScottK: correct - as I said, changing the patch system wasn't a good idea
[03:09] <mathiaz> soren: libgem-ruby
[03:09] <mathiaz> soren: libgems-ruby
[03:10] <sommer> mathiaz: I think you may have had a typo earlier
[03:10] <sommer> mathiaz: about the patch system
[03:10] <mathiaz> ScottK: sommer correct
[03:11] <mathiaz> ScottK: I made a typo before - unfortunately it was a typo at a bad moment
[03:11] <ScottK> OK.
[03:12] <mathiaz> ScottK: so what was your other concern ?
[03:12] <ScottK> files installed by gems taking precedence over ones installed through the packaging system.
[03:13] <ScottK> This isn't the usual /usr/local situation where we can assume that if the admin installs something in there he wants it to take precedence.
[03:13] <mathiaz> ScottK: so binaries installed by gems are available in /usr/local/bin/ which takes precedence over package system.
[03:13] <ScottK> Yes.
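The precedence being debated here is plain PATH ordering, and can be demonstrated with a sketch (the `hello` command and the two temporary directories are made-up stand-ins for a gem symlink in /usr/local/bin and a packaged binary in /usr/bin):

```shell
#!/bin/sh
# Show that the earlier PATH entry wins: this is why a gem-installed
# symlink in /usr/local/bin shadows a dpkg-installed /usr/bin binary.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/local-bin" "$tmp/usr-bin"   # stand-ins for the real dirs

printf '#!/bin/sh\necho from-local\n' > "$tmp/local-bin/hello"
printf '#!/bin/sh\necho from-usr\n'   > "$tmp/usr-bin/hello"
chmod +x "$tmp/local-bin/hello" "$tmp/usr-bin/hello"

# Same ordering as the default PATH: the local directory comes first.
result=$(PATH="$tmp/local-bin:$tmp/usr-bin" /bin/sh -c 'hello')
echo "$result"    # prints: from-local
rm -rf "$tmp"
```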
[03:14] <mathiaz> ScottK: why - he is using the gem command to install something that is not in the archive ?
[03:14] <soren> How are these different scenarios? I know nothing about gems or the issue at hand. I'm just curious.
[03:14] <ScottK> mathiaz: Gems embed the entire application stack that they need.
[03:14] <nxvl> soren: ubuntu-motu ML Lucas mail
[03:14] <ScottK> They have no idea what's already installed.
[03:15] <mathiaz> soren: imagine that you use easy_install and get a script in /usr/local/bin automatically.
[03:15] <ScottK> Except more crackish, less integrated, and more opaque.
[03:15] <ScottK> So a Gem always brings everything IT needs.  Regardless of the installed state of the system.
[03:16] <mathiaz> ScottK: right - and ?
[03:16] <ScottK> The Gem installing stuff in a location where it can find it, but is not in the system path is a reasonable compromise and about the best one could do today.
[03:16] <ScottK> mathiaz: So it's the RoR equivalent of DLL hell.
[03:17] <soren> ScottK: Ok. Even stuff that's already installed? It only assumes the presence of a ruby interpreter?
[03:17] <ScottK> soren: As I understand it, yes.
[03:17] <mathiaz> soren: yes - the gem command doesn't know about the ruby libraries installed by dpkg/apt.
[03:17] <ScottK> ez_install has at least been patched into submission to know about stuff that's already installed through that package management system.
[03:17] <ScottK> This is much worse.
[03:18] <mathiaz> soren: it still knows which one it has installed though.
[03:18] <soren> mathiaz: Well, it wouldn't have to know about packaging systems. It could just do an "import foo" (or whatever it's called in ruby) and see if it works. If not, go fetch the missing crack.
[03:18] <ScottK> The problem is that other packages will use the selected alternative in /usr/local/bin.
[03:20] <mathiaz> ScottK: ok - so if a package relies on a binary from a specific gem, and there is another version of the gem installed with the gem command, the deb package would use the version in /usr/local/bin instead of the one provided by the corresponding package.
[03:21] <ScottK> Right.  Or if there happened to be the same file installed via the packaging system (recall \sh's imagemagic example).
[03:22] <mathiaz> ScottK: don't remember - could you give me example ?
[03:24] <ScottK> The example he gave was, "I had this at one ocasion, there was this imagemagick gem and this module was only working with a special imagemagick version, so it shipped it together with the other cruft, but instead of installing it somewhere where this imagemagick lib didn't hurt, it was just a smartass and installed it in /usr/lib, overwriting the distro imagemagick."
[03:26] <mathiaz> ScottK: right - that was a problem with the upstream gem, which was bad. I don't see how the version I've uploaded would have made things worse in that case ?
[03:27] <ScottK> Right, but if it'd let it be installed in a 'normal' place instead of forcing things, then with your change, it's in /usr/local and even though it doesn't overwrite the distro version, the effect is the same.
[03:27] <mathiaz> ScottK: I take this as an example that there exists upstream gems that are wrong.
[03:28] <mathiaz> ScottK: I don't understand what you meant. what would be in /usr/local ?
[03:28] <mathiaz> ScottK: only the binaries declared by the gems would have a symlink in /usr/local/bin
[03:29] <mathiaz> ScottK: all the rest would end up in the usual place (/var/lib/ruby1.X/gems/imagemagick) if the gem was using the gem calls
[03:29] <mathiaz> ScottK: or in /usr/lib if the gem was bypassing everything
[03:29] <NCommander> nothing should be added to /usr/local by any package
[03:29] <NCommander> The right fix isn't fixing gems, it's packaging ruby gems into APT, just like the python and perl debian groups do
[03:29] <mathiaz> NCommander: the libruby-gem package doesn't add anything to /usr/local/. The gem command does everything.
[03:30] <NCommander> mathiaz, it symlinks things into /usr/local
[03:30] <NCommander> That's enough to be a policy violation
[03:30] <NCommander> or should I say
[03:30] <NCommander> It configures gems to do that
[03:30] <mathiaz> NCommander: *it* = the gem command, not while installing the libruby-gem package.
[03:30] <NCommander> A bad policy at best, since then apt can't remove everything correctly
[03:31] <NCommander> If you want to wreck the consistency of users' systems, then having gems' package manager exist at all is a miserable idea
[03:31] <NCommander> Neither perl's CPAN nor php's PEAR is supported
[03:31] <NCommander> We repackage all those modules
[03:33] <mathiaz> NCommander: slangasek responded on bug https://bugs.launchpad.net/ubuntu/+source/libgems-ruby/+bug/262063
[03:34] <ScottK> I think slangasek is right about that.
[03:35] <ScottK> We aren't actually installing stuff in /usr/local.  We're just a facilitator.
[03:35] <NCommander> Ok
[03:35] <NCommander> I concede the point
[03:35] <NCommander> But I still think it's bad practice to allow gems to install packages outside of APT's control
[03:36] <ScottK> Doesn't make it a good idea however.
[03:36] <NCommander> Having multiple package managers is kludgy at best
[03:36] <ScottK> Just because you are dealing the crack instead of injecting it doesn't make you innocent.
[03:36] <mathiaz> Can you install CPAN module directly from perl ?
[03:36] <NCommander> yes, you can, but it's deprecated
[03:37] <NCommander> It's only recommended if you're running unstable, and the module you need is not available via any other means, and even then it's still discouraged
[03:37] <NCommander> (it ends up in /usr/local I believe, so it will still override any package installed by APT later on)
[03:37] <ScottK> Just as you can install Python modules through distutils or ez_setup.  One of the big differences though is that even ez_setup knows what modules are already installed through the package management system.
[03:37] <mathiaz> right - but it's still possible. What is the difference between doing a gem install of something and downloading a tarball and ./configure; make; make install ?
[03:37] <NCommander> Because that should be a method of last resort
[03:37] <NCommander> If a package is not available via APT
[03:37] <NCommander> Fine, install it via gems
[03:38] <NCommander> But thats the only case
[03:38] <NCommander> The correct and proper solution is to package the gems individually
[03:38] <ScottK> But don't have the gems functionally replace stuff that's installed through the package system.
[03:38] <mathiaz> NCommander: I don't deny that. but what if they're not available ?
[03:38] <ScottK> mathiaz: Then don't put it in the path where it can mess other stuff up.
[03:38] <NCommander> mathiaz, the user should be able to use gems, I agree, but it should not go into the PATH, and not affect the general usage
[03:39] <NCommander> Make sure gems prints a giant warning label
[03:39] <mathiaz> ScottK: well - as stated by NCommander, if you install a CPAN module, it will override the system modules.
[03:39] <NCommander> I consider that a bug in CPAN
[03:39] <NCommander> It's because Perl offers no alternative
[03:39] <ScottK> mathiaz: That doesn't make it a good idea.
[03:39] <mathiaz> so what's the point of having /usr/local/bin on the path then ?
[03:39] <NCommander> mathiaz, that's for things users install themselves. It's dictated by the FHS
[03:40] <NCommander> that may have actually changed in recent years, I haven't needed a perl module that wasn't packaged in APT in a very long time
[03:40] <mathiaz> NCommander: exactly - and how does an end user install a ruby library ?
[03:40] <NCommander> sudo apt-get install libruby-gems-*name*
[03:40] <mathiaz> NCommander: via the gem command, instead of ./configure; make; make install
[03:40] <NCommander> I'm telling you
[03:40] <mathiaz> NCommander: and if it's not available ?
[03:41]  * NCommander feels like a broken record
[03:41] <ScottK> Right, but it doesn't just bring itself, it brings an entire application stack.
[03:41] <ScottK> And that's the difference.
[03:41] <NCommander> Then its the users responsibility to install it and possibly shoot themselves in the foot
[03:41] <mathiaz> ScottK: it brings the dependencies needed to make the gem run.
[03:41] <NCommander> Which can include binary modules
[03:41] <ScottK> Yes, whether they are already installed or not.
[03:41] <NCommander> Which may have bugs or break the ABI with things already installed
[03:41] <NCommander> Assume it installs an expat update that breaks the ABI
[03:41] <mathiaz> NCommander: correct - via the gem command - we're not trying to support ruby libraries installed via gem.
[03:42] <NCommander> Suddenly GNOME doesn't work on the next restart
[03:42] <ScottK> mathiaz: If I install a perl module or make install something I see what I get.
[03:42] <mathiaz> ScottK: you can also do that with the gem command
[03:42] <ScottK> If I install a gem, there's a whole train behind it that I don't necessarily get to see until it's too late.
[03:42] <ScottK> mathiaz: But that's not the typical RoR usage.
[03:43] <ScottK> mathiaz: I don't understand why each Gem can't just live in its own private namespace and not disturb anything.
[03:43] <ScottK> It's going to bring the whole stack anyway, so it's not like it causes more code duplication.
[03:44] <mathiaz> ScottK: what's the typical RoR usage ?
[03:45] <ScottK> Developer bangs out cool looking application, stuffs it into a gem module and delivers it and moves onto the next project.
[03:45] <NCommander> you install a gem
[03:45] <NCommander> gem installs libraries
[03:45] <mathiaz> ScottK: I agree that being able to teach the gem command to check if there is already a ruby library installed by dpkg/apt is another step in the right direction.
[03:46] <NCommander> You install another gem which installs another version of the same library
[03:46] <ScottK> mathiaz: I just don't understand why being in the path is of any benefit.
[03:46] <ScottK> All I see is downside risk.
[03:46] <NCommander> Having gems as a package manager exist separately is insane if the distribution provides a package manager
[03:47] <mathiaz> ScottK: I would argue for user friendliness
[03:47] <mathiaz> ScottK: if you install the rails gem, you'll get a rails binary
[03:47] <mathiaz> ScottK: the rails binary relies on the rake binary to be called.
[03:48] <mathiaz> ScottK: so the rails command doesn't work by default if you haven't modified your path
[03:48] <ScottK> mathiaz: So modify the path.
[03:48] <ScottK> I'm not at all convinced I shouldn't just revert this entire upload.
[03:48] <ScottK> It doesn't make any sense at all.
[03:49] <ScottK> Note: I'm not actually doing that.  That's just my perspective.
[03:50] <mathiaz> ScottK: well - the reason for doing this upload is so that you won't have to modify the path.
[03:51] <ScottK> Yes, you can just destroy your system instead and that will be better.
[03:51] <ScottK> mathiaz: Doesn't ruby have the equivalent of sys.path.append?
[03:52] <ScottK> In Python at least this is the most trivial thing to do in the world.
[03:52] <mathiaz> ScottK: by destroying the system you mean that the end user could install random binaries in /usr/local/bin ? How is that different from an end user using ./configure; make; make install ?
[03:53] <ScottK> If that had the potential to drag in lots of not clearly related files and supersede system functions, I'd agree.  It generally doesn't.
[03:53] <ScottK> The problem is that Gems aren't at all transparent about what they will bring with them.
[03:54] <ScottK> So the admin sees X and wants it and doesn't now about Y, Z, and AA.
[03:54] <ScottK> AA causes problems and he didn't even know it was there.
[03:54] <ScottK> now/know
[03:54] <mathiaz> ScottK: there is the dependency command in gems.
[03:55] <ScottK> So you're telling me that any admin who installs a gem has a clear understanding of the dependencies he's bringing and what that might affect?
[03:56] <ScottK> mathiaz: What's the problem with an installed gem extending its path to include what it needs?
[03:59] <mathiaz> ScottK: hm - it may be possible to patch the ruby interpreter to include /var/lib/ruby1.X/bin/ in its path if it exists.
[03:59] <ScottK> That would resolve most of my technical concern.
[04:00] <mathiaz> ScottK: however any shell scripts coming with the gem would work
[04:00] <mathiaz> ScottK: *would not*
[04:00] <ScottK> Right.
[04:01] <mathiaz> ScottK: but that would still require the end user to modify its PATH to include /var/lib/ruby1.X/bin/
[04:02] <mathiaz> ScottK: so that he can use the gem binaries directly.
[04:02] <mathiaz> ScottK: that's the issue the upload is trying to solve.
[04:02] <ScottK> Surely we can figure a way to add that to the environment for that user/gem.
[04:03] <mathiaz> ScottK: well - according to the LSB you can drop things in /etc/profile.d/
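mathiaz's /etc/profile.d/ idea can be sketched as a drop-in; the file name, the gem bin directory, and the append-rather-than-prepend choice are all assumptions here, not what was actually uploaded:

```shell
# Hypothetical /etc/profile.d/rubygems.sh drop-in (path and directory
# name are assumptions).  Appending instead of prepending means
# gem-installed binaries become reachable without ever shadowing
# binaries installed by the packaging system.
GEM_BIN=/var/lib/ruby1.8/bin
if [ -d "$GEM_BIN" ] && ! echo ":$PATH:" | grep -q ":$GEM_BIN:"; then
    PATH="$PATH:$GEM_BIN"
fi
export PATH
```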
[04:03] <ScottK> Put the main application in /usr/local/bin and then stuff all the dependencies in /var/lib/ruby1.X/bin/
[04:03] <mathiaz> ScottK: however, this violates the debian policy.
[04:04] <ScottK> Then the gem can be started, we just need a way to have it notice /var/lib/ruby1.X/bin/.
[04:04] <mathiaz> ScottK: well then all the binaries would end up in /usr/local/bin/
[04:04] <mathiaz> ScottK: let's take the example of rails
[04:05] <mathiaz> ScottK: the rails gem provides the rails binary.
[04:05] <mathiaz> ScottK: and depends on the rake gem.
[04:05] <mathiaz> ScottK: but you'd also want to have the rake binary available on the command line.
[04:05] <mathiaz> ScottK: so while installing the rails gem, it would pull in the rake gem
[04:06] <mathiaz> ScottK: the rake gem should install the rake command on the PATH since an end user may want to be able to use the rake command.
[04:06] <ScottK> Which puts us on the road to perdition.
[04:06] <ScottK> I see you want it entirely the way it is then.
[04:07] <ScottK> I guess there's no point in further discussion.
[04:07] <ScottK> I'll just have to consider if I want to take the heat for reverting it or not.
[04:17] <deleter> (linux newb) I'm trying to install Ubuntu server 8.04.1, but I keep running into this error - "Please insert the disc labeled: 'Ubuntu-Server 8.04.1 _Hardy Heron_ - Release i386 (20080701)' in the drive '/cdrom/' and press enter. Media change
[04:18] <deleter> i md5d the iso, verified the disc integrity, and am pretty sure the cdrom drive is not at fault, as it always fails at the same point (78% into the base system installation)
[04:19] <deleter> I tried the forums, but although I found others with the error, I could not make out a solution
[04:19] <deleter> anyone know what to do / have any ideas? Thanks
[04:22] <ScottK-laptop> deleter: Did you try more than one CD anyway?
[04:23] <deleter> yeah I'm on my 4th one...
[04:23] <ScottK-laptop> OK.
[04:23] <ScottK-laptop> No great ideas on my part then.
[04:32] <azteech> have you attempted to download the iso from a different location and burn it from that?
[04:34] <deleter> yes, I don't think it's the iso though because the hash matched the one online
[04:43] <Zelut> not sure if this is the right place, but can anyone tell me where my inputted ufw rules are stored?
[04:43] <wantok> /etc/ufw/ iirc
[04:44] <Zelut> I see before.rules and after.rules, but don't see any of my custom rules in those files.
[04:45] <wantok> you wont see the rules per-se, just the iptables
[04:45] <azteech> deleter, then I suggest trying another machine to do the download and burn and see if that helps ... if it doesn't, then it could be the drive you are using to read the disk ...
[04:46] <Zelut> wantok: right.  I'd like to manually edit a few of the iptables lines (i have a few in incorrect order)
[04:46] <Zelut> problem is I just can't see any of my rules in ufw or iptables syntax
[04:48] <Zelut> ahh, it's in /var/lib/ufw.
[04:59] <nealmcb2> ScottK, one question I have is what will best serve the average ruby or rails developer.  if gems is really popular, along with capistrano, and keeps things in sync itself, and deals with security issues, and works in a nice cross-platform way and allows users to track upstream better than ubuntu is likely to do, then we have to deal with that use case.  we could either explain to users how to do it ENTIRELY outside the package 
[05:00] <deleter> turns out it was the cd drive; use of a different one led to success, thanks for the ideas
[05:00] <ScottK> nealmcb2: None of which is an argument for putting every single gem that gets brought in when you install something in the system path.
[05:01] <ScottK> nealmcb2: Not to mention the social aspect of blowing off every developer who gave comment on the ML and gratuitously forking the package from Debian.
[05:03] <azteech> deleter: you are welcome ....
[05:03] <nealmcb2> I certainly hear the frustration, and also agree with steve that it is a tough situation.  I haven't looked enough at the issues to say what the best path forward right now is
[05:04] <nealmcb2> but I suspect we do want to figure out how to make it easy for users to use gems and capistrano
[05:05] <nealmcb2> too bad the conversation that was started back a few months ago never really got off the ground
[05:08] <nealmcb2> ScottK: finding ways to leverage the expertise of both ubuntu devs and ruby/gems devs is one challenge, from what I have seen
[05:08] <nealmcb2> (and debian :)
[05:09] <ScottK> We have a package management system and in the event of a conflict between that and something else, I have no doubt which we should go with.
[05:11] <nealmcb2> then at this rate from what I'm hearing I'm guessing that we won't be much of a platform for ruby.  but that is just a guess, since I haven't looked at it in detail
[05:12] <ScottK> The problem is that no attempt was made to try and make it work with the packaging system.
[05:12] <ScottK> Every language that has a packaging system has to do this.
[05:12] <ScottK> It's painful, but necessary work and they wanted a shortcut.
[05:13] <nealmcb2> that could well be the case.  but dealing with conflicts between packaging systems is even harder than dealing with conflicts within a single packaging system
[05:14] <nealmcb2> anyway I'm hardly the packaging expert.  I'm mainly trying to hold up a common use case and hoping we can address it
[05:15] <ScottK> Right.  That's the hard part.
[05:15] <ScottK> I thought we were having a good discussion towards compromise and all of a sudden he pulled back.
[05:16] <NCommander> nealmcb2, I'm not upset with you over this patch, I'm upset with the people who approved it.
[05:16] <NCommander> nealmcb2, and I can see it from your side of things, and at first glance, your solution isn't that bad until you realize what it means for APT :-)
[05:16] <NCommander> and I hope this doesn't discourage you from further ruby contributions
[05:16] <ScottK> nealmcb2: Was this your idea?
[05:17] <nealmcb2> and I'm hoping we can appreciate folks for putting possible solutions forward, and recognize the inherent difficulty of the problems, without too much unhelpful venting.
[05:17]  * NCommander re-reads the original bug description
[05:18] <nealmcb2> ScottK: nope - not my idea....
[05:18] <ScottK> OK.
[05:18] <NCommander> Well, I personally want to see the bad patch get wiped
[05:18] <nealmcb2> I did facilitate the server team meeting where it came up last week, but I was concentrating on the agenda, not the technical decisions
[05:20] <NCommander> The proper method is to collaborate with Debian on packaging gems individually
[05:23] <nealmcb> NCommander: how much do you know about how the average ruby and/or rails user works?  My sense is that gems is very widespread, but I haven't researched it a lot
[05:24] <NCommander> Very little
[05:24] <NCommander> I've tried Ruby on Rails
[05:24] <nxvl> are you still fighting about the gems issue?
[05:24] <NCommander> But I felt like I was fighting the tool more than anything else
[05:26] <nealmcb> yeah - I think that's our problem - lots of ubuntu/debian expertise and not enough ruby user perspective
[05:26] <NCommander> I find ruby on rails ATM to be more hype than being super-revolutionary.
[05:26] <NCommander> http://www.oreillynet.com/ruby/blog/2007/09/7_reasons_i_switched_back_to_p_1.html
[05:26] <NCommander> nealmcb, the way ruby does things with gems is the same problem we have when a user uses CPAN or PEAR directly
[05:27] <nealmcb> of course.  and it is a hard problem.
[05:27] <NCommander> (I'm familiar enough with gems to understand why the current setup is a bad thing, but I can't say I could build a gem now)
[05:27] <nxvl> Django is kewl
[05:27] <NCommander> nealmcb, the right (not the easy) solution would be to build a framework that can take a gem, and convert it to a debian package
[05:28] <nealmcb> NCommander: and have folks installing packages from random repos?  with no security backing?
[05:29] <NCommander> We can generate source packages that are part of the archive
[05:29] <NCommander> The Debian perl group have a set of scripts for quickly debianizing cpan modules
[05:30] <nealmcb> I think one aspect of this is that the ruby world is still moving very fast.  perhaps it will mature enough that our packaging will catch up.  anyway, I'm just hoping we can come up with a good answer, sooner rather than later
[05:31] <wantok> for non-root users doesn't cpan install into ~/.cpan?
[05:32] <NCommander> wantok, yeah
[05:32] <NCommander> My personal feelings on ruby ATM though are that generally it's more hype than anything else
[05:32] <NCommander> It requires you to fight to a specific methodology
[05:32] <NCommander> Reminds me of MFC actually
[05:32] <NCommander> s/fight/think
[05:33] <wantok> i dont know how ruby's thing works at all, but if it doesn't do the same thing it's not really the same as cpan at all
[05:33] <NCommander> wantok, gems doesn't support (to my knowledge) local user installations
[05:33] <NCommander> Perl and CPAN have almost 30 years of code behind them
[05:34] <wantok> i dont deny it, i just felt i should note the difference between the perl 'you can shoot everyone or just yourself' and the (perceived) gems 'shoot everyone, or anyone'
[05:37] <NCommander> I think with perl it is
[05:37] <NCommander> "You can shoot yourself in the foot, but six months later, you'll have no idea how you did it"
[05:38] <wantok> nm. you have regex to save yourself ;) *mwhwhahahaah*
[05:39] <NCommander> well, python is working to kill perl as the glue language
[07:15] <toshko> hi all
[07:15] <toshko> Soft RAID1 problems (invalid raid superblock magic), ubuntu server 8.04.1, anyone?
[07:18] <mm_202> Hey guys. I have a rather stupid question.  If I have a dir A, with dirs a,b,c, and a dir B, with dirs d,e,f.  If I do a mv B/ A/ will it overwrite dirs a,b,c?
[07:27] <owh> mm_202: No, it will move directory B inside of A, giving you A/a A/b A/c and A/B/d A/B/e A/B/f
[07:29] <mm_202> Sorry, I meant mv A/ B
[07:29] <mm_202> to just move the contents of A to B.
[07:30] <owh> mm_202: Well, go into the /tmp directory, then run mkdir -p A/a A/b A/c ... etc and test it for yourself.
[07:30] <mm_202> okay, will do.
[07:30] <mm_202> thanks, owh.
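owh's suggested experiment can be scripted; a sketch using the directory names from the question:

```shell
#!/bin/sh
# mv B/ A/ moves B *inside* A; it does not overwrite A's contents.
set -e
cd "$(mktemp -d)"
mkdir -p A/a A/b A/c B/d B/e B/f

mv B/ A/    # result: A/a A/b A/c and A/B/d A/B/e A/B/f
[ -d A/B/d ] && [ -d A/a ] && [ ! -e B ] && echo "B nested inside A; nothing overwritten"
```

To merge the contents of one directory into another instead, the individual entries have to be moved, e.g. `mv A/* B/`.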
[07:39] <owh> toshko: I cannot help you directly, but perhaps if you asked an actual question, someone here might be able to.
[08:10] <toshko> owh: well this is the problem: install, configure raid1 with sda,sdb and the following message appears on a random basis at the start up screen (this is from the syslog file but the same is shown at the start).
[08:10] <toshko> md: invalid raid superblock magic on sda
[08:10] <toshko> md: sda does not have a valid v0.90 superblock, not importing!
[08:12] <toshko> this is since ubuntu server 7.10 for me on several machines
[08:14] <owh> toshko: Are the machines running the same hardware?
[08:15] <toshko> no, it is not a hardware issue, because the machines are different and i changed the mb and psu on the last one
[08:15] <toshko> tested the hdds - no problem
[08:17] <owh> Given that you've been having this issue for some time, I'd recommend putting your question with relevant background information, including the things you've tried and the hardware involved into an email and sending it to the ubuntu-server list.
[08:18] <toshko> did it in the forum but will do in the list also
[08:18] <toshko> thanks
[08:19] <owh> toshko: Also, make sure that you don't get stuck into a single thought pattern, as-in "it's not hardware because...", you may well find the solution in a place where you didn't expect it.
[08:49] <CrummyGummy> Elo
[08:54] <CrummyGummy> In ubuntu-8.04-server-amd64.iso the pxeboot install installs the server kernel. Any ideas how I can change this?
[08:57] <owh> CrummyGummy: The last time I looked at this was a little while ago, so what I'm telling you is not going to be accurate, but IIRC, you can configure exactly what happens with the appropriate config file. I recall setting it all up with several boot images and menu options. As I said, this isn't directly going to help.
[08:58] <owh> CrummyGummy: The pxeboot process from memory works like a TFTP server which you can configure to use different boot images.
[08:59] <owh> CrummyGummy: It sounds like you're using a ubuntu-server boot image, rather than a workstation.
[08:59] <CrummyGummy> Sorry, I'm half asleep still.
[09:00] <CrummyGummy> The problem is that the server iso is installing a generic kernel, not the server kernel as expected.
[09:02] <owh> CrummyGummy: Uhm, which boot stanza are you using because I think it might be pointing at the wrong thing.
[09:03] <owh> (Bear in mind that as I said before, I've not done this for some time...)
[09:06] <CrummyGummy> This is the current http://pastebin.com/m1c460e6a
[09:07] <CrummyGummy> pxelinux.cfg
[09:08] <owh> CrummyGummy: Well that's using the ubuntu-installer initrd, so I'd not be surprised if it's using the workstation kernel.
[09:09] <owh> CrummyGummy: Where did the initrd come from?
[09:10] <CrummyGummy> That initrd came from the ubuntu-installer directory on the server iso.
[09:10] <CrummyGummy> (sorry, was on the phone)
[09:10] <owh> CrummyGummy: You're sure that's where it came from, as-in, no mistake?
[09:11] <owh> And while we're at it, there isn't another ubuntu-installer directory lying around anywhere?
[09:12] <owh> CrummyGummy: The way I implemented this at the time was to loop mount an .iso of the required installer and make sym-links to the right bits, so I could just change the iso mount and make it install something else.
[09:13] <CrummyGummy> I'm pretty sure that it came from that iso. Its the only amd64 iso I have on this server.
[09:14] <CrummyGummy> as in no workstation ones. I'll try again with the symlinks as suggested.
[09:15] <owh> CrummyGummy: There are no stray ubuntu-installer directories?
[09:15] <CrummyGummy> Running find.
[09:15] <owh> use locate -i
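[editor's note] owh's loop-mount-and-symlink setup from above might look roughly like this. Every path and filename here is an assumption for illustration, not something stated in the channel:

```shell
# Sketch: loop-mount the server ISO read-only, then symlink the TFTP tree
# at its netboot installer.  Swapping distros later is just a matter of
# mounting a different ISO at the same mount point.
sudo mkdir -p /mnt/server-iso
sudo mount -o loop,ro ubuntu-8.04-server-amd64.iso /mnt/server-iso

cd /var/lib/tftpboot
sudo ln -sfn /mnt/server-iso/install/netboot/ubuntu-installer ubuntu-installer
```

This keeps exactly one `ubuntu-installer` directory visible to the TFTP server, which also sidesteps the "stray directory" problem owh asks about below.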
[09:17] <elnewb> Hey guys.  I tried installing Ubuntu 8.04 Server on an old P3 box.  I put the disc in and booted, then it loaded the CD and displayed the menu.  When I clicked install, not the "try ubuntu livecd" option, it still took me to the live CD.
[09:17] <owh> elnewb: Well, the server CD doesn't have a LiveCD, so you're in the wrong room :)
[09:18] <elnewb> Owh: are you sure?  it gave me the option?
[09:18] <elnewb> it booted into the command line but it was running off the disc.
[09:18] <owh> It's possible that I'm getting old and grey and I'm wrong.
[09:21] <elnewb> Wait you are right..... just loaded the ubuntu iso in vmware
[09:21] <owh> Pfew. Thought my brain had finally had it :)
[09:22] <elnewb> This was at school today.  My teacher must have downloaded the wrong version.
[09:32] <owh> elnewb: Well before you download another one, check the MD5
[09:34] <elnewb> Nah, I don't think that's the problem.  I asked my teacher to download the server version for me because the students' access to the internet is filtered, so we are limited to direct downloading of documents (.doc, .xls and .pdf).
[09:35] <owh> elnewb: No, I mean, check which CD you have.
[09:35] <owh> elnewb: Not if it's corrupt or not :)
[09:35] <elnewb> owh: the iso will still be on the desktop of the computer that he downloaded it to
[09:35] <elnewb> it'll probably be named something like 8.04-desktop.iso
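[editor's note] owh's two suggestions (check *which* CD it is, and check it isn't corrupt) both come down to comparing the hash against the MD5SUMS file published next to the image. A sketch; the ISO filename is a placeholder:

```shell
# Hash the image you actually have...
md5sum ubuntu-8.04-server-amd64.iso
# ...then look for that hash in the MD5SUMS file from the same download page;
# the matching line also tells you which image (server vs desktop) it is:
grep "$(md5sum ubuntu-8.04-server-amd64.iso | awk '{print $1}')" MD5SUMS
```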
[10:03] <kraut> moin
[10:05] <owh> tag
[10:05] <owh> Or should that be 'tag ?
[10:40] <CrummyGummy> Okay, I've done it with symlinks to pxeboot stuff from the right iso. In the install process I chose Openssh server only. The kernel installed is linux-generic. Is this right?
[13:36] <nxvl> good morning
[14:04] <milestone> I have setup a mailserver (postfix+maildrop)
[14:04] <milestone> as a mailbox_command I have defined maildrop
[14:05] <milestone> so the user needs a $HOME/.mailfilter to function properly
[14:05] <milestone> since i made it generic, I have copied the .mailfilter to /etc/skel
[14:05] <milestone> when I create a new User, the file gets copied, but the permissions stay on root:root
[14:06] <milestone> any suggestions on where to tell that the permissions need to be updated as well?
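[editor's note] `adduser` normally chowns its `/etc/skel` copies to the new user, so root-owned files usually mean the copy happened outside it (e.g. plain `useradd` or a manual copy). A post-creation fixup might look like this; the username is a placeholder, and note maildrop is strict about filter-file permissions:

```shell
# Hypothetical fixup: hand ownership of the skel-copied filter to the user.
u=newuser   # placeholder username
sudo chown "$u":"$u" "/home/$u/.mailfilter"
sudo chmod 600 "/home/$u/.mailfilter"   # keep it private; maildrop dislikes loose modes
```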
[14:11] <acemo> is it possible to only reinstall grub and the mbr with the server cd?
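[editor's note] acemo's question went unanswered in-channel; the usual route is the server CD's rescue mode. A rough sketch, with device names as assumptions:

```shell
# Boot the server CD, choose "Rescue a broken system", let the installer
# open a shell in your root filesystem, then:
grub-install /dev/sda   # rewrite GRUB (stage1) to the MBR of the first disk
update-grub             # regenerate the GRUB legacy menu (/boot/grub/menu.lst)
```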
[15:11] <slicslak> i have two shadow files: /etc/shadow and /etc/shadow-   i need to copy some existing users from another system.  which file should i edit to put the password hashes in?
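[editor's note] `/etc/shadow-` is just the backup copy the passwd tools keep; the live file is `/etc/shadow`. A sketch for editing it safely (the hash below is a placeholder, not a real value):

```shell
# Edit the live shadow file with proper locking:
sudo vipw -s
# or import a hash non-interactively:
sudo usermod -p '$1$placeholderhash' someuser
```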
[16:45] <toyotafosgate> hey does anybody here know anything about raid?
[16:46] <toyotafosgate> anyone here?
[16:46] <toyotafosgate> had to sound like a retart
[16:46] <toyotafosgate> *retard
[16:46] <toyotafosgate> but i'm using pidgin for the first time to connect to IRC
[16:46] <Koon> toyotafosgate: well, I know something about raid.
[16:46] <toyotafosgate> fair enough
[16:46] <Koon> but maybe not enough, depends on your real question
[16:47] <toyotafosgate> so heres the deal: I have two hard drives mirrored (raid1)
[16:47] <toyotafosgate> they are mirroring only one partition
[16:47] <toyotafosgate> they are mirroring /home
[16:47] <toyotafosgate> i noticed that they were not synced
[16:47] <toyotafosgate> so i synced them
[16:48] <Koon> sorry, got to go now -- i'll see your question later if nobody else picks it up before
[16:48] <toyotafosgate> this caused the server to go down
[16:48] <toyotafosgate> alright
[16:48] <toyotafosgate> when it came back up the drive was no longer mounted
[16:48] <toyotafosgate> anyone have any ideas?
[16:54] <toyotafosgate> hey does anybody here know anything about raid?
[16:55] <toyotafosgate> i've got a pretty serious issue and i could really use someones help
[16:58] <Brazen> toyotafosgate:I know how to set up md raid and that's about it.  If I ever had a failure, I'd have to pull out some google-fu.
[16:59] <Brazen> toyotafosgate: but, have to checked to make sure there is an entry in /etc/fstab for the /home partition?
[16:59] <Brazen> oops
[16:59] <Brazen> s/have to checked/have you checked/
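[editor's note] Brazen's fstab suggestion is the right first step; a fuller checklist for a RAID1-backed `/home` that vanished after reboot might be (device names are assumptions):

```shell
cat /proc/mdstat                 # is the array assembled, both mirrors showing [UU]?
sudo mdadm --detail /dev/md0     # array state and member disks
grep home /etc/fstab             # is the mount still configured?
sudo mount /home                 # try mounting from fstab and read the error
dmesg | tail                     # any kernel complaints about the array or filesystem
```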
[17:09] <kees> zul: I hatez mysql.
[17:09] <kees> zul: amd64 is randomly failing still.  I just keep clicking "rebuild".  first 6 failures, then 1.
[17:10] <kees> zul: now 3.
[17:10] <kees> zul: though they are all ndb
[17:12] <kees> zul: now 3: loaddata_autocom_ndb ndb_alter_table2 ndb_auto_increment ndb_autodiscover ndb_autodiscover2 strict_autoinc_5ndb
[17:13] <kees> gAh
[17:13] <kees> zul: so far it's always been a subset of loaddata_autocom_ndb ndb_alter_table2 ndb_auto_increment ndb_autodiscover ndb_autodiscover2 strict_autoinc_5ndb
[17:13] <Goosemoose> anyone have a hardy preseed file?
[17:16] <jmedina> Goosemoose: what is a preseed file?
[17:19] <Goosemoose> a file used to push installs of ubuntu via a server to clients over pxe
[17:19] <Goosemoose> could be used via cd too
[17:25] <jmedina> Goosemoose: good, do you have any document about that?
[17:26] <Goosemoose> only for gutsy not hardy
[17:32] <jmedina> Goosemoose: share it please
[17:33] <lamont> jdstrand: if you're feeling generous, you can upload bind9 to address 252675, as mentioned in the bug
[17:33] <lamont> :-D
[17:33] <lamont> otherwise, I'll have to actually go figure out SRU stuff :-)
[17:35] <jdstrand> lamont: hmmm-- didn't debian decide not to actually do that in their security queue?
[17:35] <lamont> jdstrand: debian has 9.3.4 and 9.5.0
[17:35] <lamont> and disclaims 9.2.4 (sarge)
[17:36] <jdstrand> I must have just misremembered it then-- so only 9.4.2-P1 has the performance hit?
[17:36] <lamont> and, IIRC, upstream didn't actually bother to do the performance-improved P2 for 9.3
[17:37] <lamont> if they did, I'd still say "9.3.5-P2 for etch, or someone else can backport the fix to 9.3.4" :)
[17:37] <lamont> it's an uglier patch, and I think ISC makes good decisions on what to put into fix-version point releases
[17:38] <lamont> jdstrand: also, 175316, aka debbugs 459010 probably wants some security-review-like activity, as per the comments in 459010:
[17:38] <lamont> Is there security support for this part of BIND 9?
[17:39] <jdstrand> lamont: I'm inclined to upload it, but would like kees' opinion. also, have you used the patch in production anywhere?
[17:39]  * lamont is 9.5.0 everywhere
[17:39] <lamont> I see no reason that it should bypass SRU
[17:39] <lamont> I just don't want to be the one to deal with it... :-)
[17:40] <kees> I think it makes sense to SRU it.
[17:40] <jdstrand> I see. though I also see the argument that there is a regression, particularly if it affects a lot of people. but IIUC it is only for very high load servers-- is this accurate?
[17:40] <lamont> something like that
[17:41] <jdstrand> lamont: I'll do the SRU dance for you... this time ;)
[17:41] <jdstrand> lamont: but likely not today
[17:41] <lamont> well, I figured as long as you were being a defacto bind9-uploader.... :-)
[17:42] <jdstrand> heh
[17:43] <jdstrand> sounds more like punishment for bypassing git again :P
[17:43] <jdstrand> (though I did ask kees about it before doing it :)
[17:43] <lamont> for the next question... do you care if I merge your changelog entries into the 9.5.0.dfsg.P2-2 changelog and drop the ubuntu ones?
[17:43] <jdstrand> lamont: not at all
[17:43] <zul> kees: *grumble*
[17:43] <lamont> I'll at least ack which version they went into
[17:44] <lamont> 9.5.0-P1 had the perf hit, fixed (and migrated to testing) in 9.5.0-P2
[17:44] <kees> zul: yeah, once this fourth rebuild attempt finishes, I'm just going to upload with a mess disabled -- each test has passed at least once, so there's no single culprit.
[17:45] <zul> fun fun :)
[17:46]  * lamont wonders if -server cares enough about bug 175316 that we want to fix it in intrepid, rather than Jay
[17:47] <lamont> heh. and that's blocked on security-review as per above.
[17:47]  * lamont goes back to working
[17:51] <lamont> kees: was that /var/log/named/ that you wanted rw ?
[17:54] <kees> lamont: yawp -- it's at least where I put logs, and at least one other person I know.
[17:54] <kees> it was the only AA change I had to make when moving my DNS to hardy.
[17:54] <jdstrand> kees: are you the one with the dnscvsutil issues?
[17:54] <kees> jdstrand: nope, not I.
[17:55] <jdstrand> oh, I guess not
[17:55] <lamont>   /var/log/named/** rw,
[17:55] <jdstrand> lamont: gotcha
[17:55] <lamont> jdstrand: dnscvsutil is me and a few buddies
[17:55] <lamont> though not my house
[17:56] <jdstrand> lamont: do you have the required apparmor rules for that too? (I haven't used dnscvsutil)
[17:56] <lamont> jdstrand: they're already in 9.4.2-13 or so
[17:57] <lamont> which uh, is not an ancestor of 9.4.2-10ubuntu0.1 et al
[17:57] <lamont> also in 9.5.0
[17:57] <jdstrand> lamont: ok, I'll add that to the SRU too
[17:57] <lamont> you might just look at 9.4.2-13 and see if it makes sense to just migrate things to there...
[17:58]  * lamont looks to see how much pain that would be
[17:58] <lamont> kees: and no bug for the apparmor change.. for shame.
[17:58] <lamont> but don't file one now - that'd just be annoying
[17:58] <kees> lamont: you want me to make one?  :)
[17:59] <lamont> no
[17:59] <lamont> I already committed without the tag to generate the closure
[17:59] <kees> heh
[17:59] <lamont> oh, cool.
[18:00] <lamont> 9.4.2.dfsg.P2-1 _IS_ a descendant of 9.4.2-13
[18:00] <lamont> kees: do you want +sigchase in dig for 9.4.2 SRU?
[18:00] <kees> lamont: uhm, I don't know what that is.  :)
[18:01] <lamont> bug 257682
[18:02] <lamont> hardening?
[18:03]  * lamont adds sigchase - it only has potential issues if someone uses it, no change for the unaware (like, say, me before this morning)
[18:04] <lamont> and do you want the default named.conf.options to lose the "query-source ... port 53" comment block?
[18:08] <lamont> I'm inclined to say "no" to that one, because I don't like dpkg "replace this conffile" questions, especially on a -security/-updates upgrade
[18:09] <jdstrand> lamont: it'd be nice to have that removed, but I agree with your caution
[18:10] <lamont> http://paste.ubuntu.com/41298/ is the current changelog-to-be, modulo a little more cleanup
[18:10] <lamont> and it'll be NEW.  go sonames!!
[18:15] <looseparts> Hello. How might I do security updates without doing a 'apt-get dist-upgrade' ? - I don't want to upgrade every single app, just the ones that have security patches.
[18:15] <jdstrand> looseparts: disable -updates
[18:16] <looseparts> huh ?
[18:16] <toyotafosgate> brazen you still there?
[18:16] <lamont> kees/jdstrand: if you want to see the current proposal: git clone git://git.debian.org/~lamont/bind9.git; cd bind9; git checkout -b stable/v9.4.2 stable/v9.4.2
[18:16] <kees> lamont: seems like losing the "port 53" part would be nice.
[18:16] <lamont> and then it's just a question of whether -updates will squawk at 1:9.4.2.dfsg.P2-2 instead of 1:9.4.2-10ubuntu0.2
[18:17] <lamont> kees: yeah... that's a "your call" item... trivial to cherry-pick the patch back to 9.4.2.. I just loathe questions, and named.conf.options is a frequently-tweaked file --> lots of users touched by it
[18:22] <Goosemoose> so no one knows where there's a good hardy preseed file huh? the last one published is a few years old
[18:24] <lamont> kees: so if you say "DO IT JONES", it's done.  otherwise I'm chicken. :-)
[18:24] <arpu> hello
[18:24] <looseparts> jdstrand: would you please tell me what you mean when you say 'disable -updates' ?
[18:25] <arpu> i ask on #ubuntu but no help
[18:25] <arpu> i have this problem
[18:25] <arpu> Creation of temporary crontab file failed - aborting as user on ubuntu hardy server
[18:25] <lamont> looseparts: in /etc/apt/sources.list, comment out the hardy-updates lines
[18:25] <lamont> I expect there's some nice GUI-way to do that
[18:26] <looseparts> if i was running a GUI i'd be asking on another list
[18:26] <looseparts> ; - )
[18:27] <kees> lamont: let me just double-check it in a minute...
[18:27] <lamont> kees: no worries - it'll be $HALFDAY before I get to it
[18:27]  * kees nods
[18:27] <arpu> this is the whole output
[18:27] <arpu>  crontab -e
[18:27] <arpu> no crontab for rails - using an empty one
[18:27] <arpu> /tmp/crontab.aH5Mbo: Permission denied
[18:27] <arpu> Creation of temporary crontab file failed - aborting
[18:27] <toyotafosgate> hey does anyone know if you can change the jfs filesystem (raid drive) to another in order to retrieve the data?
[18:27] <lamont> looseparts: cool.
[18:28] <looseparts> lamont: thank you. just to clarify, comment out the hardy-updates lines,
[18:28] <looseparts> run apt-get update
[18:28] <lamont> and then dist-upgrade should just pull down hardy-security
[18:28] <looseparts> then run apt-get dist-upgrade ?
[18:29] <lamont> for bonus points, "apt-get -ud dist-upgrade"
[18:29] <lamont> that'll show you what it's doing in more detail, and download without actually installing.
[18:29] <looseparts> purrrfect : - )
[18:29] <looseparts> thanks a lot
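[editor's note] The full security-only-updates recipe from this exchange, collected in one place. The `sed` line assumes the stock `sources.list` layout; hand-editing the file works just as well:

```shell
# Comment out the hardy-updates lines so only hardy-security remains...
sudo sed -i 's/^\(deb.*hardy-updates\)/# \1/' /etc/apt/sources.list
# ...refresh the package lists...
sudo apt-get update
# ...and do a dry run: -u lists what would change, -d downloads without installing.
sudo apt-get -ud dist-upgrade
```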
[18:32] <toyotafosgate> does anyone know if you can change the jfs filesystem (raid drive) to another in order to retrieve the data?
[18:39] <arpu> no one has an idea about the crontab problem ?
[18:43] <didrocks> jdstrand: as there is the feature freeze, do we have to continue the work on the "second zone packages" for integrating ufw?
[18:44] <arpu> what is the standard permission of the /tmp directory ?
[18:45] <arpu> 755 is not right ?
[18:46] <lamont> arpu: 1777
[18:46] <arpu> ok then this is a bug in ubuntu hardy
[18:46] <arpu> :-/
[18:48] <arpu> in ubuntu it's 755
[18:55] <\sh> arpu: drwxrwxrwt   6 root root        122 Aug 28 19:56 tmp
[18:55] <\sh> it's what lamont said
[18:56] <lamont> the t == 1000
[18:57] <lamont> arpu: not on my machine it isn't... nor any other hardy box I've installed..
[18:59] <arpu> hmm this is a new hardy server install
[18:59] <lamont> it's entirely possible that something blatted it after the base install... stupid package or some such
[19:00] <\sh> arpu: hardy server tells me the same as I posted..1777 drwxrwxrwt
[19:00] <arpu> strange
[19:01] <arpu> drwxr-xr-x  3 root root  4096 2008-08-28 16:43 tmp
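[editor's note] The fix for arpu's situation, as lamont and \sh describe it: `/tmp` needs world write plus the sticky bit, i.e. octal mode 1777 (`drwxrwxrwt`), which is exactly why the 755 directory broke `crontab -e`'s temp-file creation:

```shell
sudo chmod 1777 /tmp
stat -c '%a %n' /tmp   # should print: 1777 /tmp
```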
[19:19] <toyotafosgate> still none who has any idea about raid?
[19:29] <toyotafosgate> ?
[19:43] <daedra> what do I need to mount usb flashdrives?
[19:43] <daedra> `sudo mount -t vfat /dev/sdf1 /media/fl` isn't working
[19:45] <daedra> fdisk -l /dev/sdf says there is a W95 FAT32 device at /dev/sdf1, and /media/fl exists
[19:50] <lamont> daedra: modprobe vfat?
[19:50] <lamont> or maybe the actual error message...
[19:50] <daedra> lamont: mount: wrong fs type, bad option, bad superblock on /dev/sdf1, missing codepage or other error In some cases useful info is found in syslog - try dmesg | tail  or so
[19:51] <lamont> right.  modprobe is probably your friend
[19:51] <maw> anyone familiar with a tool similar to "tripwire" but for windows?
[19:51] <maw> working on PCI compliancy here at work :|
[19:51] <daedra> same output after modprobe vfat
[19:52] <_ruben> grrr .. wonder wassup with perl on this box .. it thinks a certain module is still at 2.0005 and required 2.0008, yet it is already 2.0008 .. stupid .pm caching
[20:00] <jdstrand> didrocks: it's up to you. it would certainly be nice, but now those packages need to go through a feature freeze exception process
[20:00] <lamont> daedra: and fdisk /dev/sdf tells you that sdf1 is a vfat partition?
[20:04] <daedra> dosfstools did it
[20:05] <daedra> had to make a new filesystem on the device. lost the original contents :( but after 6 rewrites it now mounts
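[editor's note] daedra's case ended in a reformat, but the usual order of checks for a "wrong fs type" vfat failure is worth recording (the device name is the one from the channel):

```shell
sudo modprobe vfat            # make sure the filesystem driver is loaded
sudo fdisk -l /dev/sdf        # does the partition table really say W95 FAT32?
sudo mount -t vfat /dev/sdf1 /media/fl
dmesg | tail                  # on failure, the kernel's actual complaint is here
```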
[20:21] <Goosemoose> where do i need to upload the preseed.cfg file on the server for network boot?
[20:35] <didrocks> jdstrand: and do you think it worth it and will be able to have the feature freeze exception?
[20:37] <jdstrand> didrocks: personally, I don't find mysql and postgresql super interesting, as they only listen on the localhost
[20:37] <jdstrand> didrocks: squid would probably be good though
[20:37] <jdstrand> s/localhost/loopback/
[20:37] <didrocks> jdstrand: yes, that was my first concern (about postgres and mysql) and I didn't understand why they're included by default
[20:38] <didrocks> squid is probably great, yes
[20:38] <jdstrand> didrocks: I added them simply because they are part of Ubuntu server's tasksel
[20:38] <didrocks> so, if for the feature freeze exception, I need someone advocating in my side, I can count on you?
[20:39] <jdstrand> absolutely
[20:39] <didrocks> (no sure it is a very correct english ^^)
[20:39] <didrocks> :)
[20:39] <jdstrand> (it's such a cmall change in the packages now anyway)
[20:39] <jdstrand> s/cmall/small/
[20:39] <didrocks> for sure
[20:40] <didrocks> I will take a look this week-end to try to make the profile case insensitive and also branch your code with bzr (I think this is better than proposing you a patch, isn't it?)
[20:44] <jdstrand> didrocks: yes
[20:45] <jdstrand> didrocks: wrt case insensitivity-- I feel pretty strongly about the presentation of the profile name with what is presented with 'status'
[20:46] <jdstrand> didrocks: because it has a bit to do with branding (eg OpenSSH)
[20:46] <didrocks> I will try to make some trick this week-end, but be indulgent if I make something wrong with bzr, I am not used to it (at my company I am in charge of a proprietary VCS, and otherwise used to CVS/SVN)
[20:46] <didrocks> hum
[20:47] <jdstrand> didrocks: as for the user interface, if the user can type 'ufw allow openssh' or 'ufw allow OpEnSsH', that seems to be ok
[20:48] <didrocks> oh, yes, it is just for the user interface
[20:48] <didrocks> ufw allow/deny profile
[20:48] <didrocks> ufw status profile
[20:48] <jdstrand> didrocks: ufw status
[20:48] <didrocks> ufw app update profile (--add-new)
[20:49] <jdstrand> (you don't specify the profile with 'status')
[20:49] <didrocks> hum, no status filtered on just one profile?
[20:49] <didrocks> sorry, my bad :)
[20:49] <jdstrand> didrocks: status is the status of the ufw command managed parts of the firewall
[20:50] <didrocks> jdstrand: yes, but I thought it was possible to filter a rule from a profile to the status
[20:51] <jdstrand> didrocks: no, you might be thinking of 'status verbose' which gives a different view of application rules
[20:51] <didrocks> so, just 3: ufw allow/deny profile, ufw app update profile (--add-new) and ufw app info profile
[20:52] <didrocks> jdstrand: yes, the verbose mode gives the associated port currently recorded in the firewall, doesn't it?
[20:52] <jdstrand> didrocks: well, there is 'limit' too-- but you'll likely be able to change just a couple lines
[20:53] <jdstrand> didrocks: 'status verbose' shows the port/protocol instead of the profile name, yes
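[editor's note] Pulling together the ufw application-profile interface as discussed above; `app list` is an assumption about the final command set, the rest comes from the conversation:

```shell
sudo ufw app list           # profiles ufw knows about (assumed subcommand)
sudo ufw app info OpenSSH   # ports/protocols behind a profile
sudo ufw allow OpenSSH      # add a rule by profile name; 'ufw allow OpEnSsH' works too
sudo ufw status verbose     # shows port/protocol instead of the profile name
```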
[20:55] <didrocks> jdstrand: I just gave a quick look and the only matter is that you use the profile as a key. But I have my idea to do (beautifully, of course) the trick :)
[20:56] <jdstrand> cool :)
[20:58] <didrocks> jdstrand: btw, I will keep you in touch. Have a good evening!
[20:58] <didrocks> (or day)
[20:58] <jdstrand> didrocks: you too! (and I bet you'll grow to love bzr :)
[20:59] <didrocks> jdstrand: thx (I already love bzr just from having read the full user guide a month ago, but had no time to practice :))
[21:00] <tarab> hello? guys
[21:03] <tarab> i use ubuntu 7.10, i already installed bind9, then how do i configure a dns (bind) server?
[21:55] <Brazen> There are some commands I want to run at the end of the bootup process.  Would the correct method for this be to add the commands to "/etc/rc.local" and then "chmod +x /etc/rc.local" ?
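[editor's note] Brazen's own description is the standard method: commands go in `/etc/rc.local` before the final `exit 0`, and the file must be executable (`sudo chmod +x /etc/rc.local`). A sketch of the file; the command inside is a placeholder:

```shell
#!/bin/sh -e
#
# /etc/rc.local -- executed at the end of each multiuser runlevel.
/usr/local/bin/my-startup-task   # placeholder for your commands
exit 0
```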
[22:05] <mathiaz> sommer: is doc.ubuntu.com up-to-date now ?
[22:05] <mathiaz> sommer: I think you said so in the meeting last tuesday
[22:13] <iongion> anyone knowing small embedded lamp devices that could run ubuntu server ?
[22:13] <iongion> or at least if you know of small ... low-noise devices/computers that could be used as ubuntu/apache/php/mysql home servers ?
[22:27] <Brazen> iongion: check out the "gos dev kit" (google for it), it's a mobo with embedded via x86 processor.  it's supposed to be very low power and runs linux very well, and it's super cheap.
[22:28] <Brazen> iongion: I only wish it had vt extensions :(
[22:31] <NCommander> Brazen, what chip does that mobo have?
[22:31] <NCommander> Most newer ones have VT on it
[22:32] <Brazen> NCommander: it's a VIA C7-D.  I'm positive I double checked a while ago, and it does not have VT.
[22:33] <NCommander> Yeah
[22:33] <NCommander> VIA is about the only one that doesn't have VTx
[22:33] <NCommander> I had to mod my BIOS to get it on my laptop though
[22:33] <Brazen> nice
[22:33] <NCommander> FreeDOS FTW
[22:34] <NCommander> Upgrading the BIOS on this machine was a nightmare
[22:39] <kees> lamont: instead of "port 53", use "port *", I think.
[22:52] <kEiNsTeiN^^> hello.
[23:12] <lolufail> hi
[23:12] <lolufail> I need to know how to extract the xen initramfs to /etc/initramfs-tools, so I can add md-raid support.
[23:19] <lolufail> because I _just_ registered this nick, the question again, dont kill me plz if it appears twice ;): I need to know how to extract the xen initramfs to /etc/initramfs-tools, so I can add md-raid support.
[23:20] <lolufail> and that is xfs over lvm over dm-crypt over md-raid to be exact
[23:20] <acemo> is it possible to have virtual servers on the same ip? while not having a domain name
[23:22] <lolufail> acemo: depends on what you want to do. you would have to do port-forwarding on the host, to the veths of the VMs
[23:22] <acemo> veths of the vms what do you mean?
[23:23] <lolufail> yes
[23:23] <lolufail> actually I mean their IPs
[23:23] <lolufail> I have the same layout, using xen and iptables on dom0
[23:23] <acemo> but its on the same computer, no virtual machines
[23:24] <lolufail> then what do you want to do?
[23:24] <acemo> hmm
[23:24] <acemo> like
[23:24] <lolufail> just apache or what?
[23:24] <acemo> when you go to 127.0.0.1/acemo it should use /home/acemo/www as root and when going to 127.0.0.1/hitoi it should use /home/hitoi/www as root
[23:25] <lolufail> oh, just for http
[23:25] <lolufail> sure
[23:25] <acemo> yep
[23:26] <lolufail> I dont know how ;) but it's simple. google for ...
[23:26] <lolufail> uhm
[23:26] <lolufail> apache jail
[23:26] <lolufail> maybe
[23:26] <lolufail> chroot?
[23:26] <lolufail> sry ;)
[23:26] <lolufail> ill be quiet
[23:27] <acemo> thanks ill try searching for that
[23:27] <Goosemoose> anyone have a preseed more practical than https://help.ubuntu.com/8.04/installation-guide/example-preseed.txt
[23:28] <Goosemoose> I also can't remember where to save this file, I haven't set a server up using preseed in about 16 months, and the docs don't say where to save it
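[editor's note] For PXE installs the preseed file doesn't live at a fixed path on the server: it's typically served over HTTP and pointed to from the kernel command line in `pxelinux.cfg/default` (the append line must stay on one line). All URLs and paths below are assumptions:

```
label hardy-auto
    kernel ubuntu-installer/amd64/linux
    append initrd=ubuntu-installer/amd64/initrd.gz auto=true priority=critical preseed/url=http://192.168.0.1/preseed.cfg
```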
[23:37] <lolufail> damnit, Imma try the gentoo channel -.-
[23:39] <acemo> o.o
[23:40] <acemo> seems chroot would just jail the whole apache to a directory
[23:42] <lolufail> yeah, but it's more secure
[23:42] <lolufail> otherwise, use simple vhosts
[23:42] <acemo> yeah but it wont do any good for what i want right now
[23:43] <acemo> virtual hosts seem to not allow me to do what i want.. or probably.. i don't know how to do it
[23:47] <lolufail> acemo: how about you give details?
[23:51] <qhartman> I'm preparing to deploy a virtual server host, which I would like to do on Ubuntu. However, KVM just doesn't feel like a good server-oriented virtualization system right now. Maybe it will be someday, but for now it seems distinctly half-baked. Does Xen officially have a future on Ubuntu server?
[23:51] <acemo> am using webmin.. i go to create virtual host, i get to see this.. http://i37.tinypic.com/jszwcg.png
[23:57] <acemo> i guess ill have to fill in /home/acemo/www at the document root.. but i have no idea what to fill in at the address part
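[editor's note] What acemo describes (per-user URL paths on a single address, no domain name) is usually done with mod_userdir or plain `Alias` directives rather than name-based virtual hosts. A sketch in Apache 2.2-era syntax; the paths come from the channel, the rest is illustrative:

```
# Option 1: mod_userdir serves /home/USER/www at http://127.0.0.1/~USER/
#   (enable with: sudo a2enmod userdir)
UserDir www

# Option 2: explicit per-user aliases, no tilde in the URL:
Alias /acemo /home/acemo/www
Alias /hitoi /home/hitoi/www
<Directory /home/*/www>
    Order allow,deny
    Allow from all
</Directory>
```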