[02:34] <phuh> how do you properly create a backup of a Ubuntu server? /var/log and /etc? anything else?
[02:37] <sarnold> phuh: depends heavily upon what the server does; /home if it serves as a shell host for your users, or if you run a database server you'll want to find out how to get a consistent database dump out of it..
[02:37] <phuh> sarnold: what about system specific ones?
[02:38] <phuh> are /etc and /var/log the only ones important for backup purposes?
[02:38] <sarnold> phuh: for every service that your server runs, figure out what you'd be sad to lose if it disappeared, and go track it down. :)
[02:39] <sarnold> phuh: I keep /home/ /etc/ /usr/local/ -- I'm not too worried if I lose my logs.
[02:39] <phuh> sarnold: why aren't logs important for you?
[02:39] <sarnold> phuh: I only ever investigate them if something seems broken and I want to find out why. I wouldn't care to restore them.
[02:40] <sarnold> phuh: if you want to keep your logs safe from intruders or something similar, probably a separate log host would make more sense
[02:41] <phuh> sarnold: do you at least save them somewhere like S3 just in case you'll be interested?
[02:41] <phuh> sarnold: what's the benefit of a separate log host?
[02:41] <sarnold> phuh: no, but that makes sense..
[02:42] <sarnold> phuh: it's updated real-time, so you can examine logs immediately when problems happen -- if you only back up the logs every six hours, you'll probably miss the most important logs when something does happen
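sarnold's separate-log-host suggestion boils down to forwarding syslog in real time. A minimal rsyslog sketch (the hostname is a placeholder; the receiving host must have its imudp or imtcp input enabled):

```
# /etc/rsyslog.d/50-remote.conf -- on the server being monitored
*.*  @loghost.example.com:514    # single @ = UDP; use @@ for TCP
```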
[02:43] <sarnold> phuh: I'm off, have a good night :)
[02:44] <phuh> thank you sir sarnold
[02:44] <sarnold> phuh: you're welcome :) have fun!
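A cron-able sketch of the backup sarnold describes. The destination and default directory list are assumptions; run it as root in practice so tar can read everything (unreadable files are silently skipped here):

```shell
#!/bin/sh
# Back up the directories you'd be sad to lose (per the discussion above).
# DEST and DIRS are assumptions -- override them for your server.
set -u
DEST="${DEST:-$HOME/backups}"
DIRS="${DIRS:-/etc}"
STAMP=$(date +%F)
mkdir -p "$DEST"
# -p preserves permissions; one dated archive per run. Files we can't
# read are skipped, so run as root for a complete archive.
tar -czpf "$DEST/system-$STAMP.tar.gz" $DIRS 2>/dev/null || true
```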
[04:19] <geowany> bacula is wonderful!
[04:33] <wazer> Hello all! I'm having a slight issue getting vnc4server working on Ubuntu 14.04
[04:33] <wazer> I have the VNC server running, but when I access it I just get 3 check boxes saying: "Accept clipboard from viewers"
[06:41] <InFierno> Anyone around who has used bind in the not too distant past?
[06:43] <ikonia> InFierno: be better if you just stated what you needed help with
[06:44] <InFierno> True dat; configs for a client's site that is just pointed to my nameservers (I set up mine just fine) but the client's gives errors - I've forgotten something really obvious
[06:45] <InFierno> For client I created @, mail - A records - ftp, www CNAME records and MX records
[06:45] <InFierno> what did i forget
[06:46] <InFierno> and for NS IN records i created mynameservers IN NS domain
[06:46] <InFierno> I think it's the NS records I have the problem with
[06:57] <TJ-> InFierno: Added the zone file to the main config file, configured any allowed slaves, etc?
[06:57] <InFierno> yup
[07:07] <InFierno> Found it
[07:07] <InFierno> forgot to add an exception for that domain to the firewall
[07:07] <InFierno> (hardware one)
[07:31] <lordievader> Good morning.
[08:00] <InFierno> TJ-, can you take a look at the ns records and see if you can spot the issue
[08:01] <TJ-> InFierno: Sure
[08:03] <InFierno> rtrproject.com
[08:07] <k1ng> can anyone help me install samba client on ubuntu server?
[08:07] <TJ-> You're missing the ns1.rtrproject.com A record
[08:08] <InFierno> it's pointed to ns1.blacksail.software
[08:08] <InFierno> so I shouldn't need its own NS record?
[08:09] <TJ-> You have the SOA pointing to "ns1.rtrproject.com" which has no A record
[08:09] <InFierno> Ah cheers
[08:09] <InFierno> it shouldnt be :)
[08:09] <TJ-> maybe you want that to be "ns1.blacksailsoftware.com."
[08:10] <InFierno> yea :)
[08:10] <InFierno> Gonna swap something else at the same time --> just seen some stuff in the code of zpanel I don't like
[08:10] <TJ-> And I usually have the SOA contact being the hostmaster@ the actual domain that manages the SOA
[08:11] <InFierno> You tried vesta cp by any chance ?
[08:11] <InFierno> I have 2 clients who need a CP so any ideas
[08:11] <TJ-> Unless you can guarantee the DNS administrator will see emails to the email address currently set
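Putting TJ-'s fixes together, the record set InFierno described would look roughly like this zone file (addresses, serial, and the hostmaster contact are placeholders; note the SOA points at the nameserver that actually resolves):

```
; sketch -- example.com, the addresses, and the serial are placeholders
$TTL 3600
@    IN SOA ns1.blacksailsoftware.com. hostmaster.example.com. (
         2014050901  ; serial
         7200 3600 1209600 3600 )
@    IN NS    ns1.blacksailsoftware.com.
@    IN A     203.0.113.10
mail IN A     203.0.113.11
www  IN CNAME @
ftp  IN CNAME @
@    IN MX 10 mail
```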
[08:11] <TJ-> CP?
[08:11] <InFierno> control panel
[08:11] <TJ-> oh... I prefer terminal shells
[08:12] <InFierno> me also
[08:12] <InFierno> But they are pretty dumb and need a cp
[08:13] <xnox> dw1: that's interesting, can you open a bug number and pass it to me?
[08:44] <InFierno> k1ng, What do you mean?
[08:49] <InFierno> k1ng, : http://www.debian-administration.org/article/340/Using_Samba_on_Debian_Linux
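For k1ng's question the client side is just a package install; a sketch assuming the usual Ubuntu package names (server and user names are placeholders):

```shell
# smbclient is the command-line client; cifs-utils lets you mount shares
sudo apt-get update
sudo apt-get install smbclient cifs-utils
# browse a server's shares, then mount one
smbclient -L //fileserver -U someuser
sudo mount -t cifs //fileserver/share /mnt -o username=someuser
```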
[09:00] <InFierno> TJ-, u here?
[09:05] <dw1> xnox: ok https://bugs.launchpad.net/ubuntu/+source/initramfs-tools/+bug/1316989
[09:12] <xnox> dw1: thanks, i'll correct the package though =)
[09:14] <xnox> dw1: do you have control over /etc/mdadm/mdadm.conf? a long time ago ubuntu switched to UUID based assembly.
[09:14] <dw1> xnox: yeah I can add the UUIDs
[09:14] <xnox> dw1: i would do backups of the config
[09:15] <xnox> dw1: and then do /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
[09:15] <xnox> dw1: however, just adding UUIDs to the existing stanzas should also work.
[09:17] <dw1> xnox: I was told the explicit non uuid declarations would "help it autodetect, if you use older version metadata" so I felt more comfortable keeping them. but if it doesn't matter, well i can go to what mkconf suggests
[09:17] <dw1> xnox: kind of new to raid stuff :)
[09:17] <xnox> dw1: it is weird that we complain though.
[09:18] <xnox> dw1: both should be equivalent.
[09:23] <dw1> xnox: error gone when using either the mkconf one or adding UUID= parameter to end of others
[09:24] <dw1> xnox: as expected
[09:24] <xnox> dw1: good. if you want to keep all options, then by all means just add UUID= parameter at the end.
[09:24] <dw1> xnox: sounds good, thanks
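xnox's regenerate-the-config steps, spelled out as a sketch (back up first, as he suggests; the initramfs rebuild is an addition since the bug above is against initramfs-tools):

```shell
# keep a copy of the working config before touching it
sudo cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
# regenerate ARRAY stanzas (UUID-based) from the running arrays
sudo sh -c '/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf'
# rebuild the initramfs so early boot sees the new stanzas
sudo update-initramfs -u
```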
[09:38] <DenBeiren> Hi, I destroyed my system by accidentally erasing all kernels and rebooting. I have found a nice help page http://to.ly/xgZ4 , but i ran into a problem.
[09:39] <DenBeiren> i have a software raid setup
[09:39] <DenBeiren> so mounting the partitions as described doesn't work
[09:40] <OpenTokix> DenBeiren: you have to assemble the raid in the live system too
[09:40] <OpenTokix> DenBeiren: with mdadm and then mount the partitions from the raid
[09:40] <OpenTokix> DenBeiren: check out the manpage for mdadm
[09:41] <DenBeiren> ok will try
[09:41] <DenBeiren> thanks for the tip!
[09:41] <OpenTokix> mdadm --assemble --scan might be the command you are looking for
[09:49] <DenBeiren> seemed to work out
[09:50] <DenBeiren> only the boot partition raid array has an error
[09:51] <DenBeiren> when i try to mount, the device doesn't exist
[10:09] <DenBeiren> nm, i made a syntax error
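The live-system rescue OpenTokix outlines, as one sequence (md device names are assumptions; check /proc/mdstat for yours):

```shell
sudo mdadm --assemble --scan   # detect and start all arrays
cat /proc/mdstat               # confirm they assembled
sudo mount /dev/md0 /mnt       # root array
sudo mount /dev/md1 /mnt/boot  # separate /boot array, if any
# bind-mount the virtual filesystems, then chroot to reinstall kernels
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
```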
[10:14] <DenBeiren> http://pastie.org/9148436   what am i doing wrong here?
[12:14] <onorua> guys, my crypted devices are not getting booted
[12:15] <onorua> I mean not getting mounted during boot without manual intervention
[12:15] <onorua> boot complains that "serious errors were found while checking the disk drive" for all encrypted devices
[12:15] <onorua> has anyone seen such errors?
[12:15] <onorua> how did you overcome them?
[13:43] <fun> hey folks, how I can upgrade 13.10 64 bit server to 14.10 64 bit server?
[13:43] <fun> I mean 14.04
[13:44] <cfhowlett> fun wait until point release #1 in July.  then sudo do-release-upgrade
[13:45] <fun> point?
[13:45] <cfhowlett> fun 14.04.1
[13:46] <fun> how come?
[13:46] <fun> i thought 14.04 was already pretty good
[13:46] <cfhowlett> fun OR download 14.04, make a USB, boot USB and install
[13:47] <Pici> cfhowlett: You shouldn't need to wait if you already have 13.10
[13:47] <fun> what usb lol its remote server
[13:47] <cfhowlett> Pici do-release-upgrade unavailable until point release
[13:47] <Pici> cfhowlett: Sure it is.  Just not for 12.04 to 14.04
[13:48] <cfhowlett> Pici NOT true.  LTS to LTS is allowed.
[13:48] <fun> Pici so how do I do it? just run do-release-upgrade?
[13:48] <fun> nope there is more to it
[13:49] <fun> http://ubuntuhandbook.org/index.php/2014/01/upgrade-ubuntu-13-10-saucy-to-ubuntu-14-04-trusty/#comments
[13:49] <fun> bingo :D
[13:50] <Pici> fun: you shouldn't need to use the -d switch at all.
[13:50] <Pici> cfhowlett: 13.10 to 14.04 should work just fine right now using do-release-upgrade.  12.04 to 14.04 will not work until 14.04.1 is released.
[13:50] <fun> Pici so do-release-upgrade will make saucy 14.04?
[13:51] <cfhowlett> Pici ah!  good to know!
[13:51] <fun> and do I need to edit /etc/update-manager/release-upgrades ?
[13:51] <fun> to change normal to lts?
[13:51] <Pici> fun: It should work as long as prompt is not set to none.
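The 13.10-to-14.04 path Pici describes, in full (a sketch; no -d switch needed, contrary to the blog post linked above):

```shell
sudo apt-get update && sudo apt-get dist-upgrade   # be fully current first
# any Prompt= value except "none" is fine for 13.10 -> 14.04
grep '^Prompt=' /etc/update-manager/release-upgrades
sudo do-release-upgrade
```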
[13:55] <fun> :) running update
[13:55] <Pici> yay
[14:02] <fun> some minor warnings however overall seems fine
[14:02] <fun> :)
[14:02] <fun> unable to delete some old dirs
[14:03] <fun> lol grub now
[14:03] <fun> saying the local version is modified but there's a new version
[14:06] <MTughan> I'm having some trouble with cloud-init. Is there a list somewhere of the steps cloud-init goes through and somewhere where I can see what data it's all using?
[14:07] <MTughan> I'm using a local installation of OpenStack. If I launch a VM manually with my own cloud-config script, everything works as intended. If I launch the VM using Heat (OpenStack's orchestration tool) with the same cloud-config file, stuff breaks.
[14:07] <fun> no idea
[14:08] <MTughan> Most notably, if I specify a custom username for cloud-init, the created home directory when using Heat is owned by root:root.
[14:09] <fun> Pici: ok upgrade done, some minor warnings for some soft overall seems fine :D
[14:10] <Pici> huzzah
[14:10] <fun> I am surprised OVH doesn't have 14.04 in its CP yet :D I could prompt them to add it :)
[14:12] <fun> Ubuntu 14.04 LTS haha
[14:12] <fun> done :D
[14:13] <patdk-wk> surprised people use ovh
[14:13] <fun> its great
[14:13] <fun> who do you use?
[14:13] <patdk-wk> if you don't care about being blacklisted
[14:13] <fun> blacklisted where?
[14:13] <fun> lol
[14:14] <patdk-wk> most mailservers have most of ovh blacklisted
[14:14] <patdk-wk> they made a huge statement, about fixing the issue
[14:14] <patdk-wk> but nothing ever came of it
[14:16] <fun> patdk-wk: simply use sendgrid
[14:16] <fun> :)
[14:16] <patdk-wk> http://www.spamhaus.org/sbl/listings/ovh.net
[14:17] <fun> re read above :D
[14:17] <fun> aws similar well many ips :)
[14:17] <fun> also who do u use then :)
[14:17] <patdk-wk> aws blocks you, if you send more than just a few emails per day
[14:18] <patdk-wk> personally?
[14:18] <patdk-wk> I bought my own hardware, ip address space, and asn
[14:18] <fun> so here u go
[14:18] <fun> :)
[14:18] <fun> also aws allows to send tons of emails if u pay
[14:18] <fun> ;)
[14:19] <patdk-wk> spammers aren't into paying
[14:19] <fun> patdk-wk: lol well aws is well known company
[14:19] <fun> and ovh same
[14:20] <patdk-wk> but the reputations of both, are widely different
[14:20] <fun> to you yes
[14:21] <fun> ovh is going on roadshow worldwide feel free to visit and voice your concerns there :)
[14:21] <fun> hehe
[14:33] <MTughan> I'm having some trouble with cloud-init. Is there a list somewhere of the steps cloud-init goes through and somewhere where I can see what data it's all using?
[14:33] <MTughan> I'm using a local installation of OpenStack. If I launch a VM manually with my own cloud-config script, everything works as intended. If I launch the VM using Heat (OpenStack's orchestration tool) with the same cloud-config file, stuff breaks.
[14:33] <MTughan> Most notably, if I specify a custom username for cloud-init, the created home directory when using Heat is owned by root:root.
[16:34] <budman_> anyone have any ubuntu 12.04 servers with netextreme broadcom cards? trying to figure out a speed issue
[16:36] <davidwebb> I need to install dropbox on my headless 12.04 Server for multiple users - does anyone have experience with this? I can't find a tutorial focused on using dropbox with more than 1 user...
[16:38] <budman_> davidwebb: does it need to be dropbox? maybe "owncloud" might work
[16:39] <davidwebb> budman_: that looks interesting.  My clients already have dropbox, but it looks like setting this up would be pretty easy too...
[16:40] <budman_> I know someone that just set it up and is loving it, I think the only downfall he said was you have to pay for the apple/android app, $1 or so
[16:40] <patdk-wk> for dropbox or owncloud?
[16:40] <budman_> owncloud
[16:41] <davidwebb> so does this mean all the data stays on my server?
[16:41] <davidwebb> I'm pretty sure one of my clients will want it sync'd elsewhere as well...
[16:41] <patdk-wk> as long as you don't download it
[16:41] <patdk-wk> or back it up somewhere
[16:42] <davidwebb> Does it work just like dropbox, where if he's running it on his desktop, it'll download / sync files to his desktop automatically?
[16:42] <budman_> yea it might not meet your needs, just throwing it out there, while I wait for help :) hoping someone runs ubuntu with broadcom cards
[16:42] <patdk-wk> the world attempts not to run anything with broadcom cards
[16:45] <budman_> yes i know, it just happens to be what I have to work with. im thinking about just ordering some 4 port intel cards
[16:45] <budman_> but seeing my options if I can fix it, with a different driver or similar, has to be 1 lone ranger out there ha.
[16:50] <davidwebb> owncloud has packages for xUbuntu - are they the same as what I'll need for Ubuntu?
[16:51] <davidwebb> (I don't know the differences between xUbuntu and Ubuntu....)
[16:54] <sarnold> davidwebb: just skimming their opensuse build service page, I actually think they mean 'xUbuntu' as a placeholder for 'ubuntu, kubuntu, xubuntu, lubuntu, edubuntu, etc.'
[16:54] <davidwebb> gotcha. thanks sarnold
[16:56] <sarnold> (and probably it'll work for derived distributions such as mint..)
[17:03] <pseudo> is there any reason whatsoever for a ubuntu 14.04 server - freshly installed - not to reply to icmp requests coming from certain ip addresses?
[17:06] <sarnold> pseudo: probably it won't reply to broadcast packets
[17:07] <pseudo> sarnold - it is seeing, but not responding to icmp echo packets coming from a certain ip address from the 10.x subnet
[17:08] <patdk-wk> that is a pretty large subnet
[17:08]  * patdk-wk blames routefilter
[17:08] <pseudo> the server is on 10.30.0.0/24, but another box connected to the same switch is on 10.10.10.40. it responds to about 10% of the icmp packets coming from that ip
[17:09] <patdk-wk> well, that won't work
[17:09] <patdk-wk> not without a router
[17:09] <pseudo> same switch
[17:09] <pseudo> i.e. layer 2 packets
[17:09] <pseudo> no reason it shouldn't work
[17:09] <patdk-wk> ya, should not work
[17:09] <patdk-wk> sure it does
[17:09] <patdk-wk> it's not in the same subnet
[17:10] <patdk-wk> not in same subnet == must be routed
[17:10] <patdk-wk> did you skip networking 101?
[17:10] <pseudo> apparently i did
[17:10] <pseudo> i guess that makes sense
[17:10] <pseudo> give me a sec to rethink a few things
[17:15] <pseudo> okay, you are right about the layer 2 thing - i believe that the network is set up such that the 10.10.10.40 router connects to an upstream router and then hairpins back into the 10.30.0.0/24 network.
[17:15] <pseudo> the tl;dr is that packets originating from 10.10.10.40 make it to the 10.30.0.x machine
[17:15] <pseudo> i can see the icmp requests come into that box with tcpdump
[17:15] <pseudo> but it never sends out a reply
[17:16] <pseudo> i have other ubuntu 14.04 machines in the same subnet that respond just fine to icmp requests
[17:17] <sarnold> pseudo: compare their routing tables, perhaps that one machine doesn't know how to route to 10.10/16 ?
[17:17] <pseudo> the routing tables are identical
[17:19] <pseudo> again - the packet makes it to the box. the box just never tries to send a reply
[17:21] <sarnold> wb pseudo_, you missed nothing while you were gone
[17:21] <patdk-wk> how many nics does that box have?
[17:22] <pseudo_> ty :)
[17:22] <pseudo_> one nic
[17:22] <patdk-wk> where are you running tcpdump?
[17:22] <pseudo_> on the node that i am trying to ping
[17:24] <pseudo_> omg, this is so frustrating - it just started working again. literally nothing has changed.
[17:25] <patdk-wk> arp issues?
[17:25] <sarnold> o_O
[17:25] <sarnold> that would be bloody annoying to track down :)
[17:25] <pseudo_> that's what i thought, but i flushed my arp tables earlier, plus - i was seeing EVERY icmp request packet
[17:25] <sarnold> I wonder if those packets are arriving with broken checksums?
[17:26] <patdk-wk> request doesn't mean anything
[17:26] <patdk-wk> if the server doesn't know the arp entry for the gateway/router
[17:26] <patdk-wk> therefore it never makes a reply
[17:26] <pseudo_> patdk-wk: there is one gateway router and networking was fine everywhere else
[17:26] <patdk-wk> everywhere else != this switch port
[17:26] <pseudo_> i could ping out through that router
[17:27] <patdk-wk> ok
[17:27] <pseudo_> patdk-wk: actually, i lied. i don't know that for sure
[17:27] <pseudo_> thinking back - i was able to ping within my subnet, but i don't think that i tried pinging upstream
[17:29] <pseudo_> patdk-wk: although. this node is just a kvm image whose nic is a bridged card. and other vms were working fine
[17:29] <patdk-wk> oh?
[17:29] <patdk-wk> could be lots of fun then
[17:30] <pseudo_> yeah, there are a lot of moving parts - i still think it has to be an ubuntu/kernel thing though - the fact that tcpdump saw a request and it did not respond says that the networking is all right
[17:30] <pseudo_> at the same exact time that it was not responding to that ip, it would respond to a ping from a local ip
[17:30] <sarnold> does tcpdump automatically report back packet checksums?
[17:31] <patdk-wk> and you are doing it inside the vm right? not on the host?
[17:31] <pseudo_> yes, inside a vm
[17:31] <pseudo_> and sarnold, i think it will report a bad checksum by default, but im not sure about that
[17:33] <sarnold> aha :) --dont-verify-checksums
[17:33] <sarnold> bugger. i liked that theory.
[17:33] <pseudo_> idk what to say guys, it seems to be working now - i guess i will chalk it up to some sort of arp caching issue. i really don't think that was the problem, but i'm out of theories and it unfortunately just started working again
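sarnold's checksum theory is easy to check next time it happens: with -v, tcpdump verifies IP/ICMP checksums and annotates corrupt packets (the interface name is an example):

```shell
sudo tcpdump -ni eth0 -v icmp   # look for "bad cksum" in the output
```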
[17:36] <sarnold> pseudo_: good luck, and if you figure it out I'd be curious to learn why :)
[17:37] <pseudo_> sarnold: yeah, if i figure it out i will definitely report back. thanks for taking the time to help. you too patdk-wk.
[18:05] <pseudo_> sarnold: i have a theory - i think that our upstream firewall/switch was trying to prevent arp spoofing by disallowing unsolicited arp replies. before this problem started, i took down a vm that had the same ip address and brought a new one (with a different MAC) up in its place.
[18:07] <sarnold> pseudo_: ooh, seems plausible enough. the best part is you've got something you can test now :)
[18:53] <zealj> ubuntu server liveCD doesn't include fdisk
[18:53] <zealj> anyone know if this is intentional? is there a substitute?
[18:53] <lordievader> zealj: I suppose parted is available.
[18:54] <lordievader> Fdisk for gpt disks is useless.
[18:54] <zealj> does parted allow you to view the partition table without changing it?
[18:54] <lordievader> Yes.
[18:54] <zealj> cool, thanks
[18:54] <lordievader> zealj: man parted ;)
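For reference, read-only inspection with parted, plus sgdisk from the gdisk package, which is a closer fdisk analogue for GPT disks (the device name is an example):

```shell
sudo parted -l                       # list all disks and their tables
sudo parted /dev/sda unit MB print   # one disk, without modifying it
sudo sgdisk -p /dev/sda              # gdisk equivalent, GPT-aware
```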
[18:54] <zealj> yep :)
[19:10] <dasjoe> Hi, VMs created with uvt-kvm are not persistent, correct? What are possible use cases for uvtool, is there a way to make a VM persistent?
[19:12] <stoned> oh here it goes
[19:12] <stoned> finally
[19:12] <stoned> I tried to join this channel like 5 times
[19:12] <stoned> it wouldn't join
[19:15] <Jordan_U> stoned: What ended up actually working? Going through GUI buttons rather than "/join"?
[19:19] <stoned> Yes!
[19:19] <stoned> Jordan_U, my friend yes
[19:19] <stoned> I got join to work now. The command was broken for /
[19:19] <stoned> I am on ZNC so I restarted my client and now it works
[19:19] <stoned> Thanks
[19:20] <MTughan> You also have to be identified with NickServ to join here, I believe.
[19:20] <MTughan> Huh, nevermind actually.
[19:30] <replman> Hi! I would like to create an sftp user for a www folder. This works if the folder belongs to root, but if I set the owner to root, the webserver cannot read it. Isn't it possible to have an sftp user which doesn't need the folder to be owned by root?
[19:36] <stoned> sure
[19:36] <stoned> addgroup web
[19:36] <stoned> adduser www-data web
[19:36] <stoned> adduser root web (optional)
[19:37] <stoned> chown -R www-data:web /var/www
[19:37] <stoned> chmod -R ug+srw /var/www
[19:37] <stoned> 1) you create a group called 'web'
[19:38] <stoned> 2) you put the apache www-data user into that group (the files in /var/www belong to root, hence #3)
[19:38] <stoned> 3) you add root to group web
[19:38] <stoned> 4) you make everything inside /var/www owned by www-data (apache user) and by the group 'web' you just created
[19:38] <stoned> 5) you set permissions for user and group to be able to read and write ug+rw
[19:39] <stoned> 6) permission 's' is the setgid bit. New files created in /var/www will inherit the directory's group
[19:39] <stoned> for example after all this
[19:39] <stoned> touch /var/www/tmp
[19:39] <stoned> ls -l /var/www/tmp will show it inherited the 'web' group from /var/www
[19:39] <stoned> I hope that helps/explains.
[19:40] <stoned> replman, ^^
[19:45] <verdeP> sticky bit is so nice, let me quickly drop files from my user account into /var/www
[19:54] <replman> stoned: thank you very much! I will try it this way...
[19:55] <stoned> replman, you are most welcome
[20:07] <replman> stoned: I always get a "fatal: bad ownership or modes for chroot directory component" in auth.log when trying to login through sftp
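That error means sshd requires every component of the ChrootDirectory path to be owned by root and not group- or world-writable, which is why the root-owned folder "worked". The usual pattern is a root-owned chroot containing a writable subdirectory; a sketch (paths, user, and group names are examples):

```shell
sudo chown root:root /var/www        # the chroot itself: root-owned, 755
sudo chmod 755 /var/www
# a setgid subdirectory the sftp user and the 'web' group can write to
sudo install -d -o replman -g web -m 2775 /var/www/site
# and in /etc/ssh/sshd_config:
#   Match User replman
#       ChrootDirectory /var/www
#       ForceCommand internal-sftp
```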
[21:17] <budman_> http://d.pr/i/2r6y - wth is it just ubuntu 14.04 doing something special? I even tried a 3.11 kernel on the 3.2 kernel boxes.
[21:32] <fridaynext> what's the proper location for a subdomain, if I currently have /home/fridaynext/public/site.com/public, and I want sub.site.com.
[21:33] <fridaynext> where does the 'sub' folder go?
[22:12] <webfox> hello guys!
[22:13] <webfox> I was trying to download the ubuntu server for pc-64bit version but all I found was the amd64, is that all right if I install it on an intel processor?
[22:14] <sarnold> webfox: yup, that's expected :)
[22:14] <webfox> sarnold: why amd64 then?
[22:14] <sarnold> (the long story is, intel's 64 bit instruction set was the "itanium" cpu, which took off nearly nowhere and not at all. amd invented the 64 bit extension to the x86 instruction set, so it got called amd64, and then intel capitulated and made clone chips...)
[22:16] <webfox> right, thank you.
[23:04] <rostam> Hi I have a server with two network interfaces, eth0 and eth1. I have installed Ubuntu 12.04. The eth1 is connected to the internal network and the server acts as dhcp server on that interface. I'd like to configure the dhcp server so that any time the server reboots it removes the lease file and starts again. Is this possible? thx
[23:05] <sarnold> rostam: I'm curious why you want that.. it feels like the kind of thing that will cause all kinds of confusion, so I'm hesitant to suggest anything..
[23:06] <rostam> sarnold,  agree with you. This is a condition that was asked of me, and if you think it is not correct then I will not follow up on it. Thanks
[23:07] <sarnold> rostam: well, until I know more about it I don't want to say it is wrong -- but I think the clients that have been handed addresses might not expect their lease to be revoked when the server reboots.
[23:09] <rostam> sarnold,  in our test bed, we are experimenting with network configuration, so any time they change something they expect the dhcp request to be honored but the system does not honor it since it has already leased the available IPs. I hope this is clear. However, if I want to just test this condition (renew lease time) is there an option? thx
[23:10] <sarnold> rostam: okay, that's good enough for me :) hehe
[23:10] <sarnold> rostam: the first thing I'd try is adding some lines to the dhcpd upstart or sysv-init file to delete the leases database
[23:11] <rostam> sarnold, any pointer  how I can investigate this? thx
[23:12] <sarnold> rostam: sure, check the manpage for the dhcpd for the path to the leases database; if it doesn't say, look around in /var/lib/, it probably lives there.
[23:13] <sarnold> rostam: then look in /etc/init/dhcpd or /etc/init.d/dhcpd or something similar for the script that starts the service -- just add a 'rm /var/lib/whatever || true' command in the right place -- the || true will make the command exit with success even if the file doesn't already exist, which might be handy
[23:13] <rostam> sarnold, thanks
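On 12.04 the leases database is /var/lib/dhcp/dhcpd.leases and the service is the isc-dhcp-server upstart job, so sarnold's suggestion would go inside the pre-start script stanza of /etc/init/isc-dhcp-server.conf, roughly like this (a sketch; the touch is there because dhcpd refuses to start if the file is missing):

```
pre-start script
    rm -f /var/lib/dhcp/dhcpd.leases* || true
    touch /var/lib/dhcp/dhcpd.leases
end script
```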