[OmniOS-discuss] "df" reports wrong value

sergey ivanov sergey57 at gmail.com
Tue Feb 26 07:05:08 EST 2013


Dear Ram Chander,
I guess this space is used by snapshots: a snapshot keeps blocks that are no
longer visible in the mounted filesystem, so neither du nor the "used" column
of df sees them, yet they still consume pool space. There is also a
possibility it is used by children (nested ZFS filesystems), but then they
must be unmounted so that they are not reported in your du results. Or are
they subvolumes?
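A quick way to test the snapshot theory is to list the snapshots under the
pool together with the space each one holds (a sketch; "rpool" is taken from
your df output below, and whether any snapshots exist at all is my
assumption):
---
# USED is the space held exclusively by each snapshot; destroying a
# snapshot frees roughly that amount.
-bash-4.2# zfs list -t snapshot -o name,used,referenced -r rpool
---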
If you are using ZFS, then I would suggest using the ZFS tools to monitor
its status, like "zfs list" and "zfs get". You see, ZFS is not just a
filesystem: it combines in one package a filesystem, a volume manager, RAID,
and maybe other things. To find out what is using the space, I usually do
something like "zfs get all | grep used". Or, for example, on one of our
systems:
---
root@megaera:~# df -h /raid/data{,/subversion}
Filesystem            Size  Used Avail Use% Mounted on
raid/data             5.7T   88K  5.7T   1% /raid/data
raid/data/subversion  5.8T   39G  5.7T   1% /raid/data/subversion
root@megaera:~# zfs list -o \
    name,used,usedbychildren,usedbysnapshots,available,compressratio,mountpoint \
    raid/data{,/subversion}
NAME                   USED  USEDCHILD  USEDSNAP  AVAIL  RATIO  MOUNTPOINT
raid/data              222G       222G         0  5.70T  1.30x  /raid/data
raid/data/subversion   151G          0      113G  5.70T  1.09x  /raid/data/subversion
---
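Applied to your pool, the same breakdown is available in one command via the
"space" column-set shorthand of "zfs list" (a sketch, assuming a standard
rpool layout; run it on your machine to get the real numbers):
---
# Shows AVAIL and USED plus USEDSNAP (snapshots), USEDDS (the dataset
# itself), USEDCHILD (descendants) and USEDREFRESERV per dataset.
-bash-4.2# zfs list -r -o space rpool
---
If USEDSNAP on rpool/ROOT/omnios turns out to be large, that is where the
space df cannot account for went.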

-- 

  Regards,

  Sergey Ivanov.



On Tue, Feb 26, 2013 at 6:30 AM, Ram Chander <ramquick at gmail.com> wrote:

> Hi,
>
>
> I have a 547G partition, and df shows only 97G used and 31G available.
> When checked with "du", it also reports only 97G used. Where is the rest
> of the space? And why is "df" reporting the wrong available space?
> There are no processes holding deleted files.
> -bash-4.2# df -h
> Filesystem             size   used  avail capacity  Mounted on
> rpool/ROOT/omnios      547G    97G    31G    76%    /
> /devices                 0K     0K     0K     0%    /devices
> /dev                     0K     0K     0K     0%    /dev
> ctfs                     0K     0K     0K     0%    /system/contract
> proc                     0K     0K     0K     0%    /proc
> mnttab                   0K     0K     0K     0%    /etc/mnttab
> swap                  10.0G   300K  10.0G     1%    /etc/svc/volatile
> objfs                    0K     0K     0K     0%    /system/object
> sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
> /usr/lib/libc/libc_hwcap1.so.1
>                        128G    97G    31G    76%    /lib/libc.so.1
> fd                       0K     0K     0K     0%    /dev/fd
> swap                  10.0G     4K  10.0G     1%    /tmp
> swap                  10.0G    40K  10.0G     1%    /var/run
> rpool/export           547G    32K    31G     1%    /export
> rpool/export/home      547G    31K    31G     1%    /export/home
> rpool                  547G    36K    31G     1%    /rpool
>
> -bash-4.2# du -sch /*
> 512     bin
> 74M     boot
> 707K    dev
> 322K    devices
> 38M     etc
> 3.0K    export
> 512     home
> 112M    kernel
> 49M     lib
> 2.0K    media
> 1.5K    mnt
> 512     net
> 439M    opt
> 341M    platform
> du: cannot access `proc/23443/fd/3': No such file or directory
> du: cannot access `proc/23443/path/3': No such file or directory
> 4.0G    proc
> 5.7M    root
> 7.5K    rpool
> 1.8M    sbin
> 3.7M    system
> 1.5K    test
> 8.0K    tmp
> 906M    usr
> 91G     var
> 97G     total
>


-- 
Regards,
Sergey Ivanov | 443-844-6318 | sergey57 at gmail.com
http://www.linkedin.com/pub/sergey-ivanov/8/270/a09