
Filesystem is full, but can't find space being used - Solaris

I once ran into a strange issue where the /staging filesystem was almost full, yet I could not find where the space was being used.

The 'df -h' command shows that the filesystem is 100% used:

==> df -h
Filesystem             size   used  avail capacity  Mounted on
/staging                 99G   99G   0K     100%      /staging

But the 'du -h' command says that only 3.7G of the 99G is in use.

==> pwd
/staging
==> du -h | grep "\.$"
3.7G .



This is strange. So how do we find out what is consuming the remaining 95.3G of disk space? The usual cause is files that were deleted while a process still holds them open: 'du' walks the directory tree and cannot see such files, but the filesystem does not release their blocks until the last open descriptor is closed, so 'df' keeps counting them. The UNIX command that helps here is 'lsof' - list open files.

==> lsof | grep -i /staging

cat        4723  user1 txt   VREG     270,28001          50028743680      15496 /staging (/dev/vx/dsk/staging)
cat        4723  user1 3r  VREG     270,28001          50028743680      15496 /staging (/dev/vx/dsk/staging)
sed        4724  user1 1w  VREG     270,28001          50028743680      15496 /staging (/dev/vx/dsk/staging)
ksh        8796   jg8789  cwd   VDIR     270,28001              1949696         11 /app/staging/pollers
ksh       10834  collect  cwd   VDIR     270,28001              1949696         11 /app/staging/pollers
cat       15794  user1  txt   VREG     270,28001          52067398891      36877 /staging (/dev/vx/dsk/staging)
cat       15794  user1 3r  VREG     270,28001          52067398891      36877 /staging (/dev/vx/dsk/staging)
sed       15795  user1 1w  VREG     270,28001          50028743680      15496 /staging (/dev/vx/dsk/staging)

The entries above point to the processes that are holding on to the missing disk space. Several 'cat' and 'sed' processes are keeping huge files open (note the SIZE values of around 50G); that space can be released by terminating those processes.
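A quicker way to narrow this down, assuming the lsof build on the box supports the link-count filter, is '+L1', which lists only open files whose link count is zero, that is, files which have been unlinked (deleted) but are still held open by a process. Combined with '+a' (AND the selections together) and the mount point, it shows exactly the space that 'du' cannot see:

==> lsof +aL1 /staging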

==> kill -9 4723  4724  15794  15795  
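Before resorting to 'kill -9', it is worth confirming what each process actually has open and giving it a chance to exit cleanly. A minimal sketch, using one PID from the lsof output above:

==> pfiles 4723        # Solaris proc tool: list the open file descriptors of PID 4723
==> kill -TERM 4723    # ask the process to exit and close its files gracefully
==> kill -9 4723       # only if it ignores SIGTERM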


 DESCRIPTION ABOUT lsof
    Lsof revision 4.82 lists on its standard output file information
    about files opened by processes for the following UNIX dialects:
      AIX 5.3
      Apple Darwin 9 (Mac OS X 10.5)
      FreeBSD 4.9 for x86-based systems
      FreeBSD 7.[012] and 8.0 for AMD64-based systems
      Linux 2.1.72 and above for x86-based systems
      Solaris 9 and 10
    (See the DISTRIBUTION section of this manual page for information
    on how to obtain the latest lsof revision.)
    An open file may be a regular file, a directory, a block
    special file, a character special file, an executing text
    reference, a library, a stream or a network file (Internet
    socket, NFS file or UNIX domain socket.) A specific file or
    all the files in a file system may be selected by path.
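
You can reproduce the df/du mismatch yourself to see the mechanism in action. A minimal sketch, assuming the filesystem has a gigabyte to spare and using a hypothetical file name /staging/demo.dat:

==> mkfile 1g /staging/demo.dat              # Solaris: allocate a 1 GB test file
==> tail -f /staging/demo.dat > /dev/null &  # keep the file open in the background
==> rm /staging/demo.dat                     # unlink it while it is still open
==> df -h /staging                           # still counts the 1 GB
==> du -sh /staging                          # no longer sees the unlinked file
==> kill %1                                  # close the last descriptor; df drops back down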
