[OpenAFS-devel] some requests
Sven Oehme
oehmes@de.ibm.com
Tue, 22 Feb 2005 09:27:52 +0100
I think I know what I am talking about, but mixed something up... :-)
I am more interested in the extended info of the volume, but over a bulk
interface with formatted output.
Especially the statistics are important...
ep14:~ # vos examine root.afs -extend
root.afs 536870912 RW 7 K used 5 files On-line
afsfs01 /vicepba
RWrite 536870912 ROnly 536870913 Backup 0
MaxQuota 1024 K
Creation Sat Dec 29 00:28:49 2001
Last Update Wed Jan 14 19:33:56 2004
0 accesses in the past day (i.e., vnode references)
Raw Read/Write Stats
|-------------------------------------------|
| Same Network | Diff Network |
|----------|----------|----------|----------|
| Total | Auth | Total | Auth |
|----------|----------|----------|----------|
Reads | 0 | 0 | 0 | 0 |
Writes | 0 | 0 | 0 | 0 |
|-------------------------------------------|
Writes Affecting Authorship
|-------------------------------------------|
| File Authorship | Directory Authorship|
|----------|----------|----------|----------|
| Same | Diff | Same | Diff |
|----------|----------|----------|----------|
0-60 sec | 0 | 0 | 0 | 0 |
1-10 min | 0 | 0 | 0 | 0 |
10min-1hr | 0 | 0 | 0 | 0 |
1hr-1day | 0 | 0 | 0 | 0 |
1day-1wk | 0 | 0 | 0 | 0 |
> 1wk | 0 | 0 | 0 | 0 |
|-------------------------------------------|
RWrite: 536870912 ROnly: 536870913
number of sites -> 4
server afsfs01 partition /vicepba RW Site
server mfgmzafsc01 partition /vicepaa RO Site
server afsfs01 partition /vicepaa RO Site
server afsfs02 partition /vicepda RO Site
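Since the goal is feeding this data to Excel or similar tools, the VLDB site list above can be flattened into CSV with a short parser. This is only a sketch, not part of vos itself; the sample text is lightly normalized from the output above.

```python
import csv
import io
import re

# Sample VLDB section, taken from the "vos examine -extend" output above.
VLDB_TEXT = """\
RWrite: 536870912 ROnly: 536870913
number of sites -> 4
server afsfs01 partition /vicepba RW Site
server mfgmzafsc01 partition /vicepaa RO Site
server afsfs01 partition /vicepaa RO Site
server afsfs02 partition /vicepda RO Site
"""

# One replication site per line: server name, partition, and site type.
SITE_RE = re.compile(r"server\s+(\S+)\s+partition\s*(\S+)\s+(RW|RO|BK)\s+Site")

def sites_to_csv(text):
    """Extract (server, partition, type) rows and render them as CSV."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["server", "partition", "type"])
    for match in SITE_RE.finditer(text):
        writer.writerow(match.groups())
    return out.getvalue()

print(sites_to_csv(VLDB_TEXT))
```

In practice one would pipe the real command output in instead of the embedded sample, e.g. `vos examine root.afs -extend | python parse_sites.py`.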
Sven
-------------------------------------------------------------------------------------------------------------------------
Dept. A153, STG/ISC EMEA AIS Strategy and Architecture
Development Leader Stonehenge
IBM intranet ---> http://w3.ais.mainz.de.ibm.com/stonehenge/
internet ---> http://www-5.ibm.com/services/de/storage/stonehenge.html
Phone (+49)-6131-84-3151
Fax (+49)-6131-84-6708
Mobile (+49)-171-970-6664
E-Mail : oehmes@de.ibm.com
Chaskiel M Grundman <cg2v@andrew.cmu.edu>
Sent by: openafs-devel-admin@openafs.org
21/02/2005 20:07
To: openafs-devel@openafs.org
Subject: Re: [OpenAFS-devel] some requests
--On Monday, February 21, 2005 18:43:47 +0100 Sven Oehme
<oehmes@de.ibm.com> wrote:
> but the VLDB output is the output i would be most interested in ..
> especially if you like to export this data to use it in excel or other
> tools ..
Do you really care about the vldb part?
% vos exa user.cg2v
volser> user.cg2v 1970723513 RW 179994 K On-line
volser> VICE16.FS.andrew.cmu.edu /vicepb
volser> RWrite 1970723513 ROnly 0 Backup 1970723515
volser> MaxQuota 200000 K
volser> Creation Thu Aug 6 17:18:21 1992
volser> Last Update Mon Feb 21 13:48:15 2005
volser> 1658 accesses in the past day (i.e., vnode references)
vldb> RWrite: 1970723513 Backup: 1970723515
vldb> number of sites -> 1
vldb> server VICE16.FS.andrew.cmu.edu partition /vicepb RW Site
The information tagged 'volser' will be present in vos listvol -long
(which is a bulk interface).
The information tagged 'vldb' will be present in vos listvldb (which works
either singly or in bulk; it has optional -server and -partition switches).
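To consume that bulk interface programmatically, one could capture the `vos listvol -long` output and parse each volume's summary line. The sketch below works against a sample line from this thread; the field layout (name, id, type, usage, status) is inferred from the quoted output, not from the vos documentation.

```python
import re

# One volume summary line in the "vos listvol -long" / "vos examine" style,
# as quoted above; the field layout is inferred from that sample.
SAMPLE = "user.cg2v                         1970723513 RW     179994 K  On-line"

SUMMARY_RE = re.compile(
    r"(?P<name>\S+)\s+(?P<id>\d+)\s+(?P<type>RW|RO|BK)\s+"
    r"(?P<used_k>\d+)\s+K\s+(?P<status>\S.*\S|\S)"
)

def parse_summary(line):
    """Split a volume summary line into a dict of named fields."""
    match = SUMMARY_RE.match(line.strip())
    if match is None:
        raise ValueError("unrecognized summary line: %r" % line)
    fields = match.groupdict()
    fields["id"] = int(fields["id"])
    fields["used_k"] = int(fields["used_k"])
    return fields

print(parse_summary(SAMPLE))
```

From such dicts, writing a CSV or spreadsheet row per volume is straightforward.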
I think you may misunderstand what the vldb is precisely. The vldb
(vlserver) is a "directory service" that maps volume names to and from ids
and also identifies the set of fileservers that the volume allegedly
exists
on. The volserver is a volume-level management interface. Most volume
metadata is stored on the fileserver(s) that the volume is resident on and
is accessible via the volserver interface. The 'vos' program utilizes both
of these services, but they are distinct.