In a real installation (one active namenode, many datanodes), Hadoop must be installed on each of the nodes. CDH (and most other vendors) provides software to help with the distributed installation.
You can see file metadata (and generally browse HDFS) via WebHDFS: enable it by setting the property dfs.webhdfs.enabled to true in hdfs-site.xml and restarting HDFS, then point your browser at the namenode web UI on localhost:50070 and browse to a file of interest.
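Once WebHDFS is enabled you can also hit its REST endpoint directly instead of using the browser UI. A minimal sketch with curl, assuming the namenode runs on localhost:50070 and the path /user/hadoop/file.txt is just a placeholder for your file:

```shell
# GETFILESTATUS returns a JSON FileStatus object (length, replication,
# blockSize, owner, modificationTime, ...)
curl -i "http://localhost:50070/webhdfs/v1/user/hadoop/file.txt?op=GETFILESTATUS"

# LISTSTATUS does the same for every entry in a directory
curl -i "http://localhost:50070/webhdfs/v1/user/hadoop?op=LISTSTATUS"
```

The same operations work from any HTTP client, which makes WebHDFS handy for non-Java tooling.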
File metadata can also be retrieved programmatically in Java. For file splits you can use the InputFormat API, e.g. FileInputFormat.getSplits(), which returns the location of each split of the file of interest. A more straightforward approach is the FileSystem API, specifically FileSystem.listFiles(), which returns each file's block locations along with its status. Note that listFiles() was only added in the Hadoop 2.x (0.23+) line, so it may not be available on older clusters.
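A minimal sketch of the FileSystem.listFiles() approach, assuming a Hadoop 2.x client on the classpath and fs.defaultFS configured to point at your namenode; the class name and the path argument are just illustrative:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class PrintBlockLocations {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath,
        // so fs.defaultFS should already name the namenode.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Recursively list files under the given path; each LocatedFileStatus
        // already carries its block locations, so no extra RPC per file.
        RemoteIterator<LocatedFileStatus> it =
                fs.listFiles(new Path(args[0]), true);
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            System.out.println(status.getPath() + " (" + status.getLen() + " bytes)");
            for (BlockLocation block : status.getBlockLocations()) {
                System.out.println("  offset=" + block.getOffset()
                        + " length=" + block.getLength()
                        + " hosts=" + String.join(",", block.getHosts()));
            }
        }
    }
}
```

On pre-2.x clusters the same information can be had with FileSystem.getFileBlockLocations(status, 0, status.getLen()) after a plain listStatus() call, at the cost of one extra namenode round trip per file.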