Hey,
Yes, Spark can run without Hadoop. All core Spark features will continue to work, but you'll miss conveniences such as easily distributing your files to all the nodes in the cluster via HDFS.
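For illustration, here's a minimal sketch in Scala of Spark running entirely in local mode with no Hadoop installed. The app name and file path are just placeholders; any local text file works:

```scala
import org.apache.spark.sql.SparkSession

object LocalSparkDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("no-hadoop-demo")
      .master("local[*]")  // run on local threads, no cluster manager needed
      .getOrCreate()

    // Read from the local filesystem instead of HDFS.
    val lines = spark.read.textFile("file:///tmp/data/words.txt")
    println(s"Line count: ${lines.count()}")

    spark.stop()
  }
}
```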
That said, Spark is only a processing engine: it keeps data in memory while it works, but you still need a storage system to persist the data. This is where Hadoop comes in alongside Spark, with HDFS providing the storage layer. Another reason to pair them is that both are open source and integrate with each other more easily than Spark does with other storage systems.
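To see how little changes when Hadoop provides the storage, here's the same read pointed at HDFS instead of the local disk. The namenode host, port, and path are placeholders for your own cluster:

```scala
// Same Spark code, but the data now lives in HDFS.
val lines = spark.read.textFile("hdfs://namenode:9000/data/words.txt")
println(s"Line count: ${lines.count()}")
```

The only difference is the URI scheme in the path, which is exactly why the two integrate so easily.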