df = spark.createDataFrame([("A", 2000), ("A", 2002), ("A", 2007), ("B", 1999), ("B", 2015)], ["Group", "Date"])
df.show()
+-----+----+
|Group|Date|
+-----+----+
|    A|2000|
|    A|2002|
|    A|2007|
|    B|1999|
|    B|2015|
+-----+----+
# accepted solution above
from pyspark.sql.window import Window
from pyspark.sql.functions import row_number

df = df.withColumn("row_num", row_number().over(Window.partitionBy("Group").orderBy("Date")))
df.show()
# output of the accepted solution
+-----+----+-------+
|Group|Date|row_num|
+-----+----+-------+
|    B|1999|      1|
|    B|2015|      2|
|    A|2000|      1|
|    A|2002|      2|
|    A|2007|      3|
+-----+----+-------+
After this, you can aggregate the rows per group to list the dates out, either with a UDF or with the built-in collect_list function.
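
A minimal sketch of that aggregation using only built-in functions (so no UDF is strictly needed), assuming you want one row per Group with its dates in ascending order:

from pyspark.sql import functions as F

# Collect each group's dates into a single array, then sort it.
# sort_array is used because collect_list does not guarantee any
# particular order after the groupBy shuffle.
grouped = (
    df.groupBy("Group")
      .agg(F.sort_array(F.collect_list("Date")).alias("Dates"))
)
grouped.show(truncate=False)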