You are right. Hadoop follows the WORM principle, i.e. write once, read many times. So once data has been written to HDFS, the existing records cannot be updated in place; new data can only land as new files (or be appended at the end of an existing file), which is why Sqoop offers an incremental append mode.
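To illustrate that constraint, here is a minimal HDFS shell sketch; the file orders.csv and the directory /user/demo are placeholder names:

$ hdfs dfs -put orders.csv /user/demo/orders.csv          # first write succeeds
$ hdfs dfs -put orders.csv /user/demo/orders.csv          # rejected, the file already exists
$ hdfs dfs -put -f orders.csv /user/demo/orders.csv       # the only way to change it is to overwrite the whole file
$ hdfs dfs -appendToFile new_rows.csv /user/demo/orders.csv   # new data can only be added at the end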
Append mode is used when new rows are regularly inserted into the source table in the database. The table should have a column whose values only grow, typically a numeric primary key; if there is no primary key, you can point Sqoop at another numeric column with --split-by so the parallel import still has a split column. The growing column is passed as --check-column, and Sqoop keeps track of the last imported value through --last-value: only rows whose check-column value is greater than --last-value are imported. For example:
$ sqoop import --connect jdbc:mysql://localhost/DB_name --username username --password password --table tablename --incremental append --check-column colname --last-value 100
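Run this way, you have to remember the new last value and pass it yourself on every import. A saved Sqoop job stores and updates it for you after each run; a minimal sketch, where the job name orders_incr is a placeholder and the connection details are the same dummy values as above:

$ sqoop job --create orders_incr \
    -- import \
    --connect jdbc:mysql://localhost/DB_name \
    --username username --password password \
    --table tablename \
    --incremental append --check-column colname --last-value 0

$ sqoop job --exec orders_incr    # each run imports only rows with colname greater than the stored last value
$ sqoop job --show orders_incr    # lists the saved parameters, including the last value recorded by the previous run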