The purge process runs in two stages: first, one process identifies the data to be purged, and then a second process performs the actual deletion.
These processes run every hour, and only a portion of the eligible data is identified in each pass to avoid locking and resource over-utilization. The purge also runs at a lower priority than the insertion of new data, so that incoming data is not missed, which could itself impede the process.
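The two-stage, batched approach can be sketched as follows. This is a minimal illustration using SQLite, with hypothetical table and column names (`samples`, `collected_at`); the product's actual purge logic is internal and its schema is not shown here.

```python
import sqlite3

BATCH_SIZE = 3  # purge only a portion of the eligible rows on each run

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, collected_at TEXT)")
conn.executemany(
    "INSERT INTO samples (collected_at) VALUES (?)",
    [("2020-01-0%d" % d,) for d in range(1, 8)],
)
conn.commit()

def purge_once(conn, cutoff, batch_size):
    # Stage 1: identify a limited batch of rows older than the retention cutoff.
    ids = [row[0] for row in conn.execute(
        "SELECT id FROM samples WHERE collected_at < ? LIMIT ?",
        (cutoff, batch_size),
    )]
    # Stage 2: delete only that batch, keeping each transaction short so
    # inserts of new data are not blocked for long.
    conn.executemany("DELETE FROM samples WHERE id = ?", [(i,) for i in ids])
    conn.commit()
    return len(ids)

# Each scheduled run removes at most BATCH_SIZE old rows; repeated runs
# gradually work through the backlog.
deleted = purge_once(conn, "2020-01-05", BATCH_SIZE)
remaining = conn.execute("SELECT COUNT(*) FROM samples").fetchone()[0]
```

Capping the batch size trades purge speed for predictability: no single run holds locks long enough to interfere with data collection, which is why the backlog shrinks over hours rather than all at once.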
Given enough time, the database size will decrease on its own; alternatively, you can suspend monitoring to allow the purge process to proceed more quickly.
Other alternatives include starting a fresh data repository with the new purge settings or truncating particularly large tables. These options are more drastic, however, and will remove all historical data.
When performing a truncation, you will also need to shrink the database after the data has been removed, and then set your retention settings more aggressively so that less data is kept and the database does not grow back to its previous size.
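The truncate-then-shrink sequence can be illustrated with SQLite, where `DELETE` without a `WHERE` clause plays the role of a truncation and `VACUUM` plays the role of a database shrink; other engines have their own equivalents, and the table name here (`history`) is hypothetical.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "repo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE history (id INTEGER PRIMARY KEY, payload BLOB)")
conn.executemany(
    "INSERT INTO history (payload) VALUES (?)",
    [(b"x" * 4096,) for _ in range(1000)],
)
conn.commit()
size_before = os.path.getsize(path)

# Step 1: empty the large table (SQLite has no TRUNCATE statement;
# an unqualified DELETE is the equivalent).
conn.execute("DELETE FROM history")
conn.commit()

# Step 2: shrink the database file. Deleting rows alone leaves free
# pages inside the file; VACUUM rebuilds it to release that space.
conn.execute("VACUUM")
size_after = os.path.getsize(path)
```

The key point the example demonstrates is that removing rows does not by itself return disk space; the explicit shrink step is what reclaims it, and tighter retention settings afterwards keep the file from regrowing.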