Archiving a web page from the command line


What is the easiest and fastest way to create a snapshot of a web page from the command line? (Only a single page is crawled, one time.)

Currently I am using wget with the following parameters:
wget -q -E -H -k -K -p -t 1 -T 10 --reject mp4,mov,avi,mkv --execute robots=off --delete-after https://url-of-webpage --warc-file=20201223-warc-filename

The problem is that some pages reference images through attributes other than the standard src (for example, lazy-loading attributes), and wget does not download those images.
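One likely cause: `wget -p` fetches resources it finds in standard attributes such as `<img src=...>`, but many pages lazy-load images and put the real URL in `data-src` (and some older wget versions also miss `srcset`). A minimal stdlib sketch to check whether a saved page uses such attributes — the class name and sample HTML below are illustrative, not part of any existing tool:

```python
from html.parser import HTMLParser

class LazyImageFinder(HTMLParser):
    """Collect image URLs referenced via attributes wget may not follow
    (data-src for lazy loading, srcset variants)."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        for name, value in attrs:
            if name == "data-src" and value:
                self.urls.append(value)
            elif name == "srcset" and value:
                # srcset is a comma-separated list of "URL descriptor" pairs
                for candidate in value.split(","):
                    self.urls.append(candidate.strip().split()[0])

# Illustrative input; in practice, feed the HTML wget saved for the page.
sample = '<img data-src="https://example.com/a.jpg" srcset="https://example.com/a-2x.jpg 2x">'
finder = LazyImageFinder()
finder.feed(sample)
print(finder.urls)  # → ['https://example.com/a.jpg', 'https://example.com/a-2x.jpg']
```

If this turns up URLs that are missing from the WARC, the page is lazy-loading its images and a plain wget crawl will not capture them without extra handling.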