
Commit 2524cdd728 — "Updates from virtual machine"
Branch: main
Author: Khoi, 1 year ago

6 changed files with 6 additions and 1 deletion
  1. BIN   Forums/HiddenAnswers/__pycache__/crawler_selenium.cpython-311.pyc
  2. BIN   Forums/HiddenAnswers/__pycache__/parser.cpython-311.pyc
  3. BIN   Forums/Initialization/__pycache__/forums_mining.cpython-311.pyc
  4. BIN   Forums/OnniForums/__pycache__/crawler_selenium.cpython-311.pyc
  5. BIN   Forums/OnniForums/__pycache__/parser.cpython-311.pyc
  6. +6 −1 MarketPlaces/ThiefWorld/parser.py

MarketPlaces/ThiefWorld/parser.py  (+6 −1)

@@ -1,7 +1,7 @@
 __author__ = 'DarkWeb'

 # Here, we are importing the auxiliary functions to clean or convert data
-from typing import List
+from typing import List, Tuple
 from MarketPlaces.Utilities.utilities import *

 # Here, we are importing BeautifulSoup to search through the HTML tree
@@ -89,6 +89,11 @@ def thiefWorld_description_parser(soup: BeautifulSoup) -> Tuple:
     return row

+def thiefWorld_listing_parser(soup: BeautifulSoup):
+    pass
+#parses description pages, so takes html pages of description pages using soup object, and parses it for info it needs
+#stores info it needs in different lists, these lists are returned after being organized
+#@param: soup object looking at html page of description page
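The thiefWorld_listing_parser added in this commit is only a pass stub. As a rough illustration of the direction its comments describe (extracting fields into parallel lists from a BeautifulSoup object), here is a minimal sketch; the function name, CSS classes ("product-card", "title", "price") and the "-1" fallback are assumptions for illustration, not ThiefWorld's actual markup or the project's final implementation:

```python
# Hedged sketch of a listing parser in the spirit of the stub above.
# Assumption: each listing sits in a <div class="product-card"> with a
# title link and a price span -- hypothetical markup, for illustration only.
from typing import List, Tuple
from bs4 import BeautifulSoup


def thiefWorld_listing_parser_sketch(soup: BeautifulSoup) -> Tuple[List[str], List[str]]:
    # Collect product names and prices into separate parallel lists,
    # mirroring the "stores info it needs in different lists" comment.
    names: List[str] = []
    prices: List[str] = []
    for card in soup.find_all("div", class_="product-card"):
        title = card.find("a", class_="title")
        price = card.find("span", class_="price")
        # Fall back to "-1" (a hypothetical missing-value marker) when a
        # field is absent, so the lists stay aligned card-for-card.
        names.append(title.get_text(strip=True) if title else "-1")
        prices.append(price.get_text(strip=True) if price else "-1")
    return names, prices
```

The parallel-list shape keeps index i of every list referring to the same listing, which is why missing fields are recorded as placeholders rather than skipped.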

