Python
Full Stack Python t.o.c.
Python cheat sheet
cat test.json | python -m json.tool (pretty-print JSON) | RealPython site |
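Roughly the same pretty-printing from a script, using only the stdlib json module (the file name is just an example):
import json

# Re-serialize test.json with indentation -- same idea as json.tool
with open('test.json') as f:
    data = json.load(f)
print(json.dumps(data, indent=4, sort_keys=True))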
Matplotlib cheat sheets
List all installed modules: pydoc modules
Show help for a module: pydoc sys
Python: Timezone and Daylight savings
Analyze data with Python and Jupyter Notebook — First Python Notebook 1.0 documentation | Used for this free course recommended by simonw |
PyNaCl: Python binding to the libsodium library — PyNaCl 1.3.0 documentation | Data Analysis Made Simple: Python Pandas Tutorial |
Python tutorials as ipynb/Jupyter notebooks: stats, plotting, astronomy | source |
Space Science with Python — A Data Science Tutorial Series | Ultimate Guide to Python Debugging | ReadTheDocs: Python | Scrapy |
Build web apps to automate sysadmin tasks, with Python, Flask, etc | Working with SQLite in Python |
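A minimal stdlib sqlite3 sketch to go with the SQLite link (database and table names are made up for illustration):
import sqlite3

conn = sqlite3.connect('example.db')  # hypothetical file name
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)')
cur.execute('INSERT INTO notes (body) VALUES (?)', ('hello sqlite',))  # parameterized query
conn.commit()
for row in cur.execute('SELECT id, body FROM notes'):
    print(row)
conn.close()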
Python Flask project: This is a remix of the original app by cjeller1592; see his source code for it, and his Writeas-Search github page, which uses the write.as API. WTForms documentation | Flask Tutorial | Flask github repo | Flask documentation | Flask Resources List | Glitch Support Forum | Flask-Python tutorials | Note: the Glitch console and editor now sync automatically; previously you had to run the refresh command from the console to force a refresh, updating the editor with any files created/edited via the console or programmatically. glitch: how to link to other pages from index.html |
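For orientation, the smallest Flask app looks roughly like this (route and strings are illustrative, not the remix's actual code):
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask'

if __name__ == '__main__':
    app.run(debug=True)  # hosts like Glitch/PythonAnywhere wire up host/port their own way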
Important Notes: TutorialsPoint online python3 compiler | pyfiddle.io | Python re test page | python fiddle | trinket.io python fiddle | The Standard Python Library | Python3 good reference | PythonAnywhere | PythonAnywhere web db example | How do I Extract Specific Portions of a Text File Using Python? | Python3 regex tutorial | Python TutorialsPoint | Official Python Tutorial | Google's Python Tutorial | Dive Into Python3 book | Pyzo | Full Stack Python | Install PyQt5 and build gui app | Another gui app | Spyder github repo | Spyder site | Spyder docs incl debug |
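A sketch of the "extract specific portions of a text file" idea with the re module (pattern and file name are only examples):
import re

# Print every quoted string found in a text file; adjust the pattern as needed
pattern = re.compile(r'"([^"]*)"')
with open('input.txt') as f:  # hypothetical file
    for line in f:
        for match in pattern.findall(line):
            print(match)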
Simple python debugging with pdb, PhysicsForum series | Scientific computing with python .ipynb | matplotlib – 2D and 3D plotting in Python | Jupyter nbviewer | Guide to Python Plotting with Matplotlib |
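The pdb workflow from those links boils down to pausing at a breakpoint and stepping; a minimal sketch (function name is made up):
import pdb

def buggy_sum(n):
    total = 0
    for i in range(n):
        pdb.set_trace()  # pauses here each pass; inspect with p total, n(ext), s(tep), c(ontinue)
        total += i
    return total

print(buggy_sum(3))
On Python 3.7+, the built-in breakpoint() call does the same thing as import pdb; pdb.set_trace().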
BeautifulSoup: Cheat sheet | Docs | web scraping with python 3 and BeautifulSoup | Scrape a Website With This Beautiful Soup Python Tutorial | Scraping Data on the Web with BeautifulSoup | Write your first web scraper in Python with Beautifulsoup | A beginner's guide to web scraping with Python | https://www.scrapehero.com/a-beginners-guide-to-web-scraping-part-2-build-a-scraper-for-reddit/ | python script gist: Print list of upvoted links on reddit by count per subreddit, then subreddit name | Q: posts I have upvoted in a specific subreddit | https://www.digitalocean.com/community/tutorials/how-to-scrape-web-pages-with-beautiful-soup-and-python-3 | http://www.kashifaziz.me/web-scraping-python-beautifulsoup.html/ (good refs) | http://www.storybench.org/how-to-scrape-reddit-with-python/ | https://realpython.com/python-web-scraping-practical-introduction/ | https://medium.freecodecamp.org/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe | https://codeburst.io/web-scraping-101-with-python-beautiful-soup-bb617be1f486 | https://automatetheboringstuff.com/chapter11/ | https://www.pythonforbeginners.com/python-on-the-web/web-scraping-with-beautifulsoup | https://www.pythonforbeginners.com/python-on-the-web/beautifulsoup-4-python/ | https://likegeeks.com/python-web-scraping/ | https://www.geeksforgeeks.org/implementing-web-scraping-python-beautiful-soup/ | https://dev.to/hackersandslackers/scraping-data-on-the-web-with-beautifulsoup-11fl |
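Both scripts below parse saved page source from disk; for a live page the usual pattern pairs requests with BeautifulSoup (placeholder URL, assumes the requests package is installed):
import requests
from bs4 import BeautifulSoup

resp = requests.get('https://example.com')  # placeholder URL
soup = BeautifulSoup(resp.text, 'html.parser')
# List the text and target of every link on the page
for a in soup.find_all('a', href=True):
    print(a.text.strip(), '->', a['href'])
Note that reddit itself tends to reject the default client, so live scraping there usually needs a custom User-Agent header.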
WinPython is portable: https://winpython.github.io/ | https://github.com/winpython/winpython/wiki | Portable or cloud python versions: http://portablepython.com/ | Install pip on Ubuntu 16.04: https://www.rosehosting.com/blog/how-to-install-pip-on-ubuntu-16-04/
Spyder is a MATLAB-like IDE for scientific computing with Python. It offers the advantages of a traditional IDE: code editing, execution, and debugging all happen in a single environment, and work on different calculations can be organized as projects.
Python: Formatting dates and times
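Quick strftime/strptime sketch for the date-formatting link (dates are just examples):
from datetime import datetime

now = datetime.now()
print(now.strftime('%Y-%m-%d %H:%M:%S'))            # e.g. 2019-05-04 13:37:00
parsed = datetime.strptime('2019-05-04', '%Y-%m-%d')
print(parsed.strftime('%A, %B %d'))                 # Saturday, May 04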
Working version for reddit upvotes:
from bs4 import BeautifulSoup as bs
import re

# Get page source from file
path = 'F:\\papps\\WPy-3661\\rupvotes1.txt'
with open(path, 'r') as f:
    htm = f.read()
soup = bs(htm, 'html.parser')  # or features="lxml"

# Remove links that interfere with getting the authors of the reddit posts.
# THESE MAY OR MAY NOT BE NECESSARY; RUN PROGRAM, THEN SEE
rem = soup.find(class_='content')
rem.decompose()
rem = soup.find(class_='bottom')
rem.decompose()

# Problem when author is [deleted] and there is no <a> tag for it.
# Manual workaround: copy a valid <a> author link & replace the name with deleted.
pclass = soup.find_all("a", class_=re.compile("^title may-blank"))
plink = soup.find_all("a", class_=re.compile("^bylink comments"))
#plink = soup.find_all("a", class_="bylink comments may-blank")  # could be 'bylink comments empty may-blank', which would throw off indexing
pauth = soup.find_all("a", class_=re.compile("^author "))

# Emit one <li> per post: permalink, title, and author
for i, item in enumerate(pclass):
    outp = '<li><a href="https://www.reddit.com'
    outp += plink[i].get('data-href-url')
    outp += '" tags="">' + item.text + ', by ' + pauth[i].text + '</a></li>'
    print(outp)
#print(pclass.prettify())
Another program:
from bs4 import BeautifulSoup as bs

# Get page source from file
path = '/home/zzz/upvoted.txt'
with open(path, 'r') as f:
    htm = f.read()
soup = bs(htm, 'html.parser')
#print(soup.prettify())

# Here the saved source wraps hrefs in 'html-attribute-value html-external-link'
# anchors, so no class-regex filtering or decompose() cleanup is needed
pclass = soup.find_all("a", class_='html-attribute-value html-external-link')

hlast = ''  # last href seen, to skip consecutive duplicates
for item in pclass:
    h = item['href']
    if '/comments/' in h and '/r/' in h and h != hlast:
        hlast = h
        h = h[:-1]                    # drop the trailing slash
        x = h.rfind('/') + 1          # start of the URL slug
        s = h[x:].replace("_", " ")   # turn the slug into a readable title
        outp = '<a href="' + h + '" tags="">' + s + '</a><br>'
        print(outp)
Hashtags: #devel