Wednesday, December 14, 2016

Fifteen minutes of online anonymity

The text below comes from an article originally written for INA, updated and republished on Jean-Marc Manach's blog under the title Comment (ne pas) être (cyber)espionné ?. The original article was written during the summer of 2012 [read more ...]


Secure chat

Exchanging files

Confidential notes

On the difficulty of making phone calls

IRL

Going further

Friday, July 8, 2016

Virtual and Augmented Reality



The ENTiTi software lets you transform your art into virtual and augmented reality in just minutes, and is a free download.


ENTiTi is a powerful cloud-based platform that enables any company or individual to create interactive virtual and augmented reality content without any developer skills.

Monday, June 27, 2016

SharePoint Calculated Columns


SharePoint Calculated Columns are powerful tools when creating out-of-the-box solutions. With these columns, we can manipulate other columns in the list item. Below are a few basic functions complete with details on how to utilize them...

Here are my lookup values for a sample corporate environment, with some conditional formatting, HTML and CSS:

=IF([Owner]="Press Review",CONCATENATE("<DIV style='color: #ffffff; background-color: #ff0000; padding: 2px 4px !important;'>"," ",Owner," ","</DIV>"),IF([Owner]="Decisions",CONCATENATE("<DIV style='color: #ffffff; background-color: #2C5700; padding: 2px 4px !important;'>"," ",Owner," ","</DIV>"),IF([Owner]="Staff Notices",CONCATENATE("<DIV style='color: #ffffff; background-color: #FF9E00; padding: 2px 4px !important;'>"," ",Owner," ","</DIV>"),IF([Owner]="Job Vacancies",CONCATENATE("<DIV style='color: #ffffff; background-color: #009ECE; padding: 2px 4px !important;'>"," ",Owner," ","</DIV>"),IF([Owner]="Training",CONCATENATE("<DIV style='color: #ffffff; background-color: #CE0000; padding: 2px 4px !important;'>"," ",Owner," ","</DIV>"),Owner)))))
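
Before tackling a nested formula like the one above, it helps to start from a single IF. A minimal sketch, where [Status] is a hypothetical choice column used only for illustration:

```
=IF([Status]="Done","Complete","In Progress")
```

Each extra category then becomes one more IF nested in the false branch, with a matching closing parenthesis at the end of the formula.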

Tuesday, June 7, 2016

Dominate: creating and manipulating HTML documents


Dominate is a Python library for creating and manipulating HTML documents using an elegant DOM API. It allows you to write HTML pages very concisely in pure Python, which eliminates the need to learn another template language and lets you take advantage of the more powerful features of Python.

Simple Image Gallery

import glob
from dominate import document
from dominate.tags import *

photos = glob.glob('photos/*.jpg')

with document(title='Photos') as doc:
    h1('Photos')
    for path in photos:
        div(img(src=path), _class='photo')

with open('gallery.html', 'w') as f:
    f.write(doc.render())


Result:

<!DOCTYPE html>
<html>
  <head>
    <title>Photos</title>
  </head>
  <body>
    <h1>Photos</h1>
    <div class="photo">
      <img src="photos/IMG_5115.jpg">
    </div>
    <div class="photo">
      <img src="photos/IMG_5117.jpg">
    </div>
  </body>
</html>

From Stack Overflow: creating HTML in Python

Tuesday, May 24, 2016

Python and HTML files


url parse

from urllib.parse import urlparse, urlunparse, urljoin
print("=== 0801_url_parse.py ===")

URLscheme = "http"
URLlocation = "www.python.org"
URLpath = "lib/module-urlparse.html"

modList = ("urllib", "urllib2", "httplib", "cgilib")

# Split the address into a tuple
print("Parsed Google search for urlparse")
parsedTuple = urlparse("http://www.google.com/search?hl=en&q=urlparse&btnG=Google+Search")
print(parsedTuple)

# Unsplit a list back into a URL
print("\nUnparsed Python documentation page")
unparsedURL = urlunparse((URLscheme, URLlocation, URLpath, '', '', ''))
print("\t" + unparsedURL)

# Join the path with a new file name to build
# a new URL
print("\nAdditional Python pages using join")
for mod in modList:
    newURL = urljoin(unparsedURL, "module-%s.html" % (mod))
    print("\t" + newURL)

# Join the path to a sub-path to build a new URL
print("\nPython pages with sub-path joins")
newURL = urljoin(unparsedURL, "/module-urllib2/request-objects.html")
print("\t" + newURL)



html open

import urllib.request
print("=== 0802_html_open.py ===")

webURL = "http://www.python.org"
localURL = "file:///tmp/default2.html"

# Open the URL on the web
u = urllib.request.urlopen(webURL)
buffer = u.read()
print("Web read ***")
print(u.info())
print("%d bytes from %s.\n" % (len(buffer), u.geturl()))

# Open the local URL
print("Reading a local file ***")
print(localURL)
u = urllib.request.urlopen(localURL)
buffer = u.read()
print(u.info())
print("%d bytes from %s.\n" % (len(buffer), u.geturl()))
print(buffer)


html links

from html.parser import HTMLParser
import urllib.request, urllib.parse, urllib.error
import sys
print("=== 0803_html_links.py ===")

# Define the HTML parser
class parseurLiens(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            for name,value in attrs:
                if name == 'href':
                    print(value)
                    print((self.get_starttag_text()))

# Create an instance of the HTML parser
monParseur = parseurLiens()

# Open the HTML file
data = urllib.request.urlopen("http://www.python.org/index.html").read()
monParseur.feed(data.decode('utf-8'))
monParseur.close()


html images

from html.parser import HTMLParser
import urllib.request, urllib.parse, urllib.error
import sys
print("=== 0804_html_images.py ===")

urlString = "http://www.python.org"

# Save the image file to disk
def lireFicIMA(addr):
    u = urllib.request.urlopen(addr)
    data = u.read()

    splitPath = addr.split('/')
    fName = splitPath.pop()
    print("Storing %s locally" % fName)

    f = open(fName, 'wb')
    f.write(data)
    f.close()

# Define the HTML parser
class parseImage(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            for name,value in attrs:
                if name == 'src':
                    lireFicIMA(urlString + "/" + value)

# Create the HTML parser instance
monParseur = parseImage()

# Open the HTML file
u = urllib.request.urlopen(urlString)
print("Opening the URL =============================")
print(u.info())

# Feed the HTML file into the parser
data = u.read()
monParseur.feed(data.decode('utf-8'))
monParseur.close()
print("The files are in the current directory")


html text

from html.parser import HTMLParser
import urllib.request
print("=== 0805_html_text.py ===")

urlText = []

# Define the HTML parser
class parseurTexte(HTMLParser):
    def handle_data(self, data):
        if data != '\n':
            urlText.append(data)

# Create the HTML parser instance
monParseur = parseurTexte()

# Feed the HTML file into the parser
data = urllib.request.urlopen("http://docs.python.org/lib/module-HTMLParser.html").read()
monParseur.feed(data.decode('utf-8'))
monParseur.close()
for unBloc in urlText:
    print(unBloc)


html cookie

import os
import urllib.request, urllib.error
import http.cookiejar
print("=== 0806_html_cookie.py ===")

cookieFile = "cookie.dat"
testURL = 'http://maps.google.com'

# Create the cookie jar instance
boiteACooky = http.cookiejar.LWPCookieJar()

# Create the HTTPCookieProcessor opener object
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(boiteACooky))

# Install the HTTPCookieProcessor opener
urllib.request.install_opener(opener)

# Create the Request object
r = urllib.request.Request(testURL)

# Open the HTML file
h = urllib.request.urlopen(r)
print("Page headers \n======================")
print(h.info())

print("Page cookies \n======================")
for ind, cookie in enumerate(boiteACooky):
    print("%d - %s" % (ind, cookie))

# Save the cookies
boiteACooky.save(cookieFile)
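
A cookie file saved this way can be reloaded in a later session with `load()`. A minimal, self-contained sketch (using a throwaway file name rather than the cookie.dat above):

```python
import http.cookiejar

# Write an (empty) LWP-format cookie file, then load it back
jar = http.cookiejar.LWPCookieJar()
jar.save("session_cookies.dat", ignore_discard=True)

restored = http.cookiejar.LWPCookieJar()
restored.load("session_cookies.dat", ignore_discard=True)
print(len(restored))  # number of cookies restored
```

Passing ignore_discard=True keeps session cookies that would otherwise be dropped when saving and loading.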


html quotes

from html.parser import HTMLParser
import urllib.request, urllib.parse, urllib.error
import sys
print("=== 0807_html_quotes.py ===")

localURL = "file:///tmp/default.html"

# Define the HTML parser
class parseAttrs(HTMLParser):
    def init_parser(self):
        self.pieces = []

    def handle_starttag(self, tag, attrs):
        fixedAttrs = ""
        for name, value in attrs:
            fixedAttrs += "%s=\"%s\" " % (name, value)
        self.pieces.append("<%s %s>" % (tag, fixedAttrs))

    def handle_charref(self, name):
        self.pieces.append("&#%s;" % (name))

    def handle_endtag(self, tag):
        self.pieces.append("</%s>" % (tag))

    def handle_entityref(self, ref):
        self.pieces.append("&%s;" % (ref))

    def handle_data(self, text):
        self.pieces.append(text)

    def handle_comment(self, text):
        self.pieces.append("<!--%s-->" % (text))

    def handle_pi(self, text):
        self.pieces.append("<?%s>" % (text))

    def handle_decl(self, text):
        self.pieces.append("<!%s>" % (text))

    def parsed(self):
        return "".join(self.pieces)

# Create an instance of the HTML parser
parseAttrib = parseAttrs()

# Initialize the parser data
parseAttrib.init_parser()

# Feed the HTML file into the parser
data = urllib.request.urlopen(localURL).read()
parseAttrib.feed(data.decode('utf-8'))

# Print the original file contents
print("Original file\n========================")
print(data)

# Print the parsed file
print("\nFinal file\n========================")
print(parseAttrib.parsed())

parseAttrib.close()



From Brad Dayley, Python 3 (Python Phrasebook), L'essentiel du code et des commandes, pearson.fr

Wednesday, May 18, 2016

Network commands



ipconfig (to configure network interfaces and view related information)

net use (e.g. drive mapping)

netstat (displays incoming and outgoing connections and other information)

ping  (sends ICMP echo request packets to a destination)

traceroute  (similar to ping, but provides information about the path a packet takes)

nslookup  (looks up the IP addresses associated with a domain name or the reverse)

whois  (looks up the registration record associated with a domain name)

nmap (port scanner)

Nmap.org
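
Several of these lookups are also easy to script. A minimal Python sketch of an nslookup-style forward query, using only the standard library (the host name is chosen for illustration):

```python
import socket

def lookup(host):
    """Resolve a host name to its IPv4 address, like a basic nslookup."""
    return socket.gethostbyname(host)

print(lookup("localhost"))  # typically 127.0.0.1
```

The reverse lookup (IP address to host name) is the matching `socket.gethostbyaddr()` call.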

More in: TechWorm