Math

Various essays on how to write mathematics:

Lists of open problems:

If you are interested in this blog but don’t know where to start, scroll down and begin by reading the two manuscripts of Grothendieck, “Récoltes et semailles” (see below for the complete version) and “La Clef des songes”.

Then, if you read French, read this paper by Meyer entitled “Les ondelettes”. The use of the (French) language to tell the story behind the paper, as well as the clarity and mastery of the mathematics behind that story, show what every paper should look like.

I really recommend reading some of the work of the masters. There is something common to every field, from high-level cuisine to painting to mathematics, which lies in the “taste” for beauty and the pursuit of perfection. And it is usually best to learn a field directly from its experts.

Along the same lines, I recommend the book “Mathematics: Frontiers and Perspectives” from the AMS. In this book M. Atiyah also briefly discusses another beautiful paper, “The Mathematician” by J. von Neumann, which can be found online.

 

On the UGC

One of the clearest explanations of the implications of the UGC, to me, can be found here.

How to do beautiful plots in LaTeX without having to store a humongous number of points.

See section 4.3.8 in the pgfplots manual (this is not the best way though).

About PyTorch and machine learning tools

Before introducing the principal programming tool for deep learning, I list some references I find interesting below.

The language of machine learning is undoubtedly Python. An interesting set of talks in the broad and growing area of the mathematics of learning can be found here.

Deep Learning with PyTorch: A 60 Minute Blitz

Many PyTorch codes require CUDA. For this it is usually much better to run the code on remote servers.

use

sshfs ID@xxxx.xxxx.xxx.edu:/ /path/to/local_dir

to mount the server locally. When giving the path, it is better to mount only the part of the remote directory that you are actually using:

sshfs ID@xxxx.xxxx.xxx.edu:/path/to/remote/repo /path/to/local_dir

Unmounting the repo might be tricky. A couple of alternatives exist, some working better than others. You can try umount, or diskutil umount. When none of the previous commands work, a third possibility is to force the computer to unmount the server:

diskutil unmountDisk force /Volumes/VOLUMENAME

This last command usually works. Once the remote directory has been mounted locally, the best approach is to create a virtual environment with the desired Python version. You will also need a proper text editor; for this I recommend Sublime Text, which is available online at no charge.

1. With virtualenv (ideally scroll down to the conda installation below)

Once the remote repo has been mounted locally, it is good to create a virtual environment. Installing Anaconda does not require creating a virtual environment. However, if one needs to use multiple Python versions, creating a virtual environment is much better (see below).

When you create your virtualenv, use the -p flag to give it the path to the Python executable you want to use:

virtualenv -p /path/to/python-anaconda-version my_project

The creation of the environment should have produced a folder (here my_project) with a couple of subfolders, one of which is named bin. The virtual environment can then be (re)activated via the following command:

source my_project/bin/activate

2. With conda (preferred installation).

First install Anaconda via the distribution website (for example). Download the .sh file, then run “bash Anaconda2-4.4.0-Linux-x86_64.sh” from the terminal. Then make sure the “conda” command is working in the terminal, for example by typing “conda -V”. If it doesn’t work, check the PATH variable as follows:

for anaconda 2 :

export PATH=~/anaconda2/bin:$PATH

for anaconda 3 :

export PATH=~/anaconda3/bin:$PATH

and then

conda --version

 

To avoid having to reset the PATH variable each time you open a terminal, copy the export line into your .bashrc file, which is run each time the terminal is launched. To modify this file, go to the home directory with cd, then run “ls -A” to reveal the hidden files; the .bashrc should be there. Open it with a command-line text editor, for example nano (most often pre-installed), or any other text editor such as Sublime Text installed by you.

Copy and paste the export line above, then save and exit the .bashrc file.

Check that Anaconda is installed, for example by checking the version: “conda -V”.

This should now work and you can go ahead with the installation of PyTorch.

To install PyTorch, just go to the distribution website and use conda to make the installation. This should work just fine.

Many codes in machine learning require you to deal with graphics processing units (GPUs). Those are most efficient when a large number of small tasks need to be done in parallel. To handle GPUs you need access to an API that links the general program and the GPU. Such a platform is provided, for example, by NVIDIA’s Compute Unified Device Architecture (CUDA). In Python, NVIDIA’s CUDA architecture can be used through, among others, the package scikit-cuda, which can be installed via the following command:

pip install scikit-cuda

 

Other important packages should be installed as well, for example pynvrtc:

pip install pynvrtc

 

A third important package is CuPy (see also here). CuPy enables the use of all the usual tools from NumPy, combined with the parallelization advantages of CUDA. Before installing CuPy you will have to perform a couple of pre-installation steps. NumPy should already have been installed with Anaconda; otherwise install it. Then install SciPy. Finally, check setuptools and possibly upgrade it:

pip install -U setuptools

You are now ready to install CuPy via pip:

pip install cupy
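As a quick sanity check, here is a minimal sketch (assuming a working CUDA installation) showing that CuPy arrays can be manipulated almost exactly like NumPy arrays:

import numpy as np
import cupy as cp

# CuPy mirrors the NumPy API but allocates and computes on the GPU.
x_gpu = cp.arange(6, dtype=cp.float32).reshape(2, 3)
y_gpu = cp.linalg.norm(x_gpu)      # norm computed on the GPU
y_cpu = cp.asnumpy(y_gpu)          # copy the result back to host memory

x_cpu = np.arange(6, dtype=np.float32).reshape(2, 3)
print(np.allclose(y_cpu, np.linalg.norm(x_cpu)))   # should print True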

Once the installation is done, the best approach is to start with the tutorials to learn PyTorch. When going through them, you might hit an error involving the grad_fn attribute; this attribute was previously named creator.

Many of the operations on PyTorch tensors are detailed on the website.

One of the most important functions is the out.backward() function. PyTorch works by applying a chain of operations one after the other. The succession of operations is stored as a graph whose leaves are the inputs.

Calling the backward function traverses this graph in reverse and computes the gradient of the output with respect to each leaf, evaluated at the current point.
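Here is a minimal sketch of this mechanism, written with the current tensor API (in newer PyTorch versions, Variable has been merged into Tensor):

import torch

# Build a small chain of operations; x is a leaf of the graph.
x = torch.ones(2, 2, requires_grad=True)
y = x + 2
z = (y * y * 3).mean()     # scalar output, playing the role of "out"
print(z.grad_fn)           # the node that created z (formerly "creator")
z.backward()               # traverse the graph backwards
print(x.grad)              # d z / d x, stored on the leaf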

Neural nets require the torch.nn package. Objects from the nn module contain layers and a forward() method to propagate an input through the network.

Conv nets are in fact a succession of (learned) filtering operations, interleaved with nonlinearities and pooling.

Inputs are autograd.Variable objects. It is important to note that conv nets in PyTorch take 4D inputs of dimension nSamples x nChannels x height x width, where nSamples is the size of the mini-batch.

An example of such a graph given in the tutorials is as follows

input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss
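A minimal sketch of such a network, roughly following the LeNet-style architecture from the tutorial (written with the current API; layer sizes are the tutorial's):

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    # A small conv net following the structure of the graph above.
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)          # 1 input channel, 6 output channels, 5x5 kernels
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)    # "view" flattens to 16*5*5 features
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(x.size(0), -1)                # flatten everything except the batch dimension
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)

net = TinyNet()
x = torch.randn(1, 1, 32, 32)                    # nSamples x nChannels x height x width
loss = nn.MSELoss()(net(x), torch.randn(1, 10))  # MSELoss -> loss, as in the graph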

The online PyTorch version is not up to date with the tutorials, and there might be a couple of fixes to be made, such as indicated here. Those follow from the switch from creator to grad_fn.

It is amazing to see how easy it is to do backpropagation with PyTorch. Once the loss is defined, just call loss.backward(). The important lines of code to perform the gradient descent update are:

learning_rate = 0.01
for f in net.parameters():
    # manual SGD step: subtract the gradient (scaled by the learning rate) in place
    f.data.sub_(f.grad.data * learning_rate)

torch.optim gives a variety of optimization schemes.
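For instance, the manual loop above can be replaced by a standard optimizer. A minimal, self-contained sketch (the one-layer net and random data here are only placeholders):

import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(10, 1)                     # placeholder model standing in for `net`
inputs, target = torch.randn(4, 10), torch.randn(4, 1)
criterion = nn.MSELoss()

optimizer = optim.SGD(net.parameters(), lr=0.01)
optimizer.zero_grad()                      # reset accumulated gradients
loss = criterion(net(inputs), target)      # forward pass
loss.backward()                            # backpropagation
optimizer.step()                           # gradient update (replaces the manual loop)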

PyTorch is especially designed to deal with image databases by means of the torchvision package.
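As an illustration, here is a sketch of the data-loading step of the “Training a classifier” tutorial mentioned below (it assumes you are willing to download CIFAR10 into ./data):

import torch
import torchvision
import torchvision.transforms as transforms

# Normalize images to [-1, 1] and wrap the dataset in a DataLoader.
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)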

I recommend going through the tutorial rather quickly until the part “Training a classifier”, which is the most important part. A couple of comments on this example are in order.

When debugging CUDA code, it might be useful to run it with CUDA_LAUNCH_BLOCKING=1 python script.py

When dealing with scattering networks, there are several possible starting points. One is the MNIST example of Edouardo.

 

For those who, like me, change institutions quite often, here are some useful tricks for interacting with remote servers:

https://amaral.northwestern.edu/resources/guides/mounting-remote-folder-os-x-over-ssh

Important command to unmount on macOS (sometimes harder than expected when the connection breaks unexpectedly):

sudo umount -f YOURDEVICE

On RIP matrices

A nice summary can be found here

Interesting posts by Luca Trevisan

The Riemann hypothesis for graphs

and a review by Wigderson et al. on expander graphs

Here is a beautiful post on the relation between Ramanujan and the spectrum of the continuous Laplacian

 

On Continuous relaxations (Kohn, Strang)

In variational problems, the key condition guaranteeing the existence of a minimizer of a given functional is that the functional be lower semicontinuous. This is what allows one to pass from a minimizing sequence to an actual minimizer.
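In a nutshell (a sketch of the direct method, under the standard additional assumptions of coercivity and, in the infinite-dimensional case, weak lower semicontinuity): take a minimizing sequence (u_n) with F(u_n)\to \inf F; coercivity makes (u_n) bounded, so a subsequence converges, u_{n_k}\to u^*; lower semicontinuity then gives F(u^*)\leq \liminf_k F(u_{n_k}) = \inf F, so u^* is a minimizer.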

 

Good slides by John Duchi on concentration and the characterization of sub-exponential variables

Neural networks, scattering transform, and the mathematics of learning

To come

Notes by Lovasz on SDP and combinatorial optimization

Notes by Vershynin

See MIT18_S997S15_Chapter1.pdf

Chaining

Chaining, in its generic form, was developed by Talagrand. The idea is to control the supremum of a stochastic process by approximating the index set at a sequence of increasingly fine scales and summing the resulting increments.

Introduction to number theory by Hildebrand, with a summary of asymptotic notations

Optimal transport

Interesting references include:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.726.35&rep=rep1&type=pdf

Theoretical CS

I recommend Sipser’s book “Introduction to the Theory of Computation”.

The renormalization group and deep learning

I. Introduction

This page tries to keep a record of some recent advances in the topics mentioned in the title. First of all, I must apologize for the mix of languages below. I must admit that, depending on where I am when I update the blog, my mother tongue sometimes takes over from my “working” language. I will synchronize the two as soon as possible.

By way of introduction, I would like to share with everyone who is passionate about mathematics and science the work “Récoltes et semailles” by Alexander Grothendieck (see the paper Who Is Alexander Grothendieck?). Below I give my personal reading of it: what seems right to me, and what seems less so.

I.a. Récoltes et semailles

The manuscript can be downloaded from Gérard Duchamp’s site at Paris 13. Some sections are really worth reading, notably the very beautiful section 2.2 on the importance of being alone.

I.1 On the importance and joy of independent reasoning. The first section of “Récoltes et semailles” gives an idea of what the foundation of mathematics teaching should be: “They were the book’s problems, not my problems.” Grothendieck puts his finger on ownership, on the personal investment of young people in solving a problem. Mathematics should not be taught as a recipe, as is too often the case, but as philosophy, as a way of approaching the understanding of nature and of formalizing ideas.

II. A career in math.

See gowers_VIII_6.pdf

III. The little Grothendieck problem and its relaxations

I recently got interested in that problem through a discussion with Afonso Bandeira. Below are some interesting references on the problem:

The Grothendieck constant is the smallest constant \kappa such that

to be done
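In the meantime, here is the standard statement through which the constant is usually defined (sketched from memory): \kappa must satisfy, for every real matrix (a_{ij}) and all unit vectors u_i, v_j in a Hilbert space,

\sum_{i,j} a_{ij} \langle u_i, v_j\rangle \;\leq\; \kappa \, \max_{\epsilon_i, \delta_j \in \{\pm 1\}} \sum_{i,j} a_{ij}\, \epsilon_i \delta_j,

and the Grothendieck constant is the smallest \kappa for which this holds for all matrices and all dimensions.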

A few conjectures and open problems

Open problems on the Lasserre/SOS hierarchy (courtesy of Boaz Barak)

Open problems from Data science (courtesy of Afonso Bandeira)

 

arXiv Interests

SIAM Interests

Non-science articles

Events :

http://www.math.harvard.edu/cdm/index.html

https://terrytao.wordpress.com/2010/01/03/254a-notes-1-concentration-of-measure
https://video.ias.edu/topology/2014/1210-BerndSturmfels

About optimality conditions in GloptiPoly:

http://homepages.laas.fr/henrion/papers/extract.pdf

Jensen’s inequality

For a random variable X and a convex function f, Jensen’s inequality simply expresses the fact that f(\mathbb{E}\{X\})\leq \mathbb{E}\{f(X)\}. For instance, with f(x)=x^2 this gives (\mathbb{E}\{X\})^2 \leq \mathbb{E}\{X^2\}, i.e. the variance of X is nonnegative.

The convex geometry of linear inverse problems (02/18/2015)

In their paper “The Convex Geometry of Linear Inverse Problems” (2012), Chandrasekaran, Recht, Parrilo, and Willsky consider the notion of atomic norm as the gauge function of the convex hull of the set of atoms \mathcal{A}. Introducing the results of this paper requires some notions from convex analysis; I recall some of them below.

The support function of a closed convex set \mathcal{A} is the function defined by
h_{\mathcal{A}}(x) = \sup \left\{\langle x, a\rangle,\; a\in \mathcal{A} \right\}.

We can then define the atomic norm as the gauge of \mathrm{conv}(\mathcal{A}), namely \|x\|_{\mathcal{A}} = \inf\left\{t>0 : x\in t\,\mathrm{conv}(\mathcal{A})\right\}.
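As a concrete example (the motivating case in the paper), taking the atoms to be the signed canonical basis vectors \mathcal{A}=\{\pm e_1,\ldots,\pm e_n\} makes \mathrm{conv}(\mathcal{A}) the cross-polytope, and the associated atomic norm is exactly the \ell_1 norm, \|x\|_{\mathcal{A}} = \|x\|_1, the usual convex surrogate for sparsity.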

BV sparsity and compressed sensing

On the DARPA problem and the link between linear algebra and algebraic geometry

http://www.stat.uchicago.edu/~lekheng/work/icm1.pdf

Nice talk by D. Steurer on SOS and the UGC

http://www-old.newton.ac.uk/programmes/POP/seminars/2013073015002.html

On scattering theory

http://www.tcm.phy.cam.ac.uk/~bds10/aqp/lec20-21_compressed.pdf

What Johnson-Lindenstrauss is really about

References :

https://www.cs.princeton.edu/~chazelle/pubs/FJLT-sicomp09.pdf

Doubling constant: the work of Ben Green, T. Tao and E. Breuillard

The Simons lectures this year were based on a series of papers by Ben Green (Oxford). The first talk dealt with the paper on approximate subgroups that he wrote together with T. Tao and E. Breuillard. This first paper deals with K-approximate groups, which are subsets A of a group such that e\in A, A is symmetric, and A\cdot A can be covered by at most K (left-)translates of A.

A core idea of the paper is to consider sets which are approximately closed under multiplication or, equivalently, which have a small “doubling constant” \sigma(A). A representative paper can be found here: http://arxiv.org/pdf/1301.7718v1.pdf. In this paper Ben Green considers sets A for which the product set A\cdot A satisfies |A\cdot A|\leq K|A|. For instance, in additive notation, an arithmetic progression A=\{1,\ldots,N\} satisfies |A+A|=2|A|-1, so its doubling constant is essentially 2.

Some properties of eigenvalues of Hermitian matrices (see Handbook of linear algebra)

Let A, B \in \mathcal{H}_n with eigenvalues \lambda_1(A)\geq \lambda_2(A)\geq \ldots \geq \lambda_n(A) and \lambda_1(B)\geq \lambda_2(B)\geq \ldots \geq \lambda_n(B). Then
|\lambda_i(A) - \lambda_i(B)| \leq \|A-B\|_2, \quad i=1,\ldots,n \quad \text{(Weyl)},
\displaystyle \sum_{i=1}^n |\lambda_i(A)- \lambda_i(B)|^2 \leq \|A-B\|_F^2 \quad \text{(Hoffman-Wielandt)}.

Rank relations on the norms

\|A\| \leq \|A\|_F \leq \sqrt{r}\,\|A\|, where r denotes the rank of A and \|A\| the spectral norm.

Indispensable references in Compressive sensing

http://arxiv.org/pdf/1011.3854.pdf

An important introduction is the paper by E. Candès and M. Wakin, “An Introduction to Compressive Sampling”. For those who want to know more about the notion of incoherence, look at the Donoho-Huo paper “Uncertainty principles and ideal atomic decomposition”.

E.J. Candès, B. Recht, Exact Matrix Completion via Convex Optimization

E.J. Candès, T. Tao, Decoding by linear programming

David Donoho, Optimally sparse representation in general (non orthogonal) dictionaries via \ell_1 minimization

See lasserre.pdf

References on ill-conditioning, well posedness and all those ideas used when dealing with inverse problems

Bertero, M., Poggio, T. A., & Torre, V. (1988). Ill-posed problems in early vision. Proceedings of the IEEE, 76(8), 869-889.

Books on differential geometry

Optimization Algorithms on Matrix Manifolds
P.-A. Absil, R. Mahony & R. Sepulchre (two former professors @ UCLouvain)

Useful books on Matrix differentiation

See matrix_cookbook.pdf (the Matrix Cookbook)

 Useful books on Algebraic geometry and polynomial optimization

Alexander Prestel • Charles N. Delzell, Positive Polynomials: From Hilbert’s 17th Problem to Real Algebra, June 9, 2004. A copy can be found here : http://dangtuanhiep.files.wordpress.com/2009/02/positive.pdf

Monique Laurent, Sums of squares, Moments Matrices and optimization over polynomials. A copy can be found here : http://homepages.cwi.nl/~monique/files/moment-ima-update-new.pdf

Useful notes on functional analysis

An interesting note on Fréchet differentiability with useful examples: http://links.uwaterloo.ca/amath731docs/frechet.pdf

A note on the DARPA and Hilbert problems from AMS

See tx080400445p.pdf

Some interesting links

 

Some of the most beautiful math papers

Upcoming conferences

  • 12th International Conference on Mathematical and Numerical Aspects of Wave Propagation, Karlsruhe, 2015
  • Algorithm Engineering and Experiments (ALENEX15)
  • SIAM Conference on Applied Linear Algebra
  • CT15 — SIAM Conference on Control and Its Applications
  • International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2015
  • IEEE International Conference on Image Processing (ICIP 2015), paper submission January 2015
  • SIAM Conference on Computational Science and Engineering
    March 14-18, 2015
  • International Congress on Industrial and Applied Mathematics (ICIAM) 2015
  • SampTA 2015, American University, Washington DC
  • SIAM Conference on Analysis of Partial Differential Equations (PD15)
  • Applied Inverse Problems 2015, Helsinki, Finland
  • SIAM Conference on the Mathematical and Computational Issues in the Geosciences
