Machine Learning Idioms

A collection of short idiomatic code snippets, written in J, that I've used in my machine learning tasks. Hopefully I'll come back here to update this list once in a while.

Why?

Even though most of my machine learning work is done in Python and NumPy, I prefer prototyping some things in J: the terser syntax and native support for multidimensional arrays make it more enjoyable than Python for quick experiments.

Softmax

Take the exponential (base \(e\)) of each value and divide by the sum of all the exponentials: \(\mathrm{softmax}(x)_i = e^{x_i} / \sum_j e^{x_j}\). The resulting array sums to 1, so it can be read as a probability distribution.

This function is often used when trying to classify an example into one of several (>2) classes, such as recognizing individual letters or numbers for optical character recognition.

softmax_1 =: (%+/)@:^       NB. smallest version I can think of
softmax_2 =: (]%+/)@:^      NB. fork instead of hook
softmax_3 =: {{z%+/z=. ^y}} NB. with temp variables

I usually prefer explicit code over tacit, and forks over hooks, but the first version is so short it's hard not to like it.
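
A quick check at the REPL:

   softmax_1 1 2 3
0.0900306 0.244728 0.665241

One caveat: \(e^x\) overflows for large inputs. Since softmax is unchanged by adding a constant to every value, the usual fix is to subtract the maximum first. A tacit sketch along the same lines (the name softmax_s is mine, so treat it as a sketch):

softmax_s =: (% +/)@:^@(- >./)  NB. shift by the max, then softmax as before

   softmax_s 1000 1001 1002     NB. softmax_1 breaks down here because ^1000 overflows
0.0900306 0.244728 0.665241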

Logistic/Sigmoid Function

Converts a real value in \((-\infty,\infty)\) to a value in \((0,1)\) along a smooth s-shaped curve: \(\sigma(y) = 1/(1+e^{-y})\).

This function is often used as the activation of a binary classifier, turning the score of a linear model into a probability, as in logistic regression.

logistic_1 =: {{1%1+^-y}}   NB. direct translation
logistic_2 =: %@>:@^@-      NB. tacit verbs only
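
Spot-checking both versions (\(\sigma(0)\) should be exactly 0.5, and \(\sigma(-y)=1-\sigma(y)\)):

   logistic_1 _2 0 2
0.119203 0.5 0.880797
   logistic_2 _2 0 2   NB. the tacit version agrees
0.119203 0.5 0.880797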

Nonnegative Matrix Factorization
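
Some context for the code below, which (as far as I can tell) is the standard multiplicative-update algorithm of Lee and Seung: given a nonnegative data matrix \(V\), find nonnegative factors \(W\) and \(H\) such that \(V \approx WH\), by minimizing the generalized Kullback-Leibler divergence

\[ D(V \parallel WH) = \sum_{ij} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right) \]

with the update rules

\[ W \leftarrow W \odot \frac{(V/WH)\,H^{\top}}{\mathbf{1}\,H^{\top}} \qquad H \leftarrow H \odot \frac{W^{\top}(V/WH)}{W^{\top}\mathbf{1}} \]

where \(\odot\) and \(V/WH\) act elementwise and \(\mathbf{1}\) is an all-ones matrix the same shape as \(V\).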

e =: 1e_10    NB. epsilon, guards against dividing by (or taking the log of) zero
mm =: +/ . *  NB. matrix multiply
error_function =: {{
  'xx w h' =. y
  wh =. e + w mm h
  +/ , xx -~ wh + xx * ^. xx % wh  NB. generalized KL divergence; , ravels so the sum is a single number
}}

nmf =: {{ NB. x:iterations  y:data
    errs =. 0$0
    basis =. 30                   NB. number of basis vectors (factorization rank), hard-coded
    's0 s1' =. $y
    w =. ? 0 $~ s0,basis          NB. random nonnegative initial factors
    h =. ? 0 $~ basis,s1
    ft1 =. 1 $~ $y                NB. all-ones matrix, same shape as the data
    for_i. i.x do.
      wh =. e + w mm h
      w =. w * ((y % wh) mm |:h) % (e+(ft1 mm |:h))
      wh =. e + w mm h            NB. recompute with the updated w
      h =. h * ((|:w) mm y % wh) % (e + (|:w) mm ft1)
      errs =. errs,error_function y;w;h
    end.
    w;h;errs
}}

d =: (, |."1) 0 ,~ 0 ,: i.20  NB. 6x20 test matrix: zero rows and a ramp, plus their mirror images
'w h errors' =: 10 nmf d      NB. factor it with ten iterations
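
Two quick checks on the result, assuming the definitions above: the per-iteration error should trend downward, and the product of the factors should be close to the data.

   errors             NB. one generalized KL value per iteration
   +/ , | d - w mm h  NB. total absolute reconstruction error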