Table cards:
2♣ | 3♥ | 10♠ | 8♣ | 8♥ |
Your cards:
K♦ | 4♥ |
Result: Pair
Hands required: 1
Hand Type:
24-Sep-2006: Update
I’ve added code that logs each alert to a local database so that 1) duplicate alerts aren’t sent, even if the stories are posted days or weeks apart, and 2) you can easily check which alerts have been sent historically.
Feel free to download the new code or the old code and tell me what you think.
Additionally I added a debug mode such that if you run
> ./feedMonitor.py --console
the script prints to the console only and doesn’t send the email or log to the DB, so that you can see what types of alerts might be sent with the present script options.
Lastly, a simple PHP script can be used to display the alerts which have been sent in an HTML table that looks like this.
I recently wrote a Python script which monitors RSS feeds for user-specified keywords. The need for this script arose from the large number of security websites and mailing lists I monitor for work-related posts. And you know how big a fan of automation I am…
All you need to run this script is Python and an SMTP server to send messages. If you know of an open SMTP relay, feel free to use that, but you may find it easier to simply install Sendmail locally. The script is configured by default to send messages to localhost, so if you’re going to go the open relay route, you’ll need to configure the script accordingly.
For now, it is probably best to set the script up to run as a cron job and have the script run automatically every 10-30 minutes or so, depending on how urgently you need the alerts to come. Eventually, I will add to the script the ability to log which alerts were mailed in a SQL database so that 1) you don’t receive duplicate alerts, and 2) you can have a record of which alerts have been mailed historically.
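The keyword-matching core of such a monitor is simple enough to sketch in a few lines. This is a minimal illustration, not the posted script: the KEYWORDS list and the matching_items function are placeholders, and the real script also fetches the feeds and mails the alerts via SMTP.

```python
from xml.etree import ElementTree

# Hypothetical keyword list -- the real script reads these from its
# own configuration.
KEYWORDS = ["exploit", "vulnerability", "advisory"]

def matching_items(rss_xml, keywords):
    """Return (title, link) pairs for RSS <item> elements whose title
    or description contains any keyword, case-insensitively."""
    hits = []
    for item in ElementTree.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", "")
        text = (title + " " + item.findtext("description", "")).lower()
        if any(k.lower() in text for k in keywords):
            hits.append((title, item.findtext("link", "")))
    return hits
```

Each matching (title, link) pair would then be formatted into an alert email; checking a local log of sent alerts before mailing is what prevents duplicates.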
Stay tuned for updates to the script, and feel free to contact me if you have any questions.
01-Sep-2009: Update
This code no longer works with the current eBay API, but I’ll leave it posted for reference.
eBay has some very cool functionality that allows application programmers to make API calls on a user’s behalf without using their site credentials. I decided to investigate the eBay API a bit and wrote a function that allows web developers to print the items listed in an eBay buyer’s My eBay section on third party web pages. Feel free to check out the code.
The eBay Developer Program has a guide to help developers begin using the API. You can make all kinds of unauthenticated function calls to the API, like listing search results, without using eBay’s Auth and Auth (Authentication and Authorization) system. But to make calls on behalf of a particular user, you will need to generate an auth token. This can be done using eBay’s Authentication Token Tool.
Essentially, you enter an eBay user’s site credentials in the token generator and eBay returns a cryptographic token. This token can then be passed along with the API function calls to authenticate the caller without including the user’s login and password.
The function I wrote is essentially a wrapper to the API call GetMyeBayBuying. Calling my function, printMyEbay, will print the auctions that a user is a) watching, b) bidding on, and c) has won.
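For the curious, a GetMyeBayBuying request is just a small XML document sent to the Trading API. Here is a rough sketch of how such a request body might be assembled with the standard library; the build_get_my_ebay_buying function is illustrative rather than the posted code, and the container names should be checked against the API reference.

```python
import xml.etree.ElementTree as ET

# Trading API XML namespace.
EBAY_NS = "urn:ebay:apis:eBLBaseComponents"

def build_get_my_ebay_buying(auth_token):
    """Build an XML request body for GetMyeBayBuying.

    The auth token stands in for the user's site credentials; the
    WatchList/BidList/WonList containers correspond to the three
    sections printMyEbay displays (watching, bidding, won).
    """
    root = ET.Element("GetMyeBayBuyingRequest", xmlns=EBAY_NS)
    creds = ET.SubElement(root, "RequesterCredentials")
    ET.SubElement(creds, "eBayAuthToken").text = auth_token
    for container in ("WatchList", "BidList", "WonList"):
        ET.SubElement(ET.SubElement(root, container), "Include").text = "true"
    return ET.tostring(root, encoding="unicode")
```

The resulting string would be POSTed to the Trading API endpoint along with the usual X-EBAY-API-* headers, and the response parsed to produce the HTML list.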
PyRSA is a command line utility that allows users to digitally encrypt and sign messages using RSA, a public key encryption scheme. There are three basic functions that PyRSA performs: encryption, decryption, and key generation.
Downloads:
Source: pyrsa.py
Sample Use:
1. Generate a public and private key. In this example, we will specify a key of length 1024 bits. Allow several seconds of CPU time for the generation of the keys.
pyrsa.py -g 1024
Enter file identifier (i.e. first name): brandon
2. Now the files brandon_privateKey.txt and brandon_publicKey.txt are in the current directory. Next, place the text we want to encrypt in a text file.
echo "The sky above the port was the color of television, tuned to a dead channel." > message.txt
3. Encrypt the message using the public key and redirect the output to a text file.
pyrsa.py -e message.txt -k brandon_publicKey.txt > ciphertext.txt
4. At this point the file ciphertext.txt contains the encrypted message. The file can safely be sent to a recipient, e.g. as an email attachment, the contents utterly unreadable to anyone without the private key.
cat ciphertext.txt
32464047998704731086703458860763720628883125201
840735448292781611869424600546740055592235111171870058664751326891
416030992911165222195048303846516331939189036032662981573683210672
785053735077400433222553780571914729993485153779710689497701348386
214277988780913721453283666357504772556433129612632786845350983
5. Next we will assume the message has been sent to the individual who possesses the corresponding private key and he wants to decrypt the message.
pyrsa.py -d ciphertext.txt -k brandon_privateKey.txt
Decrypted text: The sky above the port was the color of television, tuned to a dead channel.
We have spent the last several weeks learning about encryption in my computer security class, so I thought I’d share what I’ve learned about public key cryptography.
There is a very good description of RSA on Wikipedia, so I don’t want to simply restate what they have. The focus here will be the generation of public and private keys as I feel many of the RSA tutorials on the web are lacking a bit in that department. Computing the multiplicative inverse to get d from e is a little tricky, but we will walk through it step-by-step.
First, a brief overview of RSA, for those not familiar with it already. A message M is encrypted by raising it to the power of e and then taking the result modulo some number N. To decrypt the message, you simply raise the value of the encrypted message C to the power of d and again mod by N. The beauty of RSA is that e and N can be published publicly. Together they, in fact, comprise the public key. The private key, which is not published, is comprised of d and N.
C = M^{e} mod N
M = C^{d} mod N
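These two identities are easy to verify with Python’s three-argument pow, using the toy key pair worked out later in the post:

```python
# Toy key pair from the worked example later in the post:
#   public key  = (e, N) = (1127, 1210537)
#   private key = (d, N) = (438403, 1210537)
e, d, N = 1127, 438403, 1210537

M = 247                   # the message
C = pow(M, e, N)          # C = M^e mod N  -- encrypt
assert pow(C, d, N) == M  # M = C^d mod N  -- decrypt recovers M
```

pow(base, exp, mod) performs modular exponentiation efficiently, which matters once e, d, and N reach realistic sizes.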
If you’re like me, then you are astonished at 1) how simple this system is, and 2) that you can exponentiate a message twice (modulo some number) and recover the original message unaltered. The main question my skeptical mind came up with when presented with this powerful encryption tool was, “wouldn’t it be easy to compute d if you have the values of e and N?” The answer is, of course, no. It turns out to be very hard. We shall see later that it is easy to compute d only when we have the factors of N. If we choose N to be arbitrarily large, factoring N can take an arbitrarily long time. Currently, there are no known polynomial-time algorithms for factoring; the problem lies in NP, but it is not known to be solvable in polynomial time. So the security of RSA rests essentially on the hardness of the factorization problem. If someone figures out a way to factor large numbers fast, then RSA is out of business.
As was mentioned above, RSA’s security is rooted in the fact that N is hard to factor. Therefore, we should choose N to be the product of two large primes, p and q. For clarity in this example, we will choose relatively small values for p and q, but later we will discuss the proper choices for these coefficients given a desired level of security.
The best way to compute d, the multiplicative inverse of e modulo φ(N), is to use the Extended Euclidean Algorithm. Here is the algorithm for our example (e = 1127, φ(N) = 1208020):

1127 | 1208020 | (1, 0) | (0, 1)
1127 | 1003 | (1, 0) | (-1071, 1)
124 | 1003 | (1072, -1) | (-1071, 1)
124 | 11 | (1072, -1) | (-9647, 9)
3 | 11 | (107189, -100) | (-9647, 9)
3 | 2 | (107189, -100) | (-331214, 309)
1 | 2 | (438403, -409) | (-331214, 309)
1 | 0 | (438403, -409) | (-1208020, 1127)

We start with unit vectors (1, 0) and (0, 1), which correspond to the values of e and φ(N), respectively. For each operation we perform on the left two columns, we perform the same operation on the right two columns. For example, in the first step, 1127 divides 1208020 1071 times and leaves a remainder of 1003. The corresponding operation in columns 3 and 4 is to subtract (1, 0) from (0, 1) 1071 times, yielding (-1071, 1). The algorithm terminates when we have 1 and 0, not necessarily in that order, in the first two columns. The value for d is in the column that corresponds to the 1.

Note: it is possible for the extended Euclidean algorithm to yield a negative result for d. Obviously, this is not a suitable decryption exponent, because raising an integer to a negative power yields a fraction. The simple fix is to take that negative value mod φ(N), giving us a positive d between 0 and φ(N).
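The whole procedure, including the mod-φ(N) fix-up for a negative result, fits in a few lines of Python. The function names here are mine, not part of any posted code:

```python
def egcd(a, b):
    """Extended Euclid: return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(e, phi):
    """Multiplicative inverse of e modulo phi, i.e. the exponent d."""
    g, x, _ = egcd(e, phi)
    if g != 1:
        raise ValueError("e and phi(N) must be coprime")
    return x % phi  # mod back into range if Euclid returned a negative x
```

For our example, modinv(1127, 1208020) reproduces the d found in the table above.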
From the above calculations we know that d = 438403. So we have both the public and private keys for this user:
public key = (1127, 1210537)
private key = (438403, 1210537)
To prove that this system works, observe the following computations. Let our message M = 247. The first step is to compute C = 247^{1127} mod 1210537.
A brief aside:
This exponentiation can be computed easily because we are using relatively small values for e and d. However, real world implementations of RSA often use 1024 bit encryption, meaning the exponent is 1024 bits long. That is roughly equivalent to a 300 decimal digit number. To compute an exponent of that order of magnitude in the conventional way, multiplying the base by itself e times would be prohibitively expensive. Even if we could compute 1 billion multiplications per second, the computation would take longer than the current age of the universe. So it is useful to use an alternative method like exponentiation by squaring. Here is a script that computes large exponents fast. Another consideration is the storage of a very large number such as C^{d}. Rather than keeping the value in main memory as we exponentiate, we can simply keep the value modulo N. And now back to our example…
247^{1127} mod 1210537 = 611545. This number was easily obtained with the Python interpreter in a fraction of a second. Raising this number, however, to the value of d, 438403, should not be done the conventional way. On the school’s mainframe this calculation took 11 minutes, 23.65 seconds. This is a situation where we can see the power of divide-and-conquer algorithms. Using our recursive exponentiation function, we show that 611545^{438403} mod 1210537 = 247. Voilà, out pops our original message. Moreover, the exponentiation took only 31.16 seconds on the same machine with the repeated-squaring method. This can be vastly improved, too, once we develop a non-recursive function. That will be critical when we want to provide real security via RSA and we don’t want to wait 10 minutes to decrypt a message.
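A non-recursive version of repeated squaring is a short loop; this sketch reduces modulo N at every step so intermediate values stay small (Python’s built-in three-argument pow does the same job internally):

```python
def powmod(base, exp, mod):
    """Iterative exponentiation by squaring, reducing mod `mod` at every
    step so no intermediate value ever exceeds mod**2."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                      # low bit set: fold base into result
            result = (result * base) % mod
        base = (base * base) % mod       # square for the next bit
        exp >>= 1
    return result
```

Because it processes one bit of the exponent per iteration, a 1024-bit exponent costs roughly a thousand multiplications instead of the astronomically many required by naive repeated multiplication.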
PyRSA now available.
Introduction:
The Nearest Neighbor Classifier, while robust and capable of handling streaming data, is sensitive to outlying data points and to irrelevant features. One critical part of designing a good nearest neighbor classifier is deciding which feature set to use in classifying new data points. One great way to choose the correct subset is through search. Exhaustive search isn’t realistic, though, for data sets with many features, as the number of possible feature subsets is exponential: n = 2^F, where F is the number of features in the data.
The purpose of this program is to search through the space of possible feature subsets in a faster way, that is, in polynomial time or better, without sacrificing too much classification accuracy. The first two methods we use are fairly straightforward: Forward Selection and Backward Elimination. The former begins with the empty set of features and adds one feature at a time, while the latter begins with all the features and removes one feature at a time.
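As a concrete illustration, here is a compact sketch of Forward Selection driven by a leave-one-out nearest-neighbor evaluator. The function names and the toy data below are mine, not the posted program:

```python
def loo_accuracy(data, labels, feats):
    """Leave-one-out nearest-neighbor accuracy using squared Euclidean
    distance restricted to the candidate feature subset."""
    correct = 0
    for i, x in enumerate(data):
        best, best_d = None, float("inf")
        for j, y in enumerate(data):
            if i == j:
                continue
            d = sum((x[f] - y[f]) ** 2 for f in feats)
            if d < best_d:
                best_d, best = d, labels[j]
        correct += best == labels[i]
    return correct / len(data)

def forward_selection(data, labels):
    """Greedily grow the feature set, keeping the best subset seen."""
    n_feats = len(data[0])
    chosen, best_overall, best_set = set(), 0.0, set()
    for _ in range(n_feats):
        best_acc, best_f = -1.0, None
        for f in range(n_feats):
            if f in chosen:
                continue
            acc = loo_accuracy(data, labels, chosen | {f})
            if acc > best_acc:
                best_acc, best_f = acc, f
        chosen.add(best_f)
        if best_acc > best_overall:
            best_overall, best_set = best_acc, set(chosen)
    return best_set, best_overall

# Toy run: feature 0 separates the classes, feature 1 is noise,
# so forward selection should settle on {0}.
data = [(0.0, 0.9), (0.1, 0.1), (0.2, 0.8),
        (0.9, 0.15), (1.0, 0.85), (0.8, 0.05)]
labels = [0, 0, 0, 1, 1, 1]
print(forward_selection(data, labels))
```

Backward Elimination is the mirror image: start from the full feature set and greedily drop the feature whose removal hurts leave-one-out accuracy the least.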
The third, original, method to search for a good subset of features requires some explanation. By relaxing our criteria for what constitutes the “nearest neighbor”, we are able to avoid some of the calculations that make searching this space expensive. In other words, we sacrifice some accuracy of the classifier in order to gain a great deal of speed in computing the subset. The algorithm works as follows: while scanning for a point’s nearest neighbor, we accept the first point found within delta units of the query point and stop searching, falling back to the true nearest neighbor only when no point lies within delta.
Here I’ve omitted a sample trace of the program because it is lengthy, but you can download the source and run it in the Python interpreter just as easily.
Running these three algorithms on various data sets yielded the following statistics:
Large Data Set – 1000 points, 30 Features
Method | Best Set | Acc. % | Time (s) |
Forward | {7, 9, 12} | 87.40 | 14255.89 |
Backward | {7, 9, 17, 25} | 93.00 | 20507.24 |
Special( 1 ) | {0, 1, 4, 7, 9, 10, 14, 15, 16, 17, 18, 19, 20, 27, 28} | 71.60 | 4909.35 |
Special( 2 ) | {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28} | 69.60 | 647.04 |
Small Data Set – 600 points, 16 Features
Method | Best Set | Acc. % | Time (s) |
Forward | {2, 4, 9} | 89.66 | 1120.73 |
Backward | {2, 4} | 88.17 | 1519.89 |
Special(.25) | {2, 4, 5, 11, 12, 13, 14} | 81.66 | 449.00 |
Special( .5 ) | {1, 2, 3, 4, 5, 6, 7, 11, 12, 13, 14} | 78.83 | 221.63 |
Special( 1 ) | {0, 1, 2, 4, 5, 6, 7, 9, 10, 13, 14} | 73.66 | 47.09 |
Special( 2 ) | {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15} | 75.33 | 8.27 |
Small Special Set – 1000 points, 19 Features
Method | Best Set | Acc. % | Time (s) |
Forward | {0, 1, 2, 3} | 86.60 | 2205.22 |
Backward | {0, 1, 2, 3} | 86.60 | 3806.77 |
Special(.25) | {0, 1, 2, 3, 5, 7, 13, 14, 15} | 67.20 | 1025.89 |
Special( .5 ) | {0, 1, 4, 8, 12, 14, 16} | 61.10 | 590.12 |
Special( 1 ) | {0, 1, 2, 3, 4, 7, 8, 10, 11, 12, 14, 15, 16, 17} | 61.10 | 128.71 |
Special( 2 ) | {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 13, 14, 15, 16, 17, 18} | 60.00 | 14.13 |
Analysis:
It is worth noting that running Special Search with delta = 0 is equivalent to running Forward Selection. As delta increases beyond a certain threshold (data set dependent), Special Search degenerates rapidly. This is because for every data point in the set, we will simply choose the next point we look at as the nearest neighbor. With two classes of points, this method will essentially yield 50% accuracy. Hence, it is possible to wind up with a feature set that is less accurate than “leave-one-out” evaluation with all the features.
Therefore, the selection of delta is important to achieve good performance in Special Search. There are some things we can say, quantitatively, about the selection of delta. First, it must, by definition, fall between 0 and F, where F is the number of features in the data set. The lower bound holds because two points cannot be closer than 0 units in Euclidean space. The upper bound comes from the fact that when the data are normalized to values between 0 and 1, each point can differ by no more than 1 unit per feature (the bound is F rather than √F because the distance function I am using skips the square-root step to save time).
It is apparent that the “proper” choice for delta will vary greatly according to the data set. If the data had great separation a large delta value would suffice and lead to faster computing time. Conversely, a data set with tightly grouped instances of opposite classes will require a smaller delta value to preserve a high degree of accuracy.
It is possible to do some simple preprocessing of the data before we begin the search in order to give us some idea of the data points’ separation. In fact, I used the following strategy to give me a ballpark figure for a reasonable delta value: First, I selected an arbitrary number of points at random from the set. For each point, I then recorded the class of that point, the distance to each other point in the set, and the class of each other point. I then sorted these data by class and ascending distance.
From these data, I got an idea of what value, on average, would allow us to correctly classify points without searching every point for the nearest neighbor. For the above example, a delta value of 1.75 would not sacrifice any accuracy because the closest point of opposite class was at a distance of 1.757 units. It must be mentioned, though, that randomly sampling the data in the above way will not give us perfect data for the delta choice, only a subjective “feel” for the data’s separation.
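The sampling strategy above might be sketched like this; the function name and defaults are illustrative, and the distances are squared to match the no-square-root distance function used by the classifier:

```python
import random

def separation_profile(data, labels, n_samples=3, seed=0):
    """For a few randomly sampled points, return the (squared distance,
    same-class?) pairs to every other point, sorted by ascending
    distance.  The smallest distance at which an opposite-class point
    appears hints at how large delta can be before accuracy suffers."""
    rng = random.Random(seed)
    profiles = []
    for i in rng.sample(range(len(data)), n_samples):
        pairs = sorted(
            (sum((a - b) ** 2 for a, b in zip(data[i], data[j])),
             labels[i] == labels[j])
            for j in range(len(data)) if j != i
        )
        profiles.append(pairs)
    return profiles
```

Scanning each profile for the first False entry gives the per-point margin; taking the minimum over the sampled points yields a conservative ballpark for delta.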
It is up to the user to weigh the accuracy of the classifier versus the speed at which it classifies new points. Some data sets will be very conducive to Special Search while others will perform very poorly.