Tuesday, January 10, 2017

Unit testing Spring caching with grails

Grails unit tests do not autowire by default (certainly in version 2.2.3), so to enable caching in a unit test we had to jump through a few hoops.

The easiest thing in the end was to manually create an XML file to load the bean in question. (Once we created the bean in the XML, the @Cacheable annotations were recognised.)
This worked in terms of loading the bean with the caching functionality built in, but then we began to run into ClassCastExceptions, because of the way that Spring implements caching (using proxies). See http://spring.io/blog/2012/05/23/transactions-caching-and-aop-understanding-proxy-usage-in-spring

The easiest solution we found to this was to create an interface for the service in question. Then Spring was able to cast the dynamically generated proxy class to the interface.

Test xml (in test/unit)

        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:p="http://www.springframework.org/schema/p"
               xmlns:cache="http://www.springframework.org/schema/cache"
               xsi:schemaLocation="
                http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                http://www.springframework.org/schema/cache http://www.springframework.org/schema/cache/spring-cache.xsd">

          <cache:annotation-driven/>

          <bean id="cacheManager"
                class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cache-manager-ref="ehcache"/>

          <bean id="ehcache"
                class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:config-location="TestEhCache.xml"/>
        </beans>

EhCache.xml (in grails-app/conf)


        <ehcache>
            <defaultCache ... overflowToDisk='false' />
        </ehcache>


import org.springframework.cache.annotation.Cacheable

interface MyServiceIF {

    // calling stored procedure to determine the as_of_date
    @Cacheable("priorToDate")
    public Date priorToDate(String yyyymmdd);
}

class MyService implements MyServiceIF {

    static transactional = false

    public Date priorToDate(String yyyymmdd) {
        return evaluate(yyyymmdd, -1);
    }
}

Spock Test

Note also that if you declare a method cacheable with multiple parameters, you may want to define a keyGenerator (or ignore the params).
 public void validateCache() {
        given:
        CacheManager cacheManager = ctx.getBean("cacheManager")
        Cache dateCache = cacheManager.getCache(testName)
        String result1FromCache = dateCache.get(dateToTest)   // Verify that the cache is empty

        when:
        Object resultFromSds
        Object result2FromSds
        Object result2FromCache
        if (testName == "futureBusinessDate" || testName == "pastBusinessDate") {
            resultFromSds = dateToString(daoService."$testName"(dateToTest, 1))
            Object key = new DefaultKeyGenerator().generate(daoService,
                    DalSdsDateIF.class.getMethod(testName, String.class, int.class),
                    dateToTest, 1)   // compound params, so must generate key
            result2FromCache = dateToString(dateCache.get(key).get())
            result2FromSds = dateToString(daoService."$testName"(dateToTest, 1))
        } else {
            resultFromSds = dateToString(daoService."$testName"(dateToTest))
            result2FromCache = dateToString(dateCache.get(dateToTest).get())
            result2FromSds = dateToString(daoService."$testName"(dateToTest))   // expect this to come from the cache, so it will not hit the log again
        }

        then:
        result1FromCache == null          // verify cache was empty
        resultFromSds == expectedResult
        count == expectedCallsToLog       // count of calls to log.info; expect one per call, except for isBusinessDate

        where:
        testName             | dateToTest   | expectedResult | expectedCallsToLog
        "priorToDate"        | "2016-09-06" | "2016-09-02"   | 1
        "nextToDate"         | "2016-09-02" | "2016-09-06"   | 1
        "futureBusinessDate" | "2016-09-02" | "2016-09-06"   | 1
        "pastBusinessDate"   | "2016-09-06" | "2016-09-02"   | 1
 }


e.g. in the test:
Object key = new DefaultKeyGenerator().generate(daoService, DalSdsDateIF.class.getMethod(testName, String.class, int.class), dateToTest, 1)   // compound params, so must generate key

If you have parameters in the method call that you don't want influencing the cache, you can ignore them.
e.g. to ignore params you can do this

 @Cacheable(value="myCache", key="#root.methodName")   // Force the key to be fixed no matter what params are passed in
 public Map getValues(List warnings){
     ...
 }

Friday, December 30, 2016

Simple script to send email once job finishes

If you have a unix process running, then this script can be used to send an email when it is done.
First find the pid of the process.

(while kill -0 <pid>; do sleep 1; done) && (echo 'Process Finished now' | mail -s 'job done' Email@recipient.com)
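The same idea as a self-contained script, as a sketch only (the script name and recipient address are placeholders, and mail assumes a configured local MTA):

```shell
#!/bin/sh
# Usage: waitmail.sh <pid>
# Polls until <pid> exits, then sends a notification email.
pid=$1

# kill -0 sends no signal; it only tests whether the process still exists
while kill -0 "$pid" 2>/dev/null; do
    sleep 1
done

echo 'Process Finished now' | mail -s 'job done' Email@recipient.com
```

Note that kill -0 works on any process you have permission to signal, not just children of the current shell, which is why it suits watching an already-running job.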

Thursday, October 13, 2016

Oracle SQL notes

I'm about as far from a SQL expert as it's possible to get, so these notes are probably quite basic.

  • Insert into a table only if the row does not exist

INSERT INTO table (name, value)
SELECT 'jonny', NULL
  FROM dual -- Not Oracle? No need for dual, drop that line
 WHERE NOT EXISTS (SELECT NULL -- canonical way, but you can select
                               -- anything as EXISTS only checks existence
                     FROM table
                    WHERE name = 'jonny');

Note it's possible to have multiple conditions, e.g. if table3 also has expected data:

    INSERT INTO table (...)
    SELECT HIBERNATE_SEQUENCE.NEXTVAL, 1, (select id from table2 where account='052BAFJJ8'), (select id from table3 where name='ABCDE'), 1, 0 FROM dual
     WHERE NOT EXISTS (SELECT id FROM table WHERE account_id = (select id from table2 where account='052BAFJJ8'))
       AND EXISTS (SELECT id FROM table3 where name='ABCDE');
Note, there may be race conditions with this approach. In our case we were running it in Liquibase scripts and didn't have multiple servers running in parallel, so this wasn't an issue.
  • Delete duplicates
After all my inserts, I ended up with some unexpected duplicates, so I had to delete them.
I was able to find the duplicate rows easily enough: I searched for all rows with the same name having a count > 1.
However, the table had foreign key relationships, which meant we could only delete the newly created rows, and these weren't easily identifiable. We decided on the following approach.
We ran the following expression twice, once with max(rowid) and once with min(rowid). Since the table had foreign key dependencies, the delete couldn't always succeed, so this way I managed to get all the duplicates. Note if you have many duplicates this may prove more problematic.
exec dbms_errlog.create_error_log(dml_table_name => 'table3', err_log_table_name => 'table3_ERRORS')

DELETE FROM table3
 WHERE rowid IN (SELECT min(rowid) FROM table3 GROUP BY name HAVING count(*) > 1)
   LOG ERRORS INTO table3_ERRORS ('Is referenced') REJECT LIMIT 999999999;

Thursday, January 28, 2016

Security conscious coding

OWASP maintain a top 10 list of security vulnerabilities in systems ( https://www.owasp.org/index.php/Top_10_2013-Top_10 )

They have also now introduced a developer-centric top ten list of proactive controls.
The full document is here: https://www.owasp.org/images/5/57/OWASP_Proactive_Controls_2.pdf

1. Verify for Security Early and Often
2. Parameterize Queries
3. Encode Data
4. Validate All Inputs
5. Implement Identity and Authentication Controls
6. Implement Appropriate Access Controls
7. Protect Data
8. Implement Logging and Intrusion Detection
9. Leverage Security Frameworks and Libraries
10. Error and Exception Handling

Thursday, January 14, 2016

Hibernate n+1, and Error: a different object with the same identifier value was already associated with the session

I ran into this issue today. It is somewhat related to the causes behind the LazyInitializationException, in that it is Hibernate sessions getting into a twist.

I had a class structure where we had a Process object that contains many ProcessEvents. Also, the ProcessEvents could be nested, so optionally they could refer to a parent ProcessEvent.

In grails

class Process {

  static hasMany = [processEvents: ProcessEvent]

  public enum ProcessStatus { /* values elided */ }
  public enum ProcessSeverity { /* values elided */ }

  //Persisted members
  String name
  Date initiated
  Date complete
  Float progress //Progress percentage
  ProcessStatus status
  String userId
  Map context     //Map to pass arbitrary data
  Date dateCreated
  Date lastUpdated
  ProcessSeverity severity // To determine how to log the error

  static transients = ["context"]

  static constraints = {
    name(blank: false, nullable: false)
    initiated(blank: false, nullable: false)
    complete(blank: true, nullable: true)
    progress(blank: false, nullable: false, max: 100F)
    userId(blank: false, nullable: false, maxSize: 20)
    severity(nullable: true)
    dateCreated(editable: false, required: true)
    lastUpdated(editable: false, required: true)
  }

  static mapping = {
    processEvents sort: 'id'
    processEvents fetch: 'join'
    sort initiated: 'desc'
    processEvents cascade: "all-delete-orphan"
  }
}
A few things of note here. In the mapping we are specifying fetch: 'join' for the processEvents. This means that instead of loading the processEvents individually (n+1 loads), we bulk load them all in advance. Be careful with this if you have large tables, as it can quickly mount up.

Note also we set the processEvents to cascade all deletes, so that all child events get deleted when the parent Process is deleted. This may not be needed, since we have a belongsTo in the ProcessEvent below.

class ProcessEvent {

  static belongsTo = [process: Process, parent: ProcessEvent]

  public enum EventLevel {DEBUG, INFO, WARN, ERROR}

  String message;
  EventLevel eventLevel
  Date dateCreated
  Date lastUpdated
  Date timestamp
  Boolean hasChildEvents = false // This is for a performance increase, instead of calling the DB.

  static constraints = {
    parent(nullable: true)
    dateCreated(editable: false, required: true)
    lastUpdated(editable: false, required: true)
    timestamp(editable: false, required: true)
    hasChildEvents(required: false, nullable: true)
  }

  /** table mappings */
  static mapping = {
    parent index: 'processEvent_idx'
    process index: 'process_idx'
    sort id: "asc"
  }

  void setMessage(String d){
    message = d?.length() > 3000 ? d.substring(0, 3000) : d
  }
}


In ProcessEvent we have two belongsTo relations, denoting that every ProcessEvent is a child of a single Process, and (optionally) of a single parent ProcessEvent.

We began to see the Hibernate error "a different object with the same identifier value was already associated with the session" once we added the belongsTo parent ProcessEvent clause.

The problem was in our add method. Originally we had it coded this way. This adds a new ProcessEvent to an existing Process object and an existing parent ProcessEvent. (We have another method where we do not specify a ProcessEvent parent, but that was not causing any problems.)

public ProcessEvent addProcessEvent(Long argProcessId, String argMessage, EventLevel argEventLevel, ProcessEvent parent) {
    if (parent != null) {
        parent = parent.refresh()
        parent.hasChildEvents = true
        parent.save(flush: true)
    }
    Process pd = Process.findById(argProcessId)
    ProcessEvent pe = new ProcessEvent(
        message: argMessage,
        eventLevel: argEventLevel,
        timestamp: new Date(),
        parent: parent)
    pd.addToProcessEvents(pe)

    saveProcess(pd)  // saves top level Process, flushes, and logs errors
    return pe
}


When we got to pd.addToProcessEvents (which basically does a save on the parent Process object), it would fail and throw the Hibernate exception.

With some help from Stack Overflow, it turned out that we had multiple Java objects referring to the same row.
The problem was that we had a Java reference to the parent ProcessEvent (parent). However, we were also loading (findById) the Process object, which loaded a second Java reference to the same ProcessEvent row. When we then saved, there were two in-session objects for that one row, which Hibernate rejects.

The correct version was to load the top level Process object first, and then use the parent ProcessEvent from there. See below

public ProcessEvent addProcessEvent(Long argProcessId, String argMessage, EventLevel argEventLevel, ProcessEvent p) {
    Process pd = Process.findById(argProcessId)
    // use the copy of the parent event already loaded in pd's collection,
    // rather than the detached reference that was passed in
    Iterator i = pd.processEvents.toArray().iterator()
    ProcessEvent parent = null
    while (i.hasNext()) {
        ProcessEvent next = i.next()
        if (next.id == p?.id)
            parent = next
    }
    if (parent != null) {
        parent.hasChildEvents = true
    }
    ProcessEvent pe = new ProcessEvent(
        message: argMessage,
        eventLevel: argEventLevel,
        timestamp: new Date(),
        parent: parent)
    pd.addToProcessEvents(pe)
    saveProcess(pd)
    return pe
}



Worth mentioning also are some other pages with good information
GORM Gotchas part 1, part 2, and part 3

Wednesday, October 28, 2015

Linux scripting, date functions and renaming

This is something I always shy away from, as I view it as somewhat of a black art.

Anyway, every now and again you have to bite the bullet, so here's a script I wrote to bulk rename a number of files on New Year's day (perl rename was not available).

There are some nice tricks in here

# Usage
# ksh newYearsTasks.sh [DayOfWeek]     Note DayOfWeek is optional and only to be used for testing purposes
# e.g.  ksh newYearsTasks.sh           This is normal behaviour. The script will default to today's day of week
# ksh newYearsTasks.sh Mon             This will override the actual DayOfWeek to be Monday, for testing purposes
# ksh newYearsTasks.sh Tue             If a parameter other than Mon is set, then DayOfWeek is set NOT to be Monday. This is for forcing testing of this behaviour on Mondays
# Requirement is to copy files from a number of src dirs to dest dirs, and rename them somewhat along the way
#
# Testing Note
# In order to test this we need to create files in the archive directories that would be present on New Year's day.
# Pay attention to the year component. The script calculates the previous year to generate the rename command, so
# if you are testing this in 2015 make sure you rename the old files to have 2014 as the year component. Likewise, if you are testing in 2016, set the old files to have 2015 as the year component.
# The script copies files based on their age. If the script is run on a Monday it will copy files that are less than 3 days old.
# If the script is run on any other day it will try to copy files that are less than 1 day old (based on modification date).
# Both scenarios should be tested, and for simplicity's sake it is recommended to test Monday's behaviour first.
# 1/ Test Mondays
# To test Monday's behaviour you will need to use the touch -d command to set some archive files 3 days old. testDate format is yyyyMMdd,
#     e.g. if running tests Nov 2, run touch -d 20151030 * in each of the archive directories to be tested, to set the files to be 3 days old
# To manually invoke the script you can call "ksh newYearsTasks.sh Mon", which will force the script to behave as if it is Monday regardless of the day of week.
# 2/ Test other weekday
#     Use touch * in each of the archive directories to be tested, to update the files to be less than one day old
# Run this to force testing with NotMonday behaviour ... "ksh newYearsTasks.sh Tue"

# First check if server is active
# Set variables   (LOG_FILE is set elsewhere; path elided)
DayOfWeek=$(date +%a)
typeset -i Year OldYear
Year=$(date +%Y)
OldYear=$((Year-1))   # previous year, used to generate the rename command
CurrentDateTime=$(date '+%Y%m%d %H:%M:%S')

function Rename {
    echo "$CurrentDateTime : newYearsTasks.sh : Copying files dropped in the last $1 day(s) from archive to filedrop " >> $LOG_FILE
    # find files modified in the last $1 day(s), and copy them to dest
    find . -mtime -$1 -type f -exec cp {} ../dest \;
    cd ../filedrop
    echo "$CurrentDateTime : newYearsTasks.sh : Renaming file date from ${OldYear}$2 to ${Year}0101, and from ${2}${OldYear} to 0101${Year} " >> $LOG_FILE
    rename ${OldYear}$2 ${Year}0101 *.csv
    rename ${2}${OldYear} 0101${Year} *.csv
}

# Can pass in a parameter to override today's day of week for testing. Set it to Mon for Monday testing, or anything else for other-day testing
if [[ $# -eq 1 ]]; then
    if [[ $1 = "Mon" ]]; then
        DayOfWeek="Mon"
        echo "$CurrentDateTime : newYearsTasks.sh : Overriding Day Of week to $1" >> $LOG_FILE
    else
        DayOfWeek="Tue"
        echo "$CurrentDateTime : newYearsTasks.sh : Overriding Day Of week to Tue" >> $LOG_FILE
    fi
fi

# Script will run on both active and inactive server, to ensure both Prod and DR have correct files

echo "$CurrentDateTime : newYearsTasks.sh : Executing New Years Day script today " >> $LOG_FILE
# Three arrays representing 1/ srcDirs to copy files from
# 2/ findStr, used by the find command to select filenames for renaming
# 3/ renameRegex, used to rename the files selected by the find
# (srcDirs values elided)
findStr[0]="*ext-[0-9][0-9]*.csv"  # findStr represents the variable used in the find
# rename regex causes the original value ('p'), and the renamed value, to be output. The result is piped to mv
renameRegex[0]="p;s/ext-[0-9]*.csv/ext.csv/" # renameRegex represents the sed regex used to rename
set -A loopIdx 0 1 # loop index values

# loop through srcDirs
for i in ${loopIdx[@]}; do
    dir=${srcDirs[$i]}
    echo "$CurrentDateTime : newYearsTasks.sh : Checking Dir=$dir. Not expecting file delivery from here today, so copying files from previous day" >> $LOG_FILE
    cd $dir
    if [[ $DayOfWeek = "Mon" ]]; then
        Rename 3 1229
    else   # Tues - Sat
        Rename 1 1231
    fi
    #echo "find . -name \""${findStr[$i]}\"" -print | sed \"${renameRegex[$i]}\" | xargs -n2 mv"
    find . -name "${findStr[$i]}" -print | sed "${renameRegex[$i]}" | xargs -n2 mv
done

if [[ $? -eq 0 ]]; then
    echo "$CurrentDateTime : newYearsTasks.sh : Finished without error" >> $LOG_FILE
else
    echo "$CurrentDateTime : newYearsTasks.sh : No files copied. Please check the process flow" >> $LOG_FILE
fi
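The find | sed | xargs -n2 mv line in the script relies on a neat sed trick: the p command prints the original filename, and the result of the s command is then printed by sed's automatic output, so each file yields an original/renamed pair that xargs -n2 feeds to mv. A standalone demo of just that trick (the file names and directory here are made up):

```shell
#!/bin/sh
# Demo: rename *ext-<digits>.csv to *ext.csv using the print-then-substitute trick
mkdir -p /tmp/renamedemo && cd /tmp/renamedemo
touch file-ext-20161231.csv other-ext-20161230.csv

# sed prints each name twice: once via 'p' (original), once via auto-print (substituted).
# xargs -n2 then consumes them pairwise as: mv <original> <renamed>
find . -name "*ext-[0-9]*.csv" -print | sed 'p;s/ext-[0-9]*\.csv/ext.csv/' | xargs -n2 mv

ls
```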

Tuesday, September 02, 2014

java.net.SocketException: Too many open files

We've recently moved our servers from Windows to Linux, and started getting the above error.

It's a well-known scenario: Linux keeps a limit on the maximum number of open files, and we were exceeding it. It's possible to increase the limit, but first we should see if there is any process that may be causing the problem.

This is taken from http://doc.nuxeo.com/display/KB/java.net.SocketException+Too+many+open+files

Count File Descriptors in Use

Count Open File Handles
sudo lsof [-u user] | wc -l
Count File Descriptors in Kernel Memory
sudo sysctl fs.file-nr
# => The number of allocated file handles
# => The number of unused-but-allocated file handles
# => The system-wide maximum number of file handles
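To see which process may be causing the problem, the per-process counts can be read straight from /proc (a Linux-only sketch; run as root to see every process, then follow up with lsof -p on the top offender):

```shell
#!/bin/sh
# List the top 10 processes by number of open file descriptors (count pid)
for d in /proc/[0-9]*/fd; do
    pid=${d#/proc/}; pid=${pid%/fd}
    # entries may vanish mid-loop or be unreadable without root; ignore errors
    echo "$(ls "$d" 2>/dev/null | wc -l) $pid"
done | sort -rn | head -10
```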

There is a global limit and a per user limit

Raising the Global Limit

  1. Edit /etc/sysctl.conf and add the following line:
    fs.file-max = 65536
  2. Apply the changes with:
    sudo sysctl -p /etc/sysctl.conf

Raising the per-User Limit

On some systems it is possible to use the ulimit -Hn 8192 and ulimit -Sn 4096 commands. However most of the time this is forbidden and you will get an error such as:
ulimit: open files: cannot modify limit: Operation not permitted
In those cases, you must:
  1. Edit as root the following system configuration file:
    % sudo vi /etc/security/limits.conf
  2. Modify the values for user
    user           soft    nofile          4096
    user           hard    nofile          8192
If you want to raise the limits for all users you can do instead:
*           soft    nofile          4096
*           hard    nofile          8192
To check whether the changes are taken into account, open a new session and check the value:
% su user
% ulimit -n
If you see here that the new value has not been taken into account, fix it as follows:
  1. Edit /etc/pam.d/su:
    % sudo vi /etc/pam.d/su
  2. Uncomment the line:
    session    required   pam_limits.so
    The change should now be taken into account the next time you login with your user:
    % su user
    % ulimit -n

Thursday, August 28, 2014

Log file rotate

Just to remind myself that the Linux logrotate daemon is very handy for those occasions when applications simply output to a single file that keeps growing over time.

This will rotate daily (and add a date extension to the old file); after 31 days it will start to delete old files. It will not compress them (remove the nocompress line if you want it to compress the old log files). Note the size attribute is no longer specified, as it overrides the daily directive and only rotates files greater than 10M. (Apparently there is a maxsize directive in newer versions of logrotate that can be combined with daily.) See http://serverfault.com/questions/391538/logrotate-daily-and-size

This job gets run nightly. If you want to run it immediately (e.g. to rotate a large file), then:

>sudo /usr/sbin/logrotate /etc/logrotate.conf

e.g. > sudo vi /etc/logrotate.d/tomcat
/dirto/catalina.out /dirto/tomcat.log /dirto/admin.log /dirto/stacktrace.log {
  daily
  dateext
  rotate 31
  nocompress
}
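Before relying on a new config, logrotate can be exercised safely: -d parses the config and prints what would happen without touching any files, and -f forces an immediate rotation. A sketch against the tomcat config above:

```shell
# Dry run: show what would be rotated, without rotating anything
sudo /usr/sbin/logrotate -d /etc/logrotate.d/tomcat

# Force an immediate rotation (useful the first time, to rotate an already-huge file)
sudo /usr/sbin/logrotate -f /etc/logrotate.d/tomcat
```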

Wednesday, December 11, 2013

Windows task manager

Having trouble figuring out which process is which in Task Manager?

Obviously there is also Sysinternals Process Explorer (procexp); however this isn't always available if you are troubleshooting a production issue.

Here are some tips:

wmic process list

or for more specific fields you can use get:

WMIC PROCESS get Caption,Commandline,Processid

If you want more information on a process (such as the command line used to start it), then you can combine this with Task Manager. (First add the PID to the list of columns that Task Manager displays), then correlate it with the list from wmic.
e.g. if grep is installed on Windows:

wmic process get commandLine,processId | grep <pattern>

If grep is not installed (and there seems to be a problem with findstr), you can pipe it to a temp file and use a decent editor such as Notepad++:

wmic process list > tempFile

Also, tasklist is a command-line version of Task Manager. It lists PID and memory usage, so if you can't add the PID column to Task Manager (apparently you can't in Windows Server 2003), then you can correlate the memory usage back to Task Manager and get the PID that way. (Note tasklist has some other options that may help.)

Wednesday, November 27, 2013

Groovy CUSIPs

Some sample code for verifying CUSIPs:

// Note we must map these characters specifically, since the ascii mapping does not work
//println "*#@ = "+((int)('*'-'A'))+", "+((int)('#'-'A'))+", "+((int)('@'-'A'))

// sample cusips taken from http://www.ksvali.com/2009/02/security-ids-symbol-cusip-isin-sedol-ric-code/
def cusips = [  "14149YAR9", // - Corporate Bond - Cardinal Health Inc
                "126650BG4", // - Corporate Bond -CVS Caremark Corp
                "254709AC2", // - Corporate Bond -Discover Finl Services
                "437076AQ5", // - Corporate Bond -Home Depot Inc
                "441060AG5", // - Corporate Bond -Hospira Inc
                "50075NAN4", // - Corporate Bond -Kraft Foods Inc
                "574599BE5", // - Corporate Bond -Masco Corp
                "617446B99", // - Corporate Bond -Morgan Stanley
                "637640AC7", // - Corporate Bond -Natl Semicon Corp
                "713291AL6", // - Corporate Bond -Pepco Hldgs Inc
                "852061AE0", // - Corporate Bond -Sprint Nextel Corp
                "887317AA3", // - Corporate Bond -Time Warner Inc
                "925524BF6", // - Corporate Bond -Viacom
                "125509BG3", // - Corporate Bond -Cigna Corp
                "125896AV2"] // - Corporate Bond -CMS Engy Corp

cusips.each {
    println "Verifying $it " + verifyCusip(it)
}

public boolean verifyCusip(cusip){
   int check = getCheckCode(cusip.trim().toUpperCase())
   //println "  CheckCode = "+check
   //println "Checking $check == "+cusip.charAt(cusip.size()-1)
   return check == ((int)cusip.charAt(cusip.size()-1) - (int)'0')
}

/** Code based on the algorithm at https://en.wikipedia.org/wiki/CUSIP */
public int getCheckCode(String cusip){
   int sum = 0
   int v
   for (int i = 0; i < 8; i++) {   // the 9th character is the check digit itself
      char c = cusip.charAt(i)
      if(Character.isDigit(c))
         v = (int)c-(int)'0'
      else if(Character.isLetter(c))
         v = (int)c - (int)'A'+10
      else if(c == '*')
         v = 36
      else if(c == '@')
         v = 37
      else if(c == '#')
         v = 38
      if(i%2!=0) // if i NOT even   N.B. since we count from 0 it is not even. If we count from 1 it is even
         v *= 2
      //println "  for letter $c val = $v"
      sum += v.intdiv(10) + (v % 10)   // intdiv for integer division
   }
   return (10 - (sum % 10)) % 10
}

Monday, November 04, 2013

7 zip

7zip is a nice freeware zip program.

For some reason my install had failed to open .zip files by default, despite my setting it multiple times via the 7zip options page.

Here's how I fixed it (thanks WinZip): http://kb.winzip.com/kb/entry/155/

Also worth noting: it is good at deleting files and directories that other programs (including Windows Explorer) can't. I have had to install it in order to delete some stubborn files that were lying around.

Thursday, October 10, 2013

Groovy Closures, and line endings

I've been using Groovy for a while now, so when the following code was crapping out at the compile stage I was confused.

Date newDate =  (Date)toObject("Problem converting imDate", new StringBuilder(), [:], {
            println "Parse date = $it";
            df.parse(it.date) })

private Object toObject(def errMsg, StringBuilder errors, def row, Closure c){
        try {
            Object ret = c.call(row);
            if (ret == null)
                throw new RuntimeException("Returned null")
            return ret
        }catch (Exception e){
            errors.append("$errMsg :-${e.getMessage()}\n")
        }
}

This is simply creating a method with a Closure parameter and calling it with an anonymous closure. Simple stuff... but the compiler doesn't like it.

BTW I did look into using types and generics so I didn't need to cast the response, but it doesn't appear to be possible to define the return type of a Closure, so that's a little bit of a gap in Groovy.

Changing the closure from anonymous to defined fixed the problem
def c= {
            println "Parse date = $it";
            df.parse(it.date) }

Date newDate =  (Date)toObject("Problem converting imDate", new StringBuilder(), [:],c)

But I didn't want to do this, as I was calling the method multiple times and it messed up my Feng Shui.

So the problem, apparently, was the casting. The compiler seems to expect the object being cast to start on the same line as the (Date) cast. Wrapping the method call in parentheses fixed the problem.

Date newDate =  (Date)(toObject("Problem converting imDate", new StringBuilder(), [:], {
            println "Parse date = $it";
            df.parse(it.date) }))

Monday, September 23, 2013

Big Numbers

Sometimes it's good to have them all listed in one place:



http://en.wikipedia.org/wiki/Names_of_large_numbers http://en.wikipedia.org/wiki/Binary_prefix

Monday, August 12, 2013

Grails command update

Grails is quite memory hungry. To save time I often update the grails batch file to give it more memory on startup, so you don't get outOfMemory errors such as:
java.lang.OutOfMemoryError: PermGen space


Edit startGrails.bat to include the following line (near the bottom, e.g. on line 135 in Grails 1.3.5):

set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m -XX:MaxPermSize=512m

Monday, July 22, 2013

SPA... Single Page Applications.. My journey

Going to try and document my forays into web 2.0 (an old term, but I think it makes sense for these new JS framework based webapps).

Here are my requirements: I want to develop a modern webapp using NodeJS on the server side, and some of the newer client side frameworks. As a tester project I'm going to implement a time management system. This is a personal bugbear of mine, since every week and month I have to struggle through some overweight and cumbersome time management tool to submit my work hours.

To start with I'm going to try AngularJS and Bootstrap.

So the first step is to find a good template/boilerplate for this. I stumbled upon Yeoman (http://yeoman.io/), which is an opinionated (I like this) framework for front end development. It's made up of 3 tools: yo (a scaffolding cmd line tool), Bower (dependency mgmt), and Grunt (build tasks such as minification etc), so I'm going to start using that. (See Aside 1)

I already had nodejs installed, so here goes

npm install -g yo
npm install -g generator-webapp
npm install -g generator-angular
yo angular --minsafe    

This gives you lots of options, but by default will set up Angular, Twitter Bootstrap, Compass, and a load of Angular add-ons.

yo angular:controller myController
yo angular:directive myDirective
yo angular:filter myFilter
yo angular:service myService


In the past I've stuck to my Java IDEs, such as Eclipse or IntelliJ (see Aside 2 below).
However, nowadays all the kool kids are using Sublime.
Alternatives are Brackets and Atom (the one I'm using). Atom comes with apm (the atom package manager), which is built on npm and comes with a rich array of plugins.
Ones I recommend are:
Typescript - since I am planning on using TypeScript for my JS development going forward

Woah... an old-fashioned text based generator tool (and Paul Irish is one of the people behind it). Old-skool, I like it. Reminds me of rogue.
As you run npm install scripts, yo's menu will grow, offering you new options (e.g. the Angular generator and webapp generator are added as you run the install scripts above).

Aside 1: Console2
One of the side benefits of going through this process was discovering a recommendation to use Console2 (or PowerShell) for Windows users. Here's a blog entry on getting the most out of it: http://www.hanselman.com/blog/Console2ABetterWindowsCommandPrompt.aspx

Aside 2: IntelliJ has a nodejs plugin http://www.jetbrains.com/idea/webhelp/browse-repositories-dialog.html#search
This allows you to run nodejs from IntelliJ. You need to point it at app.js, which is normally located at app\scripts\app.js

Rogue Aside
The original ascii based maze dungeon game with randomly generated levels (www.play.vg/games/87-Rogue.html?). Personally I found rogue way too difficult.
Nethack is another, again very difficult, but you do start with a pet, and you can encounter your own previously killed characters in later games: http://en.wikipedia.org/wiki/NetHack
Far more approachable, and the only one I managed to finish, was Larn: http://en.wikipedia.org/wiki/Larn_%28video_game%29

Wednesday, July 03, 2013

Groovy Dynamic code

This is fairly old, but still cool, and I just had a need to use it today, so I thought I'd post about it.

My problem was I wanted to mock out a domain class. If a property value was null, then I wanted to create a mock value for that property. If the property had a value, then I wanted to return that.

Using Dynamic programming this was easy.

class A{
  def doit(){
        println "doing..";
  }
  String var="yoyo";
}

def a = new A();

a.metaClass.getProperty = {p ->
    def meta = a.metaClass.getMetaProperty(p)
    if (meta) {                              // does the property exist?
        def mp = meta.getProperty(delegate)
        if (mp != null)                      // does it have a value assigned?
            return mp
    }
    return "DynoGet${p}"
}

println a.var
println a.mar
println a.var



The key here is metaClass.getMetaProperty and meta.getProperty(delegate). They check if the property exists first, and then if it has a value assigned. If not, then I create a default value.

Monday, March 04, 2013


Gradle

Gradle is my new favourite build tool. I love the Groovy syntax and built-in libraries, and find it a huge improvement on Maven and Ant.

gradle tasks   // This will list all the tasks that the build file supports

To exclude a task, use -x.

e.g. to exclude tests from a build

gradle build -x test 

Gradle comes with a series of plugins which can be used for commonly executed builds (e.g. java, groovy), and also ones for IDEs such as idea and eclipse. These provide the tasks commonly used with each.

apply plugin: 'java'   // Provides build compile test testCompile
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'groovy'
apply plugin:'application'

Running gradle tasks on this gives
C:\svn>gradle tasks

All tasks runnable from root project

Application tasks
distTar - Bundles the project as a JVM application with libs and OS specific scripts.
distZip - Bundles the project as a JVM application with libs and OS specific scripts.
installApp - Installs the project as a JVM application along with libs and OS specific scripts.
run - Runs this project as a JVM application

Build tasks
assemble - Assembles the outputs of this project.
build - Assembles and tests this project.
buildDependents - Assembles and tests this project and all projects that depend on it.
buildNeeded - Assembles and tests this project and all projects it depends on.
classes - Assembles the main classes.
clean - Deletes the build directory.
jar - Assembles a jar archive containing the main classes.
testClasses - Assembles the test classes.

Documentation tasks
groovydoc - Generates Groovydoc API documentation for the main source code.
javadoc - Generates Javadoc API documentation for the main source code.

Help tasks
dependencies - Displays all dependencies declared in root project 'myproj'.
dependencyInsight - Displays the insight into a specific dependency in root project 'myproj'.
help - Displays a help message
projects - Displays the sub-projects of root project 'myproj'.
properties - Displays the properties of root project 'myproj'.
tasks - Displays the tasks runnable from root project 'myproj' (some of the displayed tasks may belong to subprojects).

IDE tasks
cleanEclipse - Cleans all Eclipse files.
cleanIdea - Cleans IDEA project files (IML, IPR)
eclipse - Generates all Eclipse files.
idea - Generates IDEA project files (IML, IPR, IWS)

Verification tasks
check - Runs all checks.
test - Runs the unit tests.

Other tasks

Pattern: build<ConfigurationName>: Assembles the artifacts of a configuration.
Pattern: upload<ConfigurationName>: Assembles and uploads the artifacts belonging to a configuration.
Pattern: clean<TaskName>: Cleans the output files of a task.

To see all tasks and more detail, run with --all.


Monday, January 28, 2013

Windows remote control

I'm going to try and gather some useful commands and tools here for controlling servers. Initially this is based on a need to control a Windows server, but I may add unix commands too, just for completeness.

Start/stop services
  • sc [command]
One issue I had was permissions. Some services (run as the local system user) were unable to start/stop services on a different server (return code 5: Access Denied).

To counter this you can use PsExec - http://technet.microsoft.com/en-us/sysinternals/bb897553 (taken from http://serverfault.com/questions/359010/execute-windows-sc-command-as-another-user)

If you want to script using the Sysinternals tools, the EULA pop-up can cause problems (it pops up and you have to accept it). One way around this is to add -accepteula to your script:

pskill -accepteula

Remote Desktop controls
List existing Remote Desktop connections. (Note: in the end I ran these on the remote machine in question, after using the mstsc command below to log in.)
  • qwinsta /SERVER:<servername>
Once you have the list, you can evict some users using:
  • rwinsta /SERVER:<servername> <sessionId>
> qwinsta
 SESSIONNAME       USERNAME                 ID  STATE   TYPE        DEVICE
>rdp-tcp#29        myUser                  0  Active  rdpwd
 rdp-tcp                                 65536  Listen  rdpwd
                   sa507823                  1  Disc    rdpwd
 console                                     4  Conn    wdcon

Here I had a disconnected instance still hanging around. This could be killed using
rwinsta 1

Also, to 'force' a login you can run the Remote Desktop command in console mode (see http://h0w2.blogspot.com/2011/04/how-to-force-go-in-to-remote-desktop.html):
- From the Start menu, click the RUN command
- Type mstsc /console /v:<nameserver or IP address of the remote computer>
  for example: mstsc /console /v:<servername>
- Or with /admin: mstsc /console /v:<servername> /admin
- Click OK