Sunday, October 16, 2016

Creating data driven tests dynamically with FPTest/DUnit

I've been trying, whenever possible, to write tests alongside the new code I write. In fact, in one recent mid-sized project, I created the tests before writing the code, and the experience was broadly positive.

Given my personal need for a JSON Schema validator implemented in Pascal, I decided to write one and, naturally, to do it with a test-driven approach.

The JSON Schema organization maintains a language-agnostic test suite. It consists of JSON files describing the specification of each rule a validator must check.

I could write a program to convert the JSON specification into Pascal units containing the test cases, like I've done with the Mustache spec, but that is far from an optimal approach, since it imposes the need to recreate the test application each time the spec changes.

So, I looked for a way to create the tests dynamically, reading the JSON files directly. A quick search led me to the solution of creating a custom TTestCase class with a published method (generically named Run) that implements the test. An instance of this class is created for each test, passing the appropriate data.
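A minimal sketch of that approach, assuming a DUnit-style TTestCase whose constructor takes the name of the published method to run (TRunTestCase and its members are illustrative, not actual library code):

  TRunTestCase = class(TTestCase)
  private
    FData: TJSONObject; // test specification loaded from a JSON file
  public
    constructor CreateFromData(Data: TJSONObject);
  published
    // Generic name: every test shows up as "Run" in the runner output
    procedure Run;
  end;

constructor TRunTestCase.CreateFromData(Data: TJSONObject);
begin
  inherited Create('Run'); // register the single published method as the test
  FData := Data;
end;

procedure TRunTestCase.Run;
begin
  // validate FData['data'] against FData['schema'] and assert on the result
end;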

While it works, this approach has the drawback that the generic method name clutters the test runner output with meaningless information. To overcome this issue, it is possible to aggregate tests into one big test case, e.g., a single test case for the JSON Schema type rule instead of one test case per test description.

Confident that a better solution should exist, I dug into the FPTest source code (a Free Pascal port of DUnit2), looking for a way to get the best of both worlds: data-driven dynamic tests with the granularity of handcrafted ones.

Fortunately, I found one. The key is to subclass TTestProc and instantiate it properly.

  TJSONSchemaTestProc = class(TTestProc)
  private
    FData: TJSONObject;
    procedure ExecuteTest(SchemaData, TestData: TJSONObject);
    procedure ExecuteTests;
  public
    constructor Create(Data: TJSONObject);
  end;

constructor TJSONSchemaTestProc.Create(Data: TJSONObject);
begin
  inherited Create(@ExecuteTests, '', @ExecuteTests, Data.Get('description', 'jsonschema-test'));
  FData := Data;
end;

A TJSONObject with the test specification is passed in the constructor. A non-published method (ExecuteTests) is registered with the test description as its name.

procedure TJSONSchemaTestProc.ExecuteTest(SchemaData, TestData: TJSONObject);
var
  Description: String;
  ValidateResult: Boolean;
begin
  Description := TestData.Get('description', '');
  ValidateResult := ValidateJSON(TestData.Elements['data'], SchemaData);
  if TestData.Booleans['valid'] then
    CheckTrue(ValidateResult, Description)
  else
    CheckFalse(ValidateResult, Description);
end;

procedure TJSONSchemaTestProc.ExecuteTests;
var
  SchemaData: TJSONObject;
  TestsData: TJSONArray;
  i: Integer;
begin
  SchemaData := FData.Objects['schema'];
  TestsData := FData.Arrays['tests'];
  for i := 0 to TestsData.Count - 1 do
    ExecuteTest(SchemaData, TestsData.Objects[i]);
end;

In ExecuteTests, the assertions described in the specification tests are executed one by one.

With this I get a comprehensive test suite that effectively drives the development.
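For completeness, this is roughly how the suite can be assembled from the spec files (a sketch: GetJSONFiles and ParseJSONFile are hypothetical helpers, and the exact FPTest registration call may differ):

procedure RegisterJSONSchemaTests(const Dir: String);
var
  FileName: String;
  GroupsData: TJSONArray; // each spec file holds an array of test groups
  i: Integer;
begin
  for FileName in GetJSONFiles(Dir) do
  begin
    GroupsData := ParseJSONFile(FileName) as TJSONArray;
    // one TJSONSchemaTestProc per group keeps the runner output granular
    for i := 0 to GroupsData.Count - 1 do
      RegisterTest(TJSONSchemaTestProc.Create(GroupsData.Objects[i]));
  end;
end;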


While it took me some time to understand the FPTest/DUnit2 source code, the solution ended up simpler and clearer than I initially thought, to the point that I foresee using this technique to test other projects, not only third-party specifications.

BTW: the test runner source code can be found here.

Sunday, July 24, 2016

Effect of using a constant parameter for string types (revisited)

Eight long years ago I wrote a post about using const for string parameters and its effects on the generated code. It showed benefits in using const for string types, but the gain was far from the difference shown by a similar test done with Delphi. I never bothered to replicate that test in Free Pascal; I took it for granted.

As a discussion arose in the forum, I decided to do a test equal to the Delphi one: basically, just using the parameter without modifying it, which is, by the way, the most common usage.

I compared
procedure ByValueReadOnly(V: String);
begin
  DoIt(V);
end;
with
procedure ByReferenceReadOnly(const V: String);
begin
  DoIt(V);
end;    
The result speaks for itself
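The reason for the gap is the implicit housekeeping: for a by-value string parameter the compiler increments the reference count and sets up an implicit try..finally frame, none of which is needed in the const version. Roughly (a sketch of what gets generated, not actual compiler output):

procedure ByValueReadOnly_Expanded(V: String);
begin
  fpc_AnsiStr_Incr_Ref(V);   // implicit reference count increment on entry
  try
    DoIt(V);
  finally
    fpc_AnsiStr_Decr_Ref(V); // implicit decrement plus exception frame on exit
  end;
end;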

Also compared
procedure ByValue(V: String);
begin
  V := V + 'x';
  DoIt(V);
end;
with
procedure ByReference(const V: String);
var
  S: String;
begin
  S := V + 'x';
  DoIt(S);
end;
The generated code is similar, size- and performance-wise.

For those who underestimate the impact of such differences, read this.

For the curious (or the wary), I uploaded the code.

Wednesday, June 01, 2016

Resistance was futile: Git won

I consider myself a seasoned Subversion user. Since at least 2005, when FreePascal migrated to SVN, I've been using it to manage my own projects and projects I collaborate on. It's not by chance that I bought not one, but two licenses for SmartSVN.

I'm also an adept of the "If it ain't broke, don't fix it" philosophy, so I did not bother to change source code management software even with all the fuss around Git. A system designed for distributed development would not improve over what Subversion offers for a "one man" workflow. Or so I thought.

Over time, I started to use Git to interact with a couple of GitHub-hosted projects. Initially just to fetch code and, eventually, to send patches or, better, to do pull requests. Following other teams' workflows based on advanced branch management, I realized that Git could improve my software development efforts. So I bit the bullet, read a book, and started to migrate my repositories.

And I do not regret it. Here are a few cases where Git made my life easier:


  • Save and share local modifications. There were times when I needed to test local, work-in-progress modifications in other environments before committing. To do so, I had to keep moving a patch file around. Now I just create a temporary branch, push it, and later delete it. No need to worry about HD crashes or messing with the main development line.
  • Sync a forked repository. In the Subversion days, I had to manually sync the Lazarus VirtualTreeView fork. Only those who have done a three-way merge of 1MB of heavily modified source code know how hard it is. Now it is a matter of doing a merge and resolving a few conflicts.
  • Test different Lazarus package versions. When maintaining Lazarus packages in different branches, switching between versions required loading the respective file. With Git there is no need to load a different file: just check out the branch and recompile.
  • Develop alongside upstream projects. Sometimes there are changes that are not suitable to send upstream. Git makes it easy to maintain personal changes while tracking the main development line. No need to bother the upstream maintainers.

Saturday, May 24, 2014

MV* with Lazarus: between Presenter and ViewModel

The MVC conundrum


Sooner or later a programmer will come across the acronym MVC (Model-View-Controller). Despite its ubiquitous presence in discussions and articles about code design, there are few comprehensive examples of using this pattern with Delphi / Lazarus. Most of the examples are just "one form applications" that do not show how to organize a large-scale application. There's not even a common pattern between them: some keep a reference to the controller in the view while others do the opposite.

This is not an Object Pascal exclusive issue. Other languages have the same problem, and the reason is simple: MVC as it was designed for Smalltalk decades ago does not fit naturally into the modern, event-driven GUI architecture. MVP (Model-View-Presenter) and its variations, Passive View and Supervising Controller, update the pattern to match the requirements of today's user interfaces. There's also the Presentation Model and its most famous derivative, MVVM (Model-View-ViewModel).

Meeting the presentation layer


All in all, the objective of all these patterns is to separate the presentation from the business layer, thus facilitating code maintenance. The difference lies in the responsibilities of each presentation layer component and in how they interact with the model (business object). In order to improve the architecture of my Lazarus projects, I found that MVP is the most feasible to use with Object Pascal. There are good examples of MVP/PassiveView with Delphi that could be easily adapted but, in my opinion, it is overkill and counterproductive to define read and write properties for each GUI element.

I have forms as simple as seen below:

procedure TAppConfigViewForm.FormShow(Sender: TObject);
begin
  BaseURLEdit.Text := Config.BaseURL;
end;

procedure TAppConfigViewForm.SaveButtonClick(Sender: TObject);
begin
  Config.BaseURL := BaseURLEdit.Text;
  Config.Save;
end;

Having to define view and presenter interfaces and implement a presenter for such a simple view is a no-no to me. On the other hand, in complex views, handling the GUI logic in a separate component is worth the work.

With this in mind, I defined an interface (IPresentation) to abstract how a view (TForm) is configured and shown. To use one, just reference it by a string id, call SetProperties to set published properties, and ShowModal to show it.

var
  Presentation: IPresentation;

Presentation := PresentationManager['myview'];
Presentation.SetProperties(['ConfigProp', FConfig]).ShowModal;

The presentations are registered in a specialized IoC container through two overloaded methods:

IPresentationManager = interface
  procedure Register(const PresentationName: String; ViewClass: TFormClass); overload;
  procedure Register(const PresentationName: String; PresenterClass: TPresenterClass); overload;
end;

Both have a PresentationName argument that identifies the presentation. The first overload accepts a TFormClass: the view is instantiated directly and there's no presenter. The second accepts a PresenterClass that will be responsible for showing the view.
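Hypothetical registrations of both kinds (the presentation names are illustrative):

PresentationManager.Register('appconfig', TAppConfigViewForm); // view only, no presenter
PresentationManager.Register('nutritionevaluation', TNutritionEvaluationPresenter);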

This is how the presenter and view classes look:

//presenter 
interface

  TNutritionEvaluationPresenter = class(TBasePresenter)
  private
    function GetEvaluationData: TJSONObject;
  public
    function ShowModal: TModalResult; override;
    function CanImportPreviousEvaluation: Boolean;
    procedure ImportPreviousEvaluation;
    procedure SaveEvaluation;
    property EvaluationData: TJSONObject read GetEvaluationData;
  end;

implementation

uses
  NutritionEvaluationView;


function TNutritionEvaluationPresenter.ShowModal: TModalResult;
var
  View: TNutritionEvaluationViewForm;
begin
  View := TNutritionEvaluationViewForm.Create(nil);
  try
    View.Presenter := Self;
    Result := View.ShowModal;
  finally
    View.Free;
  end;
end;

//view
interface

uses
  NutritionEvaluationPresenter;


  TNutritionEvaluationViewForm = class(TForm)
  [..]
  private
    FPresenter: TNutritionEvaluationPresenter;
    procedure SetPresenter(Value: TNutritionEvaluationPresenter);
  published
    property Presenter: TNutritionEvaluationPresenter read FPresenter write SetPresenter;
  end;

procedure TNutritionEvaluationViewForm.ImportPreviousLabelClick(Sender: TObject);
begin
  FPresenter.ImportPreviousEvaluation;
end;

procedure TNutritionEvaluationViewForm.SaveButtonClick(Sender: TObject);
begin
  FPresenter.SaveEvaluation;
end;

procedure TNutritionEvaluationViewForm.FormShow(Sender: TObject);
begin
  ImportPreviousLabel.Visible := FPresenter.CanImportPreviousEvaluation;
  //update GUI with evaluation data
end;


The Presenter here acts more like a ViewModel (exposing data, state, and operations to the view) than a true presenter. It works fine, but with serious caveats:
  • The view and the presenter know each other, which defeats the purpose of independent implementations. It is also not possible to hold a view reference in the presenter interface (circular unit reference)
  • The TForm Presenter property must be set manually (easy to forget)
  • Registering a TForm class that expects a presenter directly will crash, since there'll be no presenter

Interfaces and conventions to the rescue


I was not really satisfied with the above approach, so I reworked the code and arrived at the following design:

  • The presentation register method now has three arguments: name, view class, and presenter class (optional). When the presenter class is not defined, the view is instantiated directly
  • The view (TForm) is shown by the internal code. No need for the presenter to do it
  • If a presenter class is specified, the view class must define a published property named Presenter. An error is thrown if the property does not exist or is of an incompatible type (see the sketch below)
  • The Presenter property can also be declared as an interface, allowing the presenter to be completely decoupled from the view implementation
  • There's the possibility of binding a view instance to a presenter property. Not implemented since, until now, I did not need it
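A rough sketch of the convention-based binding (BindPresenter is an illustrative name, not the actual library code, and the interface-typed case is omitted):

uses
  SysUtils, Forms, TypInfo;

// Look up the published 'Presenter' property on the view by convention
// and assign the presenter instance, validating presence and type.
procedure BindPresenter(View: TForm; Presenter: TObject);
var
  PropInfo: PPropInfo;
begin
  PropInfo := GetPropInfo(View, 'Presenter');
  if PropInfo = nil then
    raise Exception.CreateFmt('%s does not define a published Presenter property',
      [View.ClassName]);
  if (PropInfo^.PropType^.Kind <> tkClass) or
    not Presenter.InheritsFrom(GetTypeData(PropInfo^.PropType)^.ClassType) then
    raise Exception.CreateFmt('Presenter property of %s is incompatible with %s',
      [View.ClassName, Presenter.ClassName]);
  SetObjectProp(View, PropInfo, Presenter);
end;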
So much talk. The current code can be found here and an example of how I use it here.

Wednesday, February 19, 2014

Thoughts about application architecture with Lazarus

The Delphi books of my days (or why I'm not guilty of my applications' poor design)

Like most Lazarus developers, I started coding in Delphi (in fact, I learned computer programming with Turbo Pascal), and to get the most out of the tool I read some books: I bought three or four and read parts of others in bookstores. This is supposed to be a good practice when learning a new technology.

The problem, which I noticed only years later, is the lack of teaching of good application design, like separation of concerns (view, business, and persistence layers) and how to achieve it with Delphi. Most of the books focused on the visual aspects (how to create a good-looking form, reports, etc.) and on how to set up datasets and the DB-aware controls. The closest thing to good practice advice was putting datasets and datasources in data modules instead of forms.

We can't even blame the book authors: Delphi's greatest selling point was (is?) its Rapid Application Development (RAD) features.

Recipes for a bulky spaghetti

In the early days, when developing my applications, I was a diligent student: I put the database logic in data modules and designed the forms as specified in the books. But, as all developers that have created applications with more than three forms know, things started to get hard to evolve and maintain.

Keeping the database components in data modules did not help much. You end up with shared dataset state, and all the problems that come with it, across different parts of the application.

Below is a snapshot of a data module from my first big application (still in production, by the way).

It could be even worse if I had not started to use a TDataset factory in the middle of development.
In the end, the project has code like:

  // a form to select a profile
  DataCenter.PrescriptionProfilesDataset.Open;
  with TLoadPrescriptionProfileForm.Create(AOwner) do
  try
    Result := ShowModal;
    if Result = mrYes then
      DataCenter.LoadPrescriptionProfile;
  finally
    DataCenter.PrescriptionProfilesDataset.Close;
    Destroy;
  end;
  
  //snippet of DataCenter.LoadPrescriptionProfile (copy the selected profile to PrescriptionItemsDataset)
  with PrescriptionItemsDataset do
  begin
    DisableControls;
    try
      FilterPrescriptionProfileItems(PrescriptionProfilesDataset.FieldByName('Id').AsInteger);
      while not PrescriptionProfileItemsDataset.Eof do
      begin
        NewMedication := PrescriptionProfileItemsDataset.FieldByName('Medication').AsString;
        if Lookup('Medication', NewMedication, 'Id') = Null then
        begin
          Append;
          FieldByName('PatientId').AsInteger := PatientsDatasetId.AsInteger;
          FieldByName('Medication').AsString := NewMedication;
          FieldByName('Dosage').AsString := PrescriptionProfileItemsDataset.FieldByName('Dosage').AsString;
          [..]
          Post;
        end;
        PrescriptionProfileItemsDataset.Next;
      end;
      ApplyUpdates;
      PrescriptionProfileItemsDataset.Close;
    finally
      EnableControls;
    end;
  end;

It's not necessary to be a software architecture guru to know that this is unmanageable.

Eating the pasta with business objects and inversion of control

In the projects that succeeded the first one, most of the data-related code is encapsulated in business objects. The data module does not contain TDataset instances anymore; it's responsible only for acting as a TDataset factory and implementing some specific data actions. To work with a dataset, it's necessary just to reference one by a key, which leads to code like the one below:

  FWeightHistoryDataset := DataModule.GetQuery(Self, 'weighthistory');
  FWeightHistoryDataset.ParamByName('prontuaryid').AsInteger := FId;
  FWeightHistoryDataset.Open;

This fixes the shared state issue, since each dataset has a clear, limited scope. But it does not solve the business objects' dependency on a global instance (DataModule), which makes testing harder.

In the project that I'm starting, I solved the dependency on the global instance by using the service locator pattern through the IoC container I cited in a previous post. I defined a resource factory service that is resolved as soon as the business object is created, opening the door to setting up testing environments in a clean manner.
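A hypothetical sketch of the idea (IResourceFactory and the Container calls are illustrative names, not the actual code):

uses
  Classes, DB;

type
  IResourceFactory = interface
    ['{5D1A9E20-3C41-47A6-9B0D-2E8F6C1B7A54}']
    function GetQuery(AOwner: TComponent; const Key: String): TDataset;
  end;

  TPatient = class
  private
    FResources: IResourceFactory;
    FWeightHistoryDataset: TDataset;
  public
    constructor Create;
    procedure LoadWeightHistory;
  end;

constructor TPatient.Create;
begin
  // resolved through the container, so tests can register a fake factory
  FResources := Container.Resolve(IResourceFactory) as IResourceFactory;
end;

procedure TPatient.LoadWeightHistory;
begin
  FWeightHistoryDataset := FResources.GetQuery(nil, 'weighthistory');
  FWeightHistoryDataset.Open;
end;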

All done?

Not yet. The business logic is contained in specific classes, there's no shared state across the application and no hardcoded global dependency, but the view layer (forms) is still (dis)organized in the classic way, with each TForm calling and being called by others directly. This problem, and the solutions I'm working on, will be the subject of a future post.

Saturday, February 15, 2014

Number of units, optimization and executable size

I tend to split my code into small units instead of writing one big unit with lots of classes or functions. The drawback is the large number of files but, in my opinion, the benefits outweigh it.


One thing that always bothered me was whether this practice has any effect on file size.


Seems not. I wrote two versions of the same program: in the first, all classes (one descendant of another) are defined in the same unit, while in the second each class lives in a separate unit. When compiled with debugging info, the multi-unit program is a little bigger. This difference in size does not exist when compiling without debugging info.


As a side note, I noticed that compiling with the -O2 flag leads to smaller executables compared with -O1, the Lazarus default. It's just a few kilobytes, but worth noting.

Sunday, February 02, 2014

Using TComponent with automatic reference count

For some time I have known the concepts of Inversion of Control (IoC) and Dependency Injection (DI), as well as the benefits they bring, but I never used them in my code. Now that I'm starting a new project from scratch, and the deadline is not so tight, I decided to raise the bar for my code design.

I'll implement an IoC container along the lines of VSoft's one. While adding the possibility of doing DI through constructor injection would be great, I won't implement it. It's not a hard requirement of mine, and FPC currently does not support the features (basically Delphi's new RTTI) needed to implement it without hacks.

Automatic reference counting


Most Delphi IoC implementations use COM interfaces and rely on automatic reference counting to manage object instance life cycles. So do I. This approach's drawback is that the class to be instantiated must handle the reference count. When designing new classes, or when the class hierarchy can be modified, it is sufficient to inherit from the TInterfaced* classes. The problem arises when it is necessary to use a class that has a fixed hierarchy and does not handle reference counting, like the LCL ones.

Since I plan to decouple TForm descendants, I need a way to use them with the IoC container. Below is the (rough) design, in pseudo code:

//Define interface
  IPersonView = interface
  ['{9B5BBA42-E82B-4CA0-A43D-66A22DCC10DE}']
    procedure DoIt;
  end;

  //Implement an IPersonView
  TPersonViewForm = class(TForm, IPersonView)   
    procedure DoIt;
  end;

  //Register implementation   
  Container.Register(IPersonView, TPersonViewForm); 

  //Instantiate the view
  Container.Resolve(IPersonView)

At first look, it should work seamlessly. And in fact it does: a TPersonViewForm is instantiated and returned as IPersonView. The only issue is that the object instance will never be freed, even when the interface reference goes out of scope. This occurs because the _AddRef and _Release methods of TComponent do not handle reference counting by default.

VCLComObject to the rescue


Examining the code, we observe that TComponent's _AddRef and _Release forward to the VCLComObject property. There's no good documentation or examples of using this property, so I wrote an example to see if it would solve my problem.
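This is roughly what the forwarding looks like (paraphrased from memory, not the literal RTL source):

function TComponent._AddRef: Integer;
begin
  if FVCLComObject = nil then
    Result := -1 // no-op: the instance is not reference counted
  else
    Result := IVCLComObject(FVCLComObject)._AddRef;
end;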

Basically, I wrote TComponentReference, a descendant of TInterfacedObject with a dummy implementation of IVCLComObject, that takes a TComponent reference in the constructor and frees it in BeforeDestruction.

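For reference, the declaration looks roughly like this (a sketch; the empty dummy implementations of the remaining IVCLComObject methods are omitted):

type
  TComponentReference = class(TInterfacedObject, IVCLComObject)
  private
    FComponent: TComponent; // the component whose lifetime we manage
  public
    constructor Create(Component: TComponent);
    procedure BeforeDestruction; override;
    // ... dummy IVCLComObject method implementations ...
  end;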
constructor TComponentReference.Create(Component: TComponent);
begin
  FComponent := Component;
end;

procedure TComponentReference.BeforeDestruction;
begin
  inherited BeforeDestruction;
  FComponent.Free;
end;


And this is how I tested it:

function GetMyIntf: IMyIntf;
var
  C: TMyComponent;
  R: IVCLComObject;
begin
  C := TMyComponent.Create(nil);
  R := TComponentReference.Create(C);
  C.VCLComObject := R;
  Result := C as IMyIntf;
end;
var
  MyIntf: IMyIntf;
begin
  MyIntf := GetMyIntf;
  MyIntf.DoIt;
end.   

It worked! I got an IMyIntf reference and no memory leaks. Easier than I initially thought.

The code can be downloaded here.