A primer on OWIN cookie authentication middleware for the ASP.NET developer


There have been many changes to how authentication is performed for web applications in Visual Studio 2013. For one, there’s a new “Change Authentication” wizard to configure the various ways an application can authenticate users. The approach to authentication that’s undergone the most changes in this version is local cookie-based authentication and external login providers based upon OAuth2 and OpenID (social logins). This style of login is now collectively known as “Individual User Accounts”, and it’s one option in the new authentication wizard. The purpose of this post (and follow-up posts) is to explain the new authentication plumbing for this option.


OWIN authentication middleware

With .NET 4.5.1, for ASP.NET applications, all the underlying code that handles “Individual User Accounts” (as well as the templates in Visual Studio 2013) is new. This means for cookie based authentication we no longer use Forms authentication and for external identity providers we no longer…

View original post 1,019 more words


A fresh look at JavaScript Mixins

Still a hot topic with new developers

JavaScript, JavaScript...

(Russian, Japanese)

In this article I’ll explore JavaScript mixins in detail, and introduce a less conventional, but to my mind more natural mixin strategy that I hope you’ll find useful. I’ll finish up with a profiler matrix summarizing the performance impact of each technique. [A big Thank You to the brilliant @kitcambridge for reviewing and improving the code on which this blog is based!]

View original post 1,236 more words

Android sdk with Visual Studio – xamarin – Part 2

Part 1

The popup about permissions was the reason to write Part 2, but later I realized there were many more reasons.

After completing the activities in Part 1, I hit the run button and got a popup saying:

You were previously added to the Hyper-V Administrators security group, but the permissions have not taken effect. Please sign out of your computer for the permissions to take effect.

Yes, this is something related to permissions. I read “sign out of your computer” as “restart the computer”.

After restarting, I hit the run button again to see the output on the 5″ KitKat (4.4) virtual device.

Again the popup episode started.

Popup-1 : Microsoft Visual Studio

The emulator requires an Internet connection to start. Do you want to configure the emulator to connect to the Internet?

Your computer might lose network connectivity while these changes are applied. This might affect existing network operations.
Yes No

I clicked on Yes

Popup-2 appeared : Visual Studio Emulator for Android

Click “Retry” to run the emulator in elevated mode.

You do not have permission to modify internal Hyper-V network adapter settings, which are required to run the emulator

[Retry] [Close]

I clicked Retry and saw the emulator phone screen, saying “OS starting..”

Output Window Progress…

1>—— Build started: Project: App1, Configuration: Debug Any CPU ——
1> App1 -> D:\Misc\android\App1\App1\bin\Debug\App1.dll
1> Processing: obj\Debug\res\layout\main.xml
1> Processing: obj\Debug\res\values\strings.xml
1> Processing: obj\Debug\res\layout\main.xml
1> Processing: obj\Debug\res\values\strings.xml
2>Starting deploy 5″ KitKat (4.4) XXHDPI Phone …
2>Starting emulator 5″ KitKat (4.4) XXHDPI Phone …
2>Validating emulator arguments…
2>Determining if emulator is already running…
2>Preparing virtual machine…
2>Launching emulator…
2>An error occured. See full exception on logs for more details.
2>Could not launch ‘VS Emulator 5″ KitKat (4.4) XXHDPI Phone’ device. Exit code 10.
2>An error occured. See full exception on logs for more details.
2>Could not launch ‘VS Emulator 5″ KitKat (4.4) XXHDPI Phone’ device. Exit code 10.
========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========
========== Deploy: 0 succeeded, 1 failed, 0 skipped ==========

OMG, it got stuck again and displayed another popup.

The emulator is unable to verify that the virtual machine is running:

Something happened while starting a virtual machine: ‘VS Emulator 5-inch KitKat (4.4) XXHDPI Phone.lima’ failed to start. (Virtual machine ID 618636A2-0A76-46A5-A5BA-0CD352B1BEE5)

‘VS Emulator 5-inch KitKat (4.4) XXHDPI Phone.lima’ could not initialize. (Virtual machine ID 618636A2-0A76-46A5-A5BA-0CD352B1BEE5)

Not enough memory in the system to start the virtual machine VS Emulator 5-inch KitKat (4.4) XXHDPI Phone.lima with ram size 2048 megabytes. (Virtual machine ID 618636A2-0A76-46A5-A5BA-0CD352B1BEE5)

I ran dxdiag and confirmed that my system has 4 GB RAM (4096 MB), double the required 2048 megabytes.

Now what, am I in a war zone between Google and Microsoft? Should I give up or keep trying? Let’s give it one last try. This time we will run Visual Studio in elevated mode via the “Run as Administrator” option.

I failed today; the following is the output of my work from Part 1 and this post (screenshot: vsandriod).

Visual Studio is saying:

Visual Studio Emulator for Android
The emulator is unable to verify that the virtual machine is running:
Not enough memory is available in the system to start an emulator that uses 2048 MB of startup RAM. Please close other applications and try to launch the emulator again.
If closing other applications doesn’t help, please follow the instructions on this KB article: http://support.microsoft.com/kb/2911380/en-us

The suggested workaround is to add a guaranteed MemoryReserve in the registry under Virtualization. “Guaranteed” means that when you run the emulator, this amount of memory should always be free. I added 1024 as a decimal value.

Serious problems might occur if you modify the registry incorrectly. Before you modify it, back up the registry for restoration in case problems occur.

To work around this problem in a system that is running many programs that are using lots of memory, try to close those programs and then restart the emulator.

If the emulator still does not start, you can disable the Hyper-V runtime memory monitoring functionality by adding a MemoryReserve registry. To do this, follow these steps:

  1. Start Registry Editor.
  2. Locate the following registry subkey:
    HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization
  3. Right-click the Virtualization folder, point to New, and then click DWORD Value.
  4. Type MemoryReserve, and then press Enter.
  5. Double-click MemoryReserve, enter 2048 in the Value data box, select the Decimal option, and then click OK.
  6. Close Registry Editor.

In systems that experience this problem and that have fewer than 8 GB of RAM installed, a MemoryReserve value of 2048 (2 GB) is recommended. A value of zero (0) causes this registry setting to be ignored.

Note You must restart the computer for this registry setting to take effect.
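The same registry change can be captured in a .reg file, if you prefer importing over clicking through Registry Editor (a sketch of the KB steps above; note the DWORD is stored in hex, and 0x800 = 2048):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Virtualization]
"MemoryReserve"=dword:00000800
```

Double-clicking this file merges the value; the restart mentioned above is still required.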

To close the chapter, I added the desired DWORD MemoryReserve.

When I came back after restarting: the same out-of-memory story.

I modified the MemoryReserve value under Virtualization to zero; still the same out-of-memory story.

It seems that we need 8 GB of RAM to run the default emulator.

Then I decided to create a Galaxy AVD, and this time it was a success.

1>Build succeeded.
1>Deploy successfully on AVD_for_Galaxy_Nexus_by_Google.

nexus avd.png


Android sdk with Visual Studio – xamarin – Part 1

I was looking for a Visual Studio extension for editing Perl and ended up here: http://stackoverflow.com/questions/3755892/is-there-a-perl-extension-for-visual-studio

I followed the solution provided.

Visual Studio 2015 Update 1 RTM now has Perl support, along with Go, Java, R, Ruby, and Swift.

I noticed the Android project type and thought to give it a try.

When building the default project, I got stuck on the following error.

xamarin android sdk not found

Installing the Android SDK from the URL resolved this issue.

The new roadblock I faced:

Error CS1703 Multiple assemblies with equivalent identity have been imported: ‘C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\MonoAndroid\v1.0\mscorlib.dll’ and ‘C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll’. Remove one of the duplicate references.

To resolve this, I removed the C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\mscorlib.dll reference from Project References. As the error says, both mscorlib.dlls were being referenced automatically.

Now it built successfully.

Part 2

Is PERL dead? Is PERL still there? Is Perl still used? Why should I care about Perl?


Perl 1.0 was developed for system admins to parse huge logs and extract human-readable reports/summaries. It later became a web language due to its ability to manage strings/text very effectively; the text-parsing engine came in handy for generating HTML. Around 1997, the addition of CGI.pm to Perl 5.004 helped Perl become a mainstream web language. Perl ruled as a CGI scripting language alongside PHP at that time.

Version 1: Larry Wall, a programmer at Unisys released the version 1.0 to the comp.sources.misc newsgroup on December 18, 1987. The language expanded rapidly over the next few years.

Perl 2, released in 1988, featured a better regular expression engine.

Perl 3, released in 1989, added support for binary data streams.

Perl 4.036, released in 1993, contained enhancements to Perl 3.

Perl 5.000 was released on October 17, 1994. It was a complete rewrite of the interpreter, and many new features were added to the language, including objects, references, lexical (my) variables, and modules.

Importantly, modules provided a mechanism for extending the language without modifying the interpreter. This allowed the core interpreter to stabilize, even as it enabled ordinary Perl programmers to add new language features. Perl 5 has been in active development since then.

Perl 5.001 was released on March 13, 1995.

Perl 5.002 was released on February 29, 1996 with the new prototypes feature. This allowed module authors to make subroutines that behaved like Perl built-ins.

The most important milestone in the life of Perl 5 was its module support. In 1995, the Comprehensive Perl Archive Network (CPAN) was established as a repository for Perl modules and Perl itself. It is still very active today, contains a vast number of modules, and is mirrored worldwide.


Perl 5.004 was released on May 15, 1997, and included, among other things, the UNIVERSAL package, giving Perl a base object from which all classes were automatically derived, and the ability to require versions of modules.

Another significant development was the inclusion of the CGI.pm module, which contributed to Perl’s popularity as a CGI scripting language.

Perl was also now supported under Microsoft Windows and several other operating systems.

Perl 5.005 was released on July 22, 1998. This release included several enhancements to the regex engine, new hooks into the backend through the B::* modules, the qr// regex quote operator, a large selection of other new core modules, and added support for several more operating systems, including BeOS.

Perl 5.6 was released in 2000. Major changes included 64-bit support, Unicode string representation, large file support (i.e., files over 2 GiB), and the “our” keyword.

Why Use Perl for CGI?

Socket support: create programs that interface seamlessly with Internet protocols. Your CGI program can send a Web page in response to a transaction and send a series of e-mail messages to inform interested people that the transaction happened.

Pattern matching: ideal for handling form data and searching text.

Flexible text handling: the way that Perl handles strings, in terms of memory allocation and deallocation, fades into the background as you program. You can simply ignore the details of concatenating, copying, and creating new strings. Perl includes powerful tools for processing text that make it ideal for working with HTML, XML, and all other mark-up and natural languages.

Today, Perl isn’t used very much in this area, as other languages like PHP have taken over this specific niche. While Perl is a general-purpose programming language that was used for the Web, PHP was specifically built for creating websites via CGI.

Perl Web Framework

  • Dancer (basically just a simple way to map routes to templates)
  • Mojolicious (a very complete, low-dependency web framework)

As these frameworks are built on PSGI, they can be deployed via CGI or via any other interface where appropriate middleware exists (there are also a few specialized PSGI servers).

Test.pl code for any web server


$url = "http://$ENV{SERVER_NAME}$ENV{URL}";
$ip = "$ENV{REMOTE_ADDR}";

print <<ENDOFTEXT;
HTTP/1.0 200 OK
Content-Type: text/html

<H4>Hello from PERL 5</H4>
<P>URL: <a href="$url">$url</a></P>
<P>IP address: $ip</P>
ENDOFTEXT

The first two lines of the response in the above code are the HTTP header:

HTTP/1.0 200 OK

Content-Type: text/html

This header is only required when the script’s output is returned from Windows IIS; Apache adds it to the response automatically.

Just another sample:

Sending input textboxes to formProcess.pl from form.htm; yes, we do not need Perl to post, the form’s action does the job.


<FORM action="/formProcess.pl" method="GET">
First Name: <input type="text" name="first_name"> <br>

Last Name: <input type="text" name="last_name">
<input type="submit" value="Submit">
</FORM>

Getting input from the text boxes in formProcess.pl: use of CGI.pm


use CGI;
my $cgi = CGI->new;

my $fname = $cgi->param( 'first_name' ) || ''; # capturing first_name from form.htm into $fname variable
my $lname = $cgi->param( 'last_name' ) || ''; # capturing last_name from form.htm into $lname variable

print <<ENDOFTEXT;
HTTP/1.0 200 OK
Content-Type: text/html

<P>First name: $fname</P>
<P>Last name: $lname</P>
ENDOFTEXT


As Perl became a mainstream web language, many web applications were developed at that time. Most of these applications are still live and very much in use. That is why Perl is not dead and is continuously evolving.

Now you can jump to Perl 6 (https://perl6.org/) to continue…




Developer Cheatsheet

jQuery 1.7


jQuery function

$.jQuery( selector [, context] | element | elementArray |jQueryObject ), .jQuery( )
$.jQuery( html [, owner]  | html, props )
$.jQuery( fn )
$.holdReady( hold )

jQuery Object Accessors

$.each( fn(index, element) )
num.size( ), .length
$.eq( index )
jQuery.error( str )
[el],el.get( [index] )
num.index( ), .index( selector | element )
$jQuery.pushStack( elements, [name, args] )
arr.toArray( )


$jQuery.noConflict( [extreme] )


Low-Level Interface

jqXHR jQuery.ajax( options, [settings] )

  • map accepts
  • bool async = true
  • fn beforeSend( jqXHR, config)
  • bool cache = true
  • fn complete( jqXHR, status)
  • map contents
  • str contentType
  • obj context
  • map converters
  • bool crossDomain
  • obj, str data
  • fn dataFilter( data, type )
  • bool global = true
  • map headers
  • bool ifModified = false
  • str jsonp
  • fn jsonpCallback
  • str password
  • bool processData = true
  • str scriptCharset
  • map statusCode
  • num timeout
  • bool traditional
  • str type = ‘GET’
  • str url = curr. page
  • str username
  • fn xhr
  • str dataType ∈ {xml, json, script, html}
  • fn error( jqXHR, status, errorThrown )
  • fn success( data, status, jqXHR )
jQuery.ajaxSetup( options )


str.serialize(  )
[obj].serializeArray(  )
str jQuery.param( obj, [traditional] )

Shorthand Methods

$.load( url [, data] [, fn( responseText, status, XHR )] )
jqXHR jQuery.get( url [, data] [, fn( data, status, XHR )] [, type] )
jqXHR jQuery.getJSON( url [, data] [, fn( data, status )] )
jqXHR jQuery.getScript( url [, fn( data, status )] )
jqXHR jQuery.post( url [, data] [, fn( data, status )] [, type] )

Global Ajax Event Handlers

$.ajaxComplete( fn( event, XHR, options ) )
$.ajaxError( fn( event, XHR, options, thrownError ) )
$.ajaxSend( fn( event, XHR, options ) )
$.ajaxStart( fn(  ) )
$.ajaxStop( fn(  ) )
$.ajaxSuccess( fn(event, XHR, options) )


Page Load

$.ready( fn() )

Event Handling

$.on( events [, selector] [, data], handler )1.7+
$.on( events-map [, selector] [, data] )1.7+
$.off( events [, selector] [, handler] )1.7+
$.off( events-map [, selector] )1.7+
$.bind( type [, data ], fn(eventObj) )
$.bind( type [, data], false )
$.bind( array )
$.unbind( [type] [, fn])
$.one( type [, data ], fn(eventObj) )
$.trigger( event [, data])
obj.triggerHandler( event [, data])
$.delegate( selector, type, [data], handler)
$.undelegate( [selector, type, [handler]]) | selector, events | namespace )

Live Events

$.live( eventType [, data], fn() )
$.die( ), .die( [eventType] [, fn() ])

Interaction Helpers

$.hover( fnIn(eventObj), fnOut(eventObj))
$.toggle( fn(eventObj), fn2(eventObj) [, …])

Event Helpers

function ( [data,] [fn] )



$.show( [ duration [, easing] [, fn] ]  )
$.hide( [ duration [, easing] [, fn] ]  )
$.toggle( [showOrHide] )
$.toggle( duration [, easing] [, fn] )


$.slideDown( duration [, easing] [, fn] )
$.slideUp( duration [, easing] [, fn] )
$.slideToggle( [duration] [, easing] [, fn] )


$.fadeIn( duration [, easing] [, fn] )
$.fadeOut( duration [, easing] [, fn] )
$.fadeTo( [duration,] opacity [, easing] [, fn] )
$.fadeToggle( [duration,] [, easing] [, fn] )


$.animate( params [, duration] [, easing] [, fn] )
$.animate( params, options )
$.stop( [queue] [, clearQueue] [, jumpToEnd] )1.7*
$.delay( duration [, queueName] )



str.attr( name | name , value )
$.attr( name, val | map | name, fn(index, attr) )
$.removeAttr( name )
$.prop( name )
$.removeProp( name )


$.addClass( class | fn(index, class) )
bool.hasClass( class )
$.removeClass( [class] | fn(index, class) )
$.toggleClass( class [, switch] | fn(index, class) [, switch] )

HTML, text

str.html( )
$.html( val | fn(index, html) )
str.text( )
$.text( val | fn(index, html) )


str,arr.val( )
$.val( val | fn() )



str.css( name )
$.css( name, val | map | name, fn(index, val) )


obj.offset( )
$.offset( coord | fn( index, coord ) )
obj.position( )
int.scrollTop( )
$.scrollTop( val )
int.scrollLeft( )
$.scrollLeft( val )

Height and Width

int.height( )
$.height( val | fn(index, height ) )
int.width( )
$.width( val | fn(index, height ) )
int.innerHeight( )
int.innerWidth( )
int.outerHeight( [includeMargin] )
$.outerHeight( val | fn(index, outerHeight ) ) 1.8+
int.outerWidth( [includeMargin] )
$.outerWidth( val | fn(index, outerWidth ) ) 1.8+



$.eq( index )
$.first( )
$.last( )
$.has( selector ), .has( element )
$.filter( selector ), .filter( fn(index) )
bool.is( selector | function(index) | jQuery object | element )1.7*
$.map( fn(index, element) )
$.not( selector ), .not( elements ), .not( fn( index ) )
$.slice( start [, end] )

Tree traversal

$.children( [selector] )
$.closest( selector [, context] | jQuery object | element )
arr.closest( selectors [, context] )removed
$.find( selector | jQuery object | element )
$.next( [selector] )
$.nextAll( [selector] )
$.nextUntil( [selector] )
$.parent( [selector] )
$.parents( [selector] )
$.parentsUntil( [selector] )
$.prev( [selector] )
$.prevAll( [selector] )
$.prevUntil( [selector] )
$.siblings( [selector] )


$.add( selector [, context] | elements | html )
$.andSelf( )
$.contents( )
$.end( )


Inserting Inside

$.append( content | fn( index, html ) )
$.appendTo( target )
$.prepend( content | fn( index, html ) )
$.prependTo( target )

Inserting Outside

$.after( content | fn() )
$.before( content | fn() )
$.insertAfter( target )
$.insertBefore( target )

Inserting Around

$.unwrap( )
$.wrap( wrappingElement | fn )
$.wrapAll( wrappingElement | fn )
$.wrapInner( wrappingElement | fn )


$.replaceWith( content | fn )
$.replaceAll( selector )


$.detach( [selector] )
$.empty( )
$.remove( [selector] )


$.clone( [withDataAndEvents], [deepWithDataAndEvents] )


deferred object = {

def.always(alwaysCallbacks [, alwaysCallbacks])
def.notify( args )1.7+
def.notifyWith(context, [args])1.7+
def.pipe([doneFilter] [, failFilter] [, progressFilter])1.7*
def.progress( progressCallbacks )1.7+
def.rejectWith(context, [args])
def.resolveWith(context, [args])
def.then(doneCallbacks, failCallbacks [, progressCallbacks])1.7*




callbacks object = {1.7+

und.fireWith([context] [, args])


cb $.Callbacks( flags )


Browser and Feature Detection


Basic operations

obj jQuery.each( obj, fn( i, valueOfElement ) )
obj jQuery.extend( [deep,] target, obj1 [, objN] )
arr jQuery.grep( arr, fn( el, i ) [, invert] )
arr jQuery.makeArray( obj )
arr jQuery.map( arrayOrObject, fn( el, i ) )
num jQuery.inArray( val, arr )
arr jQuery.merge( first, second )
fn jQuery.proxy( fn, scope | scope, name )
arr jQuery.unique( arr )
str jQuery.trim( str )
obj jQuery.parseJSON( str )

Data functions

$.clearQueue( [name] )
$.dequeue( [name] ), jQuery.dequeue([name] )
obj jQuery.data( el, key ), jQuery.data(  )
obj.data(  ), .data( key )
$.data( key, val | obj )
$.removeData( [name] |[list])1.7*
[fn].queue( [name] )jQuery.queue( [name] )
$.queue( [name,] fn( next ) ), jQuery.queue([name,] fn(  ) )
$.queue( [name,] queue ), jQuery.queue([name,] queue )

Test operations

str jQuery.type( obj )
bool jQuery.isArray( obj )
bool jQuery.isFunction( obj )
bool jQuery.isWindow( obj )
bool jQuery.isNumeric( val )


Form Events

Event Object Constant


Event Object Properties

Event Object Methods

  • initEvent()
  • preventDefault()
  • stopPropagation()

EventTarget Object

  • addEventListener()
  • dispatchEvent()
  • removeEventListener()

EventListener Object

  • handleEvent()

MouseEvent/KeyboardEvent Object

MouseEvent/KeyboardEvent Methods

  • initMouseEvent()
  • initKeyboardEvent()

DOM Node

Node Types

  • Element1
  • Attr2
  • Text3
  • CDATASection4
  • EntityReference5
  • Entity6
  • ProcessingInstruction7
  • Comment8
  • Document9
  • DocumentType10
  • DocumentFragment11
  • Notation12

nodeName Returns

  • Element
  • element name
  • Attr
  • attribute name
  • Text
  • #text
  • CDATASection
  • #cdata-section
  • EntityReference
  • entity reference name
  • Entity
  • entity name
  • ProcessingInstruction
  • target
  • Comment
  • #comment
  • Document
  • #document
  • DocumentType
  • doctype name
  • DocumentFragment
  • #document fragment
  • Notation
  • notation name

nodeValue Returns

  • Element
  • null
  • Attr
  • attribute value
  • Text
  • content of node
  • CDATASection
  • content of node
  • EntityReference
  • null
  • Entity
  • null
  • ProcessingInstruction
  • content of node
  • Comment
  • comment text
  • Document
  • null
  • DocumentType
  • null
  • DocumentFragment
  • null
  • Notation
  • null



  • i
  • Perform case-insensitive matching
  • g
  • Perform a global match (find all matches rather than stopping after the first match)
  • m
  • Perform multiline matching


  • [abc]
  • Find any character between the brackets
  • [^abc]
  • Find any character not between the brackets
  • [0-9]
  • Find any digit from 0 to 9
  • [A-Z]
  • Find any character from uppercase A to uppercase Z
  • [a-z]
  • Find any character from lowercase a to lowercase z
  • [A-z]
  • Find any character from uppercase A to lowercase z
  • [adgk]
  • Find any character in the given set
  • [^adgk]
  • Find any character outside the given set
  • (red|blue|green)
  • Find any of the alternatives specified


  • .
  • Find a single character, except newline or line terminator
  • \w
  • Find a word character
  • \W
  • Find a non-word character
  • \d
  • Find a digit
  • \D
  • Find a non-digit character
  • \s
  • Find a whitespace character
  • \S
  • Find a non-whitespace character
  • \b
  • Find a match at the beginning/end of a word
  • \B
  • Find a match not at the beginning/end of a word
  • \0
  • Find a NUL character
  • \n
  • Find a new line character
  • \f
  • Find a form feed character
  • \r
  • Find a carriage return character
  • \t
  • Find a tab character
  • \v
  • Find a vertical tab character
  • \xxx
  • Find the character specified by an octal number xxx
  • \xdd
  • Find the character specified by a hexadecimal number dd
  • \uxxxx
  • Find the Unicode character specified by a hexadecimal number xxxx


  • n+
  • Matches any string that contains at least one n
  • n*
  • Matches any string that contains zero or more occurrences of n
  • n?
  • Matches any string that contains zero or one occurrences of n
  • n{X}
  • Matches any string that contains a sequence of X n‘s
  • n{X,Y}
  • Matches any string that contains a sequence of X to Y n‘s
  • n{X,}
  • Matches any string that contains a sequence of at least X n‘s
  • n$
  • Matches any string with n at the end of it
  • ^n
  • Matches any string with n at the beginning of it
  • ?=n
  • Matches any string that is followed by a specific string n
  • ?!n
  • Matches any string that is not followed by a specific string n

RegExp Methods

Core DOM

Nodelist Properties

Nodelist Methods

NamedNodeMap Properties

NamedNodeMap Methods

Element Properties

Attr Properties

What are ASP.NET Core 1.0 and .NET Core 1.0, and what is the future of ASP.NET 5 and 4.6?

ASP.NET 5 is now ASP.NET Core 1.0.

.NET Core 5 is now .NET Core 1.0.

The name has changed, nothing more than this!

Why new name 1.0?

Because the concept is new: it does not have the same architecture as previous versions of ASP.NET up to 4.6.

However, as of now, .NET Core 1.0 is not as mature as the earlier .NET Frameworks were; it is still in a testing and development phase. Earlier ASP.NET versions are more mature and very well tested for developing a new project.

ASP.NET Core 1.0 is a 1.0 release which has Web API and MVC, but not SignalR and Web Pages. It does not yet have VB and F# support; they will be added in the near future.

ASP.NET Core 1.0 is a new framework. The earlier ASP.NET 4.6 will remain and will be fully supported, but ASP.NET Core 1.0 is new, very new…

SQL SERVER – Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding

Some errors when encountered take most of us for a spin. In this category the error related to “Timeout” surely falls. If you are a web developer and receive the same there are a hundred combinations why this can possibly happen. The web results can sometimes lead us in completely opposite direction because we have…

Source: SQL SERVER – Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding

Pl/SQL Concepts: How to with an example?

Import Data/file using a sql script:

Big files: SQL*Loader or external tables; small files: own parsing using UTL_FILE

Import: sqlloader Example

  1. Creating a control file:
 LOAD DATA
 INFILE '<path and file name of csv file>'
 INTO TABLE <your table name>
 FIELDS TERMINATED BY ','
 (feature_type CHAR,
  county CHAR,
  latitude CHAR,
  longitude CHAR,
  update_time DATE "YYYYMMDDHH24MI")
Remember that you need to use CHAR instead of VARCHAR2 and INTEGER, even for number columns. Save your control file as name.ctl.
2. Start SQL*Loader from the command line like this. This can also be wrapped in a procedure and scheduled as a job to make the import process automatic.

sqlldr username/password@connect_string control=ctl_file.ctl log=log.log
You should see the rows inserted and committed as they load.

Import: External tables Example

Oracle9i Database introduced external tables, which allow a formatted plain text file to be visible to the database as a table that can be queried by regular SQL. Create a directory object named dump_dir as:

create directory dump_dir as '/home/oracle/dump_dir';
Create an external table:
create table trans_ext
(   ... <columns of the table> ...)
organization external
(
   type oracle_loader
   default directory dump_dir
   access parameters
   (
      records delimited by newline
      badfile 'trans_ext.bad'
      discardfile 'trans_ext.dis'
      logfile 'trans_ext.log'
      fields terminated by ","  optionally enclosed by '"'
      (... <columns> ...)
   )
   location ('trans_flat.txt')
) reject limit unlimited;

Now load the external table into the regular tables using any common method such as direct load insert and merge.
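For instance, a direct load insert might look like this (a sketch, assuming trans is the regular table and trans_ext is the external table created above):

```sql
-- direct-path (append) insert from the external table into the regular table
insert /*+ append */ into trans
select * from trans_ext;
commit;
```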

Import: Read/Import file by using UTL_FILE

CREATE OR REPLACE PROCEDURE my_app2l IS
   infile              utl_file.file_type;
   outfile             utl_file.file_type;
   buffer              VARCHAR2(30000);
   b_paragraph_started BOOLEAN := FALSE; -- flag to indicate that required paragraph is started
BEGIN
   infile  := utl_file.fopen('TEST_DIR', 'mytst.txt', 'r');   -- open a file to read
   outfile := utl_file.fopen('TEST_DIR', 'out.txt', 'w');     -- open a file to write
   IF utl_file.is_open(infile)   -- check file is opened
   THEN
      LOOP   -- loop over lines in the file
         BEGIN
            utl_file.get_line(infile, buffer);
            IF buffer LIKE 'foo%' OR b_paragraph_started THEN
               utl_file.put_line(outfile, buffer, FALSE);   -- write to out.txt
               b_paragraph_started := TRUE;
            END IF;
            IF buffer LIKE '%ZEN' THEN
               b_paragraph_started := FALSE;
            END IF;
         EXCEPTION
            WHEN no_data_found THEN EXIT;   -- end of file reached
         END;
      END LOOP;
   END IF;
   utl_file.fclose(infile);
   utl_file.fclose(outfile);
EXCEPTION
   WHEN OTHERS THEN
      raise_application_error(-20099, 'Unknown UTL_FILE Error');
END my_app2l;

Export: Write a file using a sql script: Example text file creation

FILEHANDLE   := UTL_FILE.FOPEN('\\\d\orahome\bin','test.txt','w');
WRITEMESSAGE := 'This is created for testing purpose \n' || ' \n This is the second line';
UTL_FILE.PUTF(FILEHANDLE, WRITEMESSAGE);   -- actually write the message to the file
UTL_FILE.FCLOSE(FILEHANDLE);

Export: a file using External files

From the database, create a plain text file with the contents of the table TRANS. The file can be called trans_flat.txt in the directory /home/oracle/dump_dir. Usually this file is created with this SQL:

spool trans_flat.txt
select <column_1> ||','|| <column_2> ||','|| ...
from trans;
spool off

Automating a Process

DBMS_SCHEDULER is a built-in Oracle package (since version 10g) which provides database-driven jobs.
It’s divided into 3 parts:

  1. Time schedule part – dbms_scheduler.create_schedule
  2. Program declaration part – dbms_scheduler.create_program
  3. Job (conflation) part -dbms_scheduler.create_job

Examples of the dbms_scheduler.create_schedule part:

 begin
   -- daily from Monday to Sunday at 22:00 (10:00 p.m.)
   dbms_scheduler.create_schedule(
     schedule_name   => 'INTERVAL_DAILY_2200',
     start_date      => trunc(sysdate)+18/24,  -- start today 18:00 (06:00 p.m.)
     repeat_interval => 'FREQ=DAILY; BYHOUR=22;',
     comments        => 'Runtime: Every day (Mon-Sun) at 22:00 o''clock');

   -- run every hour, every day
   dbms_scheduler.create_schedule(
     schedule_name   => 'INTERVAL_EVERY_HOUR',
     start_date      => trunc(sysdate)+18/24,
     repeat_interval => 'freq=HOURLY;interval=1',
     comments        => 'Runtime: Every day every hour');

   -- run every Sunday at 18:00 (06:00 p.m.)
   dbms_scheduler.create_schedule(
     schedule_name   => 'INTERVAL_EVERY_SUN_1800',
     start_date      => trunc(sysdate)+18/24,
     repeat_interval => 'FREQ=DAILY; BYDAY=SUN; BYHOUR=18;',
     comments        => 'Runtime: Run at 6pm every Sunday');
 end;
 /

 Example of the dbms_scheduler.create_program part:

begin
  -- Call a procedure of a database package
  dbms_scheduler.create_program(
    program_name   => 'PROG_COLLECT_SESS_DATA',
    program_type   => 'STORED_PROCEDURE',
    program_action => 'pkg_collect_data.prc_session_data',
    comments       => 'Procedure to collect session information');
end;
/

 Example of the dbms_scheduler.create_job part:

begin
  -- Connect both dbms_scheduler parts by creating the final job
  dbms_scheduler.create_job(
    job_name      => 'JOB_COLLECT_SESS_DATA',
    program_name  => 'PROG_COLLECT_SESS_DATA',
    schedule_name => 'INTERVAL_EVERY_HOUR',  -- assumed: one of the schedules created above
    enabled       => TRUE,
    comments      => 'Job to collect data about session values every 5 minutes');
end;
/

Example to run a job immediately:
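A job created as above can be triggered right away with DBMS_SCHEDULER.RUN_JOB; a minimal sketch (job name taken from the create_job example):

```sql
begin
   -- run the job right now instead of waiting for its schedule
   dbms_scheduler.run_job(job_name => 'JOB_COLLECT_SESS_DATA');
end;
/
```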





Export/write a text file from a PL/SQL block using UTL_FILE, then create a procedure for it, then create a job, and then schedule the job to make this automatic.

UTL_FILE: With the UTL_FILE package, PL/SQL programs can read and write operating system text files. UTL_FILE provides a restricted version of operating system stream file I/O.

Subprogram – Description
FCLOSE – Closes a file
FCOPY – Copies a contiguous portion of a file to a newly created file
FFLUSH – Physically writes all pending output to a file
FGETATTR – Reads and returns the attributes of a disk file
FOPEN (function) – Opens a file for input or output
FREMOVE – Deletes a disk file, assuming that you have sufficient privileges
FRENAME – Renames an existing file to a new name, similar to the UNIX mv command
FSEEK – Adjusts the file pointer forward or backward within the file by the number of bytes specified
GET_LINE – Reads text from an open file
GET_LINE_NCHAR – Reads text in Unicode from an open file
GET_RAW (function) – Reads a RAW string value from a file and adjusts the file pointer ahead by the number of bytes read
NEW_LINE – Writes one or more operating system-specific line terminators to a file
PUT – Writes a string to a file
PUT_LINE – Writes a line to a file, and so appends an operating system-specific line terminator
PUT_LINE_NCHAR – Writes a Unicode line to a file
PUT_NCHAR – Writes a Unicode string to a file
PUTF – PUT with formatting
PUT_RAW (function) – Accepts as input a RAW data value and writes the value to the output buffer
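A minimal sketch of the export described above, assuming an Oracle directory object named DATA_DIR has already been created and granted (the directory, file, and table names here are illustrative):

```sql
declare
  l_file utl_file.file_type;
begin
  -- open for writing ('w'); DATA_DIR must be a granted directory object
  l_file := utl_file.fopen('DATA_DIR', 'emp_export.txt', 'w');
  for r in (select empno, ename from emp) loop
    utl_file.put_line(l_file, r.empno || ',' || r.ename);
  end loop;
  utl_file.fclose(l_file);
end;
/
```

Wrapped in a procedure, this body is what the scheduled job from the dbms_scheduler section would call.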


Collections are Oracle's version of arrays; collections are single-dimensioned lists. To create a collection or record variable, you first define a collection or record type and then declare a variable of that type. In a collection, the internal components, called elements, are always of the same data type.
Nested tables are the most common form of collection and so represent a useful basis of comparison. A nested table is a variable which can hold more than one instance of something, often a record from a database table. It might be declared like this:

type emp_nt is table of emp%rowtype;

emp_rec_nt emp_nt;

Use: to store multiple instances of data against which we want to do the same thing. The classic example is using BULK COLLECT to store multiple records:

select * bulk collect into emp_rec_nt from emp;

 An Index By table / Associative Array: These are simple collections of single attributes with an index. Nested tables also have indexes but their indexes are just row counts. With an associative array the index can be meaningful, i.e. sourced from a data value. So they are useful for caching data values for later use. The index can be a number, or (since 9iR2) a string which can be very useful. For instance, here is an associative array of salaries which is indexed by the employee identifier.

type emp_sal_aa is table of emp.sal%type
     index by pls_integer; -- or, since 9iR2, index by varchar2(n)

l_emp_sals emp_sal_aa;

Elements of the array can be identified by an index value, in this case EMPNO:

l_emp_sals(l_emp_no) := l_emp_sal;

Other than caching reference tables or similar look-up values there aren't many use cases for associative arrays.
Variable arrays are just nested tables with a pre-defined limit on the number of elements. So perhaps the name is misleading: they are actually fixed arrays. They are declared like this:
type emp_va is varray(14) of emp%rowtype;
emp_rec_va emp_va;
We can use bulk collect to populate a VArray ...
select * bulk collect into emp_rec_va from employees;
If the query returns more rows than the number of elements specified in the VArray's declaration, ORA-22165: given index [string] must be in the range of [string] to [string] will be thrown.
Use: same as a nested table, but here we can set a limit. Another big advantage of VArrays over nested tables is that they guarantee the order of the elements. So if you must get elements out in the same order as you inserted them, use a VArray.

Autonomous Transaction

An autonomous transaction is an independent transaction that is initiated by another transaction, and executes without interfering with the parent transaction. When an autonomous transaction is called, the originating transaction gets suspended. Control is returned when the autonomous transaction does a COMMIT or ROLLBACK.
A routine can be marked as autonomous by declaring PRAGMA AUTONOMOUS_TRANSACTION in it. You may need to increase the TRANSACTIONS initialization parameter to allow for the extra concurrent transactions. The routine can be a top-level (not nested) anonymous PL/SQL block; a standalone, packaged, or nested subprogram; a method of a SQL object type; or a database trigger.


CREATE OR REPLACE TRIGGER trg_tab1_audit -- trigger and log table names are illustrative
AFTER insert ON tab1
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO audit_log VALUES ('insert on tab1', sysdate);
  COMMIT; -- only allowed in autonomous triggers
END;
/

Use: a trigger like the one above will insert and commit log entries – even if the main transaction is rolled back! One situation in which autonomous transactions can prove extremely useful is auditing and debugging PL/SQL.

Materialized View

Materialized views are disk based and are updated periodically based upon the query definition. Normal Views are virtual only and run the query definition each time they are accessed.
The QUERY REWRITE clause tells the optimizer whether the materialized view should be considered for query rewrite operations. The ON PREBUILT TABLE clause tells the database to use an existing table segment, which must have the same name as the materialized view and support the same column structure as the query.
Use: Performance, on queries performing aggregations and transformations of the data. This allows the work to be done once and used repeatedly by multiple sessions, reducing the total load on the server
Queries to large tables using joins. These operations are very expensive in terms of time and processing power. The type of materialized view that is created determines how it can be refreshed and used by query rewrite.
Limitations: you must define column names explicitly; you cannot include a SELECT *.
Do not include columns defined as TIMESTAMP WITH TIME ZONE in the materialized view. The value of the time_zone_adjustment option varies between connections based on their location and the time of year, resulting in incorrect results and unexpected behavior.
When creating a materialized view, its definition cannot contain: references to other views, materialized or not; references to remote or temporary tables; variables such as CURRENT USER; or calls to stored procedures, user-defined functions, or external functions. All expressions must be deterministic.
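A sketch of the DDL described above, using an illustrative aggregation over the EMP demo table (view and column names are assumptions):

```sql
CREATE MATERIALIZED VIEW mv_dept_sal
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE   -- lets the optimizer consider this view for query rewrite
AS
SELECT deptno, SUM(sal) AS total_sal
  FROM emp
 GROUP BY deptno;
```

Because the aggregation is computed once at refresh time, sessions querying department totals read the stored result instead of re-scanning EMP.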

Mutating Exception

ORA-04091 (table xxx is mutating. Trigger/function might not see it): The error is encountered when a row-level trigger accesses the same table on which it is based, while executing. The table is said to be mutating.
CREATE OR REPLACE TRIGGER trg_test_row
AFTER UPDATE ON test
FOR EACH ROW -- This is the culprit here
DECLARE
  v_count NUMBER;
BEGIN
  SELECT count(*) INTO v_count FROM test WHERE status = 'INVALID';
  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/
Throwing the exception by updating the status to 'INVALID':
update test set status = 'INVALID' where object_name = 'TEST1';
ERROR at line 1:
ORA-04091: table SCOTT.TEST is mutating, trigger/function may not see it
 Different ways to handle mutating table errors
  1. First one is to create a statement-level trigger instead of a row-level one. If we omit the FOR EACH ROW clause from the above trigger, it becomes a statement-level trigger. Let us create a new statement-level trigger:
CREATE OR REPLACE TRIGGER trg_test_stmt
AFTER UPDATE ON test -- no FOR EACH ROW, so this is statement level
DECLARE
  v_count NUMBER;
BEGIN
  SELECT count(*) INTO v_count FROM test WHERE status = 'INVALID';
  dbms_output.put_line('Total Invalid Objects are ' || v_count);
END;
/
  2. Second way of dealing with the mutating table issue is to declare the row-level trigger as an autonomous transaction so that it is not in the same scope as the session issuing the DML statement. By defining the row-level trigger as an autonomous transaction, we get rid of the mutating table error, but the result is not correct: the latest updates are not reflected in the result set, as opposed to the statement-level trigger. So one has to be very careful when using this approach.
  3. In version 11g, Oracle made it much easier with the introduction of compound triggers. Let us see how a compound trigger can resolve the mutating table error. Let's create a compound trigger first:
CREATE OR REPLACE TRIGGER trg_test_compound
FOR UPDATE ON test
COMPOUND TRIGGER
  /* Declaration Section */
  v_count NUMBER;
  AFTER STATEMENT IS
  BEGIN
    dbms_output.put_line('Update is done');
    SELECT count(*) INTO v_count FROM test WHERE status = 'INVALID';
    dbms_output.put_line('Total Invalid Objects are ' || v_count);
  END AFTER STATEMENT;
END trg_test_compound;
/
SET operators: UNION, INTERSECT, MINUS, UNION ALL – used on complex queries.
UNION removes duplicate records (where all columns in the results are the same); UNION ALL does not. There is a performance hit when using UNION vs UNION ALL, since the database server must do additional work to remove the duplicate rows, but usually you do not want the duplicates (especially when developing reports). The implication is that UNION is less performant, as it must scan the result for duplicates.
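A small illustration against two hypothetical tables a and b, each with a single column x:

```sql
-- duplicates removed: each distinct x appears once in the result
SELECT x FROM a
UNION
SELECT x FROM b;

-- duplicates kept: faster, since no de-duplication pass is needed
SELECT x FROM a
UNION ALL
SELECT x FROM b;
```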


A trigger is a named program unit that is stored in the database and fired (executed) in response to a specified event. The specified event is associated with either a table, a view, a schema, or the database, and it is one of the following:
A data manipulation language (DML) statement (DELETE, INSERT, or UPDATE)
A data definition language (DDL) statement (CREATE, ALTER, or DROP)
Use: Automatically generate derived column values; enforce referential integrity across nodes in a distributed database; enforce complex business rules; provide transparent event logging; provide auditing; maintain synchronous table replicates; gather statistics on table access; modify table data when DML statements are issued against views; publish information about database events, user events, and SQL statements to subscribing applications; restrict DML operations against a table to those issued during regular business hours; enforce security authorizations; and prevent invalid transactions.

Limitations/points to be considered when creating trigger

  1. A SQL statement within a trigger action can potentially fire other triggers, resulting in cascading triggers.
  2. It should not generate mutating table behavior.
  3. Analyze whether an autonomous transaction is appropriate.
  4. Trigger size restriction: the size of the trigger cannot exceed 32K. In that case, create a procedure and call it from the trigger.
  5. A trigger cannot declare a variable of the LONG or LONG RAW data type. A SQL statement in a trigger can reference a LONG or LONG RAW column only if the column data can be converted to the data type CHAR or VARCHAR2. A trigger cannot use the correlation name NEW or PARENT with a LONG or LONG RAW column.
  6. One of the most dangerous attributes of a database trigger is its hidden behavior: we do not know whether it fired or not, for example if it was disabled due to a change in a table.


Oracle creates a memory area, known as the context area, for processing a SQL statement; it contains all the information needed to process the statement, for example the number of rows processed.
A cursor is a pointer to this context area. PL/SQL controls the context area through a cursor. A cursor holds the rows (one or more) returned by a SQL statement.
Using the SELECT-INTO statement (implicit cursor): a SELECT-INTO is also referred to as an implicit query, because Oracle Database implicitly opens a cursor for the SELECT statement, fetches the row, and then closes the cursor when it finishes doing that (or when an exception is raised).
Fetching from an explicit cursor: you can, alternatively, explicitly declare a cursor and then perform the open, fetch, and close operations yourself.
   DECLARE
      l_total INTEGER := 10000;
      CURSOR employee_id_cur
      IS
         SELECT employee_id FROM plch_employees ORDER BY salary ASC;
      l_employee_id   plch_employees.employee_id%TYPE;
   BEGIN
      OPEN employee_id_cur;
      LOOP
         FETCH employee_id_cur INTO l_employee_id;
         EXIT WHEN employee_id_cur%NOTFOUND;
         assign_bonus (l_employee_id, l_total); -- assumed to decrement l_total
         EXIT WHEN l_total <= 0;
      END LOOP;
      CLOSE employee_id_cur;
   END;
Using a cursor FOR loop: the cursor FOR loop is an elegant and natural extension of the numeric FOR loop in PL/SQL. With a numeric FOR loop, the body of the loop executes once for every integer value between the low and high values specified in the range. With an implicit cursor FOR loop, the body of the loop is executed for each row returned by the query.
The following block uses a cursor FOR loop to display the last names of all employees in department 10:
   BEGIN
      FOR employee_rec IN (SELECT * FROM employees WHERE department_id = 10)
      LOOP
         DBMS_OUTPUT.put_line (employee_rec.last_name);
      END LOOP;
   END;

You can also use a cursor FOR loop with an explicitly declared cursor:
   DECLARE
      CURSOR employees_in_10_cur
      IS
         SELECT * FROM employees WHERE department_id = 10;
   BEGIN
      FOR employee_rec IN employees_in_10_cur
      LOOP
         DBMS_OUTPUT.put_line (employee_rec.last_name);
      END LOOP;
   END;
The nice thing about the cursor FOR loop is that Oracle Database opens the cursor, declares a record by using %ROWTYPE against the cursor, fetches each row into that record, and then closes the cursor when all the rows have been fetched (or the loop terminates for any other reason).
Using cursor variables: A cursor variable is, as you might guess from its name, a variable that points to a cursor or a result set. Unlike with an explicit cursor, you can pass a cursor variable as an argument to a procedure or a function. There are several excellent use cases for cursor variables, including the following:
Construct a result set inside a function, and return a cursor variable to that set. This is especially handy when you need to use PL/SQL, in addition to SQL, to build the result set.
Pass a cursor variable to a pipelined table function—a powerful but quite advanced optimization technique. A full explanation of cursor variables, including the differences between strong and weak REF CURSOR types, is beyond the scope of this article.
Cursor variables can be used with either embedded (static) or dynamic SQL.
   CREATE OR REPLACE FUNCTION names_for (
      name_type_in IN VARCHAR2)
      RETURN SYS_REFCURSOR
   IS
      l_return   SYS_REFCURSOR;
   BEGIN
      CASE name_type_in
         WHEN 'EMP'
         THEN
            OPEN l_return FOR
               SELECT last_name FROM employees ORDER BY employee_id;
         WHEN 'DEPT'
         THEN
            OPEN l_return FOR
               SELECT department_name FROM departments ORDER BY department_id;
      END CASE;

      RETURN l_return;
   END names_for;
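A caller can receive the cursor variable and fetch from it like any cursor; a sketch (assuming the function above compiles against the HR demo schema):

```sql
DECLARE
   l_cur  SYS_REFCURSOR;
   l_name employees.last_name%TYPE;
BEGIN
   l_cur := names_for ('EMP');   -- the function returns an opened cursor
   LOOP
      FETCH l_cur INTO l_name;
      EXIT WHEN l_cur%NOTFOUND;
      DBMS_OUTPUT.put_line (l_name);
   END LOOP;
   CLOSE l_cur;                  -- the caller is responsible for closing it
END;
/
```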

Strong and weak REF CURSOR types

A ref cursor is a cursor variable which acts as a pointer to the SQL memory area. A ref cursor can be associated with multiple SQL statements, whereas a cursor can be associated with only one SQL statement. A ref cursor is dynamic, whereas a cursor is static. Ref cursors can be typed/strong or untyped/weak:
A strongly typed ref cursor always returns a known type, usually from a declared TYPE object. The compiler can find problems in a PL/SQL block by comparing the types returned to how they are used.
A weakly typed ref cursor has a return type that is dependent on the SQL statement it executes, i.e. the type is known only once the cursor is opened (at runtime). The compiler cannot determine the types until the code is run, so care must be taken to ensure that the cursor result set is handled properly to avoid runtime errors.
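The two flavours can be declared like this (type names are illustrative):

```sql
DECLARE
   TYPE emp_cur_strong_t IS REF CURSOR RETURN employees%ROWTYPE; -- strong: return type fixed at compile time
   TYPE any_cur_weak_t   IS REF CURSOR;                          -- weak: type known only at OPEN
   l_strong emp_cur_strong_t;
   l_weak   any_cur_weak_t;  -- SYS_REFCURSOR is a predefined weak type
BEGIN
   OPEN l_strong FOR SELECT * FROM employees;                -- must match employees%ROWTYPE
   OPEN l_weak   FOR SELECT department_name FROM departments; -- any query is accepted
   CLOSE l_strong;
   CLOSE l_weak;
END;
/
```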

User Defined Exception

Is it possible to create user-defined exceptions and be able to change the SQLERRM? Yes, you could use RAISE_APPLICATION_ERROR like this:
DECLARE
    ex_custom EXCEPTION;
BEGIN
    RAISE ex_custom;
EXCEPTION
    WHEN ex_custom THEN
        RAISE_APPLICATION_ERROR(-20001,'My exception was raised');
END;
/
That will raise an exception that looks like ORA-20001: My exception was raised. The error number can be anything between -20000 and -20999.
How to catch and handle only specific Oracle exceptions? Refer to the exception directly by number:
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE = -955 THEN
          NULL; -- suppresses ORA-00955 exception
        ELSE
          RAISE; -- re-raise anything else
        END IF;


BULK COLLECT: SELECT statements that retrieve multiple rows with a single fetch, improving the speed of data retrieval
The bulk processing features of PL/SQL are designed specifically to reduce the number of context switches required to communicate from the PL/SQL engine to the SQL engine. Use the BULK COLLECT clause to fetch multiple rows into one or more collections with a single context switch. Use the FORALL statement when you need to execute the same DML statement repeatedly for different bind variable values. The UPDATE statement in the increase_salary procedure fits this scenario; the only thing that changes with each new execution of the statement is the employee ID.

PL/SQL collections are essentially arrays in memory, so massive collections can have a detrimental effect on system performance due to the amount of memory they require. In some situations, it may be necessary to split the data being processed into chunks to make the code more memory-friendly. This "chunking" can be achieved using the LIMIT clause of the BULK COLLECT syntax.
The bulk_collect_limit.sql script uses the LIMIT clause to split the collection into chunks of 10,000; processing each chunk in turn.  Notice the use of the explicit cursor for this operation.
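The chunked fetch can be sketched like this (the processing step is left as a comment; the cursor and type names are illustrative):

```sql
DECLARE
   CURSOR emp_cur IS SELECT * FROM employees;
   TYPE emp_tab_t IS TABLE OF employees%ROWTYPE;
   l_emps emp_tab_t;
BEGIN
   OPEN emp_cur;
   LOOP
      FETCH emp_cur BULK COLLECT INTO l_emps LIMIT 10000;
      EXIT WHEN l_emps.COUNT = 0;  -- nothing left to fetch
      -- process the chunk of up to 10,000 rows here (e.g. a FORALL DML)
   END LOOP;
   CLOSE emp_cur;
END;
/
```

Note the explicit cursor: LIMIT requires an explicit FETCH ... BULK COLLECT loop rather than a single SELECT ... BULK COLLECT.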
FORALL: INSERTs, UPDATEs, and DELETEs that use collections to change multiple rows of data very quickly
   CREATE OR REPLACE PROCEDURE increase_salary (
      department_id_in   IN employees.department_id%TYPE,
      increase_pct_in    IN NUMBER)
   IS
      TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE
              INDEX BY PLS_INTEGER;
      l_employee_ids   employee_ids_t;
      l_eligible_ids   employee_ids_t;
      l_eligible       BOOLEAN;
   BEGIN
      SELECT employee_id
        BULK COLLECT INTO l_employee_ids
        FROM employees
       WHERE department_id = increase_salary.department_id_in;

      FOR indx IN 1 .. l_employee_ids.COUNT
      LOOP
         check_eligibility (l_employee_ids (indx), l_eligible); -- assumed to set l_eligible (OUT)

         IF l_eligible
         THEN
            l_eligible_ids (l_eligible_ids.COUNT + 1) :=
               l_employee_ids (indx);
         END IF;
      END LOOP;

      FORALL indx IN 1 .. l_eligible_ids.COUNT
         UPDATE employees emp
            SET emp.salary =
                     emp.salary
                   + emp.salary * increase_salary.increase_pct_in
          WHERE emp.employee_id = l_eligible_ids (indx);
   END increase_salary;
Steps to generate Excel sheet output
  1. Create a custom DAD if required using Enterprise Manager Console of MidTier for HTTP Server or use Portal DAD itself to implement the solution.
     Create a new procedure (a webdb solution) to stream the HTML for the excel sheet report which will be downloaded:
     CREATE OR REPLACE PROCEDURE emp_excel_report IS -- procedure name is illustrative
        cursor p_emp is select * from PORTAL_DEMO.EMP;
     BEGIN
        owa_util.mime_header( 'application/vnd.ms-excel', FALSE ); -- here change the content-type to PDF format accordingly
        htp.print('Content-Disposition: attachment;filename="pvasista.csv"');
        owa_util.http_header_close;
        for v_emp_cur in p_emp
        loop
           htp.p (v_emp_cur.ename || ',' || v_emp_cur.deptno || chr(13));
        end loop;
     END;
     /
Probably we can apply the appropriate content-type for PDF in the PL/SQL above to get the same output in PDF; I suppose it should be application/pdf.
Sending email – UTL_MAIL: The UTL_MAIL package is a utility for managing email which includes commonly used email features, such as attachments, CC, BCC, and return receipt.
SEND Procedure: packages an email message into the appropriate format, locates SMTP information, and delivers the message to the SMTP server for forwarding to the recipients. SEND is overloaded for RAW attachments and for VARCHAR2 attachments.
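A minimal sketch, assuming UTL_MAIL is installed and the SMTP_OUT_SERVER initialization parameter is set (the addresses here are placeholders):

```sql
BEGIN
   UTL_MAIL.send(
      sender     => 'jobs@example.com',
      recipients => 'dba@example.com',
      subject    => 'Session data job',
      message    => 'The nightly collection job completed.');
END;
/
```

Such a call could be added to the scheduled procedures above to report job completion.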
Most-used functions: MERGE, NVL, DECODE.