Experimenting with Go


Dibyendu Majumdar

Jun 8, 2010, 6:40:59 PM
to golang-nuts
Hi,

I am porting one of my Java programs to Go, both as an exercise in
learning and to see whether Go is suitable for my project.
The code implements a Lock Scheduler such as is used in database
management systems. I have not completed the port yet; the deadlock
detector is still to be ported. The Go version has not been tested
properly - I have not ported the test cases yet. But I was excited to
have an initial working version and could not wait to share.

Anyway, here are the links:

Go version (everything in one source file for now):
http://code.google.com/p/try-catch-finally/source/browse/trunk/lockmgr.go/lockmgr.go

Original Java version (several source files; only the main ones are
listed below):
http://code.google.com/p/simpledbm/source/browse/simpledbm-common/code/simpledbm-common/src/main/java/org/simpledbm/common/api/locking/LockMode.java
http://code.google.com/p/simpledbm/source/browse/simpledbm-rss/code/simpledbm-rss/src/main/java/org/simpledbm/rss/api/locking/LockManager.java
http://code.google.com/p/simpledbm/source/browse/simpledbm-rss/code/simpledbm-rss/src/main/java/org/simpledbm/rss/impl/locking/LockManagerImpl.java

I shall post my thoughts on the Go version in the next few days as I
complete writing the test cases. Initial reaction:
- It seems that the mutexes in Go are not reentrant?
- The mutexes do not provide TryLock() or Lock(with timeout) which in
my view are essential

I would very much welcome feedback/comments on the Go version.

Regards
Dibyendu

Ian Lance Taylor

Jun 8, 2010, 6:58:40 PM
to Dibyendu Majumdar, golang-nuts
Dibyendu Majumdar <[email protected]> writes:

> - It seems that the mutexes in Go are not reentrant?
> - The mutexes do not provide TryLock() or Lock(with timeout) which in
> my view are essential

Both statements are true. Go programs generally use goroutines and
channels to meet these sorts of needs. E.g., route all changes
through a single goroutine which therefore does not need locks. Or
use a single goroutine to manage the locks, and use a separate timer
goroutine to implement timeouts via a select statement.

Ian

Dibyendu Majumdar

Jun 8, 2010, 7:07:23 PM
to golang-nuts


On Jun 8, 11:58 pm, Ian Lance Taylor <[email protected]> wrote:
> Both statements are true.  Go programs generally use goroutines and
> channels to meet these sorts of needs.  E.g., route all changes
> through a single goroutine which therefore does not need locks.  Or
> use a single goroutine to manage the locks, and use a separate timer
> goroutine to implement timeouts via a select statement.

I don't think channels are suitable for my needs (at least in this use
case). I understand the reasoning behind pushing folks to use channels
etc. and don't disagree with that. But there seems to be no reason not
to provide support for reentrant/timeout-capable mutexes; I'd be happy
to work on this and contribute the code if that is okay. Not my area
of expertise, but it will be fun.

Regards

Russ Cox

Jun 8, 2010, 7:59:19 PM
to Dibyendu Majumdar, golang-nuts
Recursive (aka reentrant) mutexes are a bad idea.

The fundamental reason to use a mutex is that mutexes
protect invariants, perhaps internal invariants like
``p.Prev.Next == p for all elements of the ring,'' or perhaps
external invariants like ``my local variable x is equal to p.Prev.''

Locking a mutex asserts ``I need the invariants to hold''
and perhaps ``I will temporarily break those invariants.''
Releasing the mutex asserts ``I no longer depend on those
invariants'' and ``If I broke them, I have restored them.''

Understanding that mutexes protect invariants is essential to
identifying where mutexes are needed and where they are not.
For example, does a shared counter updated with atomic
increment and decrement instructions need a mutex?
It depends on the invariants. If the only invariant is that
the counter has value i - d after i increments and d decrements,
then the atomicity of the instructions ensures the
invariants; no mutex is needed. But if the counter must be
in sync with some other data structure (perhaps it counts
the number of elements on a list), then the atomicity of
the individual operations is not enough. Something else,
often a mutex, must protect the higher-level invariant.
This is the reason that operations on maps in Go are not
guaranteed to be atomic: it would add expense without
benefit in typical cases.

Let's take a look at recursive mutexes.
Suppose we have code like this:

func F() {
	mu.Lock()
	... do some stuff ...
	G()
	... do some more stuff ...
	mu.Unlock()
}

func G() {
	mu.Lock()
	... do some stuff ...
	mu.Unlock()
}

Normally, when a call to mu.Lock returns, the calling code
can now assume that the protected invariants hold, until
it calls mu.Unlock.

A recursive mutex implementation would make G's mu.Lock
and mu.Unlock calls be no-ops when called from within F
or any other context where the current thread already holds mu.
If mu used such an implementation, then when mu.Lock
returns inside G, the invariants may or may not hold. It depends
on what F has done before calling G. Maybe F didn't even realize
that G needed those invariants and has broken them (entirely
possible, especially in complex code).

Recursive mutexes do not protect invariants.
Mutexes have only one job, and recursive mutexes don't do it.

There are simpler problems with them, like if you wrote

func F() {
	mu.Lock()
	... do some stuff
}

you'd never find the bug in single-threaded testing.
But that's just a special case of the bigger problem,
which is that they provide no guarantees at all about
the invariants that the mutex is meant to protect.

If you need to implement functionality that can be called
with or without holding a mutex, the clearest thing to do
is to write two versions. For example, instead of the above G,
you could write:

// To be called with mu already held.
// Caller must be careful to ensure that ...
func g() {
	... do some stuff ...
}

func G() {
	mu.Lock()
	g()
	mu.Unlock()
}

or if they're both unexported, g and gLocked.

I am sure that we'll need TryLock eventually; feel free to
send us a CL for that. Lock with timeout seems less essential
but if there were a clean implementation (I don't know of one)
then maybe it would be okay. Please don't send a CL that
implements recursive mutexes.

Recursive mutexes are just a mistake, nothing more than
a comfortable home for bugs.

Russ

Dibyendu Majumdar

Jun 8, 2010, 8:28:25 PM
to golang-nuts
Hi Russ

I am not really sure I understand the issue.
Perhaps we are thinking of different use cases?

In complex concurrent applications, you sometimes need
to hold more than one mutex simultaneously across many
function calls, unless you adopt a global mutex. The mutex
acquisitions may occur in different parts of the program. It is
simpler and less error prone for each component to acquire/release the
mutex without having to worry about whether this mutex was
already acquired by some other function in the call tree.
Without reentrant mutexes, it can be quite difficult, as the called
routine may not know whether the mutex is already acquired
or not.

It is a basic principle of acquiring/releasing resources locally,
without having to pass state info up and down the call tree.

From the example you have given, I think you are probably
thinking about data structures like hash maps - where for example
in Java, you have one that uses a mutex (Hashtable) and another
which doesn't (HashMap). These are simple use cases because
only one mutex is involved.

Anyway, I don't really follow your argument that reentrant
mutexes don't preserve invariants.

Regards

Rob 'Commander' Pike

Jun 8, 2010, 8:40:38 PM
to Dibyendu Majumdar, golang-nuts
>
> Anyway, I don't really follow your argument that reentrant
> mutexes don't preserve invariants.

Say the mutex protects the invariant that a == b, as in these functions:

func up() {
	mutex.Lock()
	a++
	// See below
	b++
	mutex.Unlock()
}

func down() {
	mutex.Lock()
	if a != b { panic("HELP!") }
	a--
	b--
	mutex.Unlock()
}

What happens if down() is called on the line marked "See below" in up()? (This could happen through subtle paths in realistic code, as you know.) The invariant is not true when down() is called *even though it holds the mutex*, and down() will panic.

That is what it means to say that reentrant mutexes don't preserve invariants. It's because they don't preserve invariants when called recursively, which defeats the purpose of having a mutex.

-rob

Dibyendu Majumdar

Jun 8, 2010, 8:52:45 PM
to golang-nuts
Hi Rob,

The example you have given is invalid because the panic
could occur in the first method, and the invariant would not
hold - so this has nothing to do with reentrant mutexes.

The example you have given requires transactions to execute
correctly; either manually coded in a recover section, or by some
other means.

Regards

Rob 'Commander' Pike

Jun 8, 2010, 8:57:15 PM
to Dibyendu Majumdar, golang-nuts
No.

-rob

Dibyendu Majumdar

Jun 8, 2010, 9:05:47 PM
to golang-nuts
I think I am kind of getting where you are coming from.
But in my view your argument is incorrect. A mutex can never
enforce invariants in the code; it is meant to protect access to a
data item by ensuring only one thread has access to it. And this
invariant is preserved whether the mutex is reentrant or not.

If a mutex is not released properly then the function that
fails to do this is not preserving an invariant that is in its
contract - but this has nothing to do with the mutex type.

Regards

Steven

Jun 8, 2010, 10:57:34 PM
to Dibyendu Majumdar, golang-nuts
Here's how I see it:

A mutex represents a contract, not an enforcement. But as long as everyone follows the contract, then it is in force.

Honouring a standard mutex protects algorithms against accessing incompletely manipulated data, and from having their data corrupted by a third party while they are still manipulating it.

Honouring a reentrant mutex protects separate algorithms (in separate goroutines) from butting in on each other. If one algorithm butts in on another, then this can screw up the algorithms by messing with the order of operations.

Both prevent concurrent manipulations, but this is not, in and of itself, enough to protect a complex algorithm from corrupt data. If the algorithm is partway through manipulating the data, and the data is accidentally accessed while it is still malformed, then corruption can (and will) occur. This makes code using (only) a reentrant mutex contract bug prone, since it may be next to impossible to notice this mistake in a sufficiently complex algorithm. The mistake is essentially breaking an informal standard mutex contract. If you make the contract formal (by using a mutex), then you can be (more or less) certain that the mistake won't happen.

I can't really draw any conclusions from this since my analysis undoubtedly contains holes, and I'd tend to agree with Russ, simply because he's (a lot!) more experienced than I am. But from where I stand, it seems that a reentrant mutex is dangerous unless used right; I can't say for sure that there is a right way to use it, and I'm almost certain that for any problem where it would be useful, there's another tool that does have a (relatively) safe way to use it.

David Leimbach

Jun 8, 2010, 11:14:01 PM
to golang-nuts
I think it's better to try "The Way of Go" first, and then come back
and tell us why you don't think it will work. We'll probably be able
to help you figure out how it can be made to work, and why it might be
better than the alternative. If not, great! We'll figure out what
else Go needs to be a stronger environment. But I believe there are
people here with a great deal of experience in the channel approach,
who have already run into the sharp corners for us so we don't
have to :-).

Either way the global understanding for channels vs real locks will be
enhanced, and everyone has a shot at learning something from the
experience.

It's just a suggestion. I don't think anyone in the Go community is
against listening to new ideas, but they may try to warn about paths
that don't seem to them to make sense to take.

David Leimbach

Jun 9, 2010, 12:30:35 AM
to golang-nuts
I'm afraid this is long...

On Jun 8, 6:05 pm, Dibyendu Majumdar <[email protected]> wrote:
> I think I am kind of getting where you are coming from.
> But in my view your argument is incorrect. A mutex can never
> enforce invariants in the code; it is meant to protect access to a
> data
> item by ensuring only one thread has access to it. And this
> invariant is preserved whether the mutex is reentrant or not.

Let's get the what's and the how's separated from the why's of
mutexes, and maybe it will look clearer... (Sorry if some of this is
a bit fundamental, but I think some folks get confused, not
necessarily anyone on this thread, but maybe another reader will
understand this stuff better).

Yes, mutexes are for providing mutually exclusive access to data by
blocking out multiple concurrent updaters. Why do we want that?

It's because our code depends on certain data properties that
determine whether it is correct. Those dependencies are often termed
"invariants". An invariant can be a completely atomically updated
record, such that no piece of the record is left in an inconsistent
state from the rest of the record. This could be a transaction's
completion, or rollback of a partially applied transaction for
consistency of a record in a DB or even a software transactional
memory system. Or the invariant could be like in Rob's example where
a must be equal to b. You could even have a weaker invariant such as
"equivalence", which might be defined so that the first two items of a
pair of structs must have the same value (foreign keys across records
in different tables, for example).

Mutexes are a tool that can be used to help you guard those
invariants. I don't think anyone is saying that if you use mutexes
you're guaranteed to avoid breaking your invariants. It's just that
recursive mutexes invite more opportunities to lose track
of those invariants, whereas a regular mutex might deadlock, even in
singly threaded testing of such code, signifying a problem.

The alternative offered by channels is to let a goroutine do all the
work of applying updates to the memory it owns. This turns the
problem of preserving invariants a bit sideways by either spawning a
short-lived goroutine that can perform an update to data, and
synchronize with the main goroutine on a channel when it's done, or
you can try other approaches like having a long running goroutine that
updates the data it owns based on requests down the channel, or even
put the functions to be performed on the channel itself via Go's
support for closures. You've got a lot of flexibility here.

Even Apple's GCD uses a mutex-removing approach, except they use
queues attached to threads in a pool that can have independent
contexts, and run functions with arguments or closures (which are
basically functions with arguments pre-bound, unless they have __block
storage...). I'm much more a fan of Go's approach than this but I
would happily code with libdispatch over explicitly locked threaded
code.



>
> If a mutex is not released properly then the function that
> fails to do this is not preserving an invariant that is in its
> contract - but this has nothing to do with the mutex type.

Rob's example would deadlock if the code was modified as he suggested
with a regular mutex. It would destroy the stated invariant in the
case of the recursive mutex. Again, this is not about the mutex, but
the invariant preservation. Rob's suggested change is a bug in both
the regular mutex and the recursive mutex case, but the bug is very
easily spotted in the non-recursive mutex case.


>
> Regards

David Leimbach

Jun 9, 2010, 12:50:21 AM
to golang-nuts


On Jun 8, 7:57 pm, Steven <[email protected]> wrote:
> On Tue, Jun 8, 2010 at 9:05 PM, Dibyendu Majumdar <[email protected]>wrote:
>
> > I think I am kind of getting where you are coming from.
> > But in my view your argument is incorrect. A mutex can never
> > enforce invariants in the code; it is meant to protect access to a
> > data
> > item by ensuring only one thread has access to it. And this
> > invariant is preserved whether the mutex is reentrant or not.
>
> > If a mutex is not released properly then the function that
> > fails to do this is not preserving an invariant that is in its
> > contract - but this has nothing to do with the mutex type.
>
> > Regards
>
> Here's how I see it:
>
> A mutex represents a contract, not an enforcement. But as long as everyone
> follows the contract, then it is in force.

The only contract a mutex represents is that it works as it is
defined. That's incredibly vague, but a normal non-recursive mutex
only allows one process to continue running at a time, and that's its
contract. A recursive mutex can allow the same schedulable entity to
recursively continue past a lock, rather than deadlock, as a
modification of the contract of a normal mutex.

The concept of invariants and their management is so important here,
that avoiding them in the discussion is probably not going to make
things clearer unfortunately.

You can think of an invariant as a predicate on some state in a
program. An invariant is not broken when some predicate holds true
before and after a sequence of instructions have completed. A mutex
is one tool that may be used to prevent invariants from breaking when
a series of instructions needs to change the underlying state that is
"tested" by that invariant.

The part that I think makes invariants so difficult to grasp is that
they're often abstract, and not necessarily asserted anywhere
explicitly in code... well that is until a bug crashes something, or
someone reads corrupted data, or the program dies.

>
> Honouring a standard mutex protects algorithms against accessing
> incompletely manipulated data, and from having their data corrupted by a
> third party while they are still manipulating it.
>
> Honouring a reentrant mutex protects separate algorithms (in separate
> goroutines) from butting in on each other. If one algorithm butts in on
> another, then this can screw up the algorithms by messing with the order of
> operations.
>
> Both prevent concurrent manipulations, but this is not, in and of itself,
> enough to protect a complex algorithm from corrupt data. If the algorithm is
> partway through manipulating the data, then accidentally access the data
> while it is still malformed, then corruption can (and will) occur. This
> makes code using (only) an reentrant mutex contract bug prone, since it may
> be next to impossible to notice this mistake in a sufficiently complex
> algorithm. The mistake is essentially breaking an informal standard mutex
> contract. If you make the contract formal (by using a mutex), then you can
> be (more or less) certain that the mistake won't happen.
>
> I can't really draw any conclusions from this since my analysis undoubtedly
> contains holes, and I'd tend to agree with Russ, simply because he's (a
> lot!) more experienced than I am, but from where I stand, it seems that a
> reentrant mutex is dangerous unless used right, and I can't say for sure
> that there is a right way to use it, and I'm almost certain that for any
> problem where it would be useful, there's another tool for it that does have
> a (relatively) safe way to use it.

I have a really difficult time coming up with a situation where a
recursive mutex is *needed* where a revisit of the design of the
software shouldn't also be done for the sake of maintainability. I
really feel like recursive mutexes are band-aids.

Ian Lance Taylor

Jun 9, 2010, 1:18:30 AM
to Dibyendu Majumdar, golang-nuts
Dibyendu Majumdar <[email protected]> writes:

> In complex concurrent applications, you sometimes need
> to hold more than one mutex simultaneously across many
> function calls, unless you adopt a global mutex. The mutex
> acquisitions may occur in different parts of the program. It is
> simpler and less error prone for each component to acquire/release the
> mutex without having to worry about whether this mutex was
> already acquired by a some other function in the call tree.
> Without reentrant mutexes, it can be quite difficult, as the called
> routine may not know whether the mutex is already acquired
> or not.

Acquiring mutexes in a different order in different code sounds like a
recipe for deadlock. And this approach in turn sounds very hard to
show to be correct. One of the great things about Go is cheap channel
communication mean that you don't have to reason your way through
complex scenarios like this. You can write code that is much easier
to understand and much more likely to be correct on a multicore
system.

Ian

Dan Adkins

Jun 9, 2010, 1:42:58 AM
to golang-nuts
On Tue, Jun 8, 2010 at 5:28 PM, Dibyendu Majumdar
<[email protected]> wrote:
> In complex concurrent applications, you sometimes need
> to hold more than one mutex simultaneously across many
> function calls, unless you adopt a global mutex. The mutex

Given that you're trying to implement a database lock manager,
perhaps there's some confusion between mutexes and locks. One way
databases implement concurrent transactions is via read/write locks.
But these are not the same as mutexes. A transaction might acquire a
read or write lock on a row prior to examining or updating it. These
locks are typically held for the duration of the transaction to
prevent other transactions from seeing uncommitted data, thus
achieving the illusion of atomicity. But, once a transaction has a
write lock, it doesn't need another one later to read or write the
same row. Or if it already has a read lock and wants to write, it
must upgrade to a write lock.

I can see how you would think this implies re-entrant locks, since the
same transaction seemingly needs to acquire that lock twice. But
either that transaction already knows what locks it has (how else
would it release them all on commit), or the lock manager is keeping
track for us.

None of these issues are unique to Go. The only thing that requires
coordination here is the lock manager state, and channels seem like a
good fit there. Communicating lock requests via a channel ensures
that you can process lock requests serially without need for mutexes.
You'll probably also want to include a response channel with each lock
request, to send the response when the lock is acquired, or an error
in the case of deadlock. The lock manager will have to maintain the
wait queues explicitly anyway to do deadlock detection.

-Dan

Dibyendu Majumdar

Jun 9, 2010, 3:18:32 AM
to golang-nuts
Hi Ian,

Deadlock avoidance is of course an issue when multiple mutexes or
locks are involved; there needs to be an ordering between them to
ensure that deadlocks are avoided. Actually, in this scenario you
would not acquire a mutex in a reentrant manner unless it was at the
top of the stack.

My point though was that in complex interactions, life is easier
if a particular function does not have to worry about whether a
particular mutex was already acquired or not. If the function's
behaviour needs to be synchronized because this is part of its
contract, then it should acquire and release the mutex. It is more
complex and error prone to have two different functions, one that
locks and another that doesn't, and to keep track of which one to
call, except for simple scenarios where the caller has obtained the
mutex and therefore knows which one to call.

It seems to me that the argument about mutexes somehow breaking
invariants is spurious because if we assign an invariant to a mutex
that is never in its contract - such as correct behavior as opposed to
protecting access to shared data - then of course the invariant will
not hold. But as I said before this is true of any mutex or lock.

There are other valid arguments against reentrant mutexes such as
complexity of implementing them, the extra state such as lock count
that needs to be maintained etc.

I am not arguing against channels, which do seem useful in
many use cases. I am sure you are not suggesting that they are
the most appropriate all the time; if so, perhaps all Go code should
be changed to use channels instead of mutexes. Whether channels
will actually take off thanks to Go is still to be seen. I love the
idea of goroutines because it solves real problems for me. Channels
do not per se solve real problems for me, as you can achieve the same
functionality (say in Java) using fifo queues.

In my humble opinion, Go would not be poorer for supporting
the standard locking primitives in a comprehensive way. This would in
no way detract from the use of channels where this is more
appropriate.

There seems to be misunderstanding about recursive or reentrant
mutexes and how they work. I am not going to go into that
anymore as it seems a pointless exercise.

Regards

Dibyendu

roger peppe

Jun 9, 2010, 4:42:34 AM
to Dibyendu Majumdar, golang-nuts
On 8 June 2010 23:40, Dibyendu Majumdar <[email protected]> wrote:
> - The mutexes do not provide  TryLock() or Lock(with timeout) which in
> my view are essential

both of these are easily implemented with channels.

here's an alternative implementation of a simple mutex,
which uses a channel with a buffer size of 1:

type Mutex chan bool
func NewMutex() Mutex { return make(chan bool, 1) }
func (m Mutex) Lock() { m <- false }
func (m Mutex) Unlock() { <-m }

given this, TryLock is simple:

func (m Mutex) TryLock() bool { return m <- false }

and here's a lock with a timeout (assuming import "time"):

func (m Mutex) LockWithTimeout(t int64) (locked bool) {
	tick := time.NewTicker(t)
	select {
	case m <- false:
		locked = true
	case <-tick.C:
	}
	tick.Stop()
	return
}

sync.Mutex is useful because it's considerably more
efficient than using a channel, but in many circumstances,
the above code will run acceptably fast.

roger peppe

Jun 9, 2010, 5:27:02 AM
to Dibyendu Majumdar, golang-nuts
On 9 June 2010 09:42, roger peppe <[email protected]> wrote:
> sync.Mutex is useful because it's considerably more
> efficient than using a channel, but in many circumstances,
> the above code will run acceptably fast.

sync.Mutex is also nice because it requires no allocation and
no initialization, unlike the channel-based mutex.

BTW i just did some timing tests.
on my machine, given:

var sm sync.Mutex
m := NewMutex()

{sm.Lock(); sm.Unlock()} takes 0.054us
{m.Lock(); m.Unlock()} takes 0.172us
{m.TryLock(); m.Unlock()} takes 0.176us
m.TryLock() on a locked channel takes 0.082us
m.LockWithTimeout(1) on a locked channel takes 32.40us


so sync.Mutex is about 3 times faster than using
a channel, which isn't so bad, considering.

the outlier is LockWithTimeout, which is about 200 times slower
than Lock (or 600 times slower than sync.Mutex.Lock).
most timings are more-or-less unchanged with GOMAXPROCS>1,
but LockWithTimeout goes up to 82.27us, 1500 times slower than
sync.Mutex.Lock.

i wonder if there's room for a lighter-weight timer mechanism
within go, perhaps as part of the language itself?

Dibyendu Majumdar

Jun 9, 2010, 5:43:07 AM
to golang-nuts
Hi Roger,

Thanks for the example. I won't use this as a Mutex ...

There is however a use case where I am using runtime.Semacquire()
- please see my code - where I could use a channel to implement
a timeout feature, which I had to omit when porting from the
original Java code.

Regards
Dibyendu


Dibyendu Majumdar

Jun 9, 2010, 7:22:57 AM
to golang-nuts
Hi All,

Firstly thanks to everyone who has joined in on this conversation.
I really appreciate the feedback. Rather than responding individually
to each post re the re-entrant mutex issue, I would just like to
summarize my position on this.

Firstly, we need to be clear about what a mutex is and the
guarantees it provides. For simplicity, I shall only consider
exclusive mutexes here. Read/write mutexes are more complex.

An exclusive mutex should guarantee that only one thread can
acquire it at any point in time. The thread that acquires the mutex
is said to own it.

Further, the exclusive mutex must guarantee that only the
thread that owns the mutex can release it.

A re-entrant mutex is an extension of this to say that if a
thread has already acquired the mutex, then it is allowed to acquire
it again without going into a deadlock. Reentrant mutexes must be
released the same number of times they were acquired.

A mutex, whether re-entrant or not, does not provide any other
guarantee.

A re-entrant mutex does not imply that another thread can acquire
the mutex while one thread owns it.

The idea that a mutex can somehow break an invariant in
the code that uses it is wrong, because the mutex itself
never guarantees this. Any function that acquires a mutex must
also release it properly - even in the presence of panics.
However, this is an invariant provided by the function that uses the
mutex, not the mutex itself.

In my opinion, correct handling of mutexes is easier in Java
because of two language features. Firstly the synchronized
block guarantees correct release. Secondly for mutexes acquired
explicitly, the try/finally block is used to guarantee the correct
release. In Java, it is not possible to fail to release the mutex
when exceptions occur, provided the code uses either of these
constructs.

Go supports the defer facility which can be used to implement
Mutex cleanup. This is not as flexible as the Java approach as it
is bound to a function only and cannot operate at smaller scopes.
A workaround is to break up functions so that all mutexes are
acquired/released at function scope. This should provide correct
release of a mutex even when panics occur.

I am familiar with three major implementations of mutexes. Each
supports re-entrant mutexes.

Pthreads –
see http://opengroup.org/onlinepubs/007908775/xsh/pthread_mutex_lock.html

Windows –
See http://msdn.microsoft.com/en-us/library/ms682608(v=VS.85).aspx

Java –
See http://java.sun.com/j2se/1.5.0/docs/api/java/util/concurrent/locks/ReentrantLock.html

I really think that Go should have the full set of primitive locking
functionality if it intends to seriously compete with other languages.
If these facilities are not present, it will have two consequences:

1. People will use other languages where they could have used Go.
2. People will write custom synchronisation code to provide the
missing functionality.

The first outcome may be exactly what some people want, which
is fair enough, although it would be a shame in my opinion.

The second outcome is bad for those who have to take this
approach.

My final argument for providing comprehensive locking facilities
is that other languages like Java which tried to dumb this down have
had to change their approach. This is a lesson that eventually
these facilities must be provided because they are necessary for
efficient implementation of many use cases.

Regards
Dibyendu

roger peppe

Jun 9, 2010, 7:37:07 AM
to Dibyendu Majumdar, golang-nuts
On 9 June 2010 12:22, Dibyendu Majumdar <[email protected]> wrote:
> Further, the exclusive mutex must guarantee that only the
> thread that owns the mutex can release it.

i can easily envisage a scenario where one
goroutine hands a locked data structure off to another
goroutine that does some more work and finally unlocks it.

why should this be disallowed?

Dibyendu Majumdar

unread,
Jun 9, 2010, 7:49:43 AM6/9/10
to roger peppe, golang-nuts
Then it's just not a mutex, that's all.
Need to use something else such as a channel?


Sent from my BlackBerry smartphone from Virgin Media

roger peppe

unread,
Jun 9, 2010, 7:57:36 AM6/9/10
to [email protected], golang-nuts
not a mutex as you're used to it, but still a mutex, surely?

go has a different approach to concurrency - mutexes are
barely needed at all - if you find yourself using them a lot,
you're probably doing something wrong.

your original lockmgr package seemed very un-go-like to me.
it seems to me that the java program you're porting
could do with some restructuring to make better use
of the facilities that go provides, rather than trying to
use java-style concurrency in go.

Dibyendu Majumdar

unread,
Jun 9, 2010, 8:09:16 AM6/9/10
to golang-nuts
On Jun 9, 12:57 pm, roger peppe <[email protected]> wrote:
> not a mutex as you're used to it, but still a mutex, surely?

I think you have a good point here.

The go documentation on mutex states:
A locked Mutex is not associated with a particular goroutine. It is
allowed for one goroutine to lock a Mutex and then arrange
for another goroutine to unlock it.

So it seems that Go mutexes do not obey the same
semantics as mutexes in other languages. Hmmm.

I need to think about what the implications are.

> go has a different approach to concurrency - mutexes are
> barely needed at all - if you find yourself using them a lot,
> you're probably doing something wrong.

I see mutexes sprinkled all over the go libraries ...

> your original lockmgr package seemed very un-go-like to me.
> it seems to me that the java program you're porting
> could do with some restructuring to make better use
> of the facilities that go provides, rather than trying to
> use java-style concurrency in go.

This may well be the case, and is the reason why I have
posted this here. I would very much appreciate someone
taking the trouble to explain how this should be restructured.
Happy to explain what the use case is.

Regards

roger peppe

unread,
Jun 9, 2010, 8:22:44 AM6/9/10
to Dibyendu Majumdar, golang-nuts
On 9 June 2010 13:09, Dibyendu Majumdar <[email protected]> wrote:
> I need to think about what the implications are.

one implication is that you can't do recursive mutexes,
because it's perfectly legitimate for a process to try
to lock a mutex that it itself locked and has not yet
unlocked (it should block until the mutex is unlocked).

> I see mutexes sprinkled all over the go libraries ...

yes, but they are invariably guarding a simple piece
of shared state. i don't know of any places where
two locks are acquired at once.

> This may well be the case, and is the reason why I have
> posted this here. I would very much appreciate someone
> taking the trouble to explain how this should be restructured.
> Happy to explain what the use case is.

please do.

Dibyendu Majumdar

unread,
Jun 9, 2010, 8:58:53 AM6/9/10
to golang-nuts

On Jun 9, 1:22 pm, roger peppe <[email protected]> wrote:
> > This may well be the case, and is the reason why I have
> > posted this here. I would very much appreciate someone
> > taking the trouble to explain how this should be restructured.
> > Happy to explain what the use case is.
>
> please do.

Hi,

I will need to write this up properly, but here is existing
documentation:

http://simpledbm.googlecode.com/hg/simpledbm-docs/docs/html/developerguide.html#lock-manager

Also see:

http://code.google.com/p/simpledbm/source/browse/simpledbm-rss/code/simpledbm-rss/src/main/java/org/simpledbm/rss/api/locking/LockManager.java

In Go terms, the way I see this working:

There is a single lockmgr within a server instance.

The database server processes requests from clients. Each server
request will be handled by a goroutine. The goroutine will invoke the lockmgr
to acquire/release locks as required.

A separate goroutine will periodically check for deadlocks and
abort locks that are deadlocked. I have not ported this code
yet.

The lockmgr may be called by many goroutines concurrently.
The performance and scalability of the lockmgr is critical to the
performance of the database server.

Regards

Dibyendu Majumdar

unread,
Jun 9, 2010, 9:15:37 AM6/9/10
to golang-nuts
I should add that currently the lockmgr uses
mutexes for the following.

A global read/write mutex is used. Every
request acquires this in read mode. The
deadlock detector acquires this in write mode.
This ensures that when the deadlock detector
runs, there is no concurrent activity.
There is also a case where a request may
temporarily acquire the global mutex in write mode - when
it needs to resize the lock hash table.

A hash table is used to look up locks. For maximum concurrency,
each bucket is protected by an exclusive mutex.
Concurrent operations are permitted as long as
they do not hit the same hash bucket.

In Java, mutexes are thin until there is contention.
So the large number of mutexes is not an issue.
Not sure whether this could be an issue in Go.

Finally, when a request blocks, it is put into
a queue. Sometime later when the lock is
granted, the lockmgr wakes up the request. In
Java I use a facility similar to runtime.Semacquire().
Given the description of Go mutexes, this should also
work using a mutex in Go.

In my honest opinion, the basic structure of the program
would remain the same in Go as I cannot see how the
code can be made more efficient. The only way to improve
efficiency is make the code lock free in the uncontended
scenario, but this is not easy to do given the various
types of lock modes, etc.

Anyway, I shall look forward to your views with interest.

Regards
Dibyendu

Andrew Gerrand

unread,
Jun 9, 2010, 9:37:40 AM6/9/10
to Dibyendu Majumdar, golang-nuts
Instead of using locks to share the hash table, why not have a
goroutine that owns the hash table, and serves requests to it via
channels? You could even set up several goroutines, one to handle each
of the hash buckets.

You then wouldn't need the global mutex, as the goroutine that owns
the hash table can ensure that nothing is accessing it while it
resizes or otherwise modifies it.

I'm pretty sure that you could replace a lot of the current use of
mutexes with channels and a for/select loop. The code would get a lot
simpler too.

Apologies if I've missed some subtleties of your program, I've only
looked at it quite briefly.

Andrew

Dibyendu Majumdar

unread,
Jun 9, 2010, 9:59:01 AM6/9/10
to golang-nuts
Hi Andrew,

Before going into the pros/cons of your suggestion
I have a question for you.

I have successfully ported my lockmgr to Go, with the
exception that I was unable to provide a lock timeout.
I still have to port the deadlock detector - which may pose
a problem with scheduling - but other than that I am
confident I can port this.

My biggest problem in the Java version is reducing
memory overhead and improving performance and
scalability.

My question is this:
You are asking that I completely redo this code using
channels. What is the benefit? Will it solve the problems
I have? I don't want to do this for academic reasons - I need
real tangible improvement in the three areas I mention.

Regards

Andrew Gerrand

unread,
Jun 9, 2010, 10:07:09 AM6/9/10
to Dibyendu Majumdar, golang-nuts

What I'm suggesting is nowhere near a complete rewrite. Most of the
code you already have will remain intact. I'm suggesting this because
I think it will make your code simpler and thus easier to understand
and maintain. It will also allow you to see first-hand the benefits of
some of Go's core features.

This thread is titled "Experiments in Go," therefore I am suggesting
you conduct an experiment. :-)

Andrew

Jessta

unread,
Jun 9, 2010, 9:35:22 AM6/9/10
to Dibyendu Majumdar, golang-nuts
On Wed, Jun 9, 2010 at 9:22 PM, Dibyendu Majumdar
<[email protected]> wrote:
> I really think that Go should have the full set of primitive locking
> functionality if it intends to seriously compete with other languages.
> If these facilities are not present, it will have two consequences:
>
> 1. People will use other languages where they could have used Go.
> 2. People will write custom synchronisation code to provide the
> missing functionality.

The third option is that people learn to write Go code in Go instead
of writing Java code in Go. By not having certain features that other
languages have, programmers coming from other languages are forced to
actually think about their programs in a Go way instead of just
writing the same thing they'd write in another language.
There is pretty much no point in rewriting a Java program in Go if
you don't use Go idioms.

From a brief look at the code you linked to,(a bit of constructive criticism)
1. You have a linked list implementation in there to implement a
queue, you could use the linked list that is part of the standard lib
and use the "range" keyword in your for loop so you don't have to do
this "for link := item.queue.GetHead(); link != nil; link =
link.Next() {"
2. In that whole program you haven't used a single channel or a
goroutine, which is kind of weird for >1000 lines of Go. A program
like this with lots of complex locking stuff seems like it could
really show off Go's concurrency features, but this just reads like a
Java program with slightly different syntax.
3. I also suggest that you break this code up in to a number of packages.

If you read some of the standard library packages or some of the
programs at http://go-lang.cat-v.org you might get a better idea of
some of the more common Go idioms.

- jessta
--
=====================
http://jessta.id.au

roger peppe

unread,
Jun 9, 2010, 12:50:45 PM6/9/10
to Dibyendu Majumdar, golang-nuts
as an alternative to andrew's approach, it may be possible
to let go's channels do quite a lot of the work of queuing etc.

to show what i mean, i've attached a toy version of your code
that uses a goroutine per contended lock, which means that
for the presumably common case of an uncontended lock
that there's no need to interact with a central goroutine.

it relies heavily on the fact that when doing a select,
nil channels are ignored. the locker() process makes
sure that it will only accept channel operations on
those channels that correspond to acceptable operations.
other operations will delay until they
are acceptable.

i don't know if it's possible to do everything you need in terms
of this kind of structure, but i thought you might find it an interesting
illustration of a possible approach regardless.

i've barely tested it.

lockmanager.go

Michael Hoisie

unread,
Jun 9, 2010, 2:45:33 PM6/9/10
to golang-nuts
I think people are missing the point in this thread.

Dibyendu is trying to implement a database transaction manager, which
requires locking techniques that Go doesn't currently support. Whether
or not these locks are desirable from a program design standpoint
shouldn't be the issue. There's tons of research (even textbooks) on
locking mechanisms in database transactions, and labelling this as the
'wrong' approach doesn't make much sense.

I've never had to use reentrant locks in my programs, but there are
plenty of times I've used locks instead of channels because they make
my program a lot simpler. And I can see how reentrant locks can
simplify a program if you want to use lots of fine-grained locks
instead of a few global ones. Sure it can lead to bugs but anyone
writing a database transaction manager is probably well equipped to
handle them.

Whether or not Go provides all these locking primitives in the
standard library doesn't matter in my opinion, but I do believe that
Go authors should be open to providing the basic facilities that can
be used to implement them on top of the standard library. Like
Dibyendu says it won't detract from the language.

- Mike

Dibyendu Majumdar

unread,
Jun 9, 2010, 3:46:48 PM6/9/10
to golang-nuts
On Jun 9, 5:50 pm, roger peppe <[email protected]> wrote:
> to show what i mean, i've attached a toy version of your code
> that uses a goroutine per contended lock, which means that
> for the presumably common case of an uncontended lock
> that there's no need to interact with a central goroutine.
>
> it relies heavily on the fact that when doing a select,
> nil channels are ignored. the locker() process makes
> sure that it will only accept channel operations on
> those channels that correspond to acceptable operations.
> other operations will delay until they
> are acceptable.

Hi Roger,

I am amazed at the speed with which you have produced this.
Thanks for taking the trouble. I will try to understand it and then
get back to you.

Regards

Dibyendu Majumdar

unread,
Jun 9, 2010, 4:01:52 PM6/9/10
to golang-nuts

On Jun 9, 7:45 pm, Michael Hoisie <[email protected]> wrote:
> I've never had to use reentrant locks in my programs, but there are
> plenty of times I've used locks instead of channels because they make
> my program a lot simpler. And I can see how reentrant locks can
> simplify a program if you want to use lots of fine-grained locks
> instead of a few global ones. Sure it can lead to bugs but anyone
> writing a database transaction manager is probably well equipped to
> handle them.

Hello Michael,

I don't want to give the impression that I desperately need a
reentrant mutex. If you read my original post, I said that
TryLock() and LockWithTimeout() were essential.

In my response to Russ I focused on the reentrant mutex because
I thought that the argument being presented against them was
spurious.

It seems that Go does not track ownership of mutexes. This is
a practical issue when implementing reentrant mutexes as such
mutexes must know who owns them. So this may be the real
reason why Go mutexes are not reentrant.

As to what the implications are of having mutexes that do
not follow the standard paradigm for mutexes I am not sure.
As far as I can tell, from a usage point of view, they seem
to follow the regular pattern - ie, the goroutine that locks a
mutex also unlocks it.

Database locks have to be reentrant, and as you say, are
more complex and there is a lot of existing knowledge around
how they should be implemented. My implementation is
pretty much textbook and follows literally the algorithms
published in the classic Transaction Processing book by
Andreas Reuter and Jim Gray.

I am amazed that anyone thinks that these can be implemented
efficiently using channels, but I am willing to try and understand
this point of view. Without a full implementation and comparison
tests it is hard to say whether a channel based implementation
is viable.

Regardless of this debate, Go seems to be very good language.
And for the usage I have in mind, good support for locking primitives
is essential. Thankfully I can implement these myself in C and
use them in Go code.

Regards
Dibyendu

Dibyendu Majumdar

unread,
Jun 9, 2010, 4:07:03 PM6/9/10
to golang-nuts
On Jun 9, 2:35 pm, Jessta <[email protected]> wrote:
> From a brief look at the code you linked to,(a bit of constructive criticism)
> 1. You have a linked list implementation in there to implement a
> queue, you could use the linked list that is part of the standard lib
> and use the "range" keyword in your for loop so you don't have to do
> this "for link := item.queue.GetHead(); link != nil; link =
> link.Next() {"

Hi, why do you think I use my own implementation? It's not
gratuitous, I assure you.

> 2. In that whole program you haven't used a single channel or a
> goroutine, which is kind of weird for >1000 lines of Go. A program
> like this with lots of complex locking stuff seems like it could
> really show off Go's concurrency features, but this just reads like a
> Java program with slightly different syntax.

The lockmgr is one component in a large program.
Goroutines will be started by the server and not the lockmgr.

Regards

Dibyendu Majumdar

unread,
Jun 9, 2010, 6:29:15 PM6/9/10
to golang-nuts

On Jun 9, 5:50 pm, roger peppe <[email protected]> wrote:
> as an alternative to andrew's approach, it may be possible
> to let go's channels do quite a lot of the work of queuing etc.
>
> to show what i mean, i've attached a toy version of your code
> that uses a goroutine per contended lock, which means that
> for the presumably common case of an uncontended lock
> that there's no need to interact with a central goroutine.
>
> it relies heavily on the fact that when doing a select,
> nil channels are ignored. the locker() process makes
> sure that it will only accept channel operations on
> those channels that correspond to acceptable operations.
> other operations will delay until they
> are acceptable.
>

Hi Roger,
I think I understand how your implementation works.
However, I am getting some errors when running the following
simple test:


func main() {
	m := NewLockManager()
	/*
		go func() {
			for i := 0; i < 10000; i++ {
				m.Acquire(1, Exclusive, true)
				m.Release(1, Exclusive)
			}
		}()
	*/
	for i := 0; i < 10000; i++ {
		m.Acquire(1, Exclusive, true)
		m.Release(1, Exclusive)
	}
}

The error I get is:


panic: runtime error: invalid memory address or nil pointer
dereference

panic PC=0x2b2f50
runtime.panic+0x7c /Users/dibyendumajumdar/go/src/pkg/runtime/proc.c:
1012
runtime.panic(0x0, 0xbd6a)
panicstring+0x60 /Users/dibyendumajumdar/go/src/pkg/runtime/runtime.c:
83
panicstring(0x52fd8, 0x2d41f0)
sigpanic+0x84 /Users/dibyendumajumdar/go/src/pkg/runtime/darwin/
thread.c:459
sigpanic()
runtime.chansend1+0x11 /Users/dibyendumajumdar/go/src/pkg/runtime/
chan.c:400
runtime.chansend1(0x8, 0xffffffff)
main.*Lock·Release+0x28 /Users/dibyendumajumdar/try-catch-finally/
lockmgr.go/lockmanager.go:124
main.*Lock·Release(0x0, 0x6, 0x1e81, 0x2d13e0)
main.*LockManager·Release+0x40 /Users/dibyendumajumdar/try-catch-
finally/lockmgr.go/lockmanager.go:81
main.*LockManager·Release(0x2d13e0, 0x6, 0x100000001, 0x2d13e0,
0x1ca4, ...)
main.main+0xa4 /Users/dibyendumajumdar/try-catch-finally/lockmgr.go/
lockmanager.go:43
main.main()
mainstart+0xf /Users/dibyendumajumdar/go/src/pkg/runtime/amd64/asm.s:
60
mainstart()
goexit /Users/dibyendumajumdar/go/src/pkg/runtime/proc.c:145
goexit()

Please see my next email.

Regards

Dibyendu Majumdar

unread,
Jun 9, 2010, 6:46:23 PM6/9/10
to golang-nuts


On Jun 9, 5:50 pm, roger peppe <[email protected]> wrote:
> as an alternative to andrew's approach, it may be possible
> to let go's channels do quite a lot of the work of queuing etc.

Hi Roger,

I am trying to run a simple test to compare how your
version performs to mine. Please see my previous post
for the error I am getting.

My version has a bug too - as it is going into
an endless loop in the method findLock() - I have a fixed
version that I shall check in. But after the fix, the timing from
my version for the same test is:

Dibyendu-Majumdars-MacBook:lockmgr.go dibyendumajumdar$ time ./lockmgr

real 0m0.063s
user 0m0.048s
sys 0m0.003s

We shall see how the channel based version performs after you have
fixed the issue.

I'd also like to visualize what is happening in the two
implementations in terms
of resource usage, mutexes, etc.

Lets take a scenario where there are 10000 different locks acquired by
various
tasks.

In my implementation, this will probably lead to a hashTable of size
12289.
Each bucket will have a mutex - so we will have 12289 mutexes.
Assuming 10000 different locks, there will be 10000 lock headers.
Assuming that there is just one requester, there will be 1 request per
lock,
so 10000 requests.
There will be 1 goroutine for the requester.
0 channels.

Now, with your version, for the same scenario, if my understanding of
your code is correct, we will have:
10000 goroutines, one for each lock.
80000 channels - 6 mode channels per lock, 1 unlock channel per lock,
1 freec per lock
10000 mutexes - 1 per lock
Plus I am not counting the mutexes used by channels - I believe it is
more than one per channel?

Have I understood this correctly? Please correct me if I have got the
numbers wrong.

Regards

Dibyendu

Dibyendu Majumdar

unread,
Jun 9, 2010, 7:08:54 PM6/9/10
to golang-nuts
And here is another test result with following code:

func main() {
	runtime.GOMAXPROCS(2)
	var i IntObject = 33
	var j IntObject = 33
	var k IntObject = 34
	m := NewLockManager()

	go func() {
		for n := 0; n < 10000; n++ {
			m.Acquire(i, j, EXCLUSIVE, MANUAL_DURATION, 0)
			if n%500 == 0 {
				fmt.Printf("goroutine %v iteration\n", n)
			}
			m.Release(i, j, false)
		}
	}()

	for n := 0; n < 10000; n++ {
		m.Acquire(k, j, EXCLUSIVE, MANUAL_DURATION, 0)
		if n%500 == 0 {
			fmt.Printf("main %v iteration\n", n)
		}
		m.Release(k, j, false)
	}
}


Dibyendu-Majumdars-MacBook:lockmgr.go dibyendumajumdar$ time ./lockmgr
main 0 iteration
goroutine 0 iteration
main 500 iteration
main 1000 iteration
goroutine 500 iteration
main 1500 iteration
goroutine 1000 iteration
main 2000 iteration
main 2500 iteration
main 3000 iteration
goroutine 1500 iteration
main 3500 iteration
main 4000 iteration
goroutine 2000 iteration
goroutine 2500 iteration
main 4500 iteration
goroutine 3000 iteration
goroutine 3500 iteration
goroutine 4000 iteration
goroutine 4500 iteration
main 5000 iteration
goroutine 5000 iteration
main 5500 iteration
goroutine 5500 iteration
main 6000 iteration
goroutine 6000 iteration
main 6500 iteration
goroutine 6500 iteration
main 7000 iteration
main 7500 iteration
goroutine 7000 iteration
main 8000 iteration
main 8500 iteration
goroutine 7500 iteration
main 9000 iteration
goroutine 8000 iteration
goroutine 8500 iteration
goroutine 9000 iteration
main 9500 iteration
goroutine 9500 iteration

real 0m0.175s
user 0m0.147s
sys 0m0.101s

roger peppe

unread,
Jun 10, 2010, 7:01:49 AM6/10/10
to Dibyendu Majumdar, golang-nuts
On 9 June 2010 23:46, Dibyendu Majumdar <[email protected]> wrote:
> Now, with your version, for the same scenario, if my understanding of
> your code is correct, we will have:
> 10000 goroutines, one for each lock.
> 80000 channels - 6 per mode per lock, 1 unlock per lock, 1 freec per
> lock
> 10000 mutexes - 1 per lock
> Plus I am not counting the mutexes used by channels - I believe it is
> more than one per channel?
>
> Have I understood this correctly? Please correct me if I have got the
> numbers wrong.

those numbers would be correct (well, almost correct - the freec is
shared between all locks) if all locks are contended.

if a lock is uncontended, then it uses only one allocation (the lock header),
as mutexes do not require any allocation.

so if 1% of the locks are contended, then my code will use 100 goroutines,
700 channels and 10000 mutexes. oh yes, current implementation detail:
once a lock is contended, its goroutine never goes away. the freeLocker
channel (currently unused) can be used to garbage collect
locker processes.

so the number of resources consumed would depend very much on the
actual scenario.

> real 0m0.175s
> user 0m0.147s
> sys 0m0.101s

having fixed things so that Release works, an equivalent test to yours
gives the following timing, which isn't too much different. it might be slower,
but it's perhaps more versatile, as it could easily be adapted to use timeouts,
for example. note that it does test the worst case scenario - a continually
contended lock.

0.21 real 0.17 user 0.13 sys

i attach my code. i was a bit cheeky and rearranged the API somewhat according
to my whims and to allow reentrant locks. this may or may not fit with how
you envisage using it :-). my code also doesn't implement upgrading/degrading,
which may or may not require a complete rewrite of Lock...

as i said before, this code is a toy - i thought it was perhaps a neat way
to solve (some of) this interesting problem, not that it's necessarily
the most efficient
way.

finally, i think it's fair to say that the problem is something of a
special case, being specifically
to do with locks - most problems are not like this.

lockmanager.go

Dibyendu Majumdar

unread,
Jun 10, 2010, 7:59:05 AM6/10/10
to golang-nuts
Hi Roger,

Many thanks for taking the time to do this.


On 10 June 2010 12:01, roger peppe <[email protected]> wrote:

those numbers would be correct (well, almost correct - the freec is
shared between all locks) if all locks are contended.

if a lock is uncontended, then it uses only one allocation (the lock header),
as mutexes do not require any allocation.


Okay, my bad. So in the scenario I outlined, you would have 10000 Locks, with a mutex each. Because they are uncontended, there won't be any channels or goroutines allocated?

That's not too bad.
 
so if 1% of the channels are contended, then my code will use 100 goroutines,
700 channels and 10000 mutexes. oh yes, current implementation detail:
once a lock is contended, its goroutine never goes away. the freeLocker
channel (currently unused) can be used to garbage collect
locker processes.

Fine, I understand now.
 
so the number of resources consumed would depend very much on the
actual scenario.

> real    0m0.175s
> user    0m0.147s
> sys     0m0.101s

having fixed things so that Release works, an equivalent test to yours
gives the following timing, which isn't too much different. it might be slower,
but it's perhaps more versatile, as it could easily be adapted to use timeouts,
for example. note that it does test the worst case scenario - a continually
contended lock.

       0.21 real         0.17 user         0.13 sys

i attach my code. i was a bit cheeky and rearranged the API somewhat according
to my whims and to allow reentrant locks. this may or may not fit with how
you envisage using it :-). my code also doesn't implement upgrading/degrading,
which may or may not require a complete rewrite of Lock...

as i said before, this code is a toy - i thought it was perhaps a neat way
to solve (some of) this interesting problem, not that it's necessarily
the most efficient
way.

That's ok. I haven't looked at the completeness of your implementation, but it is good enough for simple comparisons. Amongst missing features a key one is that you have no concept of a lock owner, whereas in my version, locks are owned. Also various other aspects may be missing - but like I said, for the purposes of simple comparison your implementation is sufficient.

BTW, I am truly impressed by the speed with which you churned this out.
 
finally, i think it's fair to say that the problem is something of a
special case, being specifically
to do with locks - most problems are not like this.

Well, thanks for that. In this forum, it seems the vast majority of people think channels should solve all problems, irrespective of the requirement.

I will test your version again tonight and let you know the results.

Thanks and Regards

Dibyendu

Andrew Gerrand

unread,
Jun 10, 2010, 8:52:37 AM6/10/10
to Dibyendu Majumdar, golang-nuts
On 10 June 2010 13:59, Dibyendu Majumdar <[email protected]> wrote:
> Well, thanks for that. In this forum, it seems the vast majority of people
> think channels should solve all problems, irrespective of the requirement.

FWIW that's not my contention. But you shouldn't be surprised at such
a reaction to someone porting a mutex-heavy program from Java to Go
without even _trying_ Go's concurrency features. :-)

Andrew

roger peppe

unread,
Jun 10, 2010, 8:55:57 AM6/10/10
to golang-nuts
On 10 June 2010 12:57, Dibyendu Majumdar <[email protected]> wrote:
> Many thanks for taking the time to do this.

it was a fun problem to have a look at.
if one has something that one thinks might be a universal
solvent, it's always interesting to see whether and how it dissolves
a new material :-)

> That's ok. I haven't looked at the completeness of your implementation, but
> it is good enough for simple comparisons. Amongst missing features a key one
> is that you have no concept of a lock owner, whereas in my version, locks
> are owned.

actually, in the last version i sent you, locks are owned
(see the LockOwner type), and thus can be reentrant.

i wasn't sure how reentrancy should work if the
same process tries to change the lock mode though.

Dibyendu Majumdar

unread,
Jun 10, 2010, 1:01:58 PM6/10/10
to golang-nuts
On Jun 10, 1:52 pm, Andrew Gerrand <[email protected]> wrote:
> FWIW that's not my contention. But you shouldn't be surprised at such
> a reaction to someone porting a mutex-heavy program from Java to Go
> without even _trying_ Go's concurrency features. :-)

Hi Andrew,

I prefer a different approach to porting my code. I would first like
to port with minimum changes so that I know I have working version
that can run all the test cases. Once this is done, then I shall start
looking at where refactoring would benefit the implementation.

The reason for this is that I could be spending ages trying to
change the way things work without any real benefit. I am pretty
clear about what I see as Go advantages (refer my other post),
and these I shall use in the port. The rest is really nice to have,
and can wait until there is a need for it.

I am quite happy to try Go's concurrency features where they
make sense. I shall use channels happily in the tcp server that
handles client requests because this is where I think channels are
most useful.

Regards
Dibyendu

Dibyendu Majumdar

unread,
Jun 10, 2010, 5:21:43 PM6/10/10
to golang-nuts


On Jun 10, 1:55 pm, roger peppe <[email protected]> wrote:
> actually, in the last version i sent you, locks are owned
> (see the LockOwner type), and thus can be reentrant.
>
> i wasn't sure how reentrancy should work if the
> the same process tries to change the lock mode though.

Thanks Roger. It is performing surprisingly well, which has
me puzzled a bit - not because your version is performing so well,
but because mine isn't performing better.

I am checking what's going on ... will get back once I have done
some more investigation.

Regards
Dibyendu

Dibyendu Majumdar

unread,
Jun 11, 2010, 8:31:13 PM6/11/10
to golang-nuts
Hi Roger,

I am still trying to figure out what helps and what doesn't
help Go performance. I am finding some weird stuff that I can't
explain. Running a modified version of my program (I will describe
the changes later) with 2 parallel runs of 100000 iterations I get:

real 0m1.572s
user 0m1.162s
sys 0m1.321s

Now I add a debug fmt.Printf() in one of the methods and try
again:

real 0m0.562s
user 0m0.494s
sys 0m0.120s

What could be going on here? How can a debug Printf() cause
performance to improve?

Now with 3 goroutines/debug statement:

real 0m1.129s
user 0m0.889s
sys 0m0.384s

If I remove the debug statement - with 3 goroutines I get:

real 0m12.738s
user 0m6.368s
sys 0m12.436s

What????

Just doesn't make sense.

Just as a reference: running your version with 2 parallel
iterations of 100000 I am getting:

real 0m2.470s
user 0m2.306s
sys 0m1.575s

And with 3 parallel iterations:

real 0m2.836s
user 0m2.987s
sys 0m1.849s

Regards
Dibyendu

Dibyendu Majumdar

unread,
Jun 12, 2010, 11:09:06 AM6/12/10
to roger peppe, golang-nuts
Hello Roger,

I have a query regarding your implementation:
Is the algorithm fair - ie - will it honour the order 
in which requests arrive?

Many thanks
Dibyendu

On 12 June 2010 12:02, Dibyendu Majumdar <[email protected]> wrote:
Hi,

In order to compare the performance of the two implementations
I have been trying to eliminate any spurious differences that may
affect performance. Example of stuff in my version that could be
adversely affecting the performance:

1. Use of reflection/interfaces in the linked list.
2. The LockOwner and Lockable implementations are generic
    in my version as opposed to simple types
3. The no contention case has more stuff going on in
    my implementation - new request acquired, added to a
    list etc.
4. The need to search for lockOwner, whereas in your
    implementation the lockOwner invokes the call.
5. The freeing of locks when they are not locked, and subsequent
    allocation.
6. The use of memory allocation as locks and requests are
     acquired and released.

As a baseline I want to have a version that is as fast as
yours in the simple no-contention case; at present this isn't
true because of the extra stuff that is going on.

The second aim is to have a version where the only difference is
due to how locks are managed - i.e., using your channels/
goroutine based approach versus mine.

I have more work to do, and some tidying up, and then
I shall post the new version.

Anyway, the thing that is really throwing me is the behaviour
when the number of goroutines is increased beyond 2. Seems
like some kind of contention but can't figure out what.

Regards
Dibyendu


roger peppe

unread,
Jun 12, 2010, 2:26:27 PM6/12/10
to Dibyendu Majumdar, golang-nuts
On 12 June 2010 16:09, Dibyendu Majumdar <[email protected]> wrote:
> Hello Roger,
> I have a query regarding your implementation:
> Is the algorithm fair - ie - will it honour the order
> in which requests arrive?
> Many thanks
> Dibyendu

no, it won't. several requests at the same locking level
will queue, but requests at different levels will
be received by "fair sharing" i.e. randomly.

as for your performance issue, it's just possible
that you're triggering an issue that i discovered quite
a while ago,

http://groups.google.com/group/golang-nuts/browse_thread/thread/ea7ca41edeee5c7b/43f8766ab1c09272

i did actually look into this and found that it
was a scheduling issue where two processes would
get into lock-step and end up triggering the garbage collection
every time a process was scheduled.


as for your implementation, i think realistically, my approach
probably isn't viable, although technically interesting, as
you won't have enough control over the locking semantics
(e.g. fairness). i think i'd probably go for a similarly hybrid
approach but have the locker process receive on a single
channel and send replies to unlock clients when needed.
it would need to manually manage the queues, as your
current implementation does.
